https://en.wikipedia.org/wiki/Full-body%20CT%20scan
A full-body scan is a scan of the patient's entire body as part of the diagnosis or treatment of illnesses. If computed tomography (CT) scan technology is used, it is known as a full-body CT scan, though many medical imaging technologies can perform full-body scans.
Indications
Full-body CT scans allow a transparent view of the body. For polytrauma patients, aggressive use of full-body CT scanning improves early diagnosis of injury and improves survival rates, with widespread adoption of the technique seen worldwide.
Full-body CT scans are not indicated in patients with minor or single system trauma, and should be avoided in such patients.
Full-body scans frequently discover possible malignancies, but these findings are almost always benign. They may not be related to any disease, and may be benign growths, scar tissue, or the remnants of previous infections. CT scanning performed for other reasons sometimes identifies these "incidentalomas".
However, the significance of radiation exposure as well as costs associated with these studies must be considered, especially in patients with low energy mechanisms of injury and absent physical examination findings consistent with major trauma.
A full-body scan has the potential to identify disease (e.g. cancer) in early stages, and early identification can improve the success of curative efforts. Controversy arises from the use of full-body scans in the screening of patients who have no signs or symptoms suggestive of a disease. As with any test that screens for disease, the risks of full-body CT scans need to be weighed against the benefit of identifying a treatable disease at an early stage.
An alternative to a full-body CT scan may be Magnetic resonance imaging (MRI) scans. MRI scans are generally more expensive than CT but do not expose the patient to ionizing radiation and are being evaluated for their potential value in screening.
Risks and complications
Compared to most other diagnostic imaging procedures, CT scans result
https://en.wikipedia.org/wiki/Tiger-BASIC
Tiger-BASIC is a high-speed multitasking BASIC dialect for programming microcontrollers of the BASIC-Tiger family. Tiger-BASIC and the integrated development environment that goes with it were developed by Wilke Technology (Aachen, Germany).
External links
Wilke-Technology
https://en.wikipedia.org/wiki/Compact%20embedding
In mathematics, the notion of being compactly embedded expresses the idea that one set or space is "well contained" inside another. There are versions of this concept appropriate to general topology and functional analysis.
Definition (topological spaces)
Let (X, T) be a topological space, and let V and W be subsets of X. We say that V is compactly embedded in W, and write V ⊂⊂ W, if
V ⊆ Cl(V) ⊆ Int(W), where Cl(V) denotes the closure of V, and Int(W) denotes the interior of W; and
Cl(V) is compact.
Definition (normed spaces)
Let X and Y be two normed vector spaces with norms ||·||_X and ||·||_Y respectively, and suppose that X ⊆ Y. We say that X is compactly embedded in Y, and write X ⊂⊂ Y, if
X is continuously embedded in Y; i.e., there is a constant C such that ||x||_Y ≤ C||x||_X for all x in X; and
The embedding of X into Y is a compact operator: any bounded set in X is totally bounded in Y, i.e. every sequence in such a bounded set has a subsequence that is Cauchy in the norm ||·||_Y.
If Y is a Banach space, an equivalent definition is that the embedding operator (the identity) i : X → Y is a compact operator.
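Restated in symbols (a summary of the two conditions above, in the notation already introduced):

```latex
X \subset\subset Y \quad\Longleftrightarrow\quad
\begin{cases}
\exists\, C > 0 \text{ such that } \|x\|_Y \le C\|x\|_X \text{ for all } x \in X, \\
i\colon X \hookrightarrow Y \text{ maps bounded subsets of } X \text{ to totally bounded subsets of } Y.
\end{cases}
```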
When applied to functional analysis, this version of compact embedding is usually used with Banach spaces of functions. Several of the Sobolev embedding theorems are compact embedding theorems. When an embedding is not compact, it may possess a related, but weaker, property of cocompactness.
https://en.wikipedia.org/wiki/Macron%20below
Macron below is a combining diacritical mark that is used in various orthographies.
A non-combining form is the modifier letter low macron (U+02CD). It is not to be confused with the low line (underscore) and the combining low line. The difference between "macron below" and "low line" is that the latter results in an unbroken underline when it is run together: compare a̱ḇc̱ and a̲b̲c̲ (only the latter should look like abc).
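The distinction can be checked programmatically. A minimal Python sketch (U+0331 and U+0332 are the standard code points for the two combining marks):

```python
import unicodedata

# Combining marks attach to the preceding base character.
macron_below = "a\u0331b\u0331c\u0331"  # U+0331 COMBINING MACRON BELOW: separate marks
low_line = "a\u0332b\u0332c\u0332"      # U+0332 COMBINING LOW LINE: runs together

print(unicodedata.name("\u0331"))  # COMBINING MACRON BELOW
print(unicodedata.name("\u0332"))  # COMBINING LOW LINE
print(macron_below, low_line)      # exact rendering depends on the font
```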
Unicode
Macron below character
Unicode defines several characters for the macron below, including COMBINING MACRON BELOW (U+0331) and the spacing MODIFIER LETTER LOW MACRON (U+02CD).
There are many similar marks covered elsewhere:
Spacing underscores, including the low line (U+005F)
Combining underlines, including the combining low line (U+0332)
The International Phonetic Alphabet mark for retracted or backed articulation, the combining minus sign below (U+0320)
Precomposed characters
Various precomposed letters with a macron below are defined in Unicode.
Note that the Unicode character names of precomposed characters whose decompositions contain the macron below use "WITH LINE BELOW" rather than "WITH MACRON BELOW". Thus, LATIN SMALL LETTER B WITH LINE BELOW (U+1E07) decomposes to b and COMBINING MACRON BELOW (U+0331).
The Vietnamese đồng currency sign (₫) resembles a lowercase d with a stroke and macron below, but it is neither a letter nor decomposable.
See also
Underscore
Macron
https://en.wikipedia.org/wiki/Spinhenge%40Home
Spinhenge@home was a volunteer computing project on the BOINC platform that performed extensive numerical simulations of the physical characteristics of magnetic molecules. It was a project of the Bielefeld University of Applied Sciences, Department of Electrical Engineering and Computer Science, in cooperation with the University of Osnabrück and Ames Laboratory.
The project began beta testing on September 1, 2006 and used the Metropolis Monte Carlo algorithm to calculate and simulate spin dynamics in nanoscale molecular magnets.
On September 28, 2011, a hiatus was announced while the project team reviewed results and upgraded hardware. As of July 10, 2022 the hiatus continues and it is likely that the project has been closed down permanently.
See also
Spintronics
BOINC
List of volunteer computing projects
https://en.wikipedia.org/wiki/Evolutionary%20suicide
Evolutionary suicide is an evolutionary phenomenon in which the process of adaptation causes the population to become extinct. It provides an alternative explanation for extinction, which is due to misadaptation rather than failure to adapt.
For example, individuals might be selected to destroy their own food source (e.g. by switching from eating mature plants to eating seedlings), and thereby deplete their food plant's population. Selection on individuals can theoretically produce adaptations that threaten the survival of the population. Much of the research on evolutionary suicide has used the mathematical modeling technique of adaptive dynamics, in which genetic changes are studied together with population dynamics. This allows a model to predict how population density will change as a so-called "kamikaze" mutant with a certain phenotypic trait invades the population. At first, the kamikaze mutant has a reproductive advantage, but once it spreads throughout the population, the population collapses.
Evolutionary suicide has also been referred to as "Darwinian extinction", "runaway selection to self-extinction", and "evolutionary collapse". The idea is similar in concept to the tragedy of the commons and the tendency of the rate of profit to fall: all are examples of an accumulation of individually advantageous changes leading to a collective disaster that negates those individual gains.
Many adaptations have apparently negative effects on population dynamics, for example infanticide by male lions, or the production of toxins by bacteria. However, empirically establishing that an extinction event was unambiguously caused by the process of adaptation is not a trivial task.
See also
Fisherian runaway
Social trap
The Limits to Growth
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Plant%20Breeding%20Research
The Max Planck Institute for Plant Breeding Research was founded in Müncheberg, Germany in 1928 as part of the Kaiser-Wilhelm-Gesellschaft. The founding director, Erwin Baur, initiated breeding programmes with fruits and berries, and basic research on Antirrhinum majus and the domestication of lupins. After the Second World War, the institute moved west to Voldagsen, and was relocated to new buildings on the present site in Cologne in 1955.
The modern era of the Institute began in 1978 with the appointment of Jeff Schell and the development of plant transformation technologies and plant molecular genetics. The focus on molecular genetics was extended in 1980 with the appointment of Heinz Saedler. The appointment in 1983 of Klaus Hahlbrock broadened the expertise of the Institute in the area of plant biochemistry, and the arrival of Francesco Salamini in 1985 added a focus on crop genetics. During the period 1978–1990, the Institute was greatly expanded and new buildings were constructed for the departments led by Schell, Hahlbrock and Salamini, in addition to a new lecture hall and the Max Delbrück Laboratory building that housed independent research groups over a period of 10 years.
A new generation of directors was appointed from 2000 with the approaching retirements of Klaus Hahlbrock and Jeff Schell. Paul Schulze-Lefert and George Coupland were appointed in 2000 and 2001, respectively, and Maarten Koornneef arrived three years later upon the retirement of Francesco Salamini. The new scientific departments brought a strong focus on utilising model species to understand the regulatory principles and molecular mechanisms underlying selected traits. The longer-term aim is to translate these discoveries to breeding programmes through the development of rational breeding concepts. The arrival of a new generation of Directors also required modernisation of the infrastructure. So far, this has involved complete refurbishment of the building that houses the Plant Dev
https://en.wikipedia.org/wiki/Notifiable%20disease
A notifiable disease is any disease that is required by law to be reported to government authorities. The collation of information allows the authorities to monitor the disease, and provides early warning of possible outbreaks. In the case of livestock diseases, there may also be the legal requirement to kill the infected livestock upon notification. Many governments have enacted regulations for reporting of both human and animal (generally livestock) diseases.
Global
Human
The World Health Organization's International Health Regulations 1969 require disease reporting to the organization in order to help with its global surveillance and advisory role. The 1969 regulations are rather limited, with a focus on reporting of three main diseases: cholera, yellow fever and plague. Smallpox, once endemic, was eradicated through mass vaccination and certified as such by WHO, making it the first human disease to be successfully eradicated.
The revised International Health Regulations 2005 broaden this scope and are no longer limited to the notification of specific diseases. While they do identify a number of specific diseases, they also define a limited set of criteria to assist in deciding whether an event is notifiable to WHO.
WHO states that "Notification is now based on the identification within a State Party’s territory of an "event that may constitute a public health emergency of international concern". This non-disease specific definition of notifiable events expands the scope of the IHR (2005) to include any novel or evolving risk to international public health, taking into account the context in which the event occurs. Such notifiable events can extend beyond communicable diseases and arise from any origin or source. This broad notification requirement aims at detecting, early on, all public health events that could have serious and international consequences, and preventing or containi
https://en.wikipedia.org/wiki/Max%20Planck%20Institute%20for%20Evolutionary%20Biology
The Max Planck Institute for Evolutionary Biology is a German institute for evolutionary biology. It is located in Plön, Schleswig-Holstein, Germany.
History
The institute was founded by German zoologist Otto Zacharias as Hydrobiologische Station zu Plön. Working in Italy in the 1880s, Zacharias was inspired by the highly recognised Stazione Zoologica in Naples, founded in 1870 by Anton Dohrn, to set up the first biological station for freshwater research in Germany. He secured financial support from the Prussian government and several private individuals to establish it on Großer Plöner See in 1891, as a private research institute.
As the director, Zacharias published research reports on the Station's activities from 1893, which from 1905 were recorded in the Archives of Hydrobiology. In so-called "summer schools", Zacharias trained teachers and interested laypeople in working with the microscope.
It became part of the Max Planck Society in 1948, and was renamed in 1966 as the Max Planck Institute of Limnology.
It was renamed again as Max Planck Institute for Evolutionary Biology in 2007, marking a change of the research focus towards evolutionary biology.
Departments
Evolutionary Genetics (Diethard Tautz)
Evolutionary Theory (Arne Traulsen)
Microbial Population Biology (Paul Rainey)
Research groups
Behavioural Genomics
Biological Clocks (Tobias Kaiser)
Dynamics of Social Behavior (Christian Hilbe)
Evolutionary Cell Biology (Javier López Garrido)
Craniofacial Biology (Marketa Kaucka)
Environmental Genomics
Antibiotic Resistance Evolution (Hinrich Schulenburg)
Evolutionary Genomics (John Baines)
Evolutionary Immunogenomics (Tobias L. Lenz)
https://en.wikipedia.org/wiki/VSTa
Valencia's Simple Tasker (VSTa) is an operating system with a microkernel architecture, with all device drivers and file systems residing in userspace. It mostly complies with the Portable Operating System Interface (POSIX), except where such compliance interferes with extensibility and modularity. It is conceptually inspired by QNX and Plan 9 from Bell Labs. It was written by Andy Valencia and released under the GNU General Public License (GPL), a copyleft license.
It was originally written to run on Intel 80386 hardware, and then was ported to several different platforms, e.g., Motorola 68030 based Amigas.
VSTa is no longer developed. A fork, named Flexible Microkernel Infrastructure/Operating System (FMI/OS), did not make a release.
User interface
The default graphical user interface provided as a tar-ball with the system was ManaGeR (MGR).
https://en.wikipedia.org/wiki/GNewSense
gNewSense was a Linux distribution, active from 2006 to 2016. It was based on Debian, and developed with sponsorship from the Free Software Foundation. Its goal was user-friendliness, with all proprietary software and binary blobs removed. The Free Software Foundation considered gNewSense to be composed entirely of free software.
gNewSense took a relatively strict stance against proprietary software. For example, any documentation that gave instructions on installing proprietary software was excluded.
gNewSense's last release was made in 2016 and it has not had a supported version since 2018. DistroWatch classifies gNewSense as "discontinued".
History
The project was launched by Brian Brazil and Paul O'Malley in 2006. gNewSense was originally based on Ubuntu. With the 1.0 release, the Free Software Foundation provided assistance to gNewSense.
With no releases in two years, on 8 August 2011, DistroWatch classified gNewSense as "dormant". By September 2012 DistroWatch had changed the status to "active" again, and on 6 August 2013, the first version directly based on Debian, gNewSense 3 "Parkes", was released.
There have been several indications that it might be restarted, including a website announcement in 2019, but the project has remained inactive, with no releases since 2016. DistroWatch returned it to "dormant" status in 2019 and listed it as "discontinued" by 2022.
For a time, the home page of the project's website displayed a blank page with a meme labelling the Free Software Foundation a cult; shortly afterwards, the website redirected to the home page of the PureOS website. As of June 2021, it redirects to the FSF's list of free/libre distributions.
Technical aspects
By default gNewSense uses GNOME. The graphical user interface can be customized with the user's choice of X display manager, window managers, and other desktop environments available to install through its hosted repositories.
The Ubiquity installer allows installing to
https://en.wikipedia.org/wiki/Sand%20dune%20ecology
Sand dune ecology describes the biological and physico-chemical interactions that are a characteristic of sand dunes.
Sand dune systems are excellent places for biodiversity, partly because they are not very productive for agriculture, and partly because disturbed, stressful, and stable habitats are present in proximity to each other. Many of them are protected as nature reserves, and some are parts of larger conservation areas, incorporating other coastal habitats like salt marshes, mud flats, grasslands, scrub, and woodland.
Plant habitat
Sand dunes provide a range of habitats for a range of unusual, interesting and characteristic plants that can cope with disturbed habitats. In the UK these may include restharrow (Ononis repens), sand spurge (Euphorbia arenaria) and groundsel (Senecio vulgaris); such plants are termed ruderals.
Other very specialised plants are adapted to the accretion of sand, surviving the continual burial of their shoots by sending up very rapid vertical growth. Marram grass, Ammophila arenaria specialises in this, and is largely responsible for the formation and stabilisation of many dunes by binding sand grains together. The sand couch-grass Elytrigia juncea also performs this function on the seaward edge of the dunes, and is responsible, with some other pioneers like the sea rocket Cakile maritima, for initiating the process of dune building by trapping wind blown sand.
In accreting situations small mounds of vegetation or tide-washed debris form and tend to enlarge as the wind-speed drops in the lee of the mound, allowing blowing sand (picked up from the off-shore banks) to fall out of the air stream. The pioneering plants are physiologically adapted to withstand the problems of high salt contents in the air and soil, and are good examples of stress tolerators, as well as having some ruderal characteristics.
Inland side
On the inland side of dunes conditions are less severe, and links-type grasslands develop with a range of grassland
https://en.wikipedia.org/wiki/Intute
Intute was a free Web service aimed at students, teachers, and researchers in UK further education and higher education. Intute provided access to online resources via a large database. Each resource was reviewed by an academic specialist in the subject, who wrote a short review of between 100 and 200 words and described it via various metadata fields: what type of resource it was, who created it, who its intended audience was, what time period or geographical area it covered, which subject discipline(s) it would be useful to, and so on. As of July 2010, Intute provided 123,519 records. Funding was stopped in 2011, and the site closed.
A partial archive of the Intute library is maintained at XtLearn.net
History of Intute
Intute was formed in July 2006 after the merger of the eight semi-autonomous "hubs" that formed the Resource Discovery Network (RDN). These hubs each served particular academic disciplines:
Altis - Hospitality, leisure, sport and tourism
Artifact - Arts and creative industries
Biome - Health and life sciences
EEVL - Engineering, mathematics, and computing
GEsource - Geography and the environment
Humbul - Humanities
PSIgate - Physical sciences
SOSIG - Social sciences
The restructuring and rebranding was undertaken to create a service with a more uniform identity and appearance, better cross-searching facilities, and more focused technical and management teams. As part of the restructuring, the eight RDN hubs were initially reorganised into four subject groups. This process also incorporated the Virtual Training Suite, a series of continually updated, free online Internet training tutorials for over 65 subject areas.
The Intute service was geographically distributed, with staff based at several UK universities.
University of Birmingham
University of Bristol
Heriot-Watt University
University of Manchester
Manchester Metropolitan University
University of Nottingham
University of Oxford
University Col
https://en.wikipedia.org/wiki/Analog%20Expansion%20Bus
The Analog Expansion Bus is a hardware bus that was designed by Dialogic for interfacing DTI/124, D/4x, AMX and other voice response component boards which fit in an AT-expansion slot of a personal computer.
https://en.wikipedia.org/wiki/Time-bin%20encoding
Time-bin encoding is a technique used in quantum information science to encode a qubit of information on a photon. Quantum information science makes use of qubits as a basic resource similar to bits in classical computing. Qubits are any two-level quantum mechanical system; there are many different physical implementations of qubits, one of which is time-bin encoding.
While the time-bin encoding technique is very robust against decoherence, it does not allow easy interaction between the different qubits. As such, it is much more useful in quantum communication (such as quantum teleportation and quantum key distribution) than in quantum computation.
Construction of a time-bin encoded qubit
Time-bin encoding is done by having a single photon go through a Mach–Zehnder interferometer. The photon coming from the left is guided through one of two paths; the guiding can be done by optical fiber or simply in free space using mirrors and polarising cubes. One of the two paths is longer than the other. The difference in path length must be longer than the coherence length of the photon to make sure the path taken can be unambiguously distinguished. The interferometer has to keep a stable phase, which means that the path length difference must vary by much less than the wavelength of light during the experiment. This usually requires active temperature stabilization.
If the photon takes the short path, it is said to be in the state |0⟩; if it takes the long path, it is said to be in the state |1⟩. If the photon has a non-zero probability to take either path, then it is in a coherent superposition of the two states:
|ψ⟩ = α|0⟩ + β|1⟩, with |α|² + |β|² = 1.
These coherent superpositions of the two possible states are called qubits and are the basic ingredient of Quantum information science.
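As a minimal numerical sketch (with the labels |0⟩ for the early bin and |1⟩ for the late bin assumed here), such a qubit can be modelled as a normalized pair of complex amplitudes carrying a relative phase:

```python
import numpy as np

# A time-bin qubit as a normalized amplitude pair (|0> = early bin, |1> = late bin).
alpha, beta = 1 / np.sqrt(2), 1 / np.sqrt(2)  # equal superposition of the two bins
phi = np.pi / 4                               # relative phase, e.g. from stretching the fiber
state = np.array([alpha, beta * np.exp(1j * phi)])

assert np.isclose(np.linalg.norm(state), 1.0)  # a valid qubit state is normalized
print(np.abs(state) ** 2)                      # detection probability in each time bin
```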
In general, it is easy to vary the phase gained by the photon between the two paths, for example by stretching the fiber, while it is much more difficult to vary the amplitudes w
https://en.wikipedia.org/wiki/Material
Material is a substance or mixture of substances that constitutes an object. Materials can be pure or impure, living or non-living matter. Materials can be classified on the basis of their physical and chemical properties, or on their geological origin or biological function. Materials science is the study of materials, their properties and their applications.
Raw materials can be processed in different ways to influence their properties, by purification, shaping or the introduction of other materials. New materials can be produced from raw materials by synthesis.
In industry, materials are inputs to manufacturing processes to produce products or more complex materials.
Historical elements
Materials chart the history of humanity. The system of the three prehistoric ages (Stone Age, Bronze Age, Iron Age) was succeeded by historical ages: the steel age in the 19th century, the polymer age (or plastic age) in the middle of the 20th century, and the silicon age in the second half of the 20th century.
Classification by use
Materials can be broadly categorized in terms of their use, for example:
Building materials are used for construction
Building insulation materials are used to retain heat within buildings
Refractory materials are used for high-temperature applications
Nuclear materials are used for nuclear power and weapons
Aerospace materials are used in aircraft and other aerospace applications
Biomaterials are used for applications interacting with living systems
Material selection is a process to determine which material should be used for a given application.
Classification by structure
The relevant structure of materials has a different length scale depending on the material. The structure and composition of a material can be determined by microscopy or spectroscopy.
Microstructure
In engineering, materials can be categorised according to their microscopic structure:
Plastics: a wide range of synthetic or semi-synthetic materials that use polymers as a main ingred
https://en.wikipedia.org/wiki/Mastocarpus%20stellatus
Mastocarpus stellatus, commonly known as carrageenan moss or false Irish moss, is a species of red alga (division Rhodophyta) in the family Phyllophoraceae. M. stellatus is closely related to Irish moss (Chondrus crispus). It grows in the intertidal zone. It is mostly collected in North Atlantic regions such as Ireland and Scotland, together with Irish moss, dried, and sold for cooking and as the basis for a drink reputed to ward off colds and flu. Marine biologists have studied the medicinal reputation of M. stellatus to discover the full potential of its pharmaceutical benefits, and have also investigated its potential to serve as an alternative to plastic. The application of M. stellatus in these different industries is linked to adaptations the seaweed developed in response to the environmental stressors of its position on the rocky intertidal.
Description
It grows from a discoid holdfast, and the fronds are channelled, unlike those of Chondrus crispus, which are flat. It branches dichotomously. The frond is cartilaginous and reddish-brown, purple or bleached in colour, sometimes with a greenish tinge. Mature algae show reproductive structures that develop on erect filaments, which make them readily distinguishable from Chondrus crispus.
Ecology
Habitat and distribution
M. stellatus occurs commonly on rocks in the mid and lower-intertidal. It is generally found on all coasts of Ireland and Britain, except perhaps for parts of the east of England: Lincoln, Norfolk and Suffolk. Other recorded locations include: Iceland, Faeroes, North Russia to Rio de Oro, Canada (Newfoundland) to U.S. (North Carolina). Mastocarpus stellatus is able to coexist with C. crispus on the northern New England coast despite being a competitive inferior to C. crispus. A greater tolerance for freezing allow
https://en.wikipedia.org/wiki/Spinor%20spherical%20harmonics
In quantum mechanics, the spinor spherical harmonics (also known as spin spherical harmonics, spinor harmonics and Pauli spinors) are special functions defined over the sphere. The spinor spherical harmonics are the natural spinor analog of the vector spherical harmonics. While the standard spherical harmonics are a basis for the orbital angular momentum operator, the spinor spherical harmonics are a basis for the total angular momentum operator (orbital angular momentum plus spin). These functions are used in analytical solutions to the Dirac equation in a radial potential. The spinor spherical harmonics are sometimes called Pauli central field spinors, in honor of Wolfgang Pauli, who employed them in the solution of the hydrogen atom with spin–orbit interaction.
Properties
The spinor spherical harmonics $\Omega_{j,m}^l$ are the spinor eigenstates of the total angular momentum operator squared:
$$\mathbf{J}^2\,\Omega_{j,m}^l = j(j+1)\,\Omega_{j,m}^l, \qquad \mathbf{J} = \mathbf{L} + \mathbf{S},$$
where $\mathbf{J}$, $\mathbf{L}$, and $\mathbf{S}$ are the (dimensionless) total, orbital and spin angular momentum operators, $j$ is the total azimuthal quantum number and $m$ is the total magnetic quantum number.
Under a parity operation, we have
$$P\,\Omega_{j,m}^l = (-1)^l\,\Omega_{j,m}^l.$$
For spin-1/2 systems, they are given in matrix form by
$$\Omega_{j=l+1/2,\,m} = \begin{pmatrix} \sqrt{\tfrac{l+m+1/2}{2l+1}}\; Y_l^{m-1/2} \\ \sqrt{\tfrac{l-m+1/2}{2l+1}}\; Y_l^{m+1/2} \end{pmatrix}, \qquad
\Omega_{j=l-1/2,\,m} = \begin{pmatrix} -\sqrt{\tfrac{l-m+1/2}{2l+1}}\; Y_l^{m-1/2} \\ \sqrt{\tfrac{l+m+1/2}{2l+1}}\; Y_l^{m+1/2} \end{pmatrix},$$
where $Y_l^m$ are the usual spherical harmonics.
https://en.wikipedia.org/wiki/American%20Society%20of%20Naturalists
The American Society of Naturalists was founded in 1883 and is one of the oldest professional societies dedicated to the biological sciences in North America. The purpose of the Society is "to advance and diffuse knowledge of organic evolution and other broad biological principles so as to enhance the conceptual unification of the biological sciences."
Founded in Massachusetts with Alpheus Spring Packard Jr. as its first president, it was called the Society of Naturalists of the Eastern United States until 1886.
The scientific journal The American Naturalist is published on behalf of the society, which also holds an annual meeting with a scientific program of symposia and contributed papers and posters. It also confers a number of awards for achievement in evolutionary biology and/or ecology, including the Sewall Wright Award (named in honor of Sewall Wright) for senior researchers making "fundamental contributions ... to the conceptual unification of the biological sciences", the E. O. Wilson award for "significant contributions" from naturalists in mid-career, the Jasper Loftus-Hills Young Investigators Award for promising scientists early in their careers, and also the Ruth Patrick Student Poster Award.
https://en.wikipedia.org/wiki/Bochner%20space
In mathematics, Bochner spaces are a generalization of the concept of $L^p$ spaces to functions whose values lie in a Banach space that is not necessarily the space $\mathbb{R}$ or $\mathbb{C}$ of real or complex numbers.
The space $L^p(X; B)$ consists of (equivalence classes of) all Bochner measurable functions $f$ with values in the Banach space $B$ whose norm $\|f\|_B$ lies in the standard $L^p$ space. Thus, if $B$ is the set of complex numbers, it is the standard Lebesgue $L^p$ space.
Almost all standard results on $L^p$ spaces do hold on Bochner spaces too; in particular, the Bochner spaces are Banach spaces for $1 \le p \le \infty$.
Bochner spaces are named for the mathematician Salomon Bochner.
Definition
Given a measure space $(X, \Sigma, \mu)$, a Banach space $(B, \|\cdot\|_B)$ and $1 \le p \le \infty$, the Bochner space $L^p(X; B)$ is defined to be the Kolmogorov quotient (by equality almost everywhere) of the space of all Bochner measurable functions $u : X \to B$ such that the corresponding norm is finite:
$$\|u\|_{L^p(X;B)} := \left( \int_X \|u(x)\|_B^p \, \mathrm{d}\mu(x) \right)^{1/p} < \infty \quad \text{for } 1 \le p < \infty,$$
$$\|u\|_{L^\infty(X;B)} := \operatorname*{ess\,sup}_{x \in X} \|u(x)\|_B < \infty.$$
In other words, as is usual in the study of $L^p$ spaces, $L^p(X; B)$ is a space of equivalence classes of functions, where two functions are defined to be equivalent if they are equal everywhere except upon a $\mu$-measure zero subset of $X$. As is also usual in the study of such spaces, it is usual to abuse notation and speak of a "function" in $L^p(X; B)$ rather than an equivalence class (which would be more technically correct).
Applications
Bochner spaces are often used in the functional analysis approach to the study of partial differential equations that depend on time, e.g. the heat equation: if the temperature $g(t, x)$ is a scalar function of time and space, one can write $f(t) := g(t, \cdot)$ to make $f$ a family (parametrized by time) of functions of space, possibly in some Bochner space.
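For instance (a standard weak-formulation setting given here as an illustration, with $\Omega$ a bounded domain and $T$ a final time, not details taken from this article), a weak solution of the heat equation is typically sought with

```latex
u \in L^2\!\left(0, T;\, H^1_0(\Omega)\right),
\qquad
\frac{\partial u}{\partial t} \in L^2\!\left(0, T;\, H^{-1}(\Omega)\right).
```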
Application to PDE theory
Very often, the space $X$ is an interval of time over which we wish to solve some partial differential equation, and $\mu$ is one-dimensional Lebesgue measure. The idea is to regard a function of time and space as a collection of functions of space, this collection being parametrized by time. For example, in the solution of the heat equation on a region in
https://en.wikipedia.org/wiki/Neurofibromin
Neurofibromin can refer to one of two different proteins:
Neurofibromin 1
Neurofibromin 2
See also
Neurofibromatosis type I
Neurofibromatosis type II
https://en.wikipedia.org/wiki/Underwater%20vision
Underwater vision is the ability to see objects underwater, and this is significantly affected by several factors. Underwater, objects are less visible because of lower levels of natural illumination caused by rapid attenuation of light with distance passed through the water. They are also blurred by scattering of light between the object and the viewer, also resulting in lower contrast. These effects vary with the wavelength of the light, and with the color and turbidity of the water. The vertebrate eye is usually optimised either for underwater vision or for vision in air; the human eye is optimised for vision in air. The visual acuity of the air-optimised eye is severely adversely affected by the difference in refractive index between air and water when immersed in direct contact. Provision of an airspace between the cornea and the water can compensate, but has the side effect of scale and distance distortion. The diver learns to compensate for these distortions. Artificial illumination is effective in improving illumination at short range.
Stereoscopic acuity, the ability to judge relative distances of different objects, is considerably reduced underwater, and this is affected by the field of vision. A narrow field of vision caused by a small viewport in a helmet results in greatly reduced stereoacuity, and associated loss of hand-eye coordination. At very short range in clear water distance is underestimated, in accordance with magnification due to refraction through the flat lens of the mask, but at greater distances (beyond arm's reach) the distance tends to be overestimated, to a degree influenced by turbidity. Both relative and absolute depth perception are reduced underwater. Loss of contrast results in overestimation, and magnification effects account for underestimation at short range. Divers can to a large extent adapt to these effects over time and with practice.
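A rough numerical illustration of the flat-mask magnification mentioned above (the refractive indices are standard values; the object distance is hypothetical):

```python
# A flat mask port refracts light at the water/air interface, magnifying
# angular size by roughly n_water / n_air, so objects look larger and closer.
n_water, n_air = 1.33, 1.00
magnification = n_water / n_air           # ~1.33: about 33% larger
true_distance = 2.0                       # metres (hypothetical object)
apparent_distance = true_distance / magnification
print(f"{magnification:.2f}x larger, appears at {apparent_distance:.2f} m")  # ~1.50 m
```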
Light rays bend when they travel from one medium to another; the amount of bending is determined by the refractive in
https://en.wikipedia.org/wiki/Memory%20dependence%20prediction
Memory dependence prediction is a technique, employed by high-performance out-of-order execution microprocessors that execute memory access operations (loads and stores) out of program order, to predict true dependencies between loads and stores at instruction execution time. With the predicted dependence information, the processor can then decide to speculatively execute certain loads and stores out of order, while preventing other loads and stores from executing out-of-order (keeping them in-order). Later in the pipeline, memory disambiguation techniques are used to determine if the loads and stores were correctly executed and, if not, to recover.
By using the memory dependence predictor to keep most dependent loads and stores in order, the processor gains the benefits of aggressive out-of-order load/store execution but avoids many of the memory dependence violations that occur when loads and stores were incorrectly executed. This increases performance because it reduces the number of pipeline flushes that are required to recover from these memory dependence violations. See the memory disambiguation article for more information on memory dependencies, memory dependence violations, and recovery.
In general, memory dependence prediction predicts whether two memory operations are dependent, that is, whether they interact by accessing the same memory location. Besides using store-to-load (RAW, or true) memory dependence prediction for the out-of-order scheduling of loads and stores, other applications of memory dependence prediction have been proposed in the literature.
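As a toy sketch of the idea (a deliberately simplified model, not the mechanism of any particular processor): remember load/store pairs that have caused a violation before, and predict a dependence only for those.

```python
# Toy dependence predictor: first encounters are speculated independent;
# pairs that caused a memory dependence violation are learned and kept in order.
class ToyDependencePredictor:
    def __init__(self):
        self.violators = set()  # (load_pc, store_pc) pairs seen to conflict

    def predict_dependent(self, load_pc, store_pc):
        # Predict "dependent" only for pairs that violated before.
        return (load_pc, store_pc) in self.violators

    def report_violation(self, load_pc, store_pc):
        # Disambiguation found the load read a stale value: learn the pair.
        self.violators.add((load_pc, store_pc))

pred = ToyDependencePredictor()
print(pred.predict_dependent(0x40, 0x30))  # False -> execute the load early
pred.report_violation(0x40, 0x30)          # pipeline flush, learn the pair
print(pred.predict_dependent(0x40, 0x30))  # True  -> keep the load in order
```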
Memory dependence prediction is an optimization on top of memory dependency speculation. Sequential execution semantics imply that stores and loads appear to execute in the order specified by the program. However, as with out-of-order execution of other instructions, it may be possible to execute two memory operations in a different order from that implied by the program. This is possible when the two oper
https://en.wikipedia.org/wiki/Gravitational%20mirage
A gravitational mirage or cosmic mirage is an optical phenomenon affecting the appearance of a distant star or galaxy, seen only through a telescope. It can take the form of a ring or rings partially or completely surrounding the object, a duplicate image adjacent to the object, or multiple duplicate images surrounding the object. Sometimes the direct view of the original object itself is dimmed or absent.
The illusion is caused by a gravitational lens, in space between the object and the observer's telescope, which bends light as it travels. The effect is analogous to the atmospheric mirage, which has been observed since antiquity, in circumstances where the air temperature varies strongly with height over the ground or sea; the rapidly changing refractive index bends light, producing inverted and/or multiple images "floating" in the air.
Ring-shaped gravitational mirages are referred to as Einstein rings, and one multiple-image gravitational mirage is named the Einstein Cross, in tribute to Einstein's prediction of gravitational lensing.
History
Gravitational bending of light was predicted by Einstein's general relativity in 1916, and first observed by astronomers in 1919 during a total solar eclipse. Accurate measurements of stars seen in the dark sky near the eclipsed Sun indicated a displacement in the direction opposite to the Sun, about as much as predicted by Einstein's theory. The effect is due to the gravitational deflection of light as it passes near the Sun on its way to Earth. This was a direct confirmation of an entirely new phenomenon and it represented a milestone in physics.
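The size of the 1919 effect can be checked with a back-of-envelope calculation (standard physical constants, not figures from this article): general relativity predicts a deflection of 4GM/(c²R) for a light ray grazing the Sun, about 1.75 arcseconds.

```python
import math

G = 6.674e-11       # gravitational constant, m^3 kg^-1 s^-2
M_sun = 1.989e30    # solar mass, kg
c = 2.998e8         # speed of light, m/s
R_sun = 6.957e8     # solar radius, m (closest approach of a grazing ray)

deflection = 4 * G * M_sun / (c**2 * R_sun)  # radians
print(math.degrees(deflection) * 3600)       # ~1.75 arcseconds
```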
The possibility of gravitational lensing was suggested in 1924 and clarified by Albert Einstein in 1936. In 1937, Swiss astronomer Fritz Zwicky (1898–1974), working at the Mount Wilson Observatory in California, realized that galaxies and galaxy clusters far out in space may be sufficiently compact and massive to observably bend the light from even more
https://en.wikipedia.org/wiki/Recognition%20sequence
A recognition sequence is a DNA sequence to which a structural motif of a DNA-binding domain exhibits binding specificity. Recognition sequences of restriction enzymes are typically palindromic.
The transcription factor Sp1 for example, binds the sequences 5'-(G/T)GGGCGG(G/A)(G/A)(C/T)-3', where (G/T) indicates that the domain will bind a guanine or thymine at this position.
The restriction endonuclease PstI recognizes, binds, and cleaves the sequence 5'-CTGCAG-3'.
A recognition sequence is different from a recognition site. A given recognition sequence can occur one or more times, or not at all, on a specific DNA fragment; a recognition site is specified by the position of the site. For example, there are two PstI recognition sites in the following DNA sequence fragment, starting at bases 9 and 31 respectively. A recognition sequence is a specific sequence, usually very short (less than 10 bases). Depending on the degree of specificity of the protein, a DNA-binding protein can bind to more than one specific sequence. For PstI, which has a single sequence specificity, it is 5'-CTGCAG-3'; it is always the same, whether at the first recognition site or the second in the following example sequence. For Sp1, which as shown above can bind multiple (16) sequences, the two recognition sites in the following example fragment start at bases 17 and 41, and their respective recognition sequences are 5'-GGGGCGGAGC-3' and 5'-TGGGCGGAAC-3'.
5'-AACGTTAGCTGCAGTCGGGGCGGAGCTAGGCTGCAGGAATTGGGCGGAACCT-3'
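A short Python sketch (illustrative only, using the example fragment above) that locates the PstI sites, reporting 1-based positions as in the text:

```python
# Locate PstI recognition sites (5'-CTGCAG-3') in the example fragment.
seq = "AACGTTAGCTGCAGTCGGGGCGGAGCTAGGCTGCAGGAATTGGGCGGAACCT"
site, start = "CTGCAG", 0
while (hit := seq.find(site, start)) != -1:
    print(hit + 1)   # 1-based positions: prints 9, then 31
    start = hit + 1  # continue the search past this match
```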
See also
DNA-binding domain
Transcription factor#Classes, for more examples
https://en.wikipedia.org/wiki/Piscivore
A piscivore is a carnivorous animal that primarily eats fish. The name piscivore is derived from the Latin piscis ("fish") and vorare ("to devour"). Piscivore is equivalent to the Greek-derived word ichthyophage, both of which mean "fish eater". Fish were the diet of early tetrapod evolution (via water-bound amphibians during the Devonian period); insectivory came next; then in time, the more terrestrially adapted reptiles and synapsids evolved herbivory.
Almost all predatory fishes (most sharks, tuna, billfishes, pikes, etc.) are obligate piscivores. Some non-piscine aquatic animals, such as whales, sea lions, and crocodilians, are not completely piscivorous, often also preying on invertebrates, marine mammals, waterbirds and even wading land animals in addition to fish, while others, such as the bulldog bat and gharial, are strictly dependent on fish for food. Some creatures, including cnidarians, octopuses, squid, spiders, cetaceans, grizzly bears, jaguars, wolves, snakes, turtles and sea gulls, may have fish as significant if not dominant portions of their diets. Humans can live on fish-based diets, as can their carnivorous domesticated pets such as dogs and cats.
The ecological effects of piscivores can extend to other food chains. In a study of cutthroat trout stocking, researchers found that the addition of this piscivore can have noticeable effects on non-aquatic organisms, in this case on bats feeding on insects emerging from the water into which the trout were stocked.
Another study done on lionfish removal to maintain low densities used piscivore densities as a biological indicator for coral reef success.
There exist classifications of primary and secondary piscivores. Primary piscivores, also known as "specialists", shift to this habit in the first few months of their lives. Secondary piscivores will move to eating primarily fish later in their lifetime. It is hypothesized that the secondary piscivores' diet change is due to an adaptation to maintain efficiency in their use of energy while growing.
Examples of extant pis
https://en.wikipedia.org/wiki/Landolt%E2%80%93B%C3%B6rnstein
Landolt–Börnstein is a collection of property data in materials science and the closely related fields of chemistry, physics and engineering published by Springer Nature.
History
On July 28, 1882, Dr. Hans Heinrich Landolt and Dr. Richard Börnstein, both professors at the "Landwirtschaftliche Hochschule" (Agricultural College) in Berlin, signed a contract with the publisher Ferdinand Springer for the publication of a collection of tables of physical-chemical data. The title of this book, "Physikalisch-chemische Tabellen" (Physical-Chemical Tables), published in 1883, was soon forgotten: owing to its success, the data collection has been known to scientists for more than a hundred years simply as "the Landolt–Börnstein".
1250 copies of the 1st Edition were printed and sold. In 1894, the 2nd Edition was published, in 1905 the 3rd Edition, in 1912 the 4th Edition, and finally in 1923 the 5th Edition. Supplementary volumes of the latter were printed until as late as 1936. New Editions saw changes in large expansion of volumes, number of authors, updated structure, additional tables and coverage of new areas of physics and chemistry.
The 5th Edition was eventually published in 1923, consisting of two volumes and comprising a total of 1,695 pages; sixty-three authors had contributed to it. The growth that had already been noticeable in previous editions continued, and it was clear that "another edition in approximately 10 years" was no solution: a complete conceptual change of the Landolt–Börnstein had become necessary. In the meantime, supplementary volumes at two-year intervals were to be provided to fill in the blanks and add the latest data. The first supplementary volume of the 5th Edition was published in 1927, the second in 1931 and the third in 1935/36; the latter consisted of three sub-volumes with a total of 3,039 pages and contributions from 82 authors.
The 6th Edition (1950) was published in line with the revised general frame. The basic idea was to have fou
https://en.wikipedia.org/wiki/Carnegie%20collection
The Carnegie Collection was a series of scale-model replicas of dinosaurs and other extinct prehistoric creatures, based on fossils featured at the Carnegie Museum of Natural History. They were produced by Florida-based company Safari Ltd., known for their hand-painted replicas, from 1988 to 2015, and became known as "the world's premier line of scale model dinosaur figures."
Description
65 models representing 53 species of dinosaurs and other prehistoric animals were produced for the line. Each of the models was hand-painted, ensuring that no two copies of the same model are identical. Each model was sculpted by artist Forest Rogers and authenticated by paleontologists associated with the Carnegie Museum, such as Matt Lamanna, as well as various species-specific experts. Most of the animals were sculpted at a 1:40 scale (where one inch on the model represents 40 inches on the real creature), although some models representing smaller creatures were produced at a larger scale. Models in the collection range greatly in size from 24 inches long (original Diplodocus) to only three inches long (original Dimetrodon) with all shapes and sizes represented in-between. On the underside of each model is information detailing its name, year of initial production, and copyright information.
The models feature an informational hang tag providing scientific details about the animal represented by the replica. In some cases, the dinosaurs were packaged in cardboard display boxes, in which case a small booklet featuring information on each dinosaur featured in the collection was included in lieu of the hang tags. In some instances, two or three models would be packaged together in a box. Examples include Dimetrodon and Deinonychus, Protoceratops and Euoplocephalus, Apatosaurus and Apatosaurus Baby, Elasmosaurus and Mosasaurus, and Australopithecus Male/Female pair and Smilodon. The boxes are not often seen today, and most of the time the dinosaurs are found fre
https://en.wikipedia.org/wiki/Space-based%20solar%20power
Space-based solar power (SBSP, SSP) is the concept of collecting solar power in outer space with solar power satellites (SPS) and distributing it to Earth. Its advantages include a higher collection of energy due to the lack of reflection and absorption by the atmosphere, the possibility of little or no nighttime interruption, and a better ability to orient to face the Sun. Space-based solar power systems convert sunlight to some other form of energy (such as microwaves) which can be transmitted through the atmosphere to receivers on the Earth's surface.
Various SBSP proposals have been researched since the early 1970s, but none is economically viable with present-day space launch costs. Some technologists speculate that this may change in the distant future with space manufacturing from asteroids or lunar material, or with radical new space launch technologies other than rocketry.
Besides cost, SBSP also introduces several technological hurdles, including the problem of transmitting energy from orbit. Since wires extending from Earth's surface to an orbiting satellite are not feasible with current technology, SBSP designs generally include the wireless power transmission with its concomitant conversion inefficiencies, as well as land use concerns for antenna stations to receive the energy at Earth's surface. The collecting satellite would convert solar energy into electrical energy, power a microwave transmitter or laser emitter, and transmit this energy to a collector (or microwave rectenna) on Earth's surface. Contrary to appearances in fiction, most designs propose beam energy densities that are not harmful if human beings were to be inadvertently exposed, such as if a transmitting satellite's beam were to wander off-course. But the necessarily vast size of the receiving antennas would still require large blocks of land near the end users. The service life of space-based collectors in the face of long-term exposure to the space environment, including degradation from radiation
https://en.wikipedia.org/wiki/Cervarix
Cervarix is a vaccine against certain types of cancer-causing human papillomavirus (HPV).
Cervarix is designed to prevent infection from HPV types 16 and 18, which together cause about 70% of cervical cancer cases. These types also cause most HPV-induced genital and head and neck cancers. Additionally, some cross-reactive protection against virus strains 45 and 31 was shown in clinical trials. Cervarix also contains AS04, a proprietary adjuvant that has been found to boost the immune system response for a longer period of time.
Cervarix is manufactured by GlaxoSmithKline. An alternative product, from Merck & Co., is known as Gardasil. Cervarix was voluntarily withdrawn from the market in the United States in 2016 due to low demand.
Medical uses
HPV is a virus, usually transmitted sexually, which can cause cervical cancer in a small percentage of the women genitally infected. Cervarix is a preventative HPV vaccine, not a therapeutic one. HPV immunity is type-specific, so a successful series of Cervarix shots will not block infection from cervical cancer-causing HPV types other than types 16 and 18 and some related types, so experts continue to recommend routine cervical Pap smears even for women who have been vaccinated. Vaccination alone, without continued screening, would prevent fewer cervical cancers than regular screening alone.
Cervarix is indicated for the prevention of the following diseases caused by oncogenic HPV types 16 and 18: cervical cancer, cervical intraepithelial neoplasia (CIN) grade 2 or worse and adenocarcinoma in situ, and CIN grade 1. In the United States, Cervarix is approved for use in females 10 through 25 years of age while in some other countries the age limit is at least 45.
In follow-up studies, Cervarix was shown to remain effective 7.3 years after vaccination.
Administration
Immunization with Cervarix consists of 3 doses of 0.5-mL each, by intramuscular injection according to the following schedule: 0, 1, and 6 months. The preferred site of administration is the deltoid regi
https://en.wikipedia.org/wiki/Straight%20skeleton
In geometry, a straight skeleton is a method of representing a polygon by a topological skeleton. It is similar in some ways to the medial axis but differs in that the skeleton is composed of straight line segments, while the medial axis of a polygon may involve parabolic curves. However, both are homotopy-equivalent to the underlying polygon.
Straight skeletons were first defined for simple polygons by Aichholzer, Aurenhammer, Alberts, and Gärtner in 1995, and generalized to planar straight-line graphs (PSLGs) by Aichholzer and Aurenhammer in 1996.
In their interpretation as the projection of roof surfaces, they were already extensively discussed in 19th-century work on roof construction by Peschka (1877).
Definition
The straight skeleton of a polygon is defined by a continuous shrinking process in which the edges of the polygon are moved inwards parallel to themselves at a constant speed. As the edges move in this way, the vertices where pairs of edges meet also move, at speeds that depend on the angle of the vertex. If one of these moving vertices collides with a nonadjacent edge, the polygon is split in two by the collision, and the process continues in each part. The straight skeleton is the set of curves traced out by the moving vertices in this process.
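One detail of the shrinking process can be made concrete (a standard property of polygon offsetting, stated here as an illustration rather than quoted from this article): when the edges move inward at unit speed, a vertex with interior angle θ moves along its angle bisector at speed 1/sin(θ/2), which grows without bound as the angle approaches 0 or 2π.

```python
import math

def vertex_speed(interior_angle):
    """Speed of an offset vertex when the incident edges move inward at unit speed."""
    return 1.0 / math.sin(interior_angle / 2.0)

print(vertex_speed(math.pi / 2))  # right-angle vertex: ~1.414
print(vertex_speed(math.pi))      # flat vertex: 1.0 (moves with the edge)
```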
Algorithms
The straight skeleton may be computed by simulating the shrinking process by which it is defined; a number of variant algorithms for computing it have been proposed, differing in the assumptions they make on the input and in the data structures they use for detecting combinatorial changes in the input polygon as it shrinks.
The following algorithms consider an input that forms a polygon, a polygon with holes, or a PSLG. For a polygonal input we denote the number of vertices by n and the number of reflex (concave, i.e., interior angle greater than π) vertices by r. If the input is a PSLG then we consider the initial wavefront structure, which forms a set of polygons, and again denote by n the number of vertices and by r the number
https://en.wikipedia.org/wiki/Mixmath
mi×ma+h (or Mixmath) is a Canadian board game developed by Wrebbit and published in 1987. It resembles a variant of Scrabble in that tiles are placed on a crossword-style grid, with special premiums such as squares that double or triple the value of a tile and a 50-point bonus for playing all seven tiles on the player's rack in one turn. Unlike Scrabble, Mixmath uses numbered tiles to generate short equations using simple arithmetic. Wrebbit, maker of Puzz-3D jigsaw puzzles, has since been taken over by Hasbro, and it appears that Mixmath has been discontinued.
Gameplay
Mixmath is designed for 2 to 4 players in games lasting roughly 60 minutes. Players should be comfortable with the basic operations of addition, subtraction, multiplication, and division.
Contents
The gameboard is a 14×14 grid. The central four squares are orange and contain the numbers 1 (upper left), 2 (upper right), 3 (lower left), and 4 (lower right); these squares cannot have tiles played upon them, and are the basis for beginning the game. There are special blue squares scattered throughout the board which contain an arithmetic symbol (a plus, minus, multiplication, or division sign), as well as premium squares labelled 2x (green) and 3x (red).
There are 108 tiles included with the game, 2 of which are blank and can be used as replacements. Of the 106 playable tiles, there are seven each of the numbers 1 through 10, one 0, one each of the numbers 11 through 20, and one each of every number between 20 and 99 that can be represented as a product of two numbers between 1 and 10 (for example, 21, 24, 25).
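The tile counts can be checked with a few lines of Python (a sketch based only on the distribution described above):

```python
# Enumerate the "product" tiles: numbers strictly between 20 and 99 that are
# a product of two numbers between 1 and 10.
products = sorted({a * b for a in range(1, 11)
                         for b in range(1, 11) if 20 < a * b < 99})
print(products[:3])  # [21, 24, 25] -- the examples given above

# 7 each of 1-10, one 0, one each of 11-20, plus one of each product tile.
total = 7 * 10 + 1 + 10 + len(products)
print(total)         # 106 playable tiles, matching the text
```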
There are also four racks that can fit seven tiles, and a pouch from which to draw tiles.
Starting the game
After all the tiles are placed in the pouch, each player draws a single tile. The player with the highest number goes first. Each player places their drawn tile on their rack, and draws six more, for a total of seven.
The first move by the first player must use two of
https://en.wikipedia.org/wiki/Saint%20Petersburg%20Lyceum%20239
Presidential Physics and Mathematics Lyceum No. 239 is a public high school in Saint Petersburg, Russia, that specializes in mathematics and physics. The school opened in 1918 and became a specialized city school in 1961. The school is noted for its strong academic programs. It is the alma mater of numerous winners of International Mathematical Olympiads and has produced many notable alumni. The lyceum was named the best school in Russia in 2015, 2016, and 2017.
History
The school was founded in 1918. Originally, it was located in the Lobanov-Rostovsky Palace, also known as the "house with lions", at the corner of Saint Isaac's Square and Admiralteysky Prospect. It was one of only a handful of schools to remain open during the Siege of Leningrad. In 1961 the school was granted the status of a city school specializing in physics and mathematics. In 1964 the school moved to the building at Kazansky Street 48/1, which was previously occupied by a school for working youth, and in 1966 it moved again, to Moika Embankment 108. Finally, in 1975 the school relocated to its current location, the historic Annenschule building.
In 1990, the Russian Ministry of Education granted the school the status of a physics and mathematics lyceum and an experimental laboratory for standards of physics, mathematics and informatics education in Saint Petersburg. In 1994, the school won a George Soros grant. The US Mathematical Society rated the school among the top ten schools of the former Soviet Union. On 1 January 2014, the school received the status of "Presidential Physics and Mathematics Lyceum No. 239".
Famous alumni
Yelena Bonner (1940) – human rights activist (widow of Andrei Sakharov)
Leonid Kharitonov – actor
Alisa Freindlich (c. 1942 – 1953) – major Russian movie and theater actress
Yuri Matiyasevich (1962–1963) – mathematician who solved Hilbert's tenth problem
Andrei Tolubeyev (?–1963) – theatrical and cinema actor, People's Artist of Russia
Natalia Kuchinskaya (1964–1966)
https://en.wikipedia.org/wiki/ST%20Writer
ST Writer is a word processor program for the Atari ST series of personal computers. It was introduced by Atari Corporation in 1985 along with the 520ST. It is a port of Atari's AtariWriter Plus from the earlier 8-bit computer series, matching it closely enough to share files across platforms unchanged. Running on the ST allowed it to display a full 80-column layout, create much larger files, and support additional features.
ST Writer did not use a graphical user interface (GUI). Atari said it was intended as a stop-gap until a GUI word processor was available. When this became available at the end of 1985 in the form of 1st Word, ST Writer stopped being distributed with new machines. By this point, it had garnered a faithful following and Atari released the source code to one of its more vocal advocates. It continued to be supported through multiple major updates until 1992, when it was known as ST Writer Elite.
History
AtariWriter
When Atari began sales of the 8-bit series in late 1979, they released two models, the 400 and 800. The 800 was intended to be sold in professional settings, featuring a full mechanical keyboard and easily expandable memory. Sales of this model were initially slow due to a lack of suitable software and the company's reputation as a games developer. In 1981, Atari introduced Atari Word Processor. This required an 800 with 48 kB and an 810 disk drive, and left memory for about one page of text.
After a year on the market, Atari replaced Word Processor with AtariWriter in 1982. This shipped on a ROM cartridge that allowed it to run on any machine in the Atari lineup. AtariWriter sold an estimated 800,000 copies of the US version, not including sales of the international versions or any of the later disk-based releases. (This means a copy was sold for at least one in five of all 8-bit machines.) An updated version followed; AtariWriter Plus added 80-column typing using horizontal scrolling, a feature of the earlier Word Proc
|
https://en.wikipedia.org/wiki/Bioorganic%20chemistry
|
Bioorganic chemistry is a scientific discipline that combines organic chemistry and biochemistry. It is that branch of life science that deals with the study of biological processes using chemical methods. Protein and enzyme function are examples of these processes.
Sometimes biochemistry is used interchangeably with bioorganic chemistry; the distinction is that bioorganic chemistry is organic chemistry focused on the biological aspects. While biochemistry aims at understanding biological processes using chemistry, bioorganic chemistry attempts to expand organic-chemical research (that is, structures, synthesis, and kinetics) toward biology. When investigating metalloenzymes and cofactors, bioorganic chemistry overlaps bioinorganic chemistry.
Sub disciplines
Biophysical organic chemistry is a term used when attempting to describe intimate details of molecular recognition by bioorganic chemistry.
Natural product chemistry is the process of identifying compounds found in nature to determine their properties. Such discoveries have often led to medicinal uses and to the development of herbicides and insecticides.
|
https://en.wikipedia.org/wiki/Layer%208
|
Layer 8 is a term used to refer to a user or political layer on top of the 7-layer OSI model of computer networking.
The OSI model is a 7-layer abstract model that describes an architecture of data communications for networked computers. The layers build upon each other, allowing for the abstraction of specific functions in each one. The top (7th) layer is the Application Layer describing methods and protocols of software applications. It is then held that the user is the 8th layer.
Layers, defined
According to Bruce Schneier and RSA:
Layer 8: The individual person.
Layer 9: The organization.
Layer 10: Government or legal compliance
Network World readers humorously report:
Layer 8: Money - Provides network corruption by inspiring increased interference from the upper layer.
Layer 9: Politics - Consists of technically ignorant management that negatively impacts network performance and development.
and:
Layer 9: Politics. "Where the most difficult problems live."
Layer 8: The user factor. "It turned out to be another Layer-8 problem."
7 through 1: The usual OSI layers
Layer 0: Funding. "Because we should always start troubleshooting from the lowest layer, and nothing can exist before the funding."
Since the OSI layer numbers are commonly used to discuss networking topics, a troubleshooter may describe an issue caused by a user to be a layer 8 issue, similar to the PEBKAC acronym, the ID-Ten-T Error and also PICNIC.
Political economic theory holds that the 8th layer is important to understanding the OSI Model. Political policies such as network neutrality, spectrum management, and digital inclusion all shape the technologies comprising layers 1–7 of the OSI Model.
An 8th layer has also been used to refer to physical (real-world) controllers: an external hardware device which interacts with an OSI-model network. An example of this is ALI in Profibus.
A network guru T-shirt from the 1980s shows Layer 8 as the "financial" layer, and Layer 9 as
|
https://en.wikipedia.org/wiki/Component%20causes
|
A component cause of a disease is an event required for the disease to develop.
Given a disease or medical condition, there is a causal chain of events from the first event to the appearance of the clinical disease.
A cause of a disease event is an event that preceded the disease event in a disease causal chain. Without this antecedent event the disease event either would not have occurred at all or would not have occurred until some later time. However, no specific event is sufficient by itself to produce disease. Hence such an event is a component of a sufficient cause.
See also
Sufficient cause of the disease.
|
https://en.wikipedia.org/wiki/Probabilistic%20CTL
|
Probabilistic Computation Tree Logic (PCTL) is an extension of computation tree logic (CTL) that allows for probabilistic quantification of described properties. It has been defined in the paper by Hansson and Jonsson.
PCTL is a useful logic for stating soft-deadline properties, e.g. "after a request for a service, there is at least a 98% probability that the service will be carried out within 2 seconds". Like CTL, which is well suited to model checking, the PCTL extension is widely used as a property specification language for probabilistic model checkers.
PCTL syntax
A possible syntax of PCTL can be defined as follows:
$\phi ::= p \mid \neg \phi \mid \phi \lor \phi \mid \phi \land \phi \mid \mathcal{P}_{\sim \lambda}(\phi\, \mathcal{U}\, \phi) \mid \mathcal{P}_{\sim \lambda}(\phi\, \mathcal{U}^{\leq t}\, \phi)$
Therein, $\sim \in \{<, \leq, \geq, >\}$ is a comparison operator and $\lambda \in [0, 1]$ is a probability threshold.
Formulas of PCTL are interpreted over discrete Markov chains. An interpretation structure
is a quadruple $K = (S, s^i, \mathcal{T}, L)$, where
$S$ is a finite set of states,
$s^i \in S$ is an initial state,
$\mathcal{T}$ is a transition probability function, $\mathcal{T} : S \times S \to [0, 1]$, such that for all $s \in S$ we have $\sum_{s' \in S} \mathcal{T}(s, s') = 1$, and
$L$ is a labeling function, $L : S \to 2^{A}$, assigning atomic propositions $a \in A$ to states.
A path $\sigma$ from a state $s_0$ is an infinite sequence of states
$s_0 \to s_1 \to \cdots \to s_n \to \cdots$. The n-th state of the path is denoted as $\sigma[n]$
and the prefix of $\sigma$ of length $n$ is denoted as $\sigma{\uparrow}n$.
Probability measure
A probability measure $\mu_m$ on the set of paths with a common prefix of length $n$ is given by the product of transition probabilities along the prefix of the path:
$\mu_m(\sigma{\uparrow}n) = \mathcal{T}(s_0, s_1) \cdot \ldots \cdot \mathcal{T}(s_{n-1}, s_n)$
For $n = 0$ the probability measure is equal to $\mu_m(\sigma{\uparrow}0) = 1$.
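As a concrete illustration, here is a minimal Python sketch (not from the original article; the two-state chain and function names are hypothetical) of this measure: the measure of the set of paths sharing a common prefix is just the product of the transition probabilities along that prefix.

def prefix_probability(T, prefix):
    # T[s][t] is the transition probability T(s, t); prefix = (s0, ..., sn).
    p = 1.0
    for s, t in zip(prefix, prefix[1:]):
        p *= T[s][t]
    return p  # a prefix of length 0 (a single state) has measure 1

# Hypothetical two-state Markov chain.
T = {"a": {"a": 0.9, "b": 0.1}, "b": {"a": 0.5, "b": 0.5}}
print(prefix_probability(T, ["a", "a", "b"]))  # 0.9 * 0.1 = 0.09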
Satisfaction relation
The satisfaction relation $s \models_K \phi$ is inductively defined as follows:
$s \models_K a$ if and only if $a \in L(s)$,
$s \models_K \neg \phi$ if and only if not $s \models_K \phi$,
$s \models_K \phi_1 \lor \phi_2$ if and only if $s \models_K \phi_1$ or $s \models_K \phi_2$,
$s \models_K \phi_1 \land \phi_2$ if and only if $s \models_K \phi_1$ and $s \models_K \phi_2$,
$s \models_K \mathcal{P}_{\sim \lambda}(\phi_1\, \mathcal{U}\, \phi_2)$ if and only if $\mu_m(\{\sigma \mid \sigma[0] = s \land \sigma \models_K \phi_1\, \mathcal{U}\, \phi_2\}) \sim \lambda$, and
$s \models_K \mathcal{P}_{\sim \lambda}(\phi_1\, \mathcal{U}^{\leq t}\, \phi_2)$ if and only if $\mu_m(\{\sigma \mid \sigma[0] = s \land \sigma \models_K \phi_1\, \mathcal{U}^{\leq t}\, \phi_2\}) \sim \lambda$.
See also
Computation tree logic
Temporal logic
|
https://en.wikipedia.org/wiki/Uniloc
|
Uniloc Corporation is a company founded in Australia in 1992.
History
The Uniloc technology is based on a patent granted to the inventor Ric Richardson who was also the founder of the Uniloc Company. The original patent application was dated late 1992 in Australia and granted in the US in 1996 and covers a technology popularly known as product activation, try and buy software and machine locking.
In 1993 Uniloc distributed "Try and Buy" versions of software for multiple publishers via a marketing agreement with IBM. An initial success was the sale of thousands of copies of a software package (First Aid, developed by Cybermedia) distributed on the front cover of Windows Sources magazine in 1994.
In 1997 a US subsidiary was set up called Uniloc PC Preload to produce preloaded unlockable editions of popular software products on new PCs. Distribution agreements were executed with eMachines and Toshiba. Family PC magazine also produced two months of magazines featuring unlockable software from Uniloc PC Preload on the cover in 2000.
In 2003, Uniloc Corporation set up a US subsidiary called Uniloc USA, which operates out of Rhode Island and Southern California. The company is currently licensing its patented technology to software publishers and entertainment companies including Sega.
Patent lawsuits
Uniloc has sued 73 companies that it alleges have violated one of its copy-protection patents. According to Uniloc, 25 of those companies settled with it out of court. Due to the abstract nature of its patents, and its litigious activities, Uniloc has been deemed a "patent troll" by critics.
Microsoft
Uniloc sued Microsoft in 2003 for violating its patent relating to technology designed to deter software piracy. In 2006, US District Judge William Smith ruled in favour of Microsoft, but an appeals court overturned his decision, saying there was a "genuine issue of
|
https://en.wikipedia.org/wiki/Group%20key
|
In cryptography, a group key is a cryptographic key that is shared between a group of users. Typically, group keys are distributed by sending them to individual users, either physically or encrypted individually for each user using that user's pre-distributed key.
A common use of group keys is to allow a group of users to decrypt a broadcast message that is intended for that entire group of users, and no one else.
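A minimal Python sketch of this pattern (not from the source; it assumes the third-party cryptography package, with Fernet keys standing in for each user's pre-distributed key): the message is encrypted once under the group key, and only the group key is wrapped per user.

from cryptography.fernet import Fernet

# Pre-distributed per-user keys (normally delivered physically or via PKI).
user_keys = {name: Fernet.generate_key() for name in ("alice", "bob", "carol")}

# One group key encrypts the broadcast message a single time...
group_key = Fernet.generate_key()
broadcast = Fernet(group_key).encrypt(b"message for the whole group")

# ...and the group key itself is wrapped individually for each user.
wrapped = {name: Fernet(k).encrypt(group_key) for name, k in user_keys.items()}

# Any group member can recover the group key and read the broadcast.
recovered = Fernet(Fernet(user_keys["bob"]).decrypt(wrapped["bob"]))
print(recovered.decrypt(broadcast))  # b'message for the whole group'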
For example, in the Second World War, group keys (known as "iodoforms", a term invented by a classically educated non-chemist, and nothing to do with the chemical of the same name) were sent to groups of agents by the Special Operations Executive. These group keys allowed all the agents in a particular group to receive a single coded message.
In present-day applications, group keys are commonly used in conditional access systems, where the key is the common key used to decrypt the broadcast signal, and the group in question is the group of all paying subscribers. In this case, the group key is typically distributed to the subscribers' receivers using a combination of a physically distributed secure cryptoprocessor in the form of a smartcard and encrypted over-the-air messages.
|
https://en.wikipedia.org/wiki/Skorokhod%27s%20representation%20theorem
|
In mathematics and statistics, Skorokhod's representation theorem is a result that shows that a weakly convergent sequence of probability measures whose limit measure is sufficiently well-behaved can be represented as the distribution/law of a pointwise convergent sequence of random variables defined on a common probability space. It is named for the Soviet mathematician A. V. Skorokhod.
Statement
Let $(\mu_n)_{n \in \mathbb{N}}$ be a sequence of probability measures on a metric space $S$ such that $\mu_n$ converges weakly to some probability measure $\mu_\infty$ on $S$ as $n \to \infty$. Suppose also that the support of $\mu_\infty$ is separable. Then there exist $S$-valued random variables $X_n$ defined on a common probability space $(\Omega, \mathcal{F}, \mathbf{P})$ such that the law of $X_n$ is $\mu_n$ for all $n$ (including $n = \infty$) and such that $X_n$ converges to $X_\infty$, $\mathbf{P}$-almost surely.
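For probability measures on the real line, the standard construction takes $X_n = F_n^{-1}(U)$ for a single uniform random variable $U$, where $F_n$ is the distribution function of $\mu_n$. A minimal numerical sketch of this (not from the source; it assumes NumPy and SciPy, with $\mu_n = N(1/n, 1)$ converging weakly to $N(0, 1)$):

import numpy as np
from scipy.stats import norm

rng = np.random.default_rng(0)
U = rng.uniform()  # one sample point of the common probability space

# X_n = F_n^{-1}(U): the law of X_n is mu_n by the inverse-CDF construction.
X = [norm.ppf(U, loc=1.0 / n) for n in range(1, 200)]
X_inf = norm.ppf(U, loc=0.0)

print(abs(X[-1] - X_inf))  # X_n -> X_inf pointwise (here, for this sample point)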
See also
Convergence in distribution
|
https://en.wikipedia.org/wiki/Gabor%20atom
|
In applied mathematics, Gabor atoms, or Gabor functions, are functions used in the analysis proposed by Dennis Gabor in 1946 in which a family of functions is built from translations and modulations of a generating function.
Overview
In 1946, Dennis Gabor suggested the idea of using a granular system to produce sound. In his work, Gabor discussed the problems with Fourier analysis. Although he found the mathematics to be correct, it did not reflect the behaviour of sound in the world, because sounds, such as the sound of a siren, have variable frequencies over time. Another problem was the underlying supposition, as we use sine waves analysis, that the signal under concern has infinite duration even though sounds in real life have limited duration – see time–frequency analysis. Gabor applied ideas from quantum physics to sound, allowing an analogy between sound and quanta. He proposed a mathematical method to reduce Fourier analysis into cells. His research aimed at the information transmission through communication channels. Gabor saw in his atoms a possibility to transmit the same information but using less data. Instead of transmitting the signal itself it would be possible to transmit only the coefficients which represent the same signal using his atoms.
Mathematical definition
The Gabor function is defined by
$g_{\ell, n}(x) = g(x - a\ell)\, e^{2\pi i b n x}, \qquad \ell, n \in \mathbb{Z},$
where $a$ and $b$ are constants and $g$ is a fixed function in $L^2(\mathbb{R})$, such that $\|g\| = 1$. Depending on $a$, $b$, and $g$, a Gabor system may be a basis for $L^2(\mathbb{R})$, which is defined by translations and modulations. This is similar to a wavelet system, which may form a basis through dilating and translating a mother wavelet.
When one takes $g$ to be the Gaussian window
$g(x) = e^{-\pi x^2},$
one gets the kernel of the Gabor transform.
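A minimal NumPy sketch (not from the source; the sampling grid, the Gaussian window, and its normalization are illustrative) of building one atom of such a family by translating and modulating a generating window:

import numpy as np

def gabor_atom(x, l, n, a=1.0, b=1.0):
    window = np.exp(-np.pi * (x - a * l) ** 2)      # translated Gaussian window g
    return window * np.exp(2j * np.pi * b * n * x)  # modulation by frequency b*n

x = np.linspace(-5.0, 5.0, 1001)
atom = gabor_atom(x, l=1, n=2)
dx = x[1] - x[0]
print(np.sum(np.abs(atom) ** 2) * dx)  # energy of the sampled atom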
See also
Gabor filter
Gabor wavelet
Fourier analysis
Wavelet
Morlet wavelet
|
https://en.wikipedia.org/wiki/Error%20floor
|
The error floor is a phenomenon encountered in modern iterated sparse graph-based error correcting codes like LDPC codes and turbo codes. When the bit error ratio (BER) is plotted for conventional codes like Reed–Solomon codes under algebraic decoding or for convolutional codes under Viterbi decoding, the BER steadily decreases in the form of a curve as the SNR condition becomes better. For LDPC codes and turbo codes there is a point after which the curve does not fall as quickly as before, in other words, there is a region in which performance flattens. This region is called the error floor region. The region just before the sudden drop in performance is called the waterfall region.
Error floors are usually attributed to low-weight codewords (in the case of Turbo codes) and trapping sets or near-codewords (in the case of LDPC codes).
|
https://en.wikipedia.org/wiki/Undulipodium
|
An undulipodium or undulopodium (Greek: "swinging foot"; plural undulipodia), or a 9+2 organelle, is a motile filamentous extracellular projection of eukaryotic cells. It is essentially synonymous with flagella and cilia, which are differing terms for similar molecular structures used on different types of cells and usually corresponding to different waveforms.
The name was coined to differentiate from the analogous structures present in prokaryotic cells.
The usage of the term was early supported by Lynn Margulis, especially in support of endosymbiotic theory. The eukaryotic cilia are structurally identical to eukaryotic flagella, although distinctions are sometimes made according to function and/or length. The Gene Ontology database does not make a distinction between the two, referring to most undulipodia as "motile cilium", and to that in the sperm as sperm flagellum.
Usage
In the 1980s, biologists such as Margulis advocated the use of the name "undulipodium", because of the apparent structural and functional differences between the cilia and flagella of prokaryotic and eukaryotic cells. They argued that the name "flagellum" should be restricted to prokaryotic organelles, such as bacterial flagella and spirochaete axial filaments. A distinction was drawn between "primary" cilia which function as sensory antennae, and ordinary cilia: it was argued that these were not undulipodia as they lacked a rotary movement mechanism.
However, the term "undulipodium" is not generally endorsed by biologists, who argue that the original purpose of the name does not sufficiently differentiate the cilia and flagella of eukaryotic from those of prokaryotic cells. For example, the early concept was the trivial homology of the flagella of flagellates and the pseudopodia of amoebae. The consensus terminology is to use "cilium" and "flagellum" for all purposes.
|
https://en.wikipedia.org/wiki/Sysjail
|
sysjail is a defunct user-land virtualiser for systems supporting the systrace library - as of version 1.0 limited to OpenBSD, NetBSD and MirOS. Its original design was inspired by FreeBSD jail, a similar utility (although part of the kernel) for FreeBSD. sysjail was developed and released in 2006 by Kristaps Dzonsons (aka Johnson), a research assistant in game theory at the Stockholm School of Economics, and Maikls Deksters.
sysjail was re-written from scratch in 2007 to support emulated processes in jails, limited (initially) to Linux emulation.
The project was officially discontinued on 3 March 2009 due to flaws inherent to syscall wrapper-based security architectures. The restrictions of sysjail could be evaded by exploiting race conditions between the wrapper's security checks and kernel's execution of the syscalls.
|
https://en.wikipedia.org/wiki/Generalized%20algebraic%20data%20type
|
In functional programming, a generalized algebraic data type (GADT, also first-class phantom type, guarded recursive datatype, or equality-qualified type) is a generalization of parametric algebraic data types.
Overview
In a GADT, the product constructors (called data constructors in Haskell) can provide an explicit instantiation of the ADT as the type instantiation of their return value. This allows defining functions with a more advanced type behaviour. For a data constructor of Haskell 2010, the return value has the type instantiation implied by the instantiation of the ADT parameters at the constructor's application.
-- A parametric ADT that is not a GADT
data List a = Nil | Cons a (List a)
integers :: List Int
integers = Cons 12 (Cons 107 Nil)
strings :: List String
strings = Cons "boat" (Cons "dock" Nil)
-- A GADT
data Expr a where
    EBool  :: Bool     -> Expr Bool
    EInt   :: Int      -> Expr Int
    EEqual :: Expr Int -> Expr Int -> Expr Bool

eval :: Expr a -> a
eval e = case e of
    EBool a    -> a
    EInt a     -> a
    EEqual a b -> (eval a) == (eval b)
expr1 :: Expr Bool
expr1 = EEqual (EInt 2) (EInt 3)
ret = eval expr1 -- False
They are currently implemented in the GHC compiler as a non-standard extension, used by, among others, Pugs and Darcs. OCaml supports GADT natively since version 4.00.
The GHC implementation provides support for existentially quantified type parameters and for local constraints.
History
An early version of generalized algebraic data types was described by and based on pattern matching in ALF.
Generalized algebraic data types were introduced independently by and prior by as extensions to ML's and Haskell's algebraic data types. Both are essentially equivalent to each other. They are similar to the inductive families of data types (or inductive datatypes) found in Coq's Calculus of Inductive Constructions and other dependently typed languages, modulo the dependent types and except that the latter have an ad
|
https://en.wikipedia.org/wiki/SREC%20%28file%20format%29
|
Motorola S-record is a file format, created by Motorola in the mid-1970s, that conveys binary information as hex values in ASCII text form. This file format may also be known as SRECORD, SREC, S19, S28, S37. It is commonly used for programming flash memory in microcontrollers, EPROMs, EEPROMs, and other types of programmable logic devices. In a typical application, a compiler or assembler converts a program's source code (such as C or assembly language) to machine code and outputs it into a HEX file. The HEX file is then imported by a programmer to "burn" the machine code into non-volatile memory, or is transferred to the target system for loading and execution.
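As an illustration of the format, here is a minimal Python sketch (not from the source; the address and payload are hypothetical, and the checksum rule used here, the ones' complement of the low byte of the sum of the count, address, and data bytes, is the commonly documented one):

def s1_record(address, data):
    # S1 record: 16-bit address plus data bytes, preceded by a byte count
    # and followed by a one-byte checksum.
    count = len(data) + 3  # two address bytes + data bytes + checksum byte
    body = bytes([count, (address >> 8) & 0xFF, address & 0xFF]) + bytes(data)
    checksum = 0xFF - (sum(body) & 0xFF)
    return "S1" + (body + bytes([checksum])).hex().upper()

print(s1_record(0x0038, b"Hello"))  # S108003848656C6C6FCB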
Overview
History
The S-record format was created in the mid-1970s for the Motorola 6800 processor. Software development tools for that and other embedded processors would make executable code and data in the S-record format. PROM programmers would then read the S-record format and "burn" the data into the PROMs or EPROMs used in the embedded system.
Other hex formats
There are other ASCII encodings with a similar purpose. BPNF, BHLF, and B10F were early binary formats, but they are neither compact nor flexible. Hexadecimal formats are more compact because they represent 4 bits rather than 1 bit per character. Many, such as S-record, are more flexible because they include address information so they can specify just a portion of a PROM. The Intel HEX format was often used with Intel processors. TekHex is another hex format that can include a symbol table for debugging.
Format
Record structure
An SREC format file consists of a series of ASCII text records. The records have the following structure from left to right:
Record start - each record begins with an uppercase letter "S" character (ASCII 0x53) which stands for "Start-of-Record".
Record type - single numeric digit "0" to "9" character (ASCII 0x30 to 0x39), defining the type of record. See table below.
Byte count - two hex digits ("00" to "FF"), ind
|
https://en.wikipedia.org/wiki/Letter%20frequency
|
Letter frequency is the number of times letters of the alphabet appear on average in written language. Letter frequency analysis dates back to the Arab mathematician Al-Kindi (–873 AD), who formally developed the method to break ciphers. Letter frequency analysis gained importance in Europe with the development of movable type in 1450 AD, where one must estimate the amount of type required for each letterform. Linguists use letter frequency analysis as a rudimentary technique for language identification, where it is particularly effective as an indication of whether an unknown writing system is alphabetic, syllabic, or ideographic.
The use of letter frequencies and frequency analysis plays a fundamental role in cryptograms and several word puzzle games, including Hangman, Scrabble, Wordle and the television game show Wheel of Fortune. One of the earliest descriptions in classical literature of applying the knowledge of English letter frequency to solving a cryptogram is found in Edgar Allan Poe's famous story The Gold-Bug, where the method is successfully applied to decipher a message giving the location of a treasure hidden by Captain Kidd.
Herbert S. Zim, in his classic introductory cryptography text "Codes and Secret Writing", gives the English letter frequency sequence as "ETAON RISHD LFCMU GYPWB VKJXZQ", the most common letter pairs as "TH HE AN RE ER IN ON AT ND ST ES EN OF TE ED OR TI HI AS TO", and the most common doubled letters as "LL EE SS OO TT FF RR NN PP CC". Different ways of counting can produce somewhat different orders.
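Counting letter frequencies in a given text is straightforward; a minimal Python sketch (not from the source, using a toy sample rather than a real corpus):

from collections import Counter

text = "The quick brown fox jumps over the lazy dog"
counts = Counter(c for c in text.lower() if c.isalpha())
for letter, n in counts.most_common(5):
    print(letter, n)  # the order varies with the corpus and counting rules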
Letter frequencies also have a strong effect on the design of some keyboard layouts. The most frequent letters are placed on the home row of the Blickensderfer typewriter, the Dvorak keyboard layout, Colemak and other optimized layouts.
Background
The frequency of letters in text has been studied for use in cryptanalysis, and frequency analysis in particular, dating back to the Arab mathematician Al-Kindi (c. 801–873 AD), who
|
https://en.wikipedia.org/wiki/San%20Diego%20State%20Aztecs
|
The San Diego State Aztecs are the intercollegiate athletic teams that represent San Diego State University (SDSU). San Diego State sponsors six men's and eleven women's sports at the varsity level.
The Aztecs compete in NCAA Division I (FBS for football). The program's primary conference is the Mountain West Conference, though the men's soccer team competes in the Pac-12 Conference, women's water polo competes in the Golden Coast Conference, and women's lacrosse competes as an independent. On May 31, 2022, it was announced that women's lacrosse had received and accepted an invitation to join the Pac-12 Conference no later than the 2024 season (2023–24 school year).
News reports (especially on local radio) often mention "Montezuma Mesa" or "news from the mesa" when discussing San Diego State-related sports events. The San Diego State campus is known as "Montezuma Mesa", as the university is situated on a mesa overlooking Mission Valley and is located at the intersection of Montezuma Road and College Avenue in the city of San Diego.
Sports sponsored
Men's varsity sports
Baseball
Head Coach: Mark Martinez
Stadium: Tony Gwynn Stadium
Conference regular season championships: 5 (1986 • 1988 • 1990 • 2002 • 2004)
Conference tournament championships: 8 (1990 • 1991 • 2000 • 2013 • 2014 • 2015 • 2017 • 2018)
NCAA Division I Baseball Championship appearances: 14 (1979 • 1981 • 1982 • 1983 • 1984 • 1986 • 1990 • 1991 • 2009 • 2013 • 2014 • 2015 • 2017 • 2018)
See: San Diego State baseball and College baseball
Basketball
Head Coach: Brian Dutcher
Arena: Viejas Arena
Conference regular season championships: 24 (1923 • 1925 • 1932 • 1934 • 1937 • 1939 • 1941 • 1942 • 1954 • 1957 • 1958 • 1967 • 1968 • 1977 • 1978 • 2006 • 2011 • 2012 • 2014 • 2015 • 2016 • 2020 • 2021 • 2023)
Conference tournament championships: 9 (1976 • 1985 • 2002 • 2006 • 2010 • 2011 • 2018 • 2021 • 2023)
NCAA Division I men's basketball tournament appearances: 15 (1975 • 1976 • 1985 •
|
https://en.wikipedia.org/wiki/Very%20high-speed%20Backbone%20Network%20Service
|
The very high-speed Backbone Network Service (vBNS) came on line in April 1995 as part of a National Science Foundation (NSF) sponsored project to provide high-speed interconnection between NSF-sponsored supercomputing centers and select access points in the United States. The network was engineered and operated by MCI Telecommunications under a cooperative agreement with the NSF.
NSF support was available to organizations that could demonstrate a need for very high speed networking capabilities and wished to connect to the vBNS or later to the Abilene Network, the high speed network operated by the University Corporation for Advanced Internet Development (UCAID, which operates Internet2).
By 1998, the vBNS had grown to connect more than 100 universities and research and engineering institutions via 12 national points of presence with DS-3 (45 Mbit/s), OC-3c (155 Mbit/s), and OC-12c (622 Mbit/s) links on an all-OC-12c backbone, a substantial engineering feat for that time. The vBNS installed one of the first ever production OC-48c (2.5 Gbit/s) IP links in February 1999, and went on to upgrade the entire backbone to OC-48c.
In June 1999 MCI WorldCom introduced vBNS+ which allowed attachments to the vBNS network by organizations that were not approved by or receiving support from NSF.
The vBNS pioneered the production deployment of many novel network technologies including Asynchronous Transfer Mode (ATM), IP multicasting, quality of service, and IPv6.
After the expiration of the NSF agreement, the vBNS largely transitioned to providing service to the government. Most universities and research centers migrated to the Internet2 educational backbone.
In January 2006 MCI and Verizon merged. The vBNS+ is now a service of Verizon Business.
|
https://en.wikipedia.org/wiki/Intrabeam%20scattering
|
Intrabeam scattering (IBS) is an effect in accelerator physics where collisions between particles couple the beam emittance in all three dimensions. This generally causes the beam size to grow. In proton accelerators, intrabeam scattering causes the beam to grow slowly over a period of several hours. This limits the luminosity lifetime. In circular lepton accelerators, intrabeam scattering is counteracted by radiation damping, resulting in a new equilibrium beam emittance with a relaxation time on the order of milliseconds. Intrabeam scattering creates an inverse relationship between the smallness of the beam and the number of particles it contains, therefore limiting luminosity.
The two principal methods for calculating the effects of intrabeam scattering were done by Anton Piwinski in 1974 and James Bjorken and Sekazi Mtingwa in 1983. The Bjorken-Mtingwa formulation is regarded as being the most general solution. Both of these methods are computationally intensive. Several approximations of these methods have been done that are easier to evaluate, but less general. These approximations are summarized in Intrabeam scattering formulas for high energy beams by K. Kubo et al.
Intrabeam scattering rates have a strong inverse dependence on beam energy, meaning that their effects diminish with increasing beam energy. Other ways of mitigating IBS effects are the use of wigglers, and reducing beam intensity. Transverse intrabeam scattering rates are sensitive to dispersion.
Intrabeam scattering is closely related to the Touschek effect. The Touschek effect is a lifetime based on intrabeam collisions that result in both particles being ejected from the beam. Intrabeam scattering is a risetime based on intrabeam collisions that result in momentum coupling.
Bjorken–Mtingwa formulation
The betatron growth rates for intrabeam scattering are defined as,
$\frac{1}{T_p} = \frac{1}{\sigma_p}\frac{d\sigma_p}{dt}$,
$\frac{1}{T_h} = \frac{1}{\varepsilon_h^{1/2}}\frac{d\varepsilon_h^{1/2}}{dt}$,
$\frac{1}{T_v} = \frac{1}{\varepsilon_v^{1/2}}\frac{d\varepsilon_v^{1/2}}{dt}$.
The following is general to all bunched beams,
,
where $1/T_p$, $1/T_h$, and $1/T_v$ are the momentum spread, horizontal, and vertical betat
|
https://en.wikipedia.org/wiki/Jithan
|
Jithan is a 2005 Indian Tamil-language supernatural romantic thriller film directed by Vincent Selva. The film stars Jithan Ramesh (in his Tamil debut) along with Pooja, Kalabhavan Mani, Livingston, S. Ve. Shekher, and Nalini. R. Sarathkumar makes an extended cameo appearance. The film was produced by Raadan Network, owned by Sarathkumar's wife Raadhika, and its score and soundtrack were composed by Srikanth Deva. A remake of the Bollywood film Gayab (2004), it was released on 6 May 2005 to positive reviews and became a surprise hit.
Plot
Surya, an introvert by nature, has loved his classmate Priya since childhood but has never expressed his feelings to her. Meanwhile, Surya's classmate Ajay also tries to woo Priya, and succeeds to an extent. Priya does not like Surya, as she views him as a nerd. One day, frustrated that no one likes him, Surya cries loudly at a beach, where he finds a small idol of a god. Holding the idol, he says that it is better to be invisible than to be disliked by everyone.
On returning home, Surya is shocked to discover that he has really become invisible: others cannot see him but can hear his voice. Surya's father is worried, as his son is missing. Surya understands his father's love and discloses the truth to him alone. Taking advantage of his invisibility, he constantly accompanies Priya without disturbing her. He also plays pranks on Ajay to vent his anger. Surya discloses the truth to Priya as well, and she is frightened on hearing his voice. Surya decides to rob a bank so that he can get some gifts for Priya, which draws media attention.
A special police team led by Tamizharasu and his assistant Singampuli is appointed to trap the invisible man behind the bank robbery. Priya tells Tamizharasu the truth about Surya's power, and they keep an eye on Surya to prevent him from committing further crimes. Despite attemp
|
https://en.wikipedia.org/wiki/EnGarde%20Secure%20Linux
|
EnGarde Secure Linux was an open source server-only Linux distribution developed by Guardian Digital. EnGarde incorporates open source tools such as Postfix, BIND, and the LAMP stack.
The platform includes services for web hosting, DNS and email, and others. Since 2005, SELinux has been incorporated into the platform by default. Other security services are included by default as well, such as intrusion detection, anti-virus, network management and auditing and reporting tools.
Users can configure the services through the command line, or remotely manage them through WebTool, the platform's browser-based interface.
Overview of history and development
Since its inception in 2001, the platform has been developed as an OS that incorporates only server functionality while focusing on security as the priority. Originally, the platform loosely drew on some of the code from early versions of Red Hat Linux. Within less than a year of development, much of that was re-engineered. Since then EnGarde has been treated as its own platform, as it maintains its own package repository based on RPM, among other changes.
Additionally, many desktop functions were not included. For example, EnGarde Secure Linux does not include the X Window System. Traditionally, this practice is called hardening. According to the company, the platform has been engineered to maintain this focus on security for server functions.
Specific focus
EnGarde Secure Linux was one of the earliest distributions to include SELinux for complete server implementations, and was one of the first Linux server platforms designed solely for security.
Because there is no X Window System, and EnGarde is configured via a graphical web-based interface, it is recommended to configure the operating system using a second computer. The interface, accessible through a web browser, is one of the notable features of EnGarde Secure Linux. Linux.com reviewed the platform in November 2005, where WebTool was described as innovative and
|
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20modulus%20of%20continuity%20theorem
|
Lévy's modulus of continuity theorem is a theorem that describes the almost sure behaviour of the modulus of continuity of the Wiener process, which is used to model what is known as Brownian motion.
Lévy's modulus of continuity theorem is named after the French mathematician Paul Lévy.
Statement of the result
Let $B : [0, 1] \times \Omega \to \mathbb{R}$ be a standard Wiener process. Then, almost surely,
$\limsup_{h \to 0^{+}} \sup_{0 \leq t \leq 1 - h} \frac{|B_{t+h} - B_t|}{\sqrt{2 h \log(1/h)}} = 1.$
In other words, the sample paths of Brownian motion have modulus of continuity
$\omega_B(\delta) = c \sqrt{2 \delta \log(1/\delta)}$
with probability one, for $c > 1$ and sufficiently small $\delta > 0$.
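A minimal numerical illustration (not from the source; it assumes NumPy, a discretized path, and one fixed small lag $h$): simulate a Wiener path on $[0, 1]$ and compare the largest increment over lag $h$ against $\sqrt{2 h \log(1/h)}$.

import numpy as np

rng = np.random.default_rng(1)
N = 1_000_000
dt = 1.0 / N
B = np.concatenate([[0.0], np.cumsum(rng.normal(0.0, np.sqrt(dt), N))])

h = 1e-4
k = int(round(h * N))  # lag h expressed in grid steps
sup_increment = np.max(np.abs(B[k:] - B[:-k]))
print(sup_increment / np.sqrt(2 * h * np.log(1 / h)))  # ratio near 1 for small h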
See also
Some properties of sample paths of the Wiener process
|
https://en.wikipedia.org/wiki/Sarcotesta
|
The sarcotesta is a fleshy seedcoat, a type of testa. Examples of seeds with a sarcotesta are pomegranate and some cycad seeds. The sarcotesta of pomegranate seeds consists of epidermal cells derived from the integument, and there are no arils on these seeds.
|
https://en.wikipedia.org/wiki/Phase%20margin
|
In electronic amplifiers, the phase margin (PM) is the difference between the phase lag φ (< 0) and -180°, for an amplifier's output signal (relative to its input) at zero dB gain, i.e. unity gain, where the output signal has the same amplitude as the input:
PM = φ - (-180°) = φ + 180°.
For example, if the amplifier's open-loop gain crosses 0 dB at a frequency where the phase lag is -135°, then the phase margin of this feedback system is -135° - (-180°) = 45°. See Bode plot#Gain margin and phase margin for more details.
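A minimal Python sketch of this computation (not from the source; it assumes SciPy, and the three-pole open-loop transfer function is hypothetical): locate the 0 dB crossover and read the phase there.

import numpy as np
from scipy import signal

# Hypothetical open loop: L(s) = 4 / (s + 1)^3.
L = signal.TransferFunction([4.0], [1.0, 3.0, 3.0, 1.0])
w, mag_db, phase_deg = signal.bode(L, w=np.logspace(-2, 2, 2000))

i = np.argmin(np.abs(mag_db))  # sample closest to the 0 dB (unity gain) crossing
pm = phase_deg[i] + 180.0      # PM = phase - (-180 deg)
print(w[i], pm)                # crossover frequency and phase margin (about 27 deg)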
Theory
Typically the open-loop phase lag (relative to input, < 0) varies with frequency, progressively increasing to exceed 180°, at which frequency the output signal becomes inverted, or antiphase in relation to the input. The PM will be positive but decreasing at frequencies less than the frequency at which inversion sets in (at which PM = 0), and PM is negative (PM < 0) at higher frequencies. In the presence of negative feedback, a zero or negative PM at a frequency where the loop gain exceeds unity (1) guarantees instability. Thus positive PM is a "safety margin" that ensures proper (non-oscillatory) operation of the circuit. This applies to amplifier circuits as well as more generally, to active filters, under various load conditions (e.g. reactive loads). In its simplest form, involving ideal negative feedback voltage amplifiers with non-reactive feedback, the phase margin is measured at the frequency where the open-loop voltage gain of the amplifier equals the desired closed-loop DC voltage gain.
More generally, PM is defined as that of the amplifier and its feedback network combined (the "loop", normally opened at the amplifier input), measured at a frequency where the loop gain is unity, and prior to the closing of the loop, through tying the output of the open loop to the input source, in such a way as to subtract from it.
In the above loop-gain definition, it is assumed that the amplifier input presents zero load. To make this work for non-zero-load input,
|
https://en.wikipedia.org/wiki/Linear%20circuit
|
A linear circuit is an electronic circuit which obeys the superposition principle. This means that the output of the circuit F(x) when a linear combination of signals ax1(t) + bx2(t) is applied to it is equal to the linear combination of the outputs due to the signals x1(t) and x2(t) applied separately:
F(ax1(t) + bx2(t)) = aF(x1(t)) + bF(x2(t))
It is called a linear circuit because the output voltage and current of such a circuit are linear functions of its input voltage and current. This kind of linearity is not the same as that of straight-line graphs.
In the common case of a circuit in which the components' values are constant and don't change with time, an alternate definition of linearity is that when a sinusoidal input voltage or current of frequency f is applied, any steady-state output of the circuit (the current through any component, or the voltage between any two points) is also sinusoidal with frequency f. A linear circuit with constant component values is called linear time-invariant (LTI).
Informally, a linear circuit is one in which the electronic components' values (such as resistance, capacitance, inductance, gain, etc.) do not change with the level of voltage or current in the circuit. Linear circuits are important because they can amplify and process electronic signals without distortion. An example of an electronic device that uses linear circuits is a sound system.
Alternate definition
The superposition principle, the defining equation of linearity, is equivalent to two properties, additivity and homogeneity, which are sometimes used as an alternate definition:
Additivity: F(x1 + x2) = F(x1) + F(x2)
Homogeneity: F(ax) = aF(x)
That is, a linear circuit is a circuit in which (1) the output when a sum of two signals is applied is equal to the sum of the outputs when the two signals are applied separately, and (2) scaling the input signal by a factor scales the output signal by the same factor.
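A minimal numerical check of these two properties (not from the source; it assumes NumPy, with a discrete LTI system given by convolution with a hypothetical impulse response):

import numpy as np

rng = np.random.default_rng(0)
h = np.array([0.5, 0.3, 0.2])    # hypothetical impulse response
F = lambda x: np.convolve(h, x)  # a linear (LTI) system

x1, x2 = rng.normal(size=50), rng.normal(size=50)
a, b = 2.0, -3.0
print(np.allclose(F(a * x1 + b * x2), a * F(x1) + b * F(x2)))  # True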
Linear and nonlinear components
A linear circuit is one that has no nonlinear electronic components in it. Examples of line
|
https://en.wikipedia.org/wiki/BRST%20quantization
|
In theoretical physics, the BRST formalism, or BRST quantization (where the BRST refers to the last names of Carlo Becchi, Alain Rouet, Raymond Stora and Igor Tyutin) denotes a relatively rigorous mathematical approach to quantizing a field theory with a gauge symmetry. Quantization rules in earlier quantum field theory (QFT) frameworks resembled "prescriptions" or "heuristics" more than proofs, especially in non-abelian QFT, where the use of "ghost fields" with superficially bizarre properties is almost unavoidable for technical reasons related to renormalization and anomaly cancellation.
The BRST global supersymmetry introduced in the mid-1970s was quickly understood to rationalize the introduction of these Faddeev–Popov ghosts and their exclusion from "physical" asymptotic states when performing QFT calculations. Crucially, this symmetry of the path integral is preserved in loop order, and thus prevents introduction of counterterms which might spoil renormalizability of gauge theories. Work by other authors a few years later related the BRST operator to the existence of a rigorous alternative to path integrals when quantizing a gauge theory.
Only in the late 1980s, when QFT was reformulated in fiber bundle language for application to problems in the topology of low-dimensional manifolds (topological quantum field theory), did it become apparent that the BRST "transformation" is fundamentally geometrical in character. In this light, "BRST quantization" becomes more than an alternate way to arrive at anomaly-cancelling ghosts. It is a different perspective on what the ghost fields represent, why the Faddeev–Popov method works, and how it is related to the use of Hamiltonian mechanics to construct a perturbative framework. The relationship between gauge invariance and "BRST invariance" forces the choice of a Hamiltonian system whose states are composed of "particles" according to the rules familiar from the canonical quantization formalism. This esoteric co
|
https://en.wikipedia.org/wiki/Car%E2%80%93Parrinello%20molecular%20dynamics
|
Car–Parrinello molecular dynamics or CPMD refers to either a method used in molecular dynamics (also known as the Car–Parrinello method) or the computational chemistry software package used to implement this method.
The CPMD method is one of the major methods for calculating ab-initio molecular dynamics (ab-initio MD or AIMD).
Ab initio molecular dynamics (ab initio MD) is a computational method that uses first principles, or fundamental laws of nature, to simulate the motion of atoms in a system. It is a type of molecular dynamics (MD) simulation that does not rely on empirical potentials or force fields to describe the interactions between atoms, but rather calculates these interactions directly from the electronic structure of the system using quantum mechanics.
In an ab initio MD simulation, the total energy of the system is calculated at each time step using density functional theory (DFT) or another method of quantum chemistry. The forces acting on each atom are then determined from the gradient of the energy with respect to the atomic coordinates, and the equations of motion are solved to predict the trajectory of the atoms.
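A minimal sketch of that propagation loop (not from the source; it assumes NumPy, and the harmonic force stands in for the expensive quantum-chemical force evaluation): a velocity Verlet integrator advancing positions and velocities from the computed forces.

import numpy as np

def forces(x, k=1.0):
    return -k * x  # F = -dE/dx for E = 0.5*k*x^2 (placeholder for a DFT call)

def velocity_verlet(x, v, m, dt, steps):
    f = forces(x)
    for _ in range(steps):
        x = x + v * dt + 0.5 * (f / m) * dt**2  # update positions
        f_new = forces(x)                       # recompute forces at new positions
        v = v + 0.5 * (f + f_new) / m * dt      # update velocities
        f = f_new
    return x, v

x, v = velocity_verlet(np.array([1.0]), np.array([0.0]), m=1.0, dt=0.01, steps=628)
print(x, v)  # roughly back near (1, 0) after one period of the oscillator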
AIMD permits chemical bond breaking and forming events to occur and accounts for electronic polarization effects. Therefore, ab initio MD simulations can be used to study a wide range of phenomena, including the structural, thermodynamic, and dynamic properties of materials and chemical reactions. They are particularly useful for systems that are not well described by empirical potentials or force fields, such as systems with strong electronic correlation or systems with many degrees of freedom. However, ab initio MD simulations are computationally demanding and require significant computational resources.
The CPMD method is related to the more common Born–Oppenheimer molecular dynamics (BOMD) method in that the quantum mechanical effect of the electrons is included in the calculation of energy and forces for the classical motion of t
|
https://en.wikipedia.org/wiki/Schabak%20Modell
|
Schabak is a die-cast toy producer based in Nuremberg, Germany. The company is well known for its line of German cars and commercial airline models. The company's on and off relation with German Schuco Modell is particularly notable.
German origins
Schabak was formed in 1966 by Max Haselmann, Gerhard Hertlein, Horst Widmann and Wolfgang Stolpe. The company began as a toy distributor, mainly for Schuco Modell toys – and not as a producer. When Schuco went out of business in the late 1970s, Schabak acquired most of Schuco's tooling (cars and airplanes) and made agreements with many airlines to continue producing model aircraft.
The company reissued many of Schuco's own diecast airplanes. Schabak then carried on the Schuco tradition of producing toy and model cars.
In the early 1980s, Schabak largely replaced Schuco, but it should be remembered that Gama Toys also acquired dies from Schuco and reproduced many of Schuco's 1:43 scale line.
Model lineup
Schabak models were a range of exclusively German vehicles, at first in 1:43 scale. A Volkswagen Jetta was the company's first car, and Volkswagens, Audis, BMWs, and German-made Fords were the company's common offerings. Many cars were offered as promotional models in manufacturer-approved packaging with official logos.
Later diecast cars were offered in 1:24 scale – so Schabak was one of the earlier model manufacturers to move up to the larger sizes, in the mid-1980s. Vehicles offered in the larger scale were: Ford Sierra Cosworth (the Sierra also offered in police livery), Granada, Orion, and Fiesta XR2i; Audi 80 Quattro, BMW 750, BMW 850, and Z1; Mercedes S Class, and VW Golf VR6.
Over time, Schabak has carved a respectable name for itself in the die-cast car market, although it is not well known outside of Europe. Outside Germany, competitors Matchbox, Hot Wheels, Maisto, and Corgi were more popular.
Similar to Schuco which preceded it, Gama Toys, and NZG, Schabak car models have excellent detail and proportion u
|
https://en.wikipedia.org/wiki/TORCH%20report
|
The TORCH report (The Other Report on Chernobyl) was a health impacts report requested by the European Greens in 2006, for the twentieth anniversary of the Chernobyl disaster, in reply to the 2006 report of the Chernobyl Forum which was criticized by some advocacy organizations opposed to nuclear energy such as Greenpeace.
In 2006, German Green Member of the European Parliament Rebecca Harms, commissioned two British scientists to write an alternate report (TORCH, The Other Report on Chernobyl) as a response to the 2006 Chernobyl Forum report. The two British scientists that published the report were radiation biologist Ian Fairlie and David Sumner. Both are members of the International Physicians for the Prevention of Nuclear War, an organization awarded the Nobel Peace Prize in 1985.
In 2016, an updated TORCH report was written by Ian Fairlie with support of Friends of the Earth Austria.
Themes
The summary of the 2006 TORCH report said, in part:
Contamination range
The 2006 TORCH Report stated that:
"In terms of their surface areas, Belarus (22% of its land area) and Austria (13%) were most affected by higher levels of contamination. Other countries were seriously affected; for example, more than 5% of Ukraine, Finland and Sweden were contaminated to high levels (> 40,000 Bq/m² caesium-137). More than 80% of Moldova, the European part of Turkey, Slovenia, Switzerland, Austria and the Slovak Republic were contaminated to lower levels (> 4000 Bq/m² caesium-137). And 44% of Germany and 34% of the UK were similarly affected."
The 2016 report said that 5 million people still live in areas highly contaminated by radiation, in Belarus, Russia, and Ukraine. 400 million people still live in areas which are less contaminated.
Iodine and thyroid effects
The TORCH 2006 report "estimated that more than half the iodine-131 from Chernobyl [which increases the risk of thyroid cancer] was deposited outside the former Soviet Union. Possible increases in thyroid cancer hav
|
https://en.wikipedia.org/wiki/Duchess%20of%20Kent%27s%20Annuity%20Act%201838
|
The Duchess of Kent's Annuity Act 1838 (1 & 2 Vict. c. 8) was an Act of Parliament in the United Kingdom, signed into law on 26 January 1838. It empowered Queen Victoria to grant an annuity of £30,000 to her mother, the Duchess of Kent, on the condition that all previously existing annuities to the duchess were to cease.
|
https://en.wikipedia.org/wiki/Dynamic%20scattering%20mode
|
George Heilmeier proposed the dynamic scattering effect, which causes a strong scattering of light when the electric field applied to a special liquid crystal mixture exceeds a threshold value.
A DSM cell requires the following ingredients:
a liquid crystal with negative dielectric anisotropy (aligns the LC long axis perpendicular to the electric field),
homeotropic alignment of the LC (i.e. perpendicular to the substrate planes),
doping of the LC with a substance that increases the conductivity of the LC to allow a current to flow.
With no voltage applied the LC-cell with the homeotropically aligned LC is clear and transparent. With increasing voltage and current, the electric field tries to align the long molecular axis of the LC perpendicular to the field while the ion transport through the layer has the tendency to align the LC perpendicular to the substrate plates. As a result, a pattern of repetitive striped regions called Williams domains is generated in the cell. Increasing the voltage further causes this regular pattern to be replaced by a turbulent state which strongly scatters light. This effect belongs to the class of electro-hydrodynamic effects in LCs. Electro-optic displays can be realized with the effect in the transmissive and reflective mode of operation. The driving voltages required for light scattering are in the range of several tens of volts, and the non-trivial current depends on the area of the activated segments. Historically the DSM effect was thus poorly suited for displays in battery-powered devices.
|
https://en.wikipedia.org/wiki/Residual%20%28numerical%20analysis%29
|
Loosely speaking, a residual is the error in a result. To be precise, suppose we want to find x such that
f(x) = b.
Given an approximation x0 of x, the residual is
b − f(x0),
that is, "what is left of the right hand side" after subtracting f(x0) (thus the name "residual": what is left, the rest). On the other hand, the error is
x − x0.
If the exact value of x is not known, the residual can be computed, whereas the error cannot.
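A minimal NumPy sketch of this distinction (not from the source; the system is hypothetical): for a linear system Ax = b the residual b − Ax0 is computable from the data alone, while the error needs the unknown exact solution.

import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])

x0 = np.array([0.1, 0.6])      # some approximation of the solution
residual = b - A @ x0          # computable without knowing x
print(np.linalg.norm(residual))

x = np.linalg.solve(A, b)      # exact solution (usually unknown)
print(np.linalg.norm(x - x0))  # the error, shown here only for comparison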
Residual of the approximation of a function
Similar terminology is used dealing with differential, integral and functional equations. For the approximation f_a of the solution f of the equation
T(f)(x) = g(x),
the residual can either be the function
g(x) − T(f_a)(x),
or can be said to be the maximum of the norm of this difference,
max_{x ∈ X} |g(x) − T(f_a)(x)|,
over the domain X, where the function f_a is expected to approximate the solution f, or some integral of a function of the difference, for example:
∫_X |g(x) − T(f_a)(x)|² dx.
In many cases, the smallness of the residual means that the approximation is close to the solution, i.e.,
f_a ≈ f.
In these cases, the initial equation is considered as well-posed; and the residual can be considered as a measure of deviation of the approximation from the exact solution.
Use of residuals
When one does not know the exact solution, one may look for the approximation with small residual.
Residuals appear in many areas in mathematics, including iterative solvers such as the generalized minimal residual method, which seeks solutions to equations by systematically minimizing the residual.
|
https://en.wikipedia.org/wiki/Transformational%20theory
|
Transformational theory is a branch of music theory developed by David Lewin in the 1980s, and formally introduced in his 1987 work, Generalized Musical Intervals and Transformations. The theory—which models musical transformations as elements of a mathematical group—can be used to analyze both tonal and atonal music.
The goal of transformational theory is to change the focus from musical objects—such as the "C major chord" or "G major chord"—to relations between musical objects (related by transformation). Thus, instead of saying that a C major chord is followed by G major, a transformational theorist might say that the first chord has been "transformed" into the second by the "Dominant operation." (Symbolically, one might write "Dominant(C major) = G major.") While traditional musical set theory focuses on the makeup of musical objects, transformational theory focuses on the intervals or types of musical motion that can occur. According to Lewin's description of this change in emphasis, "[The transformational] attitude does not ask for some observed measure of extension between reified 'points'; rather it asks: 'If I am at s and wish to get to t, what characteristic gesture should I perform in order to arrive there?'" (from Generalized Musical Intervals and Transformations (GMIT), p. 159)
Formalism
The formal setting for Lewin's theory is a set S (or "space") of musical objects, and a set T of transformations on that space. Transformations are modeled as functions acting on the entire space, meaning that every transformation must be applicable to every object.
Lewin points out that this requirement significantly constrains the spaces and transformations that can be considered. For example, if the space S is the space of diatonic triads (represented by the Roman numerals I, ii, iii, IV, V, vi, and vii°), the "Dominant transformation" must be defined so as to apply to each of these triads. This means, for example, that some diatonic triad must be selected as the
|
https://en.wikipedia.org/wiki/Graceful%20exit
|
A graceful exit (or graceful handling) is a simple programming idiom wherein a program detects a serious error condition and "exits gracefully" in a controlled manner as a result. Often the program prints a descriptive error message to a terminal or log as part of the graceful exit.
Usually, code for a graceful exit exists when the alternative — allowing the error to go undetected and unhandled — would produce spurious errors or later anomalous behavior that would be more difficult for the programmer to debug. The code associated with a graceful exit may also take additional steps, such as closing files, to ensure that the program leaves data in a consistent, recoverable state.
Graceful exits are not always desired. In many cases, an outright crash can give the software developer the opportunity to attach a debugger or collect important information, such as a core dump or stack trace, to diagnose the root cause of the error.
In a language that supports formal exception handling, a graceful exit may be the final step in the handling of an exception. In other languages graceful exits can be implemented with additional statements at the locations of possible errors.
The phrase "graceful exit" has also been generalized to refer to letting go from a job or relationship in life that has ended.
In Perl
In the Perl programming language, graceful exits are generally implemented via the die operator. For example, the code for opening a file often reads like the following:
# Open the file 'myresults' for writing, or die with an appropriate error message.
open RESULTS, '>', 'myresults' or die "can't write to 'myresults' file: $!";
If the attempt to open the file myresults fails, the containing program will terminate with an error message and an exit status indicating abnormal termination.
In Java
In the Java programming language, the try block is often used to catch exceptions. All potentially dangerous code is placed inside the block and, if an exception occurs, is stop
|
https://en.wikipedia.org/wiki/Programming%20idiom
|
In computer programming, a programming idiom or code idiom is a group of code fragments sharing an equivalent semantic role, which recurs frequently across software projects often expressing a special feature of a recurring construct in one or more programming languages or libraries. This definition is rooted in the definition of "idiom" as used in the field of linguistics. Developers recognize programming idioms by associating meaning (semantic role) to one or more syntactical expressions within code snippets (code fragments). The idiom can be seen as an action on a programming concept underlying a pattern in code, which is represented in implementation by contiguous or scattered code fragments. These fragments are available in several programming languages, frameworks or even libraries. Generally speaking, a programming idiom's semantic role is a natural language expression of a simple task, algorithm, or data structure that is not a built-in feature in the programming language being used, or, conversely, the use of an unusual or notable feature that is built into a programming language.
Knowing the idioms associated with a programming language and how to use them is an important part of gaining fluency in that language. It also helps to transfer knowledge in the form of analogies from one language or framework to another. Such idiomatic knowledge is widely used in crowdsourced repositories to help developers overcome programming barriers. Mapping code idioms to idiosyncrasies can be a helpful way to navigate the tradeoffs between generalization and specificity. By identifying common patterns and idioms, developers can create mental models and schemata that help them quickly understand and navigate new code. Furthermore, by mapping these idioms to idiosyncrasies and specific use cases, developers can ensure that they are applying the correct approach and not overgeneralizing it.
One way to do this is by creating a reference or documentation that maps common idio
|
https://en.wikipedia.org/wiki/Generalized%20TTL%20security%20mechanism
|
The Generalized TTL Security Mechanism (GTSM) is a proposed Internet data transfer security method that relies on a packet's Time to Live (IPv4) or Hop Limit (IPv6) to protect a protocol stack from spoofing and denial of service attacks.
Introduction
The desired purpose of this proposal is to verify whether the packet was originated by an adjacent node and to protect router infrastructure from overload-based attacks.
Implementation
For protocols for which GTSM is enabled, the following procedure is performed.
If the router is directly connected
Change the outbound TTL to 255 for its protocol connection
If the protocol is a configured protocol peer
Set the Access Control List (ACL) to allow packets of the given protocol to pass only to the route processor (RP). The TTL must be set to 255 if the destination is directly connected, or to 255 minus the range of acceptable hops if it is not directly connected. This method assumes, however, that the ACL designated by the receive path is configured to control packets passing to the RP.
If the inbound TTL is not set to 255 (or to 255 minus the range of acceptable hops, when the peer is not directly connected), the packet will not be processed and will be sent to a low priority queue.
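A minimal Python sketch of the TTL handling (not from the source; Linux-specific, and the numeric fallback IP_RECVTTL = 12 and the port are assumptions): the sender originates packets with TTL 255, and the receiver reads the arriving TTL from ancillary data and accepts only the expected value.

import socket
import struct

IP_RECVTTL = getattr(socket, "IP_RECVTTL", 12)  # option number 12 on Linux

# Receiver: ask the kernel to deliver the arriving TTL.
rx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
rx.setsockopt(socket.IPPROTO_IP, IP_RECVTTL, 1)
rx.bind(("127.0.0.1", 50000))

# Sender: originate packets with the maximum TTL, as GTSM requires.
tx = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
tx.setsockopt(socket.IPPROTO_IP, socket.IP_TTL, 255)
tx.sendto(b"hello", ("127.0.0.1", 50000))

data, ancdata, flags, addr = rx.recvmsg(1024, socket.CMSG_SPACE(4))
for level, kind, payload in ancdata:
    if level == socket.IPPROTO_IP and kind == socket.IP_TTL:  # Linux reports IP_TTL
        ttl = struct.unpack("i", payload[:4])[0]
        print("accept" if ttl == 255 else "drop", ttl)  # expect 255 from a direct peer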
History
Many people have been given credit for creating the idea. Among them are Paul Traina and Jon Stewart. A similar method was also proposed by Ryan McDowell.
See also
Protocol stack
Denial-of-service attack
|
https://en.wikipedia.org/wiki/Rasti
|
Rasti (from the German verb rasten, 'to secure, to place firmly, to lock into its place') is a construction toy made from plastic—similar to Lego, Tente and Mis Ladrillos—that was produced by the Knittax company in Argentina during the 1960s and 1970s. In 2007, manufacturing began again.
Construction
The construction method allows a person to join pieces using medium pressure, locking the blocks together with semi-rigid plastic pins which, unlike other brands, avoid friction and wear between the pins and the internal surfaces of the blocks. This system avoids fragility and instability in the models, giving a solidity unequaled by any other construction toy or block system.
Quality
Rasti bricks produced in Argentina were made with quality in mind—the shafts were made of chromed steel with plastic points for joining the special holes in the wheels. They were so popular that the word "Rasti" became regularly used as a synonym for durability and stability. It is still used as a generic noun to describe objects that can be assembled or disassembled in pieces, as in: "it was broken like a Rasti" or "you can build it like a Rasti".
Having obtained considerable popularity in the Argentine toy market, Rasti was exported to countries such as Canada and Germany (from where the original design came) until its production moved to Brazil, where it was licensed to the company Hering and remained in production for several years. The brand name "Rasti" comes from the German word "rasten", meaning "to affirm, to establish giving solidity, stability and firmness", which is the concept that governs the plastic blocks that are embedded to form various structures.
In Europe, with new colors but the same basic pieces, Rasti continued to be sold into the new millennium ("Rasti-2000").
Among the more popular sets sold in the Rasti decades are the Minibox 600, the Multibox 800, the technical kits 501 and 502, the three variants of Rasti
|
https://en.wikipedia.org/wiki/Dimensional%20modeling
|
Dimensional modeling (DM) is part of the Business Dimensional Lifecycle methodology developed by Ralph Kimball, which includes a set of methods, techniques and concepts for use in data warehouse design. The approach is bottom-up: it focuses on identifying the key business processes within a business and modelling and implementing these first, before adding additional business processes. An alternative approach from Inmon advocates a top-down design of a model of all the enterprise data, using tools such as entity-relationship modeling (ER).
Description
Dimensional modeling always uses the concepts of facts (measures) and dimensions (context). Facts are typically (but not always) numeric values that can be aggregated, and dimensions are groups of hierarchies and descriptors that define the facts. For example, sales amount is a fact; timestamp, product, register#, store#, etc. are elements of dimensions. Dimensional models are built by business process area, e.g. store sales, inventory, claims, etc. Because the different business process areas share some but not all dimensions, efficiency in design and operation, and consistency, are achieved by using conformed dimensions, i.e. using one copy of the shared dimension across subject areas.
Dimensional modeling does not necessarily involve a relational database. The same modeling approach, at the logical level, can be used for any physical form, such as multidimensional database or even flat files. It is oriented around understandability and performance.
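The fact/dimension split is easy to see in miniature. The sketch below (data and names are illustrative) stores measures in a fact table keyed into two dimension tables, then answers a typical dimensional query by aggregating the fact along a dimension attribute:

```python
from collections import defaultdict

# Dimension tables: descriptive context, keyed by surrogate ids.
dim_product = {1: {"name": "Widget", "category": "Hardware"},
               2: {"name": "Gadget", "category": "Electronics"}}
dim_store = {10: {"city": "Oslo"}, 11: {"city": "Bergen"}}

# Fact table: one row per sale (the declared grain), holding the
# measure plus foreign keys into each dimension.
fact_sales = [
    {"product_id": 1, "store_id": 10, "sales_amount": 120.0},
    {"product_id": 2, "store_id": 10, "sales_amount": 80.0},
    {"product_id": 1, "store_id": 11, "sales_amount": 45.0},
]

# A typical query: total the measure, grouped by a dimension attribute.
sales_by_city = defaultdict(float)
for row in fact_sales:
    city = dim_store[row["store_id"]]["city"]
    sales_by_city[city] += row["sales_amount"]

print(dict(sales_by_city))  # {'Oslo': 200.0, 'Bergen': 45.0}
```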
Design method
Designing the model
The dimensional model is built on a star-like schema or snowflake schema, with dimensions surrounding the fact table. To build the schema, the following design model is used:
Choose the business process
Declare the grain
Identify the dimensions
Identify the fact
Choose the business process
The process of dimensional modeling builds on a 4-step design method that helps to ensure the usability of the dimensional model and the
|
https://en.wikipedia.org/wiki/Flooz.com
|
Flooz.com was a dot-com venture, now defunct, based in New York City that went online in February 1999. It was promoted by comic actress Whoopi Goldberg in a series of television advertisements. Started by iVillage co-founder Robert Levitan, the company attempted to establish a currency unique to Internet merchants, somewhat similar in concept to airline frequent flyer programs or grocery store stamp books. The name "flooz" was based upon the Arabic word for money, فلوس, fuloos. Users accumulated flooz credits either as a promotional bonus given away by some internet businesses or purchased directly from flooz.com which then could be redeemed for merchandise at a variety of participating online stores. Adoption of flooz by both merchants and customers proved limited, and it never established itself as a widely recognized medium of exchange, which hindered both its usefulness and appeal.
Use by crime syndicate
In 2001, Flooz.com was notified by the Federal Bureau of Investigation that a Russian-Filipino organized crime syndicate had used $300,000 worth of Flooz and stolen credit card numbers as part of a money-laundering scheme, in which stolen credit cards were used to purchase currency that was then redeemed. Levitan stated that fraudulent purchases accounted for 19% of the company's consumer credit card transactions by mid-2001.
Closure
The company announced its closure on August 26, 2001; its failure was perceived as an early indicator of the bursting dot-com bubble. Upon the company's closing, all unused flooz credits became worthless and nonrefundable. Over its short history, flooz.com reportedly spent between $35 million and $50 million in venture capital. The company's bankruptcy filing cited approximately 325,000 creditors.
See also
Beenz.com
Digital currency
InternetCash.com
Virtual currency
Vouchers
Bitcoin
Diem (digital currency)
|
https://en.wikipedia.org/wiki/Wolf%20%28video%20game%29
|
Wolf is a 1994 life simulation and role-playing video game. The player takes the role of a wolf. It was followed by Lion in 1995.
It is unrelated to the 1994 film of the same name.
Gameplay
The gameplay is divided into two parts. The first is a sandbox mode, where the player has no predetermined goal. The second is a scenario mode, where the player has to complete specific actions; this is comparable to quests given in RPGs.
Reception
In Electronic Entertainment, Joel Enos wrote that Wolf's "unique take on role-playing/simulation games puts it in a class by itself". He concluded, "You'll not only spend many happy hours playing, you might even learn something." The magazine later named Wolf its 1994 "Best Simulation Game". The editors reiterated that the game is "great fun to play, and you also can't help but learn about these intriguing creatures as you step into their skin."
Vince DeNardo of Computer Gaming World called Wolf "a role-playing simulation that is both worthwhile for your children, and for the child that lies within each of us." He believed that it is "a novel concept backed up by solid execution", and that it "redefines the genre of Role-Playing as we gamers know it". Computer Gaming World went on to nominate Wolf as its 1994 "Role-Playing Game of the Year", with the editors calling it an innovative product that "skillfully mixes role-playing elements and scientific fact".
The reviewer for PC Gamer US remarked that "hours pass like minutes in this fascinating RPG for nature lovers", and summarized it as an "unusual, entertaining game that gives genuine insight into one of nature's most magnificent and misunderstood creatures." The magazine's editors later awarded Wolf their 1994 "Special Achievement in Innovative Design". They wrote that the game is "both entertaining and enlightening — and a breath of fresh air in a genre [role-playing games] that some say has run its course."
Computer Player's Peter Suciu summarized it as "a nice novelty game
|
https://en.wikipedia.org/wiki/Sexual%20conflict
|
Sexual conflict or sexual antagonism occurs when the two sexes have conflicting optimal fitness strategies concerning reproduction, particularly over the mode and frequency of mating, potentially leading to an evolutionary arms race between males and females. In one example, males may benefit from multiple matings, while multiple matings may harm or endanger females, due to the anatomical differences of that species. Sexual conflict underlies the evolutionary distinction between male and female.
The development of an evolutionary arms race can also be seen in the chase-away sexual selection model, which places inter-sexual conflicts in the context of secondary sexual characteristic evolution, sensory exploitation, and female resistance. According to chase-away selection, continuous sexual conflict creates an environment in which mating frequency and male secondary sexual trait development are somewhat in step with the female's degree of resistance. Sexual conflict has primarily been studied in animals, though it can in principle apply to any sexually reproducing organism, such as plants and fungi; there is some evidence for sexual conflict in plants.
Sexual conflict takes two major forms:
Interlocus sexual conflict is the interaction of a set of antagonistic alleles at one or more loci in males and females. An example is conflict over mating rates. Males frequently have a higher optimal mating rate than females because in most animal species they invest fewer resources in offspring than their female counterparts. Therefore, males have numerous adaptations to induce females to mate with them. Another well-documented example of interlocus sexual conflict is the seminal fluid proteins of Drosophila melanogaster, which up-regulate a female's egg-laying rate and reduce her propensity to re-mate with another male (serving the male's interests), but also shorten the female's lifespan, reducing her fitness.
Intralocus sexual conflict – This kind of conflict represents a tug of war be
|
https://en.wikipedia.org/wiki/Lion%20%28video%20game%29
|
Lion is an animal simulation video game for MS-DOS where the player takes the role of a lion. It was published by Sanctuary Woods in 1995 as a follow-up to Wolf.
Gameplay
The gameplay is divided into two parts. The first is a sandbox simulation mode, where the player has no predetermined goal. The second is a scenario mode, where the player has to complete specific actions; this is comparable to quests given in RPGs.
As in the original, the player takes on the role of a lion chosen from a pool of 20 different animals, with varying attributes, in existing prides or handpicked groups made by the player. The player can control a single animal or all members of a pride.
Unlike Wolf, which takes place in three different locales, Lion is played only on the savannas and plains of the Serengeti National Park in Tanzania (though in four different seasons).
Also includes a "Lion Safari", an interactive tour of the leonine life on the Serengeti.
Reception
A Next Generation critic commented, "Sanctuary Woods hit a home run with its predator simulation Wolf. Its next title in the series, Lion, continues the trend by demonstrating in a very entertaining way what life can be like for large, aggressive creatures living in today's wilds." While remarking that the controls take considerable time to get used to, he found the game to ultimately be both educational and entertaining, and scored it four out of five stars.
|
https://en.wikipedia.org/wiki/Discrete%20tomography
|
Discrete tomography focuses on the problem of reconstruction of binary images (or finite subsets of the integer lattice) from a small number of their projections.
In general, tomography deals with the problem of determining shape and dimensional information of an object from a set of projections. From the mathematical point of view, the object corresponds to a function and the problem posed is to reconstruct this function from its integrals or sums over subsets of its domain. In general, the tomographic inversion problem may be continuous or discrete. In continuous tomography both the domain and the range of the function are continuous and line integrals are used. In discrete tomography the domain of the function may be either discrete or continuous, and the range of the function is a finite set of real, usually nonnegative numbers. In continuous tomography, when a large number of projections is available, accurate reconstructions can be made by many different algorithms.
It is typical for discrete tomography that only a few projections (line sums) are used. In this case, conventional techniques all fail. A special case of discrete tomography deals with the problem of the reconstruction of a binary image from a small number of projections. The name discrete tomography is due to Larry Shepp, who organized the first meeting devoted to this topic (DIMACS Mini-Symposium on Discrete Tomography, September 19, 1994, Rutgers University).
Theory
Discrete tomography has strong connections with other mathematical fields, such as number theory, discrete mathematics, computational complexity theory and combinatorics. In fact, a number of discrete tomography problems were first discussed as combinatorial problems. In 1957, H. J. Ryser found a necessary and sufficient condition for a pair of vectors being the two orthogonal projections of a discrete set. In the proof of his theorem, Ryser also described a reconstruction algorithm, the very first reconstruction algorithm for a gen
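Ryser's existence proof is constructive, and the construction is short enough to sketch. The following is a minimal, illustrative implementation (names are assumptions, not Ryser's notation): for each row it places the required ones into the columns whose remaining sums are largest, which realizes a binary matrix whenever the given row and column sums are consistent:

```python
def ryser_reconstruct(row_sums, col_sums):
    """Greedy Gale-Ryser-style construction of a 0/1 matrix with the
    given row and column sums; returns None if none exists."""
    n_cols = len(col_sums)
    if sum(row_sums) != sum(col_sums):
        return None
    remaining = list(col_sums)  # remaining capacity of each column
    matrix = []
    for r in row_sums:
        if r > n_cols:
            return None
        # Columns sorted by largest remaining capacity first.
        order = sorted(range(n_cols), key=lambda j: remaining[j], reverse=True)
        row = [0] * n_cols
        for j in order[:r]:
            if remaining[j] == 0:
                return None  # not enough column capacity: infeasible
            row[j] = 1
            remaining[j] -= 1
        matrix.append(row)
    return matrix

# Row sums (2, 2, 1) and column sums (2, 2, 1) admit a reconstruction:
for row in ryser_reconstruct([2, 2, 1], [2, 2, 1]):
    print(row)
```

Note that two different binary matrices can share the same pair of projections, so the reconstruction is generally not unique; the sketch returns one representative.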
|
https://en.wikipedia.org/wiki/Hexapod%20%28robotics%29
|
A six-legged walking robot should not be confused with a Stewart platform, a kind of parallel manipulator used in robotics applications.
A hexapod robot is a mechanical vehicle that walks on six legs. Since a robot can be statically stable on three or more legs, a hexapod robot has a great deal of flexibility in how it can move. If legs become disabled, the robot may still be able to walk. Furthermore, not all of the robot's legs are needed for stability; other legs are free to reach new foot placements or manipulate a payload.
Many hexapod robots are biologically inspired by Hexapoda locomotion – the insectoid robots. Hexapods may be used to test biological theories about insect locomotion, motor control, and neurobiology.
Designs
Hexapod designs vary in leg arrangement. Insect-inspired robots are typically laterally symmetric, such as the RiSE robot at Carnegie Mellon. A radially symmetric hexapod is the ATHLETE (All-Terrain Hex-Limbed Extra-Terrestrial Explorer) robot at JPL.
Typically, individual legs have two to six degrees of freedom. Hexapod feet are typically pointed, but can also be tipped with adhesive material to help the robot climb walls, or with wheels so the robot can drive quickly when the ground is flat.
Locomotion
Most often, hexapods are controlled by gaits, which allow the robot to move forward, turn, and perhaps side-step. Some of the most common gaits are as follows:
Alternating tripod: 3 legs on the ground at a time.
Quadruped: 4 legs on the ground at a time.
Crawl: move just one leg at a time.
Gaits for hexapods are often stable, even in slightly rocky and uneven terrain.
Motion may also be nongaited, which means the sequence of leg motions is not fixed, but rather chosen by the computer in response to the sensed environment. This may be most helpful in very rocky terrain, but existing techniques for motion planning are computationally expensive.
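The alternating tripod is simple to express as a cyclic schedule. A minimal sketch follows (the leg numbering and phase representation are illustrative assumptions): the legs are split into two tripods that take turns swinging while the other three support the body, so a stable support triangle always exists:

```python
from itertools import cycle

# Legs 0-5, numbered front to back, alternating sides. Each tripod
# combines the front and rear legs of one side with the middle leg
# of the other side.
TRIPOD_A = (0, 3, 4)
TRIPOD_B = (1, 2, 5)

def tripod_gait(steps):
    """Yield (swing_legs, stance_legs) for each gait phase; the three
    stance legs always form a support triangle under the body."""
    phases = cycle([(TRIPOD_A, TRIPOD_B), (TRIPOD_B, TRIPOD_A)])
    for _ in range(steps):
        yield next(phases)

for swing, stance in tripod_gait(4):
    print(f"swing {swing}, stance {stance}")
```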
Biologically inspired
Insects are chosen as models because their nervous systems are simpler than those of other animal species. Als
|
https://en.wikipedia.org/wiki/Riesz%20potential
|
In mathematics, the Riesz potential is a potential named after its discoverer, the Hungarian mathematician Marcel Riesz. In a sense, the Riesz potential defines an inverse for a power of the Laplace operator on Euclidean space. Riesz potentials generalize to several variables the Riemann–Liouville integrals of one variable.
Definition
If 0 < α < n, then the Riesz potential Iαf of a locally integrable function f on Rn is the function defined by

$$(I_\alpha f)(x) = \frac{1}{c_\alpha}\int_{\mathbb{R}^n}\frac{f(y)}{|x-y|^{n-\alpha}}\,\mathrm{d}y,$$

where the constant is given by

$$c_\alpha = \pi^{n/2}\,2^\alpha\,\frac{\Gamma(\alpha/2)}{\Gamma\bigl(\tfrac{n-\alpha}{2}\bigr)}.$$
This singular integral is well-defined provided f decays sufficiently rapidly at infinity, specifically if f ∈ Lp(Rn) with 1 ≤ p < n/α. In fact, for any 1 ≤ p < n/α (the case p > 1 is classical, due to Sobolev, while the endpoint p = 1 is a more recent result), the rate of decay of f and that of Iαf are related in the form of an inequality (the Hardy–Littlewood–Sobolev inequality)

$$\|I_\alpha f\|_{p^*} \le C_p \|Rf\|_p, \qquad p^* = \frac{np}{n-\alpha p},$$

where R is the vector-valued Riesz transform. More generally, the operators Iα are well-defined for complex α such that 0 < Re α < n.
The Riesz potential can be defined more generally in a weak sense as the convolution

$$I_\alpha f = f * K_\alpha,$$

where Kα is the locally integrable function:

$$K_\alpha(x) = \frac{1}{c_\alpha}\,\frac{1}{|x|^{n-\alpha}}.$$
The Riesz potential can therefore be defined whenever f is a compactly supported distribution. In this connection, the Riesz potential of a positive Borel measure μ with compact support is chiefly of interest in potential theory because Iαμ is then a (continuous) subharmonic function off the support of μ, and is lower semicontinuous on all of Rn.
Consideration of the Fourier transform reveals that the Riesz potential is a Fourier multiplier.
In fact, one has

$$\widehat{K_\alpha}(\xi) = |2\pi\xi|^{-\alpha},$$

and so, by the convolution theorem,

$$\widehat{I_\alpha f}(\xi) = |2\pi\xi|^{-\alpha}\,\hat{f}(\xi).$$
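Read through the Fourier transform, this multiplier identity makes precise the sense in which the Riesz potential inverts a power of the Laplacian: since −Δ has Fourier multiplier |2πξ|², one may write, at least formally,

$$I_\alpha f = (-\Delta)^{-\alpha/2} f,$$

which is consistent with the semigroup and Laplacian identities below.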
The Riesz potentials satisfy the following semigroup property on, for instance, rapidly decreasing continuous functions:

$$I_\alpha I_\beta = I_{\alpha+\beta},$$

provided

$$0 < \operatorname{Re}(\alpha+\beta) < n.$$

Furthermore, if 2 < Re α < n, then

$$\Delta I_\alpha = I_\alpha \Delta = -I_{\alpha-2}.$$

One also has, for this class of functions,

$$\lim_{\alpha\to 0^+}(I_\alpha f)(x) = f(x).$$
See also
Bessel potential
Fractional integration
Sobolev space
Notes
|
https://en.wikipedia.org/wiki/TREC%20Genomics
|
The TREC Genomics track was a workshop held under the auspices of NIST for the purpose of evaluating systems for information retrieval and related technologies in the genomics domain. The TREC Genomics track took place annually from 2003 to 2007, with some modifications to the task set every year; tasks included information retrieval, document classification, GeneRIF prediction, and question answering.
See also
TREC, the Text Retrieval Conference
External links
The TREC Genomics track web site
|
https://en.wikipedia.org/wiki/Kepler%E2%80%93Bouwkamp%20constant
|
In plane geometry, the Kepler–Bouwkamp constant (or polygon inscribing constant) is obtained as a limit of the following sequence. Take a circle of radius 1. Inscribe a regular triangle in this circle. Inscribe a circle in this triangle. Inscribe a square in it. Inscribe a circle, regular pentagon, circle, regular hexagon and so forth.
The radius of the limiting circle is called the Kepler–Bouwkamp constant. It is named after Johannes Kepler and Christoffel Bouwkamp, and is the inverse of the polygon circumscribing constant.
Numerical value
The decimal expansion of the Kepler–Bouwkamp constant is

$$\rho = \prod_{k=3}^{\infty}\cos\left(\frac{\pi}{k}\right) = 0.1149420448\ldots$$
The natural logarithm of the Kepler–Bouwkamp constant is given by

$$\ln\rho = -2\sum_{k=1}^{\infty}\frac{2^{2k}-1}{2k}\,\zeta(2k)\left(\zeta(2k)-1-\frac{1}{2^{2k}}\right),$$

where $\zeta$ is the Riemann zeta function.
If the product is taken over the odd primes instead, the constant

$$\prod_{p\,\mathrm{prime},\;p\ge 3}\cos\left(\frac{\pi}{p}\right)$$

is obtained.
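Because the factors cos(π/k) approach 1, the defining product can be estimated directly by truncation; a minimal numerical sketch (the term count is an arbitrary choice):

```python
import math

def kepler_bouwkamp(n_terms: int = 100_000) -> float:
    """Truncated product of cos(pi/k) for k = 3, 4, ..., n_terms + 2.

    Since ln cos(pi/k) ~ -pi^2 / (2 k^2), the neglected tail changes
    the result by a relative error of roughly pi^2 / (2 n_terms).
    """
    rho = 1.0
    for k in range(3, n_terms + 3):
        rho *= math.cos(math.pi / k)
    return rho

print(kepler_bouwkamp())  # ~0.11493..., close to 0.1149420448...
```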
|
https://en.wikipedia.org/wiki/Deep%20water%20culture
|
Deep water culture (DWC) is a hydroponic method of plant production by means of suspending the plant roots in a solution of nutrient-rich, oxygenated water. Also known as deep flow technique (DFT), floating raft technology (FRT), or raceway, this method uses a rectangular tank less than one foot deep filled with a nutrient-rich solution with plants floating in Styrofoam boards on top. This method of floating the boards on the nutrient solution creates a near friction-less conveyor belt of floating rafts. DWC, along with nutrient film technique (NFT), and aggregate culture, is considered to be one of the most common hydroponic systems used today. Typically, DWC is used to grow short-term, non-fruiting crops such as leafy greens and herbs. The large volume of water helps mitigate rapid changes in temperature, pH, electrical conductivity (EC), and nutrient solution composition.
Hobby methods
Deep water culture has also been used by hobby growers. Net pots, plastic pots with netting to allow roots to grow through their surface, are filled with a hydroponic medium such as Hydroton or Rockwool to hold the base of the plant. In some cases net pots are not needed. For oxygenation of the hydroponic solution, an airstone is added. This air stone is then connected to an airline that runs to an air pump.
As the plant grows, the root mass stretches through the rockwool or hydroton into the water below. Under ideal growing conditions, plants are able to grow a loosely packed root mass that fills the entire bin. As the plant grows and consumes nutrients, the pH and EC of the water fluctuate. For this reason, the nutrient solution must be monitored frequently to ensure that it remains in the uptake range of the crop. A pH that is too high or too low will make certain nutrients unavailable for uptake by plants. Generally, the best pH for hydroponic crops is around 5.5–6.0. In terms of EC, too low means that there is a low salt content, usually meaning a lac
|
https://en.wikipedia.org/wiki/Pharsalia%20Technologies
|
Pharsalia Technologies, Inc. was founded in December 1999 in Roswell, Georgia, as an emerging company developing network infrastructure products for the Internet market. Led by a team of over 28 software engineers, Pharsalia focused on developing software products for the rapidly escalating content delivery market. The company was headed by Chip Howes, whose team was the inventor of record for over 30 patents in the area of TCP/IP server load balancing and is credited with the invention of the technology in 1996.
Pharsalia was acquired by Alteon WebSystems of San Jose, California on July 21, 2000 in a stock swap worth about $221 million. Subsequently, Alteon was acquired by Nortel Networks on October 4, 2000.
After integrating the company into Nortel Networks, most of the original team was laid off by August 2003. The founders went on to start another company, Steelbox Networks, and hired back many of the same engineers.
Trivia
Pharsalia is a name related to the location of Julius Caesar's victory over Pompey.
Pharsalia Technologies was named after Pharsalia Plantation in Massies Mill, VA, the birthplace of Sarah Howes, mother of founder, Chip Howes.
|
https://en.wikipedia.org/wiki/Network%20search%20engine
|
Computer networks are connected together to form larger networks such as campus networks, corporate networks, or the Internet. Routers are network devices that may be used to connect these networks (e.g., a home network connected to the network of an Internet service provider). When a router interconnects many networks or handles much network traffic, it may become a bottleneck and cause network congestion (i.e., traffic loss).
A number of techniques have been developed to prevent such problems. One of them is the network search engine (NSE), also known as a network search element. This special-purpose device helps a router perform one of its core, frequently repeated functions very fast: address lookup. Besides routing, NSE-based address lookup is also used to keep track of network service usage for billing purposes, or to look up patterns of information in the data passing through the network for security reasons.
Network search engines are often available as ASIC chips to be interfaced with the network processor of the router. Content-addressable memory (CAM) and tries are two techniques commonly used when implementing NSEs.
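The address lookup an NSE accelerates is a longest-prefix match, which a trie supports directly. A minimal software sketch (the route table and names are illustrative); hardware NSEs accelerate exactly this kind of walk:

```python
class TrieNode:
    def __init__(self):
        self.children = {}    # '0' or '1' -> TrieNode
        self.next_hop = None  # set when a route terminates here

def insert(root: TrieNode, prefix_bits: str, next_hop: str) -> None:
    node = root
    for bit in prefix_bits:
        node = node.children.setdefault(bit, TrieNode())
    node.next_hop = next_hop

def longest_prefix_match(root: TrieNode, addr_bits: str):
    node, best = root, None
    for bit in addr_bits:
        if node.next_hop is not None:
            best = node.next_hop          # longest match seen so far
        node = node.children.get(bit)
        if node is None:
            return best                   # ran off the trie
    return node.next_hop or best

root = TrieNode()
insert(root, "10", "hop-A")    # a short, less specific prefix
insert(root, "1010", "hop-B")  # a longer, more specific prefix
print(longest_prefix_match(root, "10101111"))  # hop-B (most specific wins)
print(longest_prefix_match(root, "10111111"))  # hop-A
```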
|
https://en.wikipedia.org/wiki/Gelonin
|
Gelonin is a type 1 ribosome-inactivating protein and toxin of approximately 30 kDa found in the seeds of the Himalayan plant Gelonium multiflorum. In cell-free systems gelonin exerts powerful N-glycosidase activity on the 28S rRNA unit of eukaryotic ribosomes by cleaving out the adenine at position 4324. Gelonin lacks carbohydrate-binding domains, so it is unable to cross the plasma membrane, making it highly effective only in cell-free systems.
Structure
Gelonin is a 30 kDa protein. Gelonin is a dimer, consisting of two identical monomers. Each monomer is composed of 251 amino acids, for a total of 502 residues. Gelonin is classified as an (α + β) protein, as its secondary structure consists of both beta sheets and alpha helices. Each monomer’s first 100 amino acids form 10 beta sheets, while the last 151 amino acids form 10 alpha helices. Gelonin’s two monomers are stabilized by hydrophobic interactions and hydrogen bonds. Specifically, the Asn22, Arg178, Asn180, and Lys237 residues of each monomer hydrogen-bond with each other to stabilize the molecules. Likewise, the hydrophobic residues Tyr14, Ile15, Val16 and Pro38 from one monomer form hydrophobic interactions with the same residues in the adjacent monomer to further stabilize the dimer.
Active Site
Gelonin’s active site is a cleft formed by six key residues: Tyr74, Gly111, Tyr113, Glu166, Arg169, and Trp198. The shape of the active site is stabilized by hydrogen bonding between Gly111 and Tyr113. The Tyr113, Glu166, and Arg169 residues in the active site participate in the enzymatic removal of the adenine at position 4324 of eukaryotic 28S rRNA. Although the reaction mechanism of gelonin has yet to be characterized in detail, it is believed to take place in a manner that is conserved among other type 1 ribosome-inactivating proteins (RIPs). According to research performed on other type 1 RIPs, Tyr113 and Arg169 form hydrogen bonds with nitrogen atoms in the adenine nucleobase. This facilitates the cleavage of the glyc
|
https://en.wikipedia.org/wiki/Bloody%20flag
|
Often called bloody flags or the bloody red (among other names), pattern-free red flags were the traditional nautical symbol in European waters, prior to the invention of flag signal codes, for signifying an intention to give battle and that 'no quarter would be given', indicating that surrender would not be accepted and all prisoners would be killed; conversely, it also meant that the one flying the flag would fight to the last man (defiance to the death). Such flags were traditionally plain, but examples with motifs also existed, such as the skull and crossbones on some pirate examples (see Jolly Roger).
The origin of bloody flags is unknown, but deep red coloring is strongly associated with the color of blood and thus symbolises suffering and combat. Historical sources mention bloody flags being used by Normans as early as the 13th century, but red-painted shields were used similarly by seafaring Norse warriors in previous centuries. Since the late 18th century, the bloody flag has been transformed into the political flag of revolution (see Red flag (politics)).
Pirate usage
During the Golden Age of Piracy, pirates used bloody flags in combination with the "black flag of piracy" to signal demands and threats to their victims. Initially, a false national flag would be flown as a way to close the gap between the pirate ship and the victim ship. When the victim ship came within gun range, the black flag would be raised, signaling that "quarter would be given" if the enemy surrendered, meaning the pirates would spare the victims after rifling through their cargo. To signal "yes", the victim ship would have to take down their own flag (in naval terminology, "striking their flag"). However, if they refused or were too slow, the pirates would raise the bloody flag, signaling that the cargo would be taken by force and that "no quarter would be given" to prisoners. If the pirates had several ships, the raising of the bloody flag would also act as the signal "to attack"
|
https://en.wikipedia.org/wiki/Diurnal%20enuresis
|
Diurnal enuresis is daytime wetting (functional daytime urinary incontinence). Nocturnal enuresis is nighttime wetting. Enuresis is defined as the involuntary voiding of urine beyond the age of anticipated control. Both of these conditions can occur at the same time, although many children with nighttime wetting will not have wetting during the day. Children with daytime wetting may have frequent urination, have urgent urination or dribble after urinating.
The DSM-5 classifies enuresis as an elimination disorder, and as such it may be defined as the involuntary or voluntary elimination of urine into inappropriate places. A patient must be at a developmental level equivalent to at least the chronological age of a 5-year-old in order to be diagnosed with enuresis (in other words, it is not abnormal for a child below the age of 5).
The patient must either experience a frequency of inappropriate voiding at least twice a week for a period of at least 3 consecutive months, or experience clinically significant distress or impairment in social, occupational or other important areas of functioning, in order to be diagnosed with enuresis. These symptoms must not be due to any underlying medical condition (e.g. a child who wets the bed because their kidneys produce too much urine does not have enuresis; they have kidney disease, which is causing the inappropriate urination). Also, these symptoms must not be due exclusively to the direct physiological effect of a substance (such as a diuretic or antipsychotic).
Causes
Common causes include, but are not limited to:
Incomplete emptying of the bladder
Irritable bladder
Constipation
Stress
Urinary tract infection
Urgency (not "making it" to the bathroom in time)
Anatomic abnormality
Poor toileting habits
Small bladder capacity
Medical conditions like overactive bladder disorder
Management
Management approaches include reassuring families that the child is not wetting their pants on purpose and treatment should include positi
|
https://en.wikipedia.org/wiki/Cline%20%28biology%29
|
In biology, a cline (from the Greek κλίνειν klinein, meaning "to lean") is a measurable gradient in a single characteristic (or biological trait) of a species across its geographical range. First coined by Julian Huxley in 1938, the cline usually has a genetic (e.g. allele frequency, blood type), or phenotypic (e.g. body size, skin pigmentation) character. Clines can show smooth, continuous gradation in a character, or they may show more abrupt changes in the trait from one geographic region to the next.
A cline refers to a spatial gradient in a specific, singular trait, rather than a collection of traits; a single population can therefore have as many clines as it has traits, at least in principle. Additionally, Huxley recognised that these multiple independent clines may not act in concordance with each other. For example, it has been observed that in Australia, birds generally become smaller the further towards the north of the country they are found. In contrast, the intensity of their plumage colouration follows a different geographical trajectory, being most vibrant where humidity is highest and becoming less vibrant further into the arid centre of the country.
Because of this, clines were defined by Huxley as being an "auxiliary taxonomic principle"; that is, clinal variation in a species is not awarded taxonomic recognition in the way subspecies or species are.
While the terms "ecotype" and "cline" are sometimes used interchangeably, they do in fact differ in that "ecotype" refers to a population which differs from other populations in a number of characters, rather than the single character that varies amongst populations in a cline.
Drivers and the evolution of clines
Clines are often cited to be the result of two opposing drivers: selection and gene flow (also known as migration). Selection causes adaptation to the local environment, resulting in different genotypes or phenotypes being favoured in different environments. This diversifying force is c
|
https://en.wikipedia.org/wiki/MISRA%20C
|
MISRA C is a set of software development guidelines for the C programming language developed by The MISRA Consortium. Its aims are to facilitate code safety, security, portability and reliability in the context of embedded systems, specifically those systems programmed in ISO C / C90 / C99.
There is also a set of guidelines for MISRA C++ not covered by this article.
History
Draft: 1997
First edition: 1998 (rules, required/advisory)
Second edition: 2004 (rules, required/advisory)
Third edition: 2012 (directives; rules, Decidable/Undecidable)
MISRA compliance: 2016, updated 2020
For the first two editions of MISRA C (1998 and 2004), all Guidelines were considered as Rules. With the publication of MISRA C:2012, a new category of Guideline was introduced: the Directive, whose compliance is more open to interpretation, or which relates to process or procedural matters.
Adoption
Although originally specifically targeted at the automotive industry, MISRA C has evolved as a widely accepted model for best practices by leading developers in sectors including automotive, aerospace, telecom, medical devices, defense, railway, and others.
For example:
The Joint Strike Fighter project C++ Coding Standards are based on MISRA-C:1998.
The NASA Jet Propulsion Laboratory C Coding Standards are based on MISRA-C:2004.
ISO 26262 Functional Safety - Road Vehicles cites MISRA C as being an appropriate sub-set of the C language:
ISO 26262-6:2011 Part 6: Product development at the software level cites MISRA-C:2004 and MISRA AC AGC.
ISO 26262-6:2018 Part 6: Product development at the software level cites MISRA C:2012.
The AUTOSAR General Software Specification (SRS_BSW_00007) likewise cites MISRA C:
The AUTOSAR 4.2 General Software Specification requires that, if the BSW Module implementation is written in the C language, it shall conform to the MISRA C:2004 Standard.
The AUTOSAR 4.3 General Software Specification requires that If the BSW Module implementation is written in C la
|
https://en.wikipedia.org/wiki/Bob%20Pease
|
Robert Allen Pease (August 22, 1940 – June 18, 2011) was an electronics engineer known for analog integrated circuit (IC) design and as the author of technical books and articles about electronic design. He designed several very successful "best-seller" ICs, many of them in continuous production for multiple decades. These include the LM331 voltage-to-frequency converter and the LM337 adjustable negative voltage regulator (the complement to the LM317).
Life and career
Pease was born on August 22, 1940, in Rockville, Connecticut. He attended Northfield Mount Hermon School in Massachusetts, and subsequently obtained a Bachelor of Science in Electrical Engineering (BSEE) degree from Massachusetts Institute of Technology in 1961.
He started work in the early 1960s at George A. Philbrick Researches (GAP-R). GAP-R pioneered the first reasonable-cost, mass-produced operational amplifier (op-amp), the K2-W. At GAP-R, Pease developed many high-performance op-amps, built with discrete solid-state components.
In 1976, Pease moved to National Semiconductor Corporation (NSC) as a Design and Applications Engineer, where he began designing analog monolithic ICs, as well as design reference circuits using these devices. He had advanced to Staff Engineer by the time of his departure in 2009. During his tenure at NSC, he began writing a popular continuing monthly column called "Pease Porridge" in Electronic Design about his experiences in the world of electronic design and application.
The last project Pease worked on was the THOR-LVX (photo-nuclear) microtron Advanced Explosives contraband Detection System: "A Dual-Purpose Ion-Accelerator for Nuclear-Reaction-Based Explosives-and SNM-Detection in Massive Cargo".
Pease was the author of eight books, including Troubleshooting Analog Circuits, and he held 21 patents. Although his name was listed as "Robert A. Pease" in formal documents, he preferred to be called "Bob Pease" or to use his initials "RAP" in his magazine columns.
His other
|
https://en.wikipedia.org/wiki/Jury%20research
|
Jury or juror research is an umbrella term for the use of research methods in an attempt to gain some understanding of the juror experience in the courtroom and how jurors individually and collectively come to a determination about the guilt or otherwise of the accused.
Brief history
Historically, juries have played a significant role in the determination of issues that could not be managed via 'general social interactions' or ones which required punitive measures, retribution and/or compensation. The role of jurors and juries, however, has changed over the centuries and has generally been moulded by social and cultural forces embedded in the wider communities in which they have evolved. Although the role of juries and jurors has a somewhat chequered history, 'the jury, in one form or the other, became the formal method of proof of the guilt or [otherwise] of a person on trial', and juries remain one of the 'cornerstones' of the criminal justice system in many countries.
There are however, many debates about the efficacy of the jury system and the ability of jurors to adequately determine the guilt or otherwise of the accused. Some argue that lay individuals are incapable of digesting the often complex forensic evidence presented during a trial, others argue that any misunderstanding of the evidence is a flaw in legal cross examination and summing up. Many observe that the juror and the accused seldom can be considered 'peers' which is historically considered a fundamental precept of jury makeup. Others consider the jury system to be inherently flawed as a result of the humanity of jurors. They cite incidents in which the Judiciary have become aware of Juror assumptions made in the absence of supporting evidence, the unidentified effect on Jurors of stereotyping, culture, gender, age, education etc., which can and have influenced their ability to make a decision from an objective stance. These arguments and debates are founded in legal and psychological practice
|
https://en.wikipedia.org/wiki/Bernt%20%C3%98ksendal
|
Bernt Karsten Øksendal (born 10 April 1945 in Fredrikstad) is a Norwegian mathematician. He completed his undergraduate studies at the University of Oslo, working under Otte Hustad. He obtained his PhD from University of California, Los Angeles in 1971; his thesis was titled Peak Sets and Interpolation Sets for Some Algebras of Analytic Functions and was supervised by Theodore Gamelin. In 1991, he was appointed as a professor at the University of Oslo. In 1992, he was appointed as an adjunct professor at the Norwegian School of Economics and Business Administration, Bergen, Norway.
His main field of interest is stochastic analysis, including stochastic control, optimal stopping, stochastic ordinary and partial differential equations and applications, particularly to physics, biology and finance. For his contributions to these fields, he was awarded the Nansen Prize in 1996. He has been a member of the Norwegian Academy of Science and Letters since 1996. He was elected as a member of the Norwegian Royal Society of Sciences in 2002.
As of February 2003, Øksendal had over 130 published works, including nine books. In 1982 he taught a postgraduate course in stochastic calculus at the University of Edinburgh, which led to the book Stochastic Differential Equations: An Introduction with Applications. In 2005, he taught a course in stochastic calculus at the African Institute for Mathematical Sciences in Cape Town.
He resided at Hosle. He married Eva Aursland in June 1968. They have three children: Elise (born 1971), Anders (1974) and Karina (1981).
Awards and Appointments
Visiting Research Fellow at the University of Edinburgh, Scotland. The fellowship was awarded by the Science and Engineering Research Council, UK (Jan.-June 1982).
Norwegian Mathematical Society (Chairman 1987-1989)
Appointed VISTA Professor by The Norwegian Academy of Sciences (1992-1996)
Awarded Nansen Prize (1996)
Elected member of Royal Norwegian Science Society (2002)
European Research Council's Advanced Grant for Innovations in Stochastic Analysis and Appl
|
https://en.wikipedia.org/wiki/Jaro%E2%80%93Winkler%20distance
|
In computer science and statistics, the Jaro–Winkler similarity is a string metric measuring an edit distance between two sequences. It is a variant of the Jaro distance metric (1989, Matthew A. Jaro) proposed in 1990 by William E. Winkler.
The Jaro–Winkler distance uses a prefix scale which gives more favourable ratings to strings that match from the beginning for a set prefix length ℓ.
The higher the Jaro–Winkler distance for two strings is, the less similar the strings are. The score is normalized such that 0 means an exact match and 1 means there is no similarity. The original paper actually defined the metric in terms of similarity, so the distance is defined as the inversion of that value (distance = 1 − similarity).
Although often referred to as a distance metric, the Jaro–Winkler distance is not a metric in the mathematical sense of that term because it does not obey the triangle inequality.
Definition
Jaro similarity
The Jaro similarity $sim_j$ of two given strings $s_1$ and $s_2$ is

$$sim_j = \begin{cases} 0 & \text{if } m = 0, \\[4pt] \dfrac{1}{3}\left(\dfrac{m}{|s_1|} + \dfrac{m}{|s_2|} + \dfrac{m-t}{m}\right) & \text{otherwise,} \end{cases}$$

where:
$|s_i|$ is the length of the string $s_i$;
$m$ is the number of matching characters (see below);
$t$ is the number of transpositions (see below).
The Jaro similarity score is 0 if the strings do not match at all, and 1 if they are an exact match. In the first step, each character of $s_1$ is compared with all its matching characters in $s_2$. Two characters from $s_1$ and $s_2$ respectively are considered matching only if they are the same and not farther than $\left\lfloor\frac{\max(|s_1|,|s_2|)}{2}\right\rfloor - 1$ characters apart. For example, the following two nine-character strings, FAREMVIEL and FARMVILLE, have 8 matching characters. 'F', 'A' and 'R' are in the same position in both strings. Also 'M', 'V', 'I', 'E' and 'L' are within three (the result of $\lfloor 9/2\rfloor - 1 = 3$) characters away. If no matching characters are found, then the strings are not similar and the algorithm terminates by returning a Jaro similarity score of 0.
If non-zero matching characters are found, the next step is to find the number of transpositions. Transposition is the number of matching characters
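The matching and transposition counts above translate directly into code. A minimal sketch of the Jaro similarity (the Winkler variant would additionally reward a common prefix of up to four characters):

```python
def jaro(s1: str, s2: str) -> float:
    if s1 == s2:
        return 1.0
    if not s1 or not s2:
        return 0.0
    window = max(len(s1), len(s2)) // 2 - 1   # matching window
    m1 = [False] * len(s1)                    # matched flags for s1
    m2 = [False] * len(s2)                    # matched flags for s2
    matches = 0
    for i, c in enumerate(s1):                # count matching characters
        lo, hi = max(0, i - window), min(len(s2), i + window + 1)
        for j in range(lo, hi):
            if not m2[j] and s2[j] == c:
                m1[i] = m2[j] = True
                matches += 1
                break
    if matches == 0:
        return 0.0
    t, j = 0, 0                               # count transpositions
    for i in range(len(s1)):
        if m1[i]:
            while not m2[j]:
                j += 1
            if s1[i] != s2[j]:
                t += 1
            j += 1
    t //= 2
    return (matches / len(s1) + matches / len(s2)
            + (matches - t) / matches) / 3

# The nine-character example above: m = 8 and t = 1 give (8/9 + 8/9 + 7/8)/3.
print(round(jaro("FAREMVIEL", "FARMVILLE"), 4))  # 0.8843
```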
|
https://en.wikipedia.org/wiki/Ramogen
|
The term ramogen refers to a biological factor, typically a growth factor or other protein, that causes a developing biological cell or tissue to branch in a tree-like manner. Ramogenic molecules are branch-promoting molecules found throughout the human body.
Brief History
The term was first coined (from the Latin ramus, 'branch', and the Greek genesis, 'creation') in an article about kidney development by Davies and Davey (Pediatr Nephrol. 1999 Aug;13(6):535-41). In the article, Davies and Davey describe the existence of "ramogens" in the kidney such as glial cell line-derived neurotrophic factor (GDNF), neurturin and persephin. The term has now passed into general use in the technical literature concerned with branching of biological structures.
Function
A ramogen is a biochemical signal that enables the creation of a physiological branch. The signal can be in the form of a growth factor or a hormone that makes a tube branch. One specific example would be the hormone that forms the simple tube through which the mammary glands begin to form causing the formation of a highly branched “tree” of milk ducts in females.
Types of Ramogens
Mesenchyme-derived ramogens are found throughout the body and serve as chemoattractants to branching tissues.
An example of how this works was found in a study using a bead soaked in the renal ramogen GDNF. When this bead was placed next to a kidney sample in culture, the nearby ureteric parts branched and grew toward it.
Another example of a ramogen in use was found in the lungs. The existence of Sprouty2 in the body is demonstrated in response to the signaling of the ramogen FGF10, serving as an inhibitor of branching.
The following table is a list of Key Ramogens in Branching Organs of a mouse species.
Studies involving Ramogens
The physiological capabilities of ramogens are still being investigated in medical studies involving kidney function in mice.
In development maturing nephrons and stroma in the body may cease to prod
|
https://en.wikipedia.org/wiki/Sound%20particle
|
In the context of particle displacement and velocity, a sound particle is an imaginary infinitesimal volume of a medium that shares the movement of the medium in response to the presence of sound at a specified point or in a specified region. Sound particles are not molecules in the physical or chemical sense; they do not have defined physical or chemical properties or the temperature-dependent kinetic behavior of ordinary molecules. Sound particles are, then, indefinitely small (small compared to the wavelength of sound) so that their movement truly represents the movement of the medium in their locality. They exist in the mind’s eye to enable this movement to be visualized and described quantitatively. Assuming the medium as a whole to be at rest, sound particles are imagined to vibrate about fixed points.
See also
Sound
Particle displacement
Particle velocity
Particle acceleration
|
https://en.wikipedia.org/wiki/Snoop%20%28software%29
|
Snoop software is a command line packet analyzer included in the Solaris Operating System created by Sun Microsystems. Its source code was available via the OpenSolaris project.
See also
Comparison of packet analyzers
Network tap
|
https://en.wikipedia.org/wiki/Merkur%20%28toy%29
|
Merkur refers to a metal construction set built in Czechoslovakia (later the Czech Republic). It was also referred to as Constructo or Build-O in English-speaking countries and Tecc in the Netherlands.
Unlike Erector/Meccano, which was based on Imperial/customary measurements, Merkur used metric units. Building parts carry a 1×1 cm raster of connection holes and are connected by M3.5 screws.
The brand was launched in 1920 and ran until 1940, when World War II put a halt to production; it was resumed in 1947. The private company was closed down and its assets nationalised by the Communist Czechoslovak state in 1953. Merkur toys were made throughout the communist period and were exported all over Europe. The company was privatized by some of the former employees after 1989, but went into insolvency in 1993. Later, Jaromír Kříž bought the company and over three years restored production, saving this renowned Czech toy.
In 1961, Otto Wichterle used a Merkur-based apparatus for the experimental production of the first soft contact lenses.
The factory and Merkur museum are located in Police nad Metují, Czech Republic.
Merkur also produces a wide range of toys, including metal 0 scale model trains and steam engines.
|
https://en.wikipedia.org/wiki/History%20of%20trigonometry
|
Early study of triangles can be traced to the 2nd millennium BC, in Egyptian mathematics (Rhind Mathematical Papyrus) and Babylonian mathematics. Trigonometry was also prevalent in Kushite mathematics.
Systematic study of trigonometric functions began in Hellenistic mathematics, reaching India as part of Hellenistic astronomy. In Indian astronomy, the study of trigonometric functions flourished in the Gupta period, especially due to Aryabhata (sixth century CE), who discovered the sine function. During the Middle Ages, the study of trigonometry continued in Islamic mathematics, by mathematicians such as Al-Khwarizmi and Abu al-Wafa. It became an independent discipline in the Islamic world, where all six trigonometric functions were known. Translations of Arabic and Greek texts led to trigonometry being adopted as a subject in the Latin West beginning in the Renaissance with Regiomontanus. The development of modern trigonometry shifted during the western Age of Enlightenment, beginning with 17th-century mathematics (Isaac Newton and James Stirling) and reaching its modern form with Leonhard Euler (1748).
Etymology
The term "trigonometry" was derived from Greek τρίγωνον trigōnon, "triangle" and μέτρον metron, "measure".
The modern words "sine" and "cosine" are derived from the Latin word via mistranslation from Arabic (see Sine and cosine#Etymology). Particularly Fibonacci's sinus rectus arcus proved influential in establishing the term.
The word tangent comes from Latin tangens, meaning "touching", since the line touches the circle of unit radius, whereas secant stems from Latin secans, "cutting", since the line cuts the circle.
The prefix "co-" (in "cosine", "cotangent", "cosecant") is found in Edmund Gunter's Canon triangulorum (1620), which defines the cosinus as an abbreviation for the sinus complementi (sine of the complementary angle) and proceeds to define the cotangens similarly.
The words "minute" and "second" are derived from the Latin phrases partes minutae primae
|
https://en.wikipedia.org/wiki/Phaeton%20%28hypothetical%20planet%29
|
Phaeton (alternatively Phaethon or Phaëton) was a hypothetical planet posited under the Titius–Bode law to have existed between the orbits of Mars and Jupiter, the destruction of which supposedly led to the formation of the asteroid belt (including the dwarf planet Ceres). The hypothetical planet was named for Phaethon, the son of the sun god Helios in Greek mythology, who attempted to drive his father's solar chariot for a day with disastrous results and was ultimately destroyed by Zeus.
Phaeton hypothesis
According to the Titius–Bode law, proposed in the 1700s to explain the spacing of planets in a solar system, a planet may once have existed between Mars and Jupiter. After learning of the regular sequence discovered by the German astronomer and mathematician Johann Daniel Titius, astronomer Johann E. Bode urged a search for a fifth planet corresponding to a gap in the sequence. (1) Ceres, the largest asteroid in the asteroid belt (now considered a dwarf planet), was serendipitously discovered in 1801 by the Italian Giuseppe Piazzi and found to closely match the "empty" position in Titius' sequence, which led many to believe it to be the "missing planet". However, in 1802 astronomer Heinrich Wilhelm Matthäus Olbers discovered and named the asteroid (2) Pallas, a second object in roughly the same orbit as (1) Ceres.
Olbers proposed that these two discoveries were fragments of a disrupted planet that had formerly orbited the Sun, and predicted that more of these pieces would be found. The discovery of the asteroids (3) Juno by Karl Ludwig Harding and (4) Vesta by Olbers buttressed his hypothesis. In 1823, a German linguist and retired teacher called Olbers' destroyed planet Phaëthon, linking it to the Greek myths and legends about Phaethon and others.
In 1927, Franz Xaver Kugler wrote a short book titled Sibyllinischer Sternkampf und Phaëthon in naturgeschichtlicher Beleuchtung (The Sybilline Battle of the Stars and Phaeton Seen
|
https://en.wikipedia.org/wiki/Bearing%20capacity
|
In geotechnical engineering, bearing capacity is the capacity of soil to support the loads applied to the ground. The bearing capacity of soil is the maximum average contact pressure between the foundation and the soil which should not produce shear failure in the soil. Ultimate bearing capacity is the theoretical maximum pressure which can be supported without failure; allowable bearing capacity is the ultimate bearing capacity divided by a factor of safety. Sometimes, on soft soil sites, large settlements may occur under loaded foundations without actual shear failure occurring; in such cases, the allowable bearing capacity is based on the maximum allowable settlement. In other words, the ultimate bearing capacity is the maximum pressure that can be applied to the soil before it fails, while the allowable bearing pressure is that value reduced by a factor of safety.
There are three modes of failure that limit bearing capacity: general shear failure, local shear failure, and punching shear failure.
Bearing capacity depends upon the shear strength of the soil as well as the shape, size, depth and type of foundation.
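For a shallow strip footing, these dependencies are commonly collected into a bearing-capacity equation of the classical Terzaghi form (shown here as a representative textbook expression, not the only formulation in use):

$$q_{ult} = c\,N_c + \gamma D\,N_q + \tfrac{1}{2}\,\gamma B\,N_\gamma,$$

where $c$ is the soil cohesion, $\gamma$ the soil unit weight, $D$ the founding depth, $B$ the footing width, and $N_c$, $N_q$, $N_\gamma$ are dimensionless bearing-capacity factors that depend on the soil's friction angle.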
Introduction
A foundation is the part of a structure which transmits the weight of the structure to the ground. All structures constructed on land are supported on foundations. A foundation is a connecting link between the structure proper and the ground which supports it.
The bearing strength characteristics of foundation soil are a major design criterion for civil engineering structures.
|
https://en.wikipedia.org/wiki/Palindromic%20rheumatism
|
Palindromic rheumatism (PR) is a syndrome characterised by recurrent, self-resolving inflammatory attacks in and around the joints, consisting of arthritis or periarticular soft tissue inflammation. The onset is often acute, with sudden and rapidly developing attacks or flares. There is pain, redness, swelling, and disability of one or multiple joints. The interval between recurrent palindromic attacks and the length of an attack are extremely variable, from a few hours to days. Attacks may become more frequent with time, but there is no joint damage after attacks. It is thought to be an autoimmune disease, possibly an abortive form of rheumatoid arthritis.
Presentation
The exact prevalence of palindromic rheumatism in the general population is unknown, and this condition is often considered a rare disease by nonrheumatologists. However, a recent Canadian study showed that the incidence of PR in a cohort of incident arthritis was one case of PR for every 1.8 cases of rheumatoid arthritis (RA). The incidence of PR is less than that of RA but is not as rare as previously thought.
Palindromic rheumatism is a syndrome presenting with inflammatory para-arthritis (soft tissue rheumatism) and inflammatory arthritis, both of which cause sudden inflammation in one or several joints or in the soft tissue around joints. The flares usually present with mono- or oligo-articular involvement, have onset over hours and last a few hours to a few days, and then resolve completely. However, episodes of recurrence form a pattern, with symptom-free periods between attacks lasting for weeks to months. The most commonly involved joints are the knees, metacarpophalangeal and proximal interphalangeal joints of the hands or feet. Constitutionally, there may or may not be a fever and swelling of the joints. The soft tissues are involved, with swelling of the periarticular tissues, especially heel pads and finger pads. Nodules may be found in the subcutaneous tissues. The frequency of attacks
|
https://en.wikipedia.org/wiki/Ab%20initio%20multiple%20spawning
|
The ab initio multiple spawning, or AIMS, method is a time-dependent formulation of quantum chemistry.
In AIMS, nuclear dynamics and electronic structure problems are solved simultaneously. Quantum mechanical effects in the nuclear dynamics are included, especially the nonadiabatic effects which are crucial in modeling dynamics on multiple electronic states.
The AIMS method makes it possible to describe photochemistry from first principles molecular dynamics, with no empirical parameters. The method has been applied to two molecules of interest in organic photochemistry - ethylene and cyclobutene.
The photodynamics of ethylene involves both covalent and ionic electronic excited states, and the return to the ground state proceeds through a pyramidalized geometry. For the photoinduced ring opening of cyclobutene, it is shown that the disrotatory motion predicted by the Woodward–Hoffmann rules is established within the first 50 fs after optical excitation.
The method was developed by chemistry professor Todd Martinez.