https://en.wikipedia.org/wiki/Low-frequency%20radar
|
Low-frequency radar is radar which uses frequencies lower than 1 GHz, such as L-band, UHF, VHF, and HF, as opposed to the usual radar bands, which range from 2 GHz to 40 GHz. The radar cross section of any target depends on the frequency transmitted by the radar. Below 900 MHz the target's radar cross section increases exponentially; however, the increased radar cross section means that there is also much more radar return from undesirable sources, such as cloud cover and rain (cf. weather radar). It is for this reason that radars traditionally use much higher frequencies, one exception being the radars operated in the 3–30 MHz band, which are used as over-the-horizon radar stations because signals in that range are able to reflect off the ionosphere.
Interest has recently grown in developing radars which operate at these low frequencies to help counter advances in stealth technology, applying advanced digital signal processing in these bands to reduce radar clutter. If the radar wavelength is roughly twice the size of the target, a half-wave resonance effect can still generate a significant return. However, low-frequency radar is limited by the shortage of unused frequencies, by the lack of accuracy given the long wavelength, and by the radar's physical size, which makes it difficult to transport and makes it an easy target. A long-wave radar may detect a target and roughly locate it, but not identify it, and the location information lacks sufficient accuracy for weapon targeting.
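As a rough numerical illustration of the half-wave resonance effect just described, a minimal sketch using the free-space relation λ = c/f; the target sizes below are invented for illustration, not values from the article:

```python
# Sketch: frequency at which a target of length L is near half-wave resonance,
# i.e. the radar wavelength is roughly twice the target size (lambda ~= 2L).
C = 299_792_458.0  # speed of light, m/s

def half_wave_resonant_frequency_hz(target_length_m: float) -> float:
    """Frequency whose free-space wavelength equals twice the target length."""
    wavelength_m = 2.0 * target_length_m
    return C / wavelength_m

# Hypothetical target dimensions (assumptions for illustration only).
for name, length_m in [("small drone", 0.5), ("fighter aircraft tail fin", 3.0)]:
    f_hz = half_wave_resonant_frequency_hz(length_m)
    print(f"{name}: ~{f_hz / 1e6:.0f} MHz")  # both land in the VHF/UHF bands
```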
|
https://en.wikipedia.org/wiki/Nick%20Whitaker
|
Nick Whitaker (born October 1, 1988) is an American actor who is known for playing the lead role in Benji: Off the Leash!.
Career
Whitaker's first role was as Chase Patterson in Message in a Cell Phone. He is an active member of the Church of Jesus Christ of Latter-day Saints and has appeared in many church-related movies, including starring as Joseph Smith in the major film from the LDS Motion Picture Studios, Joseph Smith: The Prophet of the Restoration, which is currently playing at the Joseph Smith Memorial Building and various visitor centres across the world. He has also appeared in Brigham City, Money or The Mission, and the children's film Bug Off.
Whitaker has also appeared in the Disney Channel movies High School Musical and Read It and Weep, and he had a minor role as one of the basketball players in Hatching Pete.
In Read It and Weep he plays Lenny Bartlett, the older brother of Jamie Bartlett, who is the main character. In the movie he sings the song "I Will Be Around".
Filmography
References
External links
Official website
Living people
1988 births
American male child actors
American Latter Day Saints
American male film actors
American male television actors
21st-century American male actors
21st-century American singers
21st-century American male singers
|
https://en.wikipedia.org/wiki/Differential%20algebra
|
In mathematics, differential algebra is, broadly speaking, the area of mathematics that studies differential equations and differential operators as algebraic objects, with a view to deriving properties of differential equations and operators without computing the solutions, much as polynomial algebras are used for the study of algebraic varieties, which are solution sets of systems of polynomial equations. Weyl algebras and Lie algebras may be considered as belonging to differential algebra.
More specifically, differential algebra refers to the theory introduced by Joseph Ritt in 1950, in which differential rings, differential fields, and differential algebras are rings, fields, and algebras equipped with finitely many derivations.
A natural example of a differential field is the field of rational functions in one variable over the complex numbers, $\mathbb{C}(t)$, where the derivation is differentiation with respect to $t$. More generally, every differential equation may be viewed as an element of a differential algebra over the differential field generated by the (known) functions appearing in the equation.
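For concreteness, the derivations referred to above are additive maps satisfying the Leibniz product rule; the display below records the standard definition (the symbols $\delta$ and $R$ are generic notation chosen here, not notation fixed by the article). A derivation $\delta$ on a ring $R$ is a map $\delta : R \to R$ such that, for all $a, b \in R$,
$$\delta(a + b) = \delta(a) + \delta(b), \qquad \delta(ab) = a\,\delta(b) + \delta(a)\,b,$$
and a differential ring is a ring equipped with one or more such maps; differential fields and differential algebras are defined analogously.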
History
Joseph Ritt developed differential algebra because he viewed attempts to reduce systems of differential equations to various canonical forms as an unsatisfactory approach. However, the success of algebraic elimination methods and algebraic manifold theory motivated Ritt to consider a similar approach for differential equations. His efforts led to an initial pa
|
https://en.wikipedia.org/wiki/Cavity
|
Cavity may refer to:
Biology and healthcare
Body cavity, a fluid-filled space in many animals where organs typically develop
Gastrovascular cavity, the primary organ of digestion and circulation in cnidarians and flatworms
Dental cavity or tooth decay, damage to the structure of a tooth
Lung cavity, an air-filled space within the lung
Nasal cavity, a large, air-filled space above and behind the nose in the middle of the face
Radio frequency resonance
Microwave cavity or RF cavity, a cavity resonator in the radio frequency range, for example used in particle accelerators
Optical cavity, the cavity resonator of a laser
Resonant cavity, a device designed to select for waves of particular wavelengths
Other uses
Cavity (band), a sludge metal band from Miami, Florida
Cavity method, a mathematical method to solve some mean field type of models
Cavity wall, a wall consisting of two skins with a cavity
See also
Cavitation, the phenomenon of partial vacuums forming in fluid, for example, in propellers
Cavitary pneumonia, a type of pneumonia in which a hole is formed in the lung
Cavity Search (disambiguation)
Hollow (disambiguation)
|
https://en.wikipedia.org/wiki/CFOX-FM
|
CFOX-FM (identified on air and in print as CFOX) is a Canadian radio station in Vancouver, British Columbia. It broadcasts on an assigned frequency of 99.3 MHz on the FM band with an effective radiated power of 100,000 watts (peak). The transmitter is located on Mount Seymour in the District of North Vancouver, with studios located in Downtown Vancouver, in the TD Tower. The station is owned by Corus Entertainment. CFOX has a modern rock format and reports to Mediabase as a Canadian alternative rock station.
History
CFOX began broadcasting on October 15, 1964, on 99.3 MHz with 100,000 watts, under the call sign CKLG-FM (not to be confused with CHLG-FM, the newer "LG" station in Vancouver on 104.3 MHz). Transmissions originally came from the south slope of Fromme Mountain in North Vancouver.
CKLG initially began with an easy listening format, but in the fall of 1967, it started experimenting with rock music at night. In October that year, CKLG program director Frank Callaghan hired record store owner Bill Reiter (who later went on to become part of the Dr. Bundolo's Pandemonium Medicine Show comedy troupe) to host the jazz/blues program Groovin' Blue on Saturday evenings. CKLG-FM soon shifted to become Canada's first full-time FM rock station on March 16, 1968, with the expansion of Groovin' Blue to six nights a week and the addition of tracks from rock, folk and popular albums. In 1970, CKLG-FM added a two-hour daily talk show hosted by Allen Garr, which ran on the station until 19
|
https://en.wikipedia.org/wiki/Weiler%E2%80%93Atherton%20clipping%20algorithm
|
The Weiler–Atherton algorithm is a polygon-clipping algorithm. It is used in areas such as computer graphics and game development where clipping of polygons is needed. It allows clipping of a subject or candidate polygon by an arbitrarily shaped clipping polygon/area/region.
It is generally applicable only in 2D. However, it can be used in 3D through visible surface determination and with improved efficiency through Z-ordering.
Preconditions
Before being applied to a polygon, the algorithm requires several preconditions to be fulfilled:
Candidate polygons need to be oriented clockwise.
Candidate polygons should not be self-intersecting (i.e., re-entrant).
The algorithm can support holes (as counter-clockwise polygons wholly inside their parent polygon), but requires additional algorithms to decide which polygons are holes, after which merging of the polygons can be performed using a variant of the algorithm.
Algorithm
Given polygon A as the clipping region and polygon B as the subject polygon to be clipped, the algorithm consists of the following steps:
List the vertices of the clipping-region polygon A and those of the subject polygon B.
Label the listed vertices of subject polygon B as either inside or outside of clipping region A.
Find all the polygon intersections and insert them into both lists, linking the lists at the intersections.
Generate a list of "inbound" intersections – the intersections where the vector from the intersection to the subsequent vertex of
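The vertex-labeling step above (step 2) is typically implemented with a standard point-in-polygon test. The following is a minimal sketch of that one step using ray casting; the function names and sample polygons are my own, and this is not a full Weiler–Atherton implementation:

```python
# Sketch of step 2: label subject-polygon vertices as inside/outside the
# clip region using a ray-casting point-in-polygon test (even-odd rule).

def point_in_polygon(pt, polygon):
    """Return True if pt lies inside polygon (a list of (x, y) vertices)."""
    x, y = pt
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending to the right of pt.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def label_vertices(subject, clip_region):
    """Label each vertex of the subject polygon relative to the clip region."""
    return [(v, "inside" if point_in_polygon(v, clip_region) else "outside")
            for v in subject]

clip = [(0, 0), (0, 4), (4, 4), (4, 0)]      # clipping region A
subject = [(2, 2), (2, 6), (6, 6), (6, 2)]   # subject polygon B
print(label_vertices(subject, clip))         # (2, 2) is inside; the rest are not
```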
|
https://en.wikipedia.org/wiki/Quantum%20fluid
|
A quantum fluid refers to any system that exhibits quantum mechanical effects at the macroscopic level such as superfluids, superconductors, ultracold atoms, etc. Typically, quantum fluids arise in situations where both quantum mechanical effects and quantum statistical effects are significant.
Most matter is either solid or gaseous (at low densities) near absolute zero. However, for the cases of helium-4 and its isotope helium-3, there is a pressure range where they can remain liquid down to absolute zero because the amplitude of the quantum fluctuations experienced by the helium atoms is larger than the inter-atomic distances.
In the case of solid quantum fluids, only a fraction of the electrons or protons behave like a "fluid". One prominent example is superconductivity, where quasiparticles consisting of pairs of electrons bound via a phonon-mediated interaction act as bosons, which are then capable of collapsing into the ground state to establish a supercurrent with a resistivity near zero.
Derivation
Quantum mechanical effects become significant for physics in the range of the de Broglie wavelength. For condensed matter, this is when the de Broglie wavelength of a particle is greater than the spacing between the particles in the lattice that comprises the matter.
The de Broglie wavelength associated with a massive particle is
$$\lambda_{\mathrm{dB}} = \frac{h}{p},$$
where h is the Planck constant. The momentum can be found from the kinetic theory of gases, where the mean kinetic energy per particle satisfies
$$\frac{p^2}{2m} = \frac{3}{2} k_B T, \qquad \text{so} \qquad p = \sqrt{3 m k_B T}.$$
Here, the temperature can be found as
$$T = \frac{p^2}{3 m k_B}.$$
Of course, we c
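To make the criterion concrete, a minimal sketch evaluating the kinetic-theory estimate λ = h/√(3 m k_B T) for helium-4; the comparison spacing of 0.35 nm is an assumed round figure for the interatomic distance in liquid helium, not a value from the article:

```python
# Sketch: thermal de Broglie wavelength of helium-4, using p = sqrt(3 m k_B T)
# as in the kinetic-theory estimate above.
import math

H = 6.62607015e-34                    # Planck constant, J*s
KB = 1.380649e-23                     # Boltzmann constant, J/K
M_HE4 = 4.002602 * 1.66053906660e-27  # helium-4 atomic mass, kg

def de_broglie_wavelength_m(temperature_k: float) -> float:
    momentum = math.sqrt(3.0 * M_HE4 * KB * temperature_k)
    return H / momentum

SPACING_M = 0.35e-9  # assumed interatomic spacing in liquid helium (illustrative)
for t in (300.0, 4.2, 1.0):
    lam = de_broglie_wavelength_m(t)
    print(f"T = {t:6.1f} K: lambda = {lam * 1e9:.3f} nm "
          f"({'>' if lam > SPACING_M else '<'} spacing)")
# Quantum effects become significant once lambda exceeds the spacing.
```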
|
https://en.wikipedia.org/wiki/Hamiltonian%20vector%20field
|
In mathematics and physics, a Hamiltonian vector field on a symplectic manifold is a vector field defined for any energy function or Hamiltonian. Named after the physicist and mathematician Sir William Rowan Hamilton, a Hamiltonian vector field is a geometric manifestation of Hamilton's equations in classical mechanics. The integral curves of a Hamiltonian vector field represent solutions to the equations of motion in the Hamiltonian form. The diffeomorphisms of a symplectic manifold arising from the flow of a Hamiltonian vector field are known as canonical transformations in physics and (Hamiltonian) symplectomorphisms in mathematics.
Hamiltonian vector fields can be defined more generally on an arbitrary Poisson manifold. The Lie bracket of two Hamiltonian vector fields corresponding to functions f and g on the manifold is itself a Hamiltonian vector field, with the Hamiltonian given by the Poisson bracket of f and g.
Definition
Suppose that $(M, \omega)$ is a symplectic manifold. Since the symplectic form $\omega$ is nondegenerate, it sets up a fiberwise-linear isomorphism
$$\omega^{\flat} : TM \to T^{*}M, \qquad \omega^{\flat}(X) = \iota_X \omega = \omega(X, \cdot),$$
between the tangent bundle $TM$ and the cotangent bundle $T^{*}M$, with the inverse
$$\omega^{\sharp} = (\omega^{\flat})^{-1} : T^{*}M \to TM.$$
Therefore, one-forms on a symplectic manifold may be identified with vector fields, and every differentiable function $H : M \to \mathbb{R}$ determines a unique vector field $X_H$, called the Hamiltonian vector field with the Hamiltonian $H$, by defining for every vector field $Y$ on $M$,
$$\mathrm{d}H(Y) = \omega(X_H, Y).$$
Note: Some authors define the Hamiltonian vector field with the opposite sign. One has to
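In canonical (Darboux) coordinates $(q^1, \dots, q^n, p_1, \dots, p_n)$, with $\omega = \sum_i \mathrm{d}q^i \wedge \mathrm{d}p_i$ and the sign convention used above, a standard computation (recorded here for concreteness, not quoted from the article) gives
$$X_H = \sum_{i=1}^{n} \left( \frac{\partial H}{\partial p_i} \frac{\partial}{\partial q^i} - \frac{\partial H}{\partial q^i} \frac{\partial}{\partial p_i} \right),$$
whose integral curves satisfy Hamilton's equations $\dot{q}^i = \partial H / \partial p_i$, $\dot{p}_i = -\partial H / \partial q^i$.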
|
https://en.wikipedia.org/wiki/Symplectic%20vector%20field
|
In physics and mathematics, a symplectic vector field is one whose flow preserves a symplectic form. That is, if $(M, \omega)$ is a symplectic manifold with smooth manifold $M$ and symplectic form $\omega$, then a vector field $X \in \mathfrak{X}(M)$ in the Lie algebra of vector fields $\mathfrak{X}(M)$ is symplectic if its flow preserves the symplectic structure. In other words, the Lie derivative of the symplectic form along the vector field must vanish:
$$\mathcal{L}_X \omega = 0.$$
An alternative definition is that a vector field is symplectic if its interior product with the symplectic form is closed, i.e. $\mathrm{d}(\iota_X \omega) = 0$. (The interior product gives a map from vector fields to 1-forms, which is an isomorphism due to the nondegeneracy of a symplectic 2-form.) The equivalence of the definitions follows from the closedness of the symplectic form and Cartan's magic formula for the Lie derivative in terms of the exterior derivative: since $\mathrm{d}\omega = 0$,
$$\mathcal{L}_X \omega = \mathrm{d}(\iota_X \omega) + \iota_X(\mathrm{d}\omega) = \mathrm{d}(\iota_X \omega).$$
If the interior product of a vector field with the symplectic form is an exact form (and in particular, a closed form), then it is called a Hamiltonian vector field. If the first de Rham cohomology group $H^1_{\mathrm{dR}}(M)$ of the manifold is trivial, all closed forms are exact, so all symplectic vector fields are Hamiltonian. That is, the obstruction to a symplectic vector field being Hamiltonian lives in $H^1_{\mathrm{dR}}(M)$. In particular, symplectic vector fields on simply connected manifolds are Hamiltonian.
The Lie bracket of two symplectic vector fields is Hamiltonian, and thus the collection of symplectic vector fields and the collection of Hamiltonian vector fields both form Lie algebras.
References
Symplectic geo
|
https://en.wikipedia.org/wiki/Wobulation
|
Wobulation is a known variation ("wobble") in a characteristic. An example is the wobulation of advanced radar waveform modulations, where the repetition rate or center frequency of a signal is changed in a repetitive fashion to reduce the probability of interception.
In large-screen television technology, wobulation is Hewlett-Packard's term for a form of interlacing designed for use with fixed pixel displays. The term is loosely derived from the word 'wobble' and was inspired by HP's work with the overlap of printing ink. Wobulation reduces the cost and complexity of components required for the creation of high resolution TVs.
Wobulation works by overlapping pixels. It does so by generating multiple sub-frames of data while an optical image shifting mechanism (e.g. the mirror of a digital micromirror device) then displaces the projected image of each sub-frame by a fraction of a pixel (e.g. one-half or one-third). The sub-frames are then projected in rapid succession, and appear to the human eye as if they are being projected simultaneously and superimposed. For example, a high-resolution HDTV video frame is divided into two sub-frames, A and B. Sub-frame A is projected, and then the miniature mirror on a digital micromirror device switches and displaces sub-frame B one half pixel length as it is projected. When projected in rapid succession, the sub-frames superimpose, and create to the human eye a complete and seamless image. If the video sub-frames are aligned so that th
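The sub-frame decomposition described above can be modeled crudely in a few lines; the following toy sketch (the array sizes and the diagonal sampling offset are my own simplifications, not HP's actual processing pipeline) derives two offset sub-frames from one high-resolution frame:

```python
# Toy sketch of wobulation: derive two sub-frames, offset by one high-res
# pixel, from a single high-resolution frame. Displayed in rapid succession
# with an optical half-pixel shift, the sub-frames visually superimpose.
import numpy as np

def make_subframes(frame: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Sample sub-frame A on one diagonal grid and sub-frame B on a grid
    shifted diagonally by one high-res pixel (modeling the optical offset)."""
    sub_a = frame[0::2, 0::2]   # unshifted samples
    sub_b = frame[1::2, 1::2]   # diagonally shifted samples
    return sub_a, sub_b

rng = np.random.default_rng(0)
hi_res = rng.random((8, 8))     # stand-in for a high-resolution video frame
a, b = make_subframes(hi_res)
print(a.shape, b.shape)         # two 4x4 sub-frames from one 8x8 frame
```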
|
https://en.wikipedia.org/wiki/Danburite
|
Danburite is a calcium boron silicate mineral with a chemical formula of CaB2(SiO4)2.
It has a Mohs hardness of 7 to 7.5 and a specific gravity of 3.0. The mineral has an orthorhombic crystal form. It is usually colourless, like quartz, but can also be either pale yellow or yellowish-brown. It typically occurs in contact metamorphic rocks.
The Dana classification of minerals categorizes danburite as a sorosilicate, while the Strunz classification scheme lists it as a tectosilicate; its structure can be interpreted as either.
Its crystal symmetry and form are similar to those of topaz; however, topaz is an aluminium fluorine-bearing nesosilicate. The clarity, resilience, and strong dispersion of danburite make it valuable as a cut stone for jewelry.
It is named for Danbury, Connecticut, United States, where it was first discovered in 1839 by Charles Upham Shephard.
References
Calcium minerals
Tectosilicates
Orthorhombic minerals
Minerals in space group 62
Luminescent minerals
Gemstones
|
https://en.wikipedia.org/wiki/65%20nm%20process
|
The 65 nm process is an advanced lithographic node used in volume CMOS (MOSFET) semiconductor fabrication. Printed linewidths (i.e. transistor gate lengths) can reach as low as 25 nm on a nominally 65 nm process, while the pitch between two lines may be greater than 130 nm.
Process node
For comparison, cellular ribosomes are about 20 nm end-to-end. A crystal of bulk silicon has a lattice constant of 0.543 nm, so such transistors are on the order of 100 atoms across. By September 2007, Intel, AMD, IBM, UMC and Chartered were also producing 65 nm chips.
While feature sizes may be drawn as 65 nm or less, the wavelengths of light used for lithography are 193 nm and 248 nm. Fabrication of sub-wavelength features requires special imaging technologies, such as optical proximity correction and phase-shifting masks. The cost of these techniques adds substantially to the cost of manufacturing sub-wavelength semiconductor products, with the cost increasing exponentially with each advancing technology node. Furthermore, these costs are multiplied by an increasing number of mask layers that must be printed at the minimum pitch, and the reduction in yield from printing so many layers at the cutting edge of the technology. For new integrated-circuit designs, this factors into the costs of prototyping and production.
Gate thickness, another important dimension, is reduced to as little as 1.2 nm (Intel). Only a few atoms insulate the "switch" part of the transistor, causing charge to flow
|
https://en.wikipedia.org/wiki/RealSound
|
RealSound is a patented (U.S. Patent 5,054,086) technology for the PC created by Steve Witzel of Access Software during the late 1980s. RealSound enables 6-bit digitized pulse-code modulation (PCM) audio playback on the PC speaker by means of pulse-width modulation (PWM) drive, allowing software control of the amplitude of the loudspeaker's displacement. The first video games to use it were World Class Leader Board and Echelon, both released in 1988. At the time of release, sound cards were very expensive, and RealSound allowed players to hear lifelike sounds and speech with no additional sound hardware, just the standard PC speaker.
RealSound was an impressive enough technology that a few other PC video game developers, like Legend Entertainment, licensed it for use in their own games. However, as the early 1990s progressed, sound card prices dropped to the point that they eventually became a baseline requirement for gaming PC-audio, leaving RealSound obsolete as it no longer filled a niche in the market.
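The general PCM-over-PWM idea described in the lead, though not RealSound's patented specifics (which the article does not detail), can be sketched as follows; the carrier rate and tone frequency are illustrative assumptions:

```python
# Sketch of the PCM-over-PWM idea behind PC-speaker sample playback:
# quantize each sample to 6 bits and express it as a pulse width within a
# fixed carrier period; the speaker cone averages the pulses back into an
# approximately analog waveform. (Illustrative model, not the patented method.)
import math

CARRIER_HZ = 16_000  # assumed PWM carrier rate
LEVELS = 64          # 6-bit resolution

def sample_to_pulse_width(sample: float) -> int:
    """Map a sample in [-1.0, 1.0] to a pulse width in [0, LEVELS - 1]."""
    quantized = int((sample + 1.0) / 2.0 * (LEVELS - 1))
    return max(0, min(LEVELS - 1, quantized))

# A few samples of a 440 Hz tone, as a playback loop would consume them.
widths = [sample_to_pulse_width(math.sin(2 * math.pi * 440 * n / CARRIER_HZ))
          for n in range(8)]
print(widths)  # each value sets how long the speaker is "on" per carrier period
```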
Examples of games using RealSound
Access Software:
World Class Leader Board (1988)
Echelon (1988)
Mean Streets (1989)
Countdown (1990)
Crime Wave (1990)
Links (1990)
Martian Memorandum (1991)
Amazon: Guardians of Eden (1992)
Legend Entertainment:
Spellcasting 101 (1990)
Spellcasting 201 (1991)
Timequest (1991)
Spellcasting 301 (1992)
Companions of Xanth (1993)
See also
PC speaker
Sound card
Covox Speech Thing
References
Sound production technology
Obsolete technologies
|
https://en.wikipedia.org/wiki/John%20Goodsir
|
John Goodsir (20 March 1814 – 6 March 1867) was a Scottish anatomist and a pioneer in the formulation of cell theory.
Early life
Goodsir was born on 20 March 1814 in Anstruther, Fife, the son of Elizabeth Dunbar Taylor and John Goodsir (1742–1848), a medical practitioner in the town. He was baptised on 17 April 1814. His younger brother, Joseph Taylor Goodsir, entered the ministry and became minister in Lower Largo. Another younger brother, Harry Goodsir, perished on the Franklin expedition. Another brother, Robert (b. 1824), qualified as a doctor and sailed twice to the Arctic searching for his brother Harry. His youngest brother, Archibald (b. 1826), qualified as a Member of the Royal College of Surgeons of England.
In December 1826, at the age of 12, Goodsir entered the University of St Andrews, where his classes included classics and mathematics. The following year he was apprenticed to the surgeon and dentist Robert Nasmyth, at 78 Great King Street in Edinburgh's New Town. This allowed him to enter the Edinburgh University Medical School and also attend classes at the Royal College of Surgeons of Edinburgh. He finished his apprenticeship with Nasmyth in 1833, and qualified as Licentiate of the Royal College of Surgeons of Edinburgh in 1835.
He then moved back to Anstruther to work in his father's medical practice, which allowed him to resume his boyhood hobby of searching the local coastline along the Firth of Forth for all forms of wildlife. The specimens which he coll
|
https://en.wikipedia.org/wiki/Microsoft%20PhotoDraw
|
Microsoft PhotoDraw was vector graphics and raster image editing software developed by Microsoft. It was released in 1999 as part of the Microsoft Office 2000 family of products and was specifically designed for creating and editing graphics, illustrations, and photo compositions.
PhotoDraw offered a range of features including image editing tools, drawing and painting capabilities, text manipulation, special effects, and the ability to work with layers. It also included templates and clip art to assist users in creating various types of designs, such as flyers, brochures, and web graphics.
Despite its initial popularity, Microsoft PhotoDraw did not receive significant updates or continued development. The last version released was PhotoDraw 2000, which came bundled with Microsoft Office 2000 Premium and was compatible with Windows 98 and later versions of Windows. Microsoft eventually discontinued PhotoDraw, and it is no longer actively supported or available for purchase.
History
Microsoft PhotoDraw 2000
Microsoft PhotoDraw 2000 was released in 1999 along with Microsoft Office 2000 Premium and Developer, but came separately on 2 CDs. It developed from the Picture It! 2.0 engine's format and expanded further into vector imaging technology. It required a separate installation from the main installer for the core Office suite, and was also released as a stand-alone product as part of Microsoft's Graphics Studio line of products (Greetings, etc.).
PhotoDraw 2000 Version
|
https://en.wikipedia.org/wiki/Pyroelectric%20fusion
|
Pyroelectric fusion refers to the technique of using pyroelectric crystals to generate high strength electrostatic fields to accelerate deuterium ions (tritium might also be used someday) into a metal hydride target also containing deuterium (or tritium) with sufficient kinetic energy to cause these ions to undergo nuclear fusion. It was reported in April 2005 by a team at UCLA. The scientists used a pyroelectric crystal heated from −34 to 7 °C (−29 to 45 °F), combined with a tungsten needle to produce an electric field of about 25 gigavolts per meter to ionize and accelerate deuterium nuclei into an erbium deuteride target. Though the energy of the deuterium ions generated by the crystal has not been directly measured, the authors used 100 keV (a temperature of about 10⁹ K) as an estimate in their modeling. At these energy levels, two deuterium nuclei can fuse to produce a helium-3 nucleus, a 2.45 MeV neutron and bremsstrahlung. Although it makes a useful neutron generator, the apparatus is not intended for power generation since it requires far more energy than it produces.
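The quoted equivalence between 100 keV and roughly 10⁹ K is just the conversion T = E/k_B, sketched below for illustration:

```python
# Convert the estimated deuteron energy to an equivalent temperature, T = E / k_B.
E_EV = 100e3                  # 100 keV, the modeling estimate quoted above
KB_EV_PER_K = 8.617333262e-5  # Boltzmann constant in eV/K

temperature_k = E_EV / KB_EV_PER_K
print(f"T ~ {temperature_k:.2e} K")  # ~1.16e9 K, i.e. about 10^9 K
```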
History
The process of light ion acceleration using electrostatic fields and deuterium ions to produce fusion in solid deuterated targets was first demonstrated by Cockcroft and Walton in 1932 (see Cockcroft–Walton generator). That process is used in miniaturized versions of their original accelerator, in the form of small sealed tube neutron generators, for petroleum exploration.
The process of pyroel
|
https://en.wikipedia.org/wiki/Tom%20Soares
|
Thomas James Soares (born 10 July 1986) is an English professional footballer who plays as a midfielder for Marlow. He has previously played for Crystal Palace, Stoke City, Bury, AFC Wimbledon and Stevenage.
Career
Crystal Palace
Born in Reading, Berkshire, Soares came up through the Crystal Palace academy, playing in central midfield. When he made the Palace first team he was used as a winger. Under the management of Neil Warnock, however, Soares was played in central midfield. His driving runs, combined with his best goalscoring season to date, meant that he was a key performer in the 2007–08 season for Palace.
Stoke City
In August 2008 Soares joined Premier League newcomers Stoke City for a fee of £1.25 million. He made an impressive start, as in his second match for Stoke he won two penalty kicks in a 2–1 win against Tottenham Hotspur at the Britannia Stadium. Soares played a few more matches for Stoke, against Sunderland, Manchester City, Portsmouth and West Bromwich Albion. After playing against Hartlepool United in the FA Cup, however, Soares failed to make a league appearance. He was loaned out to Charlton Athletic for the remainder of the 2008–09 season, but failed to help keep them in the Championship. Soares scored one goal for Charlton, on 3 February against Bristol City.
Back at Stoke, he failed to make a Premier League appearance in the 2009–10 season. On 26 November 2009 Soares joined Sheffield Wednesday, initially on a month-long loan deal, but he remained
|
https://en.wikipedia.org/wiki/Military%20parlance
|
Military parlance is the vernacular used within the military and embraces all aspects of service life; it can be described as both a "code" and a "classification" of something. Like many close and closed communities, the language used can often be full of jargon and not readily intelligible to outsiders—sometimes this is for military operational or security reasons; other times it is because of the natural evolution of the day-to-day language used in the various units.
For example: "Captain, this situation is 'Scale A'", "Scale A" being an army's parlance for "This situation requires the closest of attention and resources and all members of relevance should be present."
The military has developed its own slang, partly as means of self-identification. This slang is also used to reinforce the (usually friendly) interservice rivalries. Some terms are derogatory to varying degrees and many service personnel take some pleasure in the sense of shared hardships which they endure and which is reflected in the slang terms.
Military abbreviations
The military often use initials and abbreviations of all kinds - partly for security and operational reasons and partly for the simple convenience of their use; like all such things they can be hard to understand for outsiders. A few examples are given below:
US Army
G.I. - originally stood for "Galvanized Iron" but has come to be interpreted as everything from "General Infantry" (soldiers) to "Government Issue" and "Government Inductee"
|
https://en.wikipedia.org/wiki/Fusion%20gene
|
A fusion gene is a hybrid gene formed from two previously independent genes. It can occur as a result of translocation, interstitial deletion, or chromosomal inversion. Fusion genes have been found to be prevalent in all main types of human neoplasia. The identification of these fusion genes plays a prominent role as a diagnostic and prognostic marker.
History
The first fusion gene was described in cancer cells in the early 1980s. The finding was based on the discovery in 1960 by Peter Nowell and David Hungerford in Philadelphia of a small abnormal marker chromosome in patients with chronic myeloid leukemia—the first consistent chromosome abnormality detected in a human malignancy, later designated the Philadelphia chromosome. In 1973, Janet Rowley in Chicago showed that the Philadelphia chromosome had originated through a translocation between chromosomes 9 and 22, and not through a simple deletion of chromosome 22 as was previously thought. Several investigators in the early 1980s showed that the Philadelphia chromosome translocation led to the formation of a new BCR::ABL1 fusion gene, composed of the 3' part of the ABL1 gene in the breakpoint on chromosome 9 and the 5' part of a gene called BCR in the breakpoint in chromosome 22. In 1985 it was clearly established that the fusion gene on chromosome 22 produced an abnormal chimeric BCR::ABL1 protein with the capacity to induce chronic myeloid leukemia.
Oncogenes
It has been known for 30 years that the corresponding
|
https://en.wikipedia.org/wiki/Blocking%20%28statistics%29
|
In the statistical theory of the design of experiments, blocking is the arranging of experimental units that are similar to one another in groups (blocks). Blocking can be used to tackle the problem of pseudoreplication.
Use
Blocking reduces unexplained variability. Its principle lies in the fact that variability which cannot be overcome (e.g. needing two batches of raw material to produce one container of a chemical) is confounded or aliased with a higher- or highest-order interaction to eliminate its influence on the end product. High-order interactions are usually of the least importance (consider that the temperature of a reactor or the batch of raw materials is more important than the combination of the two; this is especially true when more (3, 4, ...) factors are present); thus it is preferable to confound this variability with the higher interaction.
Examples
Male and female: An experiment is designed to test a new drug on patients. There are two levels of the treatment, drug and placebo, administered to male and female patients in a double-blind trial. The sex of the patient is a blocking factor accounting for treatment variability between males and females. This reduces sources of variability and thus leads to greater precision.
Elevation: An experiment is designed to test the effects of a new pesticide on a specific patch of grass. The grass area contains a major elevation change and thus consists of two distinct regions – 'high elevation' and 'low elevat
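As a minimal numerical illustration of the sex-blocking example above (the simulated data, sample sizes, and effect sizes are invented assumptions), averaging the within-block treatment differences removes the between-sex variability from the estimate:

```python
# Sketch: blocking on sex in a two-treatment experiment (simulated data).
import numpy as np

rng = np.random.default_rng(42)
N_PER_CELL = 50
TREATMENT_EFFECT = 1.0  # assumed true effect of drug vs placebo
SEX_EFFECT = 3.0        # assumed baseline difference between the blocks

def simulate(sex_shift: float, treated: bool) -> np.ndarray:
    mean = sex_shift + (TREATMENT_EFFECT if treated else 0.0)
    return rng.normal(mean, 1.0, N_PER_CELL)

cells = {(sex, trt): simulate(SEX_EFFECT if sex == "M" else 0.0, trt)
         for sex in ("M", "F") for trt in (True, False)}

# Blocked estimate: average the within-sex (treatment - placebo) differences,
# so the large male/female baseline difference never enters the comparison.
per_block = [cells[(s, True)].mean() - cells[(s, False)].mean()
             for s in ("M", "F")]
print(f"blocked estimate of treatment effect: {np.mean(per_block):.2f}")  # ~1.0
```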
|
https://en.wikipedia.org/wiki/DNA%20fragmentation
|
DNA fragmentation is the separation or breaking of DNA strands into pieces. It can be done intentionally by laboratory personnel or by cells, or can occur spontaneously. Spontaneous or accidental DNA fragmentation is fragmentation that gradually accumulates in a cell. It can be measured by e.g. the Comet assay or by the TUNEL assay.
Its main unit of measurement is the DNA Fragmentation Index (DFI). A DFI of 20% or more significantly reduces the success rates after ICSI.
DNA fragmentation was first documented by Williamson in 1970 when he observed discrete oligomeric fragments occurring during cell death in primary neonatal liver cultures. He described the cytoplasmic DNA isolated from mouse liver cells after culture as characterized by DNA fragments with a molecular weight consisting of multiples of 135 kDa. This finding was consistent with the hypothesis that these DNA fragments were a specific degradation product of nuclear DNA.
Intentional
DNA fragmentation is often necessary prior to library construction or subcloning for DNA sequences. A variety of methods involving the mechanical breakage of DNA have been employed where DNA is fragmented by laboratory personnel. Such methods include sonication, needle shear, nebulisation, point-sink shearing and passage through a pressure cell.
Restriction digest is the intentional laboratory breaking of DNA strands. It is an enzyme-based treatment used in biotechnology to cut DNA into smaller strands in order to study fragment le
|
https://en.wikipedia.org/wiki/Fisher%27s%20equation
|
In mathematics, Fisher's equation (named after statistician and biologist Ronald Fisher), also known as the Kolmogorov–Petrovsky–Piskunov equation (named after Andrey Kolmogorov, Ivan Petrovsky, and Nikolai Piskunov), KPP equation or Fisher–KPP equation, is the partial differential equation
$$\frac{\partial u}{\partial t} = D\,\frac{\partial^2 u}{\partial x^2} + r\,u\,(1 - u).$$
It is a kind of reaction–diffusion system that can be used to model population growth and wave propagation.
Details
Fisher's equation belongs to the class of reaction–diffusion equations: in fact, it is one of the simplest semilinear reaction–diffusion equations, the one which has the inhomogeneous term
$$f(u) = r\,u\,(1 - u),$$
which can exhibit traveling wave solutions that switch between equilibrium states given by $f(u) = 0$. Such equations occur, e.g., in ecology, physiology, combustion, crystallization, plasma physics, and in general phase transition problems.
Fisher proposed this equation in his 1937 paper The wave of advance of advantageous genes in the context of population dynamics to describe the spatial spread of an advantageous allele and explored its travelling wave solutions.
For every wave speed $c \ge 2\sqrt{rD}$ ($c \ge 2$ in dimensionless form) it admits travelling wave solutions of the form
$$u(x, t) = v(x \pm ct),$$
where $v$ is increasing and
$$\lim_{z \to -\infty} v(z) = 0, \qquad \lim_{z \to +\infty} v(z) = 1.$$
That is, the solution switches from the equilibrium state u = 0 to the equilibrium state u = 1. No such solution exists for c < 2. The wave shape for a given wave speed is unique. The travelling-wave solutions are stable against near-field perturbations, but not to far-field perturbations which can thicken the tail
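Such a travelling front is easy to observe numerically. The sketch below integrates the dimensionless equation (D = r = 1) with explicit finite differences from a step initial condition; the grid resolution, time step, and crude boundary handling are illustrative choices:

```python
# Sketch: explicit finite differences for the dimensionless Fisher-KPP
# equation u_t = u_xx + u(1 - u), starting from a step profile.
import numpy as np

L, N = 100.0, 1000         # domain length and number of grid points
DT, STEPS = 0.002, 15_000  # time step and iteration count (stable: DT < dx^2/2)
dx = L / N
x = np.linspace(0.0, L, N)
u = (x < 10.0).astype(float)                    # u = 1 on the left, 0 elsewhere

for step in range(STEPS + 1):
    if step % 5_000 == 0:
        front = x[np.argmin(np.abs(u - 0.5))]   # where the profile crosses 1/2
        print(f"t = {step * DT:5.1f}: front at x = {front:5.1f}")
    lap = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2
    lap[0] = lap[-1] = 0.0                      # crude no-flux boundaries
    u += DT * (lap + u * (1.0 - u))
# The front advances at close to the minimal dimensionless speed c = 2.
```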
|
https://en.wikipedia.org/wiki/Choose%20Up%20Sides
|
Choose Up Sides is a children's television game show that aired on NBC from January 7 to March 31, 1956. It was hosted by Gene Rayburn, announced by Don Pardo and produced by Goodson-Todman Productions.
Format
The show had two teams of children compete for points with the winning team earning a prize. Each side was represented by four children, usually three boys and one girl. The boys competed against each other and the girls competed against each other.
The teams were named "Space Pilots" and "Bronco Busters". Each team had an adult assistant who dressed as a space commander or cowboy, respectively. The assistants introduced each contestant to Rayburn. Each player selected a postcard from a pool that had been sent in by children from all over the country. The team that won also won a prize for the child whose postcard they had drawn.
The children competed against each other doing stunts. The stunts were the type one might have seen on Beat the Clock (another Goodson-Todman Production). The winning team for each stunt scored 100 points. The losing team was allowed to do something else to earn 25 or 50 points. Their consolation stunt was dictated to them by a character called "Mr. Mischief", a wall-puppet that was operated and voiced by Pardo. The time limit for the stunt was a whistle that could go off at any time. This was later changed to a balloon in Mr. Mischief's mouth which would inflate until it burst. When it was time for the losing team's child to go meet Mr. Mi
|
https://en.wikipedia.org/wiki/Electron%20crystallography
|
Electron crystallography is a method to determine the arrangement of atoms in solids using a transmission electron microscope (TEM). It can involve the use of high-resolution transmission electron microscopy images, electron diffraction patterns including convergent-beam electron diffraction or combinations of these. It has been successful in determining some bulk structures, and also surface structures. Two related methods are low-energy electron diffraction which has solved the structure of many surfaces, and reflection high-energy electron diffraction which is used to monitor surfaces often during growth.
Comparison with X-ray crystallography
It can complement X-ray crystallography for studies of very small crystals (<0.1 micrometers), whether inorganic, organic, or protein (such as membrane proteins), that cannot easily form the large 3-dimensional crystals required for that process. Protein structures are usually determined from either 2-dimensional crystals (sheets or helices), polyhedra such as viral capsids, or dispersed individual proteins. Electrons can be used in these situations, whereas X-rays cannot, because electrons interact more strongly with atoms than X-rays do. Thus, X-rays will travel through a thin 2-dimensional crystal without diffracting significantly, whereas electrons can be used to form an image. Conversely, the strong interaction between electrons and protons makes thick (e.g. 3-dimensional, > 1 micrometer) crystals impervious to electrons, which
|
https://en.wikipedia.org/wiki/Diffusion%20capacitance
|
Diffusion capacitance is the capacitance that occurs due to the transport of charge carriers between two terminals of a device, for example, the diffusion of carriers from anode to cathode in a forward-biased diode or from emitter to base in a forward-biased junction of a transistor. In a semiconductor device with a current flowing through it (for example, an ongoing transport of charge by diffusion) at a particular moment there is necessarily some charge in the process of transit through the device. If the applied voltage changes to a different value and the current changes to a different value, a different amount of charge will be in transit in the new circumstances. The change in the amount of transiting charge divided by the change in the voltage causing it is the diffusion capacitance. The adjective "diffusion" is used because the original use of this term was for junction diodes, where the charge transport was via the diffusion mechanism. See Fick's laws of diffusion.
To implement this notion quantitatively, at a particular moment in time let the voltage across the device be $V$. Now assume that the voltage changes with time slowly enough that at each moment the current is the same as the DC current that would flow at that voltage, say $I = I(V)$ (the quasistatic approximation). Suppose further that the time to cross the device is the forward transit time $\tau_F$. In this case the amount of charge in transit through the device at this particular moment, denoted $Q$, is given by
$$Q = I(V)\,\tau_F.$$
Consequent
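Following the argument above, the diffusion capacitance is $C_{\text{diff}} = \mathrm{d}Q/\mathrm{d}V = \tau_F \, \mathrm{d}I/\mathrm{d}V$; for an ideal diode with $I = I_S\,(e^{V/V_T} - 1)$, this gives $C_{\text{diff}} \approx \tau_F I / V_T$ in forward bias. A minimal sketch with assumed example values of $I_S$ and $\tau_F$ (not taken from the article):

```python
# Sketch: diffusion capacitance of an ideal diode, C_diff = tau_F * dI/dV.
import math

I_S = 1e-12    # assumed saturation current, A
TAU_F = 5e-9   # assumed forward transit time, s
V_T = 0.02585  # thermal voltage at room temperature, V

def diffusion_capacitance_f(v: float) -> float:
    di_dv = (I_S / V_T) * math.exp(v / V_T)  # derivative of the diode equation
    return TAU_F * di_dv

for v in (0.5, 0.6, 0.7):
    print(f"V = {v:.1f} V: C_diff = {diffusion_capacitance_f(v):.3e} F")
```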
|
https://en.wikipedia.org/wiki/Darren%20Powell
|
Darren David Powell (born 10 March 1976) is an English football manager and former professional footballer who played as a centre-back. He is the current academy manager for Crystal Palace. During his playing career, he appeared over 250 times in the Football League and Premier League. Powell was known as "a tough-tackling centre-back".
Following his retirement, Powell moved to coaching and managing Hampton & Richmond Borough before taking over as an academy coach for Crystal Palace.
Playing career
Brentford
Powell began his career with Hampton, where he quickly established himself in the starting eleven. His performances attracted interest from Stevenage and Hayes before he joined Brentford for £15,000.
Powell made his debut for Brentford, playing the whole game, in a 3–0 win over Mansfield Town. He started well for the side at the beginning of the season, winning five of the first six matches by the end of August, and scored his first goal for the side in a 2–1 win over Rochdale. He was later in the squad when Brentford won the Division Two title by four points. At the end of the 1998–99 season, Powell had made 37 appearances, scoring twice in all competitions. For his performances, he was named the club's Player of the Year.
In the 1999–2000 season, Powell continued to be a first team regular. He then scored his first goal of the season, in a 2–0 win over Luton Town on 16 September 1999. A month later, on 16 October 1999, P
|
https://en.wikipedia.org/wiki/Vinidarius
|
Vinidarius (fl. 5th century AD) was the purported compiler of a small collection of cooking recipes named Apici excerpta a Vinidario. This is preserved in a single 8th‑century uncial manuscript in Latin, claiming to be excerpts from the recipes of Apicius.
About Vinidarius himself nothing is known. If he existed, he may have been a Goth; his Latin name suggests a possible Gothic name of Vinithaharjis.
There is a very abbreviated epitome entitled Apici excerpta a Vinidario, a "pocket Apicius" by "an illustrious man" named Vinidarius, made as late as the Carolingian era. There is in fact very little overlap with the Apician manual, but the recipes are similar in character, and are usually presented today as an appendix to Apicius: they add to our knowledge of late Antique cuisine.
Bibliography
5th-century writers in Latin
5th-century Romans of Gothic descent
Cookbook writers
|
https://en.wikipedia.org/wiki/Cartesian%20tensor
|
In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.
The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.
Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.
Cartesian basis and related terminology
Vectors in three dimensions
In 3D Euclidean space, $\mathbb{R}^3$, the standard basis is $\mathbf{e}_x$, $\mathbf{e}_y$, $\mathbf{e}_z$. Each basis vector points along the x-, y-, and z-axes respectively, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.
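For concreteness, the orthogonal change-of-basis rule mentioned in the lead can be written out explicitly; the display below is the standard transformation law (stated here as a supplement, with summation over repeated indices) for a vector $v$ and a second-order tensor $T$ under an orthogonal matrix $L$ satisfying $L L^{\mathsf{T}} = I$:
$$v'_i = L_{ij} v_j, \qquad T'_{ij} = L_{ik} L_{jl} T_{kl}.$$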
Throughout, when referring to Cartes
|
https://en.wikipedia.org/wiki/Silverman%E2%80%93Toeplitz%20theorem
|
In mathematics, the Silverman–Toeplitz theorem, first proved by Otto Toeplitz, is a result in summability theory characterizing matrix summability methods that are regular. A regular matrix summability method is a matrix transformation of a convergent sequence which preserves the limit.
An infinite matrix $(a_{n,k})_{n,k=0}^{\infty}$ with complex-valued entries defines a regular summability method if and only if it satisfies all of the following properties:
$\lim_{n \to \infty} a_{n,k} = 0$ for every fixed $k$ (each column tends to zero);
$\lim_{n \to \infty} \sum_{k=0}^{\infty} a_{n,k} = 1$ (the row sums tend to one);
$\sup_{n} \sum_{k=0}^{\infty} |a_{n,k}| < \infty$ (the absolute row sums are uniformly bounded).
An example is Cesàro summation, a matrix summability method with
$$a_{n,k} = \begin{cases} \frac{1}{n+1}, & k \le n, \\ 0, & k > n. \end{cases}$$
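As a quick illustrative check, the sketch below applies the Cesàro matrix above (implemented as a running average) to a convergent sequence and shows the transform approaching the same limit, as regularity requires:

```python
# Sketch: the Cesaro matrix a[n][k] = 1/(n+1) for k <= n applied to s_k -> 1.
import numpy as np

N = 2000
k = np.arange(N)
s = 1.0 + (-1.0) ** k / (k + 1.0)  # convergent sequence with limit 1

def cesaro_transform(seq: np.ndarray) -> np.ndarray:
    """Row n of the transform is the average of the first n+1 terms."""
    return np.cumsum(seq) / (np.arange(len(seq)) + 1.0)

t = cesaro_transform(s)
print(s[-1], t[-1])  # both approach the limit 1, as regularity requires
```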
References
Citations
Further reading
Toeplitz, Otto (1911) "Über allgemeine lineare Mittelbildungen." Prace mat.-fiz., 22, 113–118 (the original paper in German)
Silverman, Louis Lazarus (1913) "On the definition of the sum of a divergent series." University of Missouri Studies, Math. Series I, 1–96
Theorems in analysis
Summability methods
Summability theory
|
https://en.wikipedia.org/wiki/Blood%20blister
|
A blood blister is a type of blister that forms when subdermal tissues and blood vessels are damaged without piercing the skin. It consists of a pool of lymph, blood and other body fluids trapped beneath the skin. If punctured, it suppurates a dark fluid. Sometimes the fluids are cut off from the rest of the body and dry up, leaving behind dead cell material inside the blister with a texture like putty. Some blood blisters can be extremely painful due to bruising where the blister occurred.
There are also blood blister-like aneurysms, which are known to be located in the supraclinoid internal carotid artery and have been recognized as having unique pathological and clinical features.
Causes
Blood blisters are commonly caused by accidents in which the skin is pinched by a tool, mechanism, or heavy weight without protective equipment. Blood blisters can also arise from forcible human contact, including grappling.
Blood blisters also may occur with friction caused by constant rubbing of skin against a surface. Ill-fitting shoes that rub on the skin can cause the blood vessels in the skin to break and form a blood clot under the skin, resulting in a blood blister. Certain sports activities that require repeated movement and rubbing of the skin against equipment may also cause this, so baseball pitchers, rowers, and drummers often contract blood blisters on the fingers and palms. They can also form as a result of frostbite.
Blood blisters can also occur in the mouth for a var
|
https://en.wikipedia.org/wiki/Warren%20Williams%20%28American%20football%29
|
Warren Williams Jr. (born July 29, 1965) is a former professional American football running back. He played college football at the University of Miami.
College statistics
1984: 29 carries for 140 yards. 13 catches for 154 yards and 1 touchdown.
1985: 89 carries for 522 yards and 4 touchdowns. 14 catches for 131 yards.
1986: 80 carries for 399 yards and 3 touchdowns. 13 catches for 114 yards and 2 touchdowns.
1987: 135 carries for 673 yards and 5 touchdowns. 30 catches for 309 yards and 1 touchdown.
NFL career
Williams played in the National Football League (NFL) for the Pittsburgh Steelers (1988–1992) and the Indianapolis Colts. He was drafted by the Steelers in the sixth round of the 1988 NFL Draft.
References
1965 births
Living people
Players of American football from Fort Myers, Florida
American football running backs
Miami Hurricanes football players
Pittsburgh Steelers players
Indianapolis Colts players
|
https://en.wikipedia.org/wiki/Triangle%20of%20U
|
The triangle of U is a theory about the evolution and relationships among the six most commonly known members of the plant genus Brassica. The theory states that the genomes of three ancestral diploid species of Brassica combined to create three common tetraploid vegetables and oilseed crop species. It has since been confirmed by studies of DNA and proteins.
The theory is summarized by a triangular diagram that shows the three ancestral genomes, denoted by AA, BB, and CC, at the corners of the triangle, and the three derived ones, denoted by AABB, AACC, and BBCC, along its sides.
The theory was first published in 1935 by Woo Jang-choon, a Korean-Japanese botanist (writing under the Japanized name "U Nagaharu"). Woo made synthetic hybrids between the diploid and tetraploid species and examined how the chromosomes paired in the resulting triploids.
U's theory
The six species are:
Brassica rapa (genome AA, chromosome count 2n = 2x = 20)
Brassica nigra (genome BB, chromosome count 2n = 2x = 16)
Brassica oleracea (genome CC, chromosome count 2n = 2x = 18)
Brassica juncea (genome AABB, chromosome count 2n = 4x = 36)
Brassica napus (genome AACC, chromosome count 2n = 4x = 38)
Brassica carinata (genome BBCC, chromosome count 2n = 4x = 34)
The chromosome count specifies the total number of chromosomes in each somatic cell, and how it relates to the number of chromosomes in each full genome set (which is also the number found in the pollen or ovule), and the number of chromosomes in each component genome. For example, each somatic cell of the tetraploid species Brassica napus, with letter tags AACC and count 2n = 4x = 38, contains two copies of the A genome, each with 10 chromosomes, and two copies of the C genome, each with 9 chromosomes, which is 38 chromosomes in total. That is two full genome sets (one A and one C),
|
https://en.wikipedia.org/wiki/Tyrosinase
|
Tyrosinase is an oxidase that is the rate-limiting enzyme for controlling the production of melanin. The enzyme is mainly involved in two distinct reactions of melanin synthesis, otherwise known as the Raper–Mason pathway. Firstly, the hydroxylation of a monophenol and secondly, the conversion of an o-diphenol to the corresponding o-quinone. o-Quinone undergoes several reactions to eventually form melanin. Tyrosinase is a copper-containing enzyme present in plant and animal tissues that catalyzes the production of melanin and other pigments from tyrosine by oxidation. It is found inside melanosomes, which are synthesized in the skin melanocytes. In humans, the tyrosinase enzyme is encoded by the TYR gene.
Catalyzed reaction
Tyrosinase carries out the oxidation of phenols such as tyrosine and dopamine using dioxygen (O2). In the presence of catechol, benzoquinone is formed. Hydrogens removed from catechol combine with oxygen to form water.
The substrate specificity becomes dramatically restricted in mammalian tyrosinase, which uses only the L-form of tyrosine or DOPA as substrates and has a restricted requirement for L-DOPA as a cofactor.
Active site
The two copper atoms within the active site of tyrosinase enzymes interact with dioxygen to form a highly reactive chemical intermediate that then oxidizes the substrate. The activity of tyrosinase is similar to catechol oxidase, a related class of copper oxidase. Tyrosinases and catechol oxidases are collective
|
https://en.wikipedia.org/wiki/Factorial%20experiment
|
In statistics, a full factorial experiment is an experiment whose design consists of two or more factors, each with discrete possible values or "levels", and whose experimental units take on all possible combinations of these levels across all such factors. A full factorial design may also be called a fully crossed design. Such an experiment allows the investigator to study the effect of each factor on the response variable, as well as the effects of interactions between factors on the response variable.
For the vast majority of factorial experiments, each factor has only two levels. For example, with two factors each taking two levels, a factorial experiment would have four treatment combinations in total, and is usually called a 2×2 factorial design. In such a design, the interaction between the variables is often the most important. This applies even to scenarios where a main effect and an interaction are present.
If the number of combinations in a full factorial design is too high to be logistically feasible, a fractional factorial design may be done, in which some of the possible combinations (usually at least half) are omitted.
Other terms for "treatment combinations" are often used, such as runs (of an experiment), points (viewing the combinations as vertices of a graph), and cells (arising as intersections of rows and columns).
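Enumerating the treatment combinations of a full factorial design is a direct cross product of the factor levels; the sketch below lists the four runs of a 2×2 design (the factor names and levels are invented for illustration):

```python
# Sketch: enumerate all treatment combinations of a full factorial design.
from itertools import product

factors = {
    "temperature": ["low", "high"],  # hypothetical factor 1
    "catalyst": ["A", "B"],          # hypothetical factor 2
}

runs = list(product(*factors.values()))  # full cross of all factor levels
for run in runs:
    print(dict(zip(factors.keys(), run)))
# 2 factors x 2 levels -> 4 runs, the "2x2 factorial design" described above.
```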
History
Factorial designs were used in the 19th century by John Bennet Lawes and Joseph Henry Gilbert of the Rothamsted Experimental Stati
|
https://en.wikipedia.org/wiki/List%20of%20historical%20classifications
|
Historical classification groups the various history topics into different categories according to subject matter as shown below.
Meta-history
Philosophy of history
By geographic region
World
Africa
Americas
Asia
Europe
Oceania
Antarctica
By geographic subregion
North America
South America
Latin America
Central America
Pre-Columbian
Mesoamerica
Caribbean
Eurasia
History of Europe
Prehistoric Europe
Classical antiquity
Late Antiquity
Middle Ages
Early modern period
Modern Europe
Central Asia
South Asia
East Asia
Southeast Asia
Middle East
Ancient Near East
Australasia (Australia, New Guinea, Micronesia, Melanesia, Polynesia)
Pacific Islands
By date
Centuries
Decades
Periodization
List of named time periods
List of timelines
By time period
Prehistory
Ancient history
Modern world
See also Periodization.
By religion
History of religion
History of Christianity
History of Islam
Jewish history
History of Buddhism
History of Hinduism
By nation
History of extinct nations and states
By field
Cultural movements
Diaspora studies
Family history
Environmental history
Local history
Maritime history
Microhistory
Confederation
Social history
Urban history
Mathematics and the hard sciences
History of mathematics
History of science and technology
History of astronomy
History of physics
History of chemistry
History of geology
History of biology
History of medicine
History of mental illness
Social sciences
History of art
History of astrology
History of cinema
History of econom
|
https://en.wikipedia.org/wiki/Methylamphetamine
|
Methylamphetamine may refer to:
Phentermine, α-methylamphetamine
2-Phenyl-3-aminobutane, β-methylamphetamine
Methamphetamine, N-methylamphetamine
Methamphetamine hydrochloride, "crystal meth"
Ortetamine, 2-methylamphetamine
3-Methylamphetamine
4-Methylamphetamine
|
https://en.wikipedia.org/wiki/Chinn
|
Chinn is a surname, originating both in England and among overseas Chinese communities.
Origins and statistics
As an English surname, it originated as a nickname for people with prominent chins, from a Middle English word for chin. It is also a spelling, based on the pronunciation in some varieties of Chinese including Hakka, of the surname pronounced Chen in Mandarin. The similarly spelled surname Chin also shares both of these origins.
According to statistics cited by Patrick Hanks, 1,316 people on the island of Great Britain and four on the island of Ireland bore the surname Chinn in 2011. In 1881 there were 1,032 people with the surname in Great Britain, primarily in Warwickshire and Cornwall.
The 2010 United States Census found 6,211 people with the surname Chinn, making it the 5,601st-most-common name in the country. This represented an increase in absolute numbers, but a decrease in relative frequency, from 6,146 (5,220th-most-common) in the 2000 Census. In both censuses, about half of the bearers of the surname identified as White, one-quarter as Asian, and one-fifth as Black.
People
Adrienne Chinn (born 1960), Canadian author
Alva Chinn, American model
Andrew Chinn (1915–1996), American artist and art educator of Chinese descent
Anthony Chinn (1930–2000), Guyanese-born British actor
Benjamen Chinn (1921–2009), American photographer
Betty Kwan Chinn, American philanthropist who works with homeless people
Bob Chinn (film director) (born 1943), American pornographic fil
|
https://en.wikipedia.org/wiki/Davisson%E2%80%93Germer%20experiment
|
The Davisson–Germer experiment was a 1923–27 experiment by Clinton Davisson and Lester Germer at Western Electric (later Bell Labs), in which electrons, scattered by the surface of a crystal of nickel metal, displayed a diffraction pattern. This confirmed the hypothesis, advanced by Louis de Broglie in 1924, of wave–particle duality, and also the wave mechanics approach of the Schrödinger equation. It was an experimental milestone in the creation of quantum mechanics.
History and overview
According to Maxwell's equations in the late 19th century, light was thought to consist of waves of electromagnetic fields and matter was thought to consist of localized particles. However, this was challenged in Albert Einstein's 1905 paper on the photoelectric effect, which described light as discrete and localized quanta of energy (now called photons), a discovery for which he won the Nobel Prize in Physics in 1921. In 1924 Louis de Broglie presented his thesis concerning the wave–particle duality theory, which proposed the idea that all matter displays the wave–particle duality of photons. According to de Broglie, for all matter and for radiation alike, the energy $E$ of the particle was related to the frequency $\nu$ of its associated wave by the Planck relation:
$$E = h\nu.$$
And the momentum $p$ of the particle was related to its wavelength $\lambda$ by what is now known as the de Broglie relation:
$$\lambda = \frac{h}{p},$$
where $h$ is Planck's constant.
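Combining the de Broglie relation with the kinetic energy of an accelerated electron gives λ = h/√(2mE), the wavelength scale probed in the experiment; the sketch below evaluates it at 54 eV, the energy of the experiment's famous diffraction peak (the formula is standard; the code is only an illustration):

```python
# Sketch: de Broglie wavelength of an electron with kinetic energy E,
# lambda = h / sqrt(2 m_e E), non-relativistic.
import math

H = 6.62607015e-34          # Planck constant, J*s
M_E = 9.1093837015e-31      # electron mass, kg
E_CHARGE = 1.602176634e-19  # elementary charge, C

def electron_wavelength_m(energy_ev: float) -> float:
    energy_j = energy_ev * E_CHARGE
    return H / math.sqrt(2.0 * M_E * energy_j)

print(f"{electron_wavelength_m(54.0) * 1e9:.3f} nm")  # ~0.167 nm, atomic scale
```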
An important contribution to the Davisson–Germer experiment was made by Walter M. Elsasser in Gött
|
https://en.wikipedia.org/wiki/Iterative%20closest%20point
|
Iterative closest point (ICP) is an algorithm employed to minimize the difference between two clouds of points. ICP is often used to reconstruct 2D or 3D surfaces from different scans, to localize robots and achieve optimal path planning (especially when wheel odometry is unreliable due to slippery terrain), to co-register bone models, etc.
Overview
The Iterative Closest Point algorithm keeps one point cloud, the reference or target, fixed, while transforming the other, the source, to best match the reference. The transformation (combination of translation and rotation) is iteratively estimated in order to minimize an error metric, typically the sum of squared differences between the coordinates of the matched pairs. ICP is one of the widely used algorithms in aligning three dimensional models given an initial guess of the rigid transformation required.
The ICP algorithm was first introduced by Chen and Medioni, and Besl and McKay.
The Iterative Closest Point algorithm contrasts with the Kabsch algorithm and other solutions to the orthogonal Procrustes problem in that the Kabsch algorithm requires correspondence between point sets as an input, whereas Iterative Closest Point treats correspondence as a variable to be estimated.
Inputs: reference and source point clouds, initial estimation of the transformation to align the source to the reference (optional), criteria for stopping the iterations.
Output: refined transformation.
Essentially, the algorithm steps are:
For e
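A compact sketch of those steps under simplifying assumptions (brute-force nearest-neighbor matching, the SVD-based Kabsch solution for the rigid transform at each iteration, and a fixed iteration count) follows; it is an illustrative skeleton rather than a reference implementation:

```python
# Sketch of ICP: repeatedly (1) match each source point to its nearest
# target point, (2) solve the best rigid transform for those matches via
# SVD (Kabsch algorithm), and (3) apply it to the source cloud.
import numpy as np

def icp(source, target, iterations=30):
    """Align source (N x d) onto target (M x d); returns the aligned copy."""
    src = source.copy()
    for _ in range(iterations):
        # 1. Correspondences: brute-force nearest neighbors.
        d2 = ((src[:, None, :] - target[None, :, :]) ** 2).sum(axis=2)
        matched = target[np.argmin(d2, axis=1)]
        # 2. Best rotation/translation for the current matches (Kabsch).
        src_c = src - src.mean(axis=0)
        tgt_c = matched - matched.mean(axis=0)
        U, _, Vt = np.linalg.svd(src_c.T @ tgt_c)
        D = np.eye(src.shape[1])
        D[-1, -1] = np.sign(np.linalg.det(Vt.T @ U.T))  # avoid reflections
        R = Vt.T @ D @ U.T
        t = matched.mean(axis=0) - src.mean(axis=0) @ R.T
        # 3. Apply the estimated transform to the source cloud.
        src = src @ R.T + t
    return src

# Toy usage: recover a small known rotation + translation of a 2D cloud.
rng = np.random.default_rng(1)
pts = rng.random((60, 2))
theta = 0.05
R_true = np.array([[np.cos(theta), -np.sin(theta)],
                   [np.sin(theta),  np.cos(theta)]])
target = pts @ R_true.T + np.array([0.02, -0.01])
# Expected True for this small perturbation; like all ICP runs, convergence
# to the global alignment depends on a good enough initial guess.
print(np.allclose(icp(pts, target), target, atol=1e-3))
```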
|
https://en.wikipedia.org/wiki/Henry%20Classification%20System
|
The Henry Classification System is a long-standing method by which fingerprints are sorted by physiological characteristics for one-to-many searching. Developed by Hem Chandra Bose, Qazi Azizul Haque and Sir Edward Henry in the late 19th century for criminal investigations in British India, it was the basis of modern-day AFIS (Automated Fingerprint Identification System) classification methods up until the 1990s. In recent years, the Henry Classification System has generally been replaced by ridge flow classification approaches.
History and development
Although fingerprint characteristics were studied as far back as the mid-1600s, the use of fingerprints as a means of identification did not occur until the mid-19th century. In roughly 1859, Sir William James Herschel discovered that fingerprints remain stable over time and are unique across individuals; as Chief Magistrate of the Hooghly district in Jungipoor, India, in 1877 he was the first to institute the use of fingerprints and handprints as a means of identification, signing legal documents, and authenticating transactions. The fingerprint records collected at this time were used for one-to-one verification only, as a means by which records could be logically filed and searched had not yet been invented.
In 1880, Dr. Henry Faulds wrote to Charles Darwin, explaining a system for classifying fingerprints, asking for his assistance in their development. Darwin was unable to assist Dr. Faulds, but agreed to forward the le
|
https://en.wikipedia.org/wiki/Wetlands%20International
|
Wetlands International is a global organisation that works to sustain and restore wetlands and their resources for people and biodiversity. It is an independent, not-for-profit, global organisation, supported by government and NGO membership from around the world.
Based mostly in the developing world, it has 20 regional, national or project offices in all continents and a head office in Ede, the Netherlands.
The NGO works in over 100 countries and at different scales to tackle problems affecting wetlands. With the support of dozens of governmental, NGO and corporate donors and partners, it supports about 80 projects.
Wetlands International's work ranges from research and community-based field projects to advocacy and engagement with governments, corporate and international policy fora and conventions. Wetlands International works through partnerships and is supported by contributions from an extensive specialist expert network and thousands of volunteers.
History
It was founded in 1937 as the International Wildfowl Inquiry and the organisation was focused on the protection of waterbirds. Later, the name became International Waterfowl & Wetlands Research Bureau (IWRB). The scope became wider; besides waterbirds, the organisation was also working on the protection of wetland areas.
Later, organisations with similar objectives emerged in Asia and the Americas: the Asian Wetland Bureau (AWB) (initiated as INTERWADER in 1983) and Wetlands for the Americas (WA) (initiated in 1
|
https://en.wikipedia.org/wiki/Protamine
|
Protamines are small, arginine-rich, nuclear proteins that replace histones late in the haploid phase of spermatogenesis and are believed essential for sperm head condensation and DNA stabilization. They may allow for denser packaging of DNA in the spermatozoon than histones, but they must be decompressed before the genetic data can be used for protein synthesis. However, part of the sperm's genome is packaged by histones (10-15% in humans and other primates) thought to bind genes that are essential for early embryonic development.
Protamine and protamine-like (PL) proteins are collectively known as the sperm-specific nuclear basic proteins (SNBPs). The PL proteins are intermediate in structure between protamine and Histone H1. The C-terminal domain of PL could be the precursor of vertebrate protamine.
Spermatogenesis
During the formation of sperm, protamine binds to the phosphate backbone of DNA using the arginine-rich domain as an anchor. DNA is then folded into a toroid, an O-shaped structure, although the mechanism is not known. A sperm cell can contain up to 50,000 toroid-shaped structures in its nucleus, with each toroid containing about 50 kilobases. Before the toroid is formed, histones are removed from the DNA by transition nuclear proteins, so that protamine can condense it. The effects of this change are: 1) an increase in sperm hydrodynamics, allowing better flow through liquids by reducing the head size; 2) a decrease in the occurrence of DNA damage; 3) removal of the epi
|
https://en.wikipedia.org/wiki/IEEE%20802.21
|
IEEE 802.21 is an IEEE standard for Media Independent Handover (MIH), published in 2008. The standard supports algorithms enabling seamless handover between wired and wireless networks of the same type, as well as handover between different wired and wireless network types, also called vertical handover. Vertical handover was first introduced by Mark Stemm and Randy Katz at UC Berkeley. The standard provides information to allow handing over to and from wired 802.3 networks and wireless 802.11, 802.15, 802.16, 3GPP and 3GPP2 networks through different handover mechanisms.
The IEEE 802.21 working group started work in March 2004. More than 30 companies have joined the working group. The group produced a first draft of the standard including the protocol definition in May 2005. The standard was published in January 2009.
Reasons for 802.21
Cellular networks and 802.11 networks employ handover mechanisms for handover within the same network type (aka horizontal handover). Mobile IP provides handover mechanisms for handover across subnets of different types of networks, but can be slow in the process. Current 802 standards do not support handover between different types of networks. They also do not provide triggers or other services to accelerate mobile IP-based handovers. Moreover, existing 802 standards provide mechanisms for detecting and selecting network access points, but do not allow for detection and selection of networ
|
https://en.wikipedia.org/wiki/Dihydroquinine
|
Dihydroquinine, also known as hydroquinine, is an organic compound, a cinchona alkaloid closely related to quinine. The specific rotation is −148° in ethanol. A derivative of this molecule is used as a chiral ligand in the AD-mix for Sharpless dihydroxylation.
See also
Dihydroquinidine
References
Secondary alcohols
Phenol ethers
Quinoline alkaloids
Quinuclidine alkaloids
|
https://en.wikipedia.org/wiki/Dihydroquinidine
|
Dihydroquinidine (also called hydroquinidine) is an organic compound, a cinchona alkaloid closely related to quinine. The specific rotation is +226° in ethanol at 2 g/100 ml. A derivative of this molecule is used as a chiral ligand in the AD-mix for Sharpless dihydroxylation.
The substance is also a class Ia antiarrhythmic medication.
See also
Dihydroquinine
References
Secondary alcohols
Phenol ethers
Quinolines
Quinuclidine alkaloids
|
https://en.wikipedia.org/wiki/Bairrada%20DOC
|
Bairrada is a Portuguese wine region located in the Beira Litoral Province. The region has Portugal's highest wine classification, Denominação de Origem Controlada (DOC), and its popularity has surged in recent years. It is a small and quite narrow coastal region, part of the broader region of Beira Atlântico, bordered to the northeast by the Lafões IPR and to the east by the Dão DOC.
It is located close to the Atlantic Ocean and the currents have a moderating effect on the climate, resulting in a mild, maritime climate with abundant rainfall. The region is hilly, but the majority of the vineyards are placed on flatter land.
About 2/3 of the national sparkling wine production takes place in this region, and in recent years the city of Anadia received the nickname of "Capital do Espumante", which translates to "Sparkling Wine Capital".
The region is also known for its deep colored tannic red wines, that often have bell pepper and black currant flavors, as well its emerging rosé production.
The boundaries of the Bairrada DOC include the municipalities of Anadia, Cantanhede, Mealhada and Oliveira do Bairro, some parishes in the municipalities of Vagos and Coimbra, and the parish of Nariz, in the municipality of Aveiro.
History
Viticulture in the Bairrada has existed since at least the 10th century, when the region gained independence from the Moors. Located just south of the major Port wine producing center of Porto, the fortunes of Bairrada were on
|
https://en.wikipedia.org/wiki/Parametric%20derivative
|
In calculus, a parametric derivative is a derivative of a dependent variable with respect to another dependent variable that is taken when both variables depend on an independent third variable, usually thought of as "time" (that is, when the dependent variables are x and y and are given by parametric equations in t).
First derivative
Let x(t) and y(t) be the coordinates of the points of the curve expressed as functions of a variable t:

y = y(t), x = x(t).

The first derivative implied by these parametric equations is

dy/dx = (dy/dt) / (dx/dt) = ẏ(t) / ẋ(t),

where the notation ẋ(t) denotes the derivative of x with respect to t. This can be derived using the chain rule for derivatives:

dy/dt = (dy/dx) · (dx/dt),

and dividing both sides by dx/dt gives the equation above.
In general all of these derivatives — dy / dt, dx / dt, and dy / dx — are themselves functions of t and so can be written more explicitly as, for example,
Second derivative
The second derivative implied by a parametric equation is given by

d²y/dx² = (d/dt)(dy/dx) / (dx/dt) = (ÿ(t) ẋ(t) − ẏ(t) ẍ(t)) / ẋ(t)³,

by making use of the quotient rule for derivatives. The latter result is useful in the computation of curvature.
Example
For example, consider the set of functions where:

x(t) = 4t²

and

y(t) = 3t.

Differentiating both functions with respect to t leads to

ẋ(t) = 8t

and

ẏ(t) = 3,

respectively. Substituting these into the formula for the parametric derivative, we obtain

dy/dx = ẏ / ẋ = 3 / (8t),

where ẋ and ẏ are understood to be functions of t.
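Such computations are easy to check symbolically; a minimal SymPy sketch for the example functions above (the variable names are illustrative):

import sympy as sp

t = sp.symbols('t')
x, y = 4*t**2, 3*t                           # the example functions above

dydx = sp.diff(y, t) / sp.diff(x, t)         # first parametric derivative, ẏ/ẋ
d2ydx2 = sp.diff(dydx, t) / sp.diff(x, t)    # second parametric derivative
print(sp.simplify(dydx))                     # 3/(8*t)
print(sp.simplify(d2ydx2))                   # -3/(64*t**3)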
See also
Generalizations of the derivative
External links
Differential calculus
|
https://en.wikipedia.org/wiki/Tesla%20Experimental%20Station
|
The Tesla Experimental Station was a laboratory in Colorado Springs, Colorado, USA, built in 1899 by inventor Nikola Tesla for his study of the use of high-voltage, high-frequency electricity in wireless power transmission. Tesla used it for only one year, until 1900, and it was torn down in 1904 to pay his outstanding debts.
History
In May 1899, Tesla, several of his assistants, and a local contractor commenced the construction of Tesla's laboratory shortly after arriving in Colorado Springs, a high-altitude location where he would have more room than in his downtown New York City laboratory for his high-voltage, high-frequency experiments. Tesla moved there to study the conductive nature of low pressure air, part of his research into wireless transmission of electrical power. The lab possessed the largest Tesla coil ever built, in diameter, which was a preliminary version of the magnifying transmitter planned for installation in the Wardenclyffe Tower. Upon his arrival, he told reporters that he planned to conduct wireless telegraphy experiments, transmitting signals from Pikes Peak to Paris.
He produced artificial lightning, with discharges consisting of millions of volts and up to long. People walking along the street observed sparks jumping between their feet and the ground. Sparks sprang from water line taps when touched. Light bulbs within of the lab glowed even when turned off. Horses in a livery stable bolted from their stalls after receiving shocks thro
|
https://en.wikipedia.org/wiki/Superhelix
|
A superhelix is a molecular structure in which a helix is itself coiled into a helix. This is significant to both proteins and genetic material, such as overwound circular DNA.
The earliest significant reference in molecular biology is from 1971, by F. B. Fuller:
A geometric invariant of a space curve, the writhing number, is defined and studied. For the central curve of a twisted cord the writhing number measures the extent to which coiling of the central curve has relieved local twisting of the cord. This study originated in response to questions that arise in the study of supercoiled double-stranded DNA rings.
About the writhing number, mathematician W. F. Pohl says:
It is well known that the writhing number is a standard measure of the global geometry of a closed space curve.
Contrary to intuition, a topological property, the linking number, arises from the geometric properties twist and writhe according to the following relationship:
Lk = T + W,
where Lk is the linking number, W is the writhe and T is the twist of the coil.
The linking number refers to the number of times that one strand wraps around the other. In DNA this property does not change and can only be modified by specialized enzymes called topoisomerases.
See also
DNA supercoil (superhelical DNA)
Knot theory
References
External links
DNA Structure and Topology at Molecular Biochemistry II: The Bello Lectures.
Helices
Molecular biology
Molecular topology
|
https://en.wikipedia.org/wiki/Motilin
|
Motilin is a 22-amino acid polypeptide hormone in the motilin family that, in humans, is encoded by the MLN gene.
Motilin is secreted by endocrine Mo cells (also referred to as M cells, which are not the same as the M cells, or microfold cells, found in Peyer's patches) that are numerous in crypts of the small intestine, especially in the duodenum and jejunum. It is released into the general circulation in humans at about 100-min intervals during the inter-digestive state and is the most important factor in controlling the inter-digestive migrating contractions; it also stimulates endogenous release from the endocrine pancreas. Based on amino acid sequence, motilin is unrelated to other hormones. Because of its ability to stimulate gastric activity, it was named "motilin". Apart from in humans, the motilin receptor has been identified in the gastrointestinal tracts of pigs, rats, cows, and cats, and in the central nervous system of rabbits.
Discovery
Motilin was discovered by J. C. Brown when he introduced an alkaline solution into the duodena of dogs, which caused strong gastric contractions. Brown et al. predicted that the alkali could either release a stimulus that activates motor activity or prevent the secretion of an inhibitory hormone. They isolated a polypeptide as a by-product of the purification of secretin on carboxymethyl cellulose. They named this polypeptide "Motilin."
Structure
Motilin has 22 amino acids and molecular weight of 2698 Daltons. In extract from human gut and plasm
|
https://en.wikipedia.org/wiki/Differential-algebraic%20system%20of%20equations
|
In electrical engineering, a differential-algebraic system of equations (DAE) is a system of equations that either contains differential equations and algebraic equations, or is equivalent to such a system.
In mathematics these are examples of differential algebraic varieties and correspond to ideals in differential polynomial rings (see the article on differential algebra for the algebraic setup).
We can write these differential equations for a dependent vector of variables x in one independent variable t as

F(ẋ(t), x(t), t) = 0.

When considering these symbols as functions of a real variable (as is the case in applications in electrical engineering or control theory) we look at x(t) = (x₁(t), …, xₙ(t)) as a vector of dependent variables, and the system has as many equations, which we consider as functions Fᵢ(ẋ, x, t), i = 1, …, n.
They are distinct from ordinary differential equations (ODE) in that a DAE is not completely solvable for the derivatives of all components of the function x because these may not all appear (i.e. some equations are algebraic); technically the distinction between an implicit ODE system [that may be rendered explicit] and a DAE system is that the Jacobian matrix ∂F(ẋ, x, t)/∂ẋ is a singular matrix for a DAE system. This distinction between ODEs and DAEs is made because DAEs have different characteristics and are generally more difficult to solve.
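As a quick illustration of the singular-Jacobian criterion, here is a small SymPy sketch (the Cartesian pendulum formulation and all names are supplied for illustration, not taken from the article): the constraint equation contains no derivatives, so the Jacobian of F with respect to ẋ has a zero row and the system is a DAE rather than an implicit ODE.

import sympy as sp

# Planar pendulum written as F(x', x, t) = 0 with an algebraic constraint.
xd, yd, vxd, vyd, lamd = sp.symbols('xd yd vxd vyd lamd')   # derivative slots
x, y, vx, vy, lam = sp.symbols('x y vx vy lam')
m, g, L = sp.symbols('m g L', positive=True)

F = sp.Matrix([
    xd - vx,                # x' = vx
    yd - vy,                # y' = vy
    m*vxd + lam*x,          # m vx' = -lam x
    m*vyd + lam*y + m*g,    # m vy' = -lam y - m g
    x**2 + y**2 - L**2,     # constraint: contains no derivatives at all
])
J = F.jacobian([xd, yd, vxd, vyd, lamd])
print(J.det())              # 0: singular Jacobian w.r.t. the derivatives -> DAE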
In practical terms, the distinction between DAEs and ODEs is often that the solution of a DAE system depends on the derivatives of the input signal and not just the signal itself as i
|
https://en.wikipedia.org/wiki/Eubacterium
|
Eubacterium is a genus of Gram-positive bacteria in the family Eubacteriaceae. These bacteria are characterised by a rigid cell wall. They may either be motile or nonmotile. If motile, they have a flagellum. A typical flagellum consists of a basal body, filament, and hook. The long filament is the organ which helps eubacteria move.
Gram-positive bacteria have a thick peptidoglycan layer and take up the violet Gram stain (whereas Gram-negative bacteria have a thinner peptidoglycan layer, surrounded by a layer of immune response-inducing lipopolysaccharide, and do not retain the Gram stain).
References
Eubacteriaceae
Bacterial vaginosis
Bacteria genera
|
https://en.wikipedia.org/wiki/Vesuvianite
|
Vesuvianite, also known as idocrase, is a green, brown, yellow, or blue silicate mineral. Vesuvianite occurs as tetragonal crystals in skarn deposits and limestones that have been subjected to contact metamorphism. It was first discovered within included blocks or adjacent to lavas on Mount Vesuvius, hence its name. Attractive-looking crystals are sometimes cut as gemstones. Localities which have yielded fine crystallized specimens include Mount Vesuvius and the Ala Valley near Turin, Piedmont.
The specific gravity is 3.4 and the Mohs hardness is 6.5. The name "vesuvianite" was given by Abraham Gottlob Werner in 1795, because fine crystals of the mineral are found at Vesuvius; these are brown in color and occur in the ejected limestone blocks of Monte Somma. Several other names were applied to this species, one of which, "idocrase" by René Just Haüy in 1796, is now in common use.
A sky bluish variety known as cyprine has been reported from Franklin, New Jersey and other locations; the blue is due to impurities of copper in a complex calcium aluminum sorosilicate. Californite is a name sometimes used for jade-like vesuvianite, also known as California jade, American jade or Vesuvianite jade. Xanthite is a manganese rich variety. Wiluite is an optically positive variety from Wilui, Siberia. Idocrase is an older synonym sometimes used for gemstone-quality vesuvianite.
References
Additional sources
Webmineral data
Vesuvianite at Franklin-Sterling
Mindat - Cyprine variant
|
https://en.wikipedia.org/wiki/Exact%20Equation
|
In mathematics, the term exact equation can refer to either of the following:
Exact differential equation
Exact differential form
|
https://en.wikipedia.org/wiki/Nakayama%27s%20lemma
|
In mathematics, more specifically abstract algebra and commutative algebra, Nakayama's lemma — also known as the Krull–Azumaya theorem — governs the interaction between the Jacobson radical of a ring (typically a commutative ring) and its finitely generated modules. Informally, the lemma immediately gives a precise sense in which finitely generated modules over a commutative ring behave like vector spaces over a field. It is an important tool in algebraic geometry, because it allows local data on algebraic varieties, in the form of modules over local rings, to be studied pointwise as vector spaces over the residue field of the ring.
The lemma is named after the Japanese mathematician Tadashi Nakayama and introduced in its present form in Nakayama (1951), although it was first discovered in the special case of ideals in a commutative ring by Wolfgang Krull and then in general by Goro Azumaya (1951). In the commutative case, the lemma is a simple consequence of a generalized form of the Cayley–Hamilton theorem, an observation made by Michael Atiyah (1969). The special case of the noncommutative version of the lemma for right ideals appears in Nathan Jacobson (1945), and so the noncommutative Nakayama lemma is sometimes known as the Jacobson–Azumaya theorem. The latter has various applications in the theory of Jacobson radicals.
Statement
Let be a commutative ring with identity 1. The following is Nakayama's lemma, as stated in :
Statement 1: Let be an ideal in , and a finitely gener
|
https://en.wikipedia.org/wiki/Ochnaceae
|
Ochnaceae is a family of flowering plants in the order Malpighiales. In the APG III system of classification of flowering plants, Ochnaceae is defined broadly, to include about 550 species, and encompasses what some taxonomists have treated as the separate families Medusagynaceae and Quiinaceae. In a phylogenetic study that was published in 2014, Ochnaceae was recognized in the broad sense, but two works published after APG III have accepted the small families Medusagynaceae and Quiinaceae. These have not been accepted by APG IV (2016).
In this article, "Ochnaceae" will refer to the larger circumscription of the family, which is otherwise known as Ochnaceae sensu lato or as the ochnoids. In this sense the family includes 32 genera with about 550 species.
Ochnaceae, defined broadly or narrowly, is pantropical in distribution, with a few species cultivated outside of this range. Ochnaceae is most diverse in the neotropics, with a second center of diversity in tropical Africa. It consists mostly of shrubs and small trees, and, in Sauvagesia, a few herbaceous species. Many are treelets, with a single, erect trunk, but low in height. The Ochnaceae are notable for their unusual leaves. These are usually shiny, with closely spaced, parallel veins, toothed margins, and conspicuous stipules. Most of the species are buzz pollinated. In eight of the genera in tribe Sauvagesieae, the flower changes form after opening, by continued growth of tissue within the flower.
A few species of O
|
https://en.wikipedia.org/wiki/Riesz%E2%80%93Fischer%20theorem
|
In mathematics, the Riesz–Fischer theorem in real analysis is any of a number of closely related results concerning the properties of the space L2 of square integrable functions. The theorem was proven independently in 1907 by Frigyes Riesz and Ernst Sigismund Fischer.
For many authors, the Riesz–Fischer theorem refers to the fact that the Lp spaces from Lebesgue integration theory are complete.
Modern forms of the theorem
The most common form of the theorem states that a measurable function on [−π, π] is square integrable if and only if the corresponding Fourier series converges in the space L². This means that if the Nth partial sum of the Fourier series corresponding to a square-integrable function f is given by

S_N f(x) = Σ_{n = −N}^{N} F_n e^{inx},

where F_n, the nth Fourier coefficient, is given by

F_n = (1 / (2π)) ∫_{−π}^{π} f(x) e^{−inx} dx,

then

lim_{N→∞} ‖S_N f − f‖₂ = 0,

where ‖·‖₂ is the L²-norm.
Conversely, if (a_n) is a two-sided sequence of complex numbers (that is, its indices range from negative infinity to positive infinity) such that

Σ_{n = −∞}^{∞} |a_n|² < ∞,

then there exists a function f such that f is square-integrable and the values a_n are the Fourier coefficients of f.
This form of the Riesz–Fischer theorem is a stronger form of Bessel's inequality, and can be used to prove Parseval's identity for Fourier series.
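A numerical sketch of this L² convergence in Python (NumPy only; the test function f(x) = x and the grid sizes are arbitrary choices made here): the discretized L² error of the partial sums shrinks as N grows.

import numpy as np

# L2 convergence of Fourier partial sums for f(x) = x on [-pi, pi)
M = 2**14
x = np.linspace(-np.pi, np.pi, M, endpoint=False)
f = x

for N in (4, 16, 64, 256):
    n = np.arange(-N, N + 1)
    # F_n = (1/(2*pi)) * integral of f(x) e^{-inx} dx, via a Riemann sum
    F = (np.exp(-1j * np.outer(n, x)) @ f) * (2 * np.pi / M) / (2 * np.pi)
    S_N = (np.exp(1j * np.outer(x, n)) @ F).real
    err = np.sqrt(np.mean((S_N - f) ** 2) * 2 * np.pi)   # ~ ||S_N f - f||_2
    print(N, err)                                        # error decreases in N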
Other results are often called the Riesz–Fischer theorem. Among them is the theorem that, if A is an orthonormal set in a Hilbert space H, and x ∈ H, then

⟨x, y⟩ = 0

for all but countably many y ∈ A, and

Σ_{y ∈ A} |⟨x, y⟩|² ≤ ‖x‖².

Furthermore, if A is an orthonormal basis for H and x an arbitrary vector, the series

Σ_{y ∈ A} ⟨x, y⟩ y

converges commutatively (or unconditionally) to x.
|
https://en.wikipedia.org/wiki/Sparse%20conditional%20constant%20propagation
|
In computer science, sparse conditional constant propagation (SCCP) is an optimization frequently applied in compilers after conversion to static single assignment form (SSA). It simultaneously removes some kinds of dead code and propagates constants throughout a program. Moreover, it can find more constant values, and thus more opportunities for improvement, than separately applying dead code elimination and constant propagation in any order or any number of repetitions.
The algorithm operates by performing abstract interpretation of the code in SSA form. During abstract interpretation, it typically uses a flat lattice of constants for values and a global environment mapping SSA variables to values in this lattice. The crux of the algorithm comes in how it handles the interpretation of branch instructions. When encountered, the condition for a branch is evaluated as best possible given the precision of the abstract values bound to variables in the condition. It may be the case that the values are perfectly precise (neither top nor bottom) and hence, abstract execution can decide in which direction to branch. If the values are not constant, or a variable in the condition is undefined, then both branch directions must be taken to remain conservative.
Upon completion of the abstract interpretation, instructions which were never reached are marked as dead code. SSA variables found to have constant values may then be inlined at (propagated to) their point of use.
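A hypothetical fragment (plain Python, not from any particular compiler) shows what the combined analysis can conclude; constant propagation alone cannot resolve the value of y until the dead branch is removed, while SCCP tracks reachability and constancy together:

x = 4
if x > 0:        # the lattice value of x is the constant 4, so the
    y = x + 1    # condition folds to True and only this edge is executable
else:
    y = 0        # never marked executable: eliminated as dead code
z = y * 2        # the merge for y sees one live definition (5), so z = 10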
Note
|
https://en.wikipedia.org/wiki/Copy%20propagation
|
In compiler theory, copy propagation is the process of replacing the occurrences of targets of direct assignments with their values. A direct assignment is an instruction of the form x = y, which simply assigns the value of y to x.
From the following code:
y = x
z = 3 + y
Copy propagation would yield:
z = 3 + x
Copy propagation often makes use of reaching definitions, use-def chains and def-use chains when computing which occurrences of the target may be safely replaced. If all upwards exposed uses of the target may be safely modified, the assignment operation may be eliminated.
Copy propagation is a useful "clean up" optimization frequently used after other compiler passes have already been run. Some optimizations, such as classical implementations of elimination of common subexpressions, require that copy propagation be run afterwards in order to achieve an increase in efficiency.
See also
Copy elision
Constant folding and constant propagation
References
Further reading
Muchnick, Steven S. Advanced Compiler Design and Implementation. Morgan Kaufmann. 1997.
Compiler optimizations
|
https://en.wikipedia.org/wiki/List%20of%20radio%20stations%20in%20Malaysia
|
This is a list of radio stations in Malaysia, ordered by location and frequency. Frequency varies in different states.
There are a total of 24 private and 44 government-owned radio stations in Malaysia. Stations owned by the government operate under the Radio Televisyen Malaysia (RTM) group. Other stations such as BBC World Service, China Radio International and Voice of Vietnam are available in Malaysia via AM.
Stations
FM stations
Amplitude Modulation (AM) stations
There are no AM radio stations based in Malaysia. All of the AM stations that can be received in Malaysia are from other Asian countries with high-power transmitters. Reception is much better at night.
Stations (other states)
International radio stations (Defunct)
Defunct radio stations
Notes
References
Malaysia
|
https://en.wikipedia.org/wiki/Reaching%20definition
|
In compiler theory, a reaching definition for a given instruction is an earlier instruction whose target variable can reach (be assigned to) the given one without an intervening assignment. For example, in the following code:
d1 : y := 3
d2 : x := y
d1 is a reaching definition for d2. In the following example, however:
d1 : y := 3
d2 : y := 4
d3 : x := y
d1 is no longer a reaching definition for d3, because d2 kills its reach: the value defined in d1 is no longer available and cannot reach d3.
As analysis
The similarly named reaching definitions is a data-flow analysis which statically determines which definitions may reach a given point in the code. Because of its simplicity, it is often used as the canonical example of a data-flow analysis in textbooks. The data-flow confluence operator used is set union, and the analysis is forward flow. Reaching definitions are used to compute use-def chains.
The data-flow equations used for a given basic block S in reaching definitions are:

REACH_in[S] = ⋃_{p ∈ pred[S]} REACH_out[p]
REACH_out[S] = GEN[S] ∪ (REACH_in[S] − KILL[S])

In other words, the set of reaching definitions going into S is the union of all of the reaching definitions from S's predecessors, pred[S]. pred[S] consists of all of the basic blocks that come before S in the control-flow graph. The reaching definitions coming out of S are all reaching definitions of its predecessors minus those reaching definitions whose variable is killed by S, plus any new definitions generated within S.
For a generic instruction d: y ← f(x₁, …, xₙ), we define the GEN and KILL sets as follows:

GEN[d] = {d}, a set of locally a
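A straightforward fixed-point implementation of these equations in Python (a sketch; the data-structure choices and the tiny three-block example are illustrative, reusing the d1/d2/d3 definitions above):

def reaching_definitions(blocks, preds, gen, kill):
    """Forward data-flow analysis; the confluence operator is set union."""
    reach_in = {b: set() for b in blocks}
    reach_out = {b: set(gen[b]) for b in blocks}
    changed = True
    while changed:                           # iterate to a fixed point
        changed = False
        for b in blocks:
            new_in = set().union(*(reach_out[p] for p in preds.get(b, [])))
            new_out = gen[b] | (new_in - kill[b])
            if new_in != reach_in[b] or new_out != reach_out[b]:
                reach_in[b], reach_out[b] = new_in, new_out
                changed = True
    return reach_in, reach_out

# one definition per straight-line block: d1 and d2 both define y, d3 defines x
blocks = ['B1', 'B2', 'B3']
preds = {'B2': ['B1'], 'B3': ['B2']}
gen = {'B1': {'d1'}, 'B2': {'d2'}, 'B3': {'d3'}}
kill = {'B1': {'d2'}, 'B2': {'d1'}, 'B3': set()}
print(reaching_definitions(blocks, preds, gen, kill)[0]['B3'])  # {'d2'}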
|
https://en.wikipedia.org/wiki/Loop%20optimization
|
In compiler theory, loop optimization is the process of increasing execution speed and reducing the overheads associated with loops. It plays an important role in improving cache performance and making effective use of parallel processing capabilities. Most execution time of a scientific program is spent on loops; as such, many compiler optimization techniques have been developed to make them faster.
Representation of computation and transformations
Since instructions inside loops can be executed repeatedly, it is frequently not possible to give a bound on the number of instruction executions that will be impacted by a loop optimization. This presents challenges when reasoning about the correctness and benefits of a loop optimization, specifically the representations of the computation being optimized and the optimization(s) being performed.
Optimization via a sequence of loop transformations
Loop optimization can be viewed as the application of a sequence of specific loop transformations (listed below or in Compiler transformations for high-performance computing) to the source code or intermediate representation, with each transformation having an associated test for legality. A transformation (or sequence of transformations) generally must preserve the temporal sequence of all dependencies if it is to preserve the result of the program (i.e., be a legal transformation). Evaluating the benefit of a transformation or sequence of transformations can be quite difficult wi
|
https://en.wikipedia.org/wiki/Loop%20interchange
|
In compiler theory, loop interchange is the process of exchanging the order of two iteration variables used by a nested loop. The variable used in the inner loop switches to the outer loop, and vice versa. It is often done to ensure that the elements of a multi-dimensional array are accessed in the order in which they are present in memory, improving locality of reference.
For example, in the code fragment:
for i from 0 to 10
for j from 0 to 20
a[i,j] = i + j
loop interchange would result in:
for j from 0 to 20
for i from 0 to 10
a[i,j] = i + j
On occasion, such a transformation may create opportunities to further optimize, such as automatic vectorization of the array assignments.
The utility of loop interchange
The major purpose of loop interchange is to take advantage of the CPU cache when accessing array elements. When a processor accesses an array element for the first time, it will retrieve an entire block of data from memory to cache. That block is likely to have many more consecutive elements after the first one, so on the next array element access, it will be brought directly from cache (which is faster than getting it from slow main memory). Cache misses occur if the contiguously accessed array elements within the loop come from a different cache block, and loop interchange can help prevent this. The effectiveness of loop interchange depends on and must be considered in light of the cache model used by the underlying hardware and the array m
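The effect can be sketched in Python with a NumPy array, whose default C order stores rows contiguously; interpreter overhead mutes the difference compared with compiled code, but the access-pattern contrast is the same (array size and names are arbitrary choices):

import time
import numpy as np

a = np.zeros((1000, 1000))      # C order: each row is contiguous in memory

def fill(a, interchanged):
    n, m = a.shape
    if not interchanged:
        for i in range(n):      # row-major traversal: unit-stride accesses
            for j in range(m):
                a[i, j] = i + j
    else:
        for j in range(m):      # interchanged: consecutive accesses are a
            for i in range(n):  # whole row (1000 elements) apart
                a[i, j] = i + j

for flag in (False, True):
    t0 = time.perf_counter()
    fill(a, flag)
    print('interchanged' if flag else 'row-major', time.perf_counter() - t0)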
|
https://en.wikipedia.org/wiki/Dependence%20analysis
|
In compiler theory, dependence analysis produces execution-order constraints between statements/instructions. Broadly speaking, a statement S2 depends on S1 if S1 must be executed before S2. There are two classes of dependencies: control dependencies and data dependencies.
Dependence analysis determines whether it is safe to reorder or parallelize statements.
Control dependencies
Control dependency is a situation in which a program instruction executes if the previous instruction evaluates in a way that allows its execution.
A statement S2 is control dependent on S1 if and only if S2's execution is conditionally guarded by S1, that is, if and only if S2 ∈ PDF(S1), where PDF(S1) is the post-dominance frontier of statement S1. The following is an example of such a control dependence:
S1 if x > 2 goto L1
S2 y := 3
S3 L1: z := y + 1
Here, S2 only runs if the predicate in S1 is false.
See data dependencies for more details.
Data dependencies
A data dependence arises from two statements which access or modify the same resource.
Flow(True) dependence
A statement S2 is flow dependent on S1 if and only if S1 modifies a resource that S2 reads and S1 precedes S2 in execution. The following is an example of a flow dependence (RAW: Read After Write):
S1 x := 10
S2 y := x + c
Antidependence
A statement S2 is antidependent on S1 if and only if S2 modifies a resource that S1 reads and S1 precedes
|
https://en.wikipedia.org/wiki/Radar%20jamming%20and%20deception
|
Radar jamming and deception is a form of electronic countermeasures that intentionally sends out radio frequency signals to interfere with the operation of radar by saturating its receiver with noise or false information. Concepts that blanket the radar with signals so its display cannot be read are normally known as jamming, while systems that produce confusing or contradictory signals are known as deception, but it is also common for all such systems to be referred to as jamming.
There are two general classes of radar jamming, mechanical and electronic. Mechanical jamming entails reflecting enemy radio signals in various ways to provide false or misleading target signals to the radar operator. Electronic jamming works by transmitting additional radio signals towards enemy receivers, making it difficult to detect real target signals, or by taking advantage of known behaviors of automated systems such as radar lock-on to confuse the system.
Various counter-countermeasures can sometimes help radar operators maintain target detection despite jamming.
Mechanical jamming
Mechanical jamming is caused by devices that reflect or re-reflect radar energy back to the radar to produce false target returns on the operator's scope. Mechanical jamming devices include chaff, corner reflectors, and decoys.
Chaff is made of different length metallic strips, which reflect different frequencies, to create a large area of false returns in which a real contact would be difficult to detect. Modern cha
|
https://en.wikipedia.org/wiki/Slutsky%20equation
|
The Slutsky equation (or Slutsky identity) in economics, named after Eugen Slutsky, relates changes in Marshallian (uncompensated) demand to changes in Hicksian (compensated) demand, the latter so called because it compensates the consumer to maintain a fixed level of utility.
There are two parts of the Slutsky equation, namely the substitution effect, and income effect.
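For reference, a sketch of the identity in its usual textbook notation (the notation is supplied here, not present in the original text), with Marshallian demand x_i(p, w), Hicksian demand h_i(p, u), price p_j and wealth w:

\frac{\partial x_i(p, w)}{\partial p_j} = \frac{\partial h_i(p, u)}{\partial p_j} - \frac{\partial x_i(p, w)}{\partial w}\, x_j(p, w)

The first term on the right-hand side is the substitution effect and the second is the income effect.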
In general, the substitution effect can be negative for consumers as it can limit choices. Slutsky designed this formula to explore a consumer's response as the price changes. When the price increases, the budget set moves inward, which also causes the quantity demanded to decrease. In contrast, when the price decreases, the budget set moves outward, which leads to an increase in the quantity demanded. The substitution effect is due to the effect of the relative price change while the income effect is due to the effect of income being freed up. The equation demonstrates that the change in the demand for a good, caused by a price change, is the result of two effects:
a substitution effect: when the price of good changes, as it becomes relatively cheaper, if hypothetically consumer's consumption remains same, income would be freed up which could be spent on a combination of each or more of the goods.
an income effect: the purchasing power of a consumer increases as a result of a price decrease, so the consumer can now afford better products or more of the same products, depending on whether the product itself is a normal
|
https://en.wikipedia.org/wiki/Nucleic%20Acids%20Research
|
Nucleic Acids Research is an open-access peer-reviewed scientific journal published since 1974 by the Oxford University Press. The journal covers research on nucleic acids, such as DNA and RNA, and related work. According to the Journal Citation Reports, the journal's 2021 impact factor is 19.160. The journal publishes two yearly special issues, the first issue of each year is dedicated to biological databases, published in January since 1993, and the other is devoted to papers describing web-based software resources of value to the biological community (web servers), published in July since 2003.
References
External links
Archives | https://www.ncbi.nlm.nih.gov/pmc/journals/4/
Biochemistry journals
Open access journals
Academic journals established in 1974
Oxford University Press academic journals
Biweekly journals
English-language journals
|
https://en.wikipedia.org/wiki/Super-seeding
|
In file sharing, super-seeding is an algorithm developed by John Hoffman for the BitTorrent communications protocol that helps downloaders become uploaders more quickly, but it introduces the danger of total seeding failure if there is only one downloader.
The algorithm applies when there is only one seed in the swarm. By permitting each downloader to download only specific parts of the files listed in a torrent, it allows peers to start seeding more quickly. Peers attached to a seed with super-seeding enabled therefore distribute pieces of the torrent file much more readily before they have completed the download themselves.
In 2003, BitTornado became the first BitTorrent client to implement the algorithm.
Effects
Testing by one group found that super-seeding can reduce a seed's upload requirement by around 20%. It works best when the upload speed of the seed is greater than that of individual peers.
Super seeding transfers stall when there is only one downloading client. The seeders will not send more data until a second client receives the data. To avoid this, rTorrent continues to offer more pieces to the peers without waiting for confirmation, until it is uploading at its configured capacity.
Supporting clients
References
External links
Description of original super-seed algorithm in BitTornado
Report by Robb Toploski (Issue #4 & 5 are regarding Super Seeding)
BitTorrent
|
https://en.wikipedia.org/wiki/T.%20J.%20Rodgers
|
Thurman John "T. J." Rodgers (born March 15, 1948) is an American billionaire scientist and entrepreneur. He is the founder of Cypress Semiconductor and holds patents ranging from semiconductors to energy to winemaking. Rodgers is known for his public relations acumen, brash personality, and strong advocacy of laissez-faire capitalism. He stepped down as Cypress CEO in April 2016 and Director in August 2016 after serving for 34 years.
Early life
Rodgers was born on March 15, 1948, in Oshkosh, Wisconsin. He goes back to nearby Green Bay, Wisconsin several times a year to attend Green Bay Packers football games. His father was a car salesman and worked for General Motors and his mother was a school teacher, with a master's degree in radio electronics. He was a Sloan scholar at Dartmouth College and played on the Dartmouth Big Green football team. In 1970 he received his bachelor's degree, graduating as salutatorian with majors in chemistry and physics. He received his master's degree (1973) and Ph.D. (1975) in electrical engineering from Stanford University. While pursuing his Ph.D. degree, Rodgers invented the VMOS process technology, which he later licensed to American Microsystems, Inc. He founded Cypress Semiconductor in 1982. He was awarded an honorary doctorate from the Universidad Francisco Marroquín in Guatemala City.
Career
After finishing a doctorate at Stanford, he turned down a job offer from Intel, saying that CEO Andrew S. Grove was unlikely to give him the free
|
https://en.wikipedia.org/wiki/Ecotec
|
Ecotec (capitalized ECOTEC, from "Emissions Control Optimization TEChnology") is a General Motors (GM) and Opel Automobile GmbH (Opel) trademark that refers to a series of emissions technologies that were implemented throughout a range of GM engines. ECOTEC can refer to the following diesel and petrol engines originally produced by General Motors:
Ecotec Family 0 – straight-four DOHC engines produced by Adam Opel AG and GM Powertrain US.
Ecotec Family 1 – straight-four SOHC/DOHC engines produced by Adam Opel AG, GM Korea, and GM do Brasil.
Ecotec Family II – straight-four SOHC/DOHC engines produced by Adam Opel AG, Holden, and GM do Brasil.
Ecotec L850 – straight-four all-aluminium DOHC engines produced by Adam Opel AG, GM Powertrain US, and Saab Automobile Powertrain AB.
Ecotec V6 – a version of the Series II 3800 V6 engine, produced by the Holden Engine Company between 1995 and 2004.
CDTI (Common Rail Diesel Turbo Intercooled) Ecotec – common rail diesel engines for Opel/Vauxhall cars:
Originally designed and produced by Fiat (MultiJet) and currently produced by Adam Opel AG. Also produced by Isuzu (Circle L).
VCDi Ecotec – common rail diesel engines for Chevrolet and Holden cars, produced by GM Korea (a licensed VM Motori RA 420 SOHC and Family Z).
DTI (Diesel Turbo Intercooled) Ecotec – diesel engines for use in Opel/Vauxhall cars, produced by Isuzu (Circle L).
SIDI (Spark Ignition Direct Injection) Ecotec – petrol Medium Gasoline Engine produced by Adam Opel AG
|
https://en.wikipedia.org/wiki/Regression%20dilution
|
Regression dilution, also known as regression attenuation, is the biasing of the linear regression slope towards zero (the underestimation of its absolute value), caused by errors in the independent variable.
Consider fitting a straight line for the relationship of an outcome variable y to a predictor variable x, and estimating the slope of the line. Statistical variability, measurement error or random noise in the y variable causes uncertainty in the estimated slope, but not bias: on average, the procedure calculates the right slope. However, variability, measurement error or random noise in the x variable causes bias in the estimated slope (as well as imprecision). The greater the variance in the x measurement error, the closer the estimated slope is to zero rather than to the true value.
It may seem counter-intuitive that noise in the predictor variable x induces a bias, but noise in the outcome variable y does not. Recall that linear regression is not symmetric: the line of best fit for predicting y from x (the usual linear regression) is not the same as the line of best fit for predicting x from y.
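A small simulation makes the asymmetry concrete (a Python sketch; the sample size, noise levels and the closed-form attenuation factor var(x)/(var(x) + var(error)) follow from the setup chosen here):

import numpy as np

rng = np.random.default_rng(0)
n, true_slope = 100_000, 2.0
x_true = rng.normal(size=n)                               # var(x) = 1
y = true_slope * x_true + rng.normal(scale=0.5, size=n)   # y-noise: no bias

for noise_sd in (0.0, 0.5, 1.0):
    x_obs = x_true + rng.normal(scale=noise_sd, size=n)   # x-noise: attenuation
    fitted = np.polyfit(x_obs, y, 1)[0]
    predicted = true_slope / (1.0 + noise_sd**2)          # attenuation factor
    print(f"noise_sd={noise_sd}: fitted {fitted:.3f}, predicted {predicted:.3f}")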
Slope correction
Regression slope and other regression coefficients can be disattenuated as follows.
The case of a fixed x variable
The case that x is fixed, but measured with noise, is known as the functional model or functional relationship.
It can be corrected using total least squares and errors-in-variables models in general.
The case of a randomly distributed x var
|
https://en.wikipedia.org/wiki/Conservative%20extension
|
In mathematical logic, a conservative extension is a supertheory of a theory which is often convenient for proving theorems, but proves no new theorems about the language of the original theory. Similarly, a non-conservative extension is a supertheory which is not conservative, and can prove more theorems than the original.
More formally stated, a theory T₂ is a (proof theoretic) conservative extension of a theory T₁ if every theorem of T₁ is a theorem of T₂, and any theorem of T₂ in the language of T₁ is already a theorem of T₁.
More generally, if Γ is a set of formulas in the common language of T₁ and T₂, then T₂ is Γ-conservative over T₁ if every formula from Γ provable in T₂ is also provable in T₁.
Note that a conservative extension of a consistent theory is consistent. If it were not, then by the principle of explosion, every formula in the language of T₂ would be a theorem of T₂, so every formula in the language of T₁ would be a theorem of T₁, so T₁ would not be consistent. Hence, conservative extensions do not bear the risk of introducing new inconsistencies. This can also be seen as a methodology for writing and structuring large theories: start with a theory, T₀, that is known (or assumed) to be consistent, and successively build conservative extensions T₁, T₂, ... of it.
Recently, conservative extensions have been used for defining a notion of module for ontologies: if an ontology is formalized as a logical theory, a subtheory is a module if the whole ontology is a conservative extension of the subtheory.
|
https://en.wikipedia.org/wiki/Genetic%20fuzzy%20systems
|
In computer science and operations research, genetic fuzzy systems are fuzzy systems constructed by using genetic algorithms or genetic programming, which mimic the process of natural evolution, to identify their structure and parameters.
When it comes to automatically identifying and building a fuzzy system, given the high degree of nonlinearity of the output, traditional linear optimization tools have several limitations. Therefore, in the framework of soft computing, genetic algorithms (GAs) and genetic programming (GP) methods have been used successfully to identify structure and parameters of fuzzy systems.
Fuzzy systems
Fuzzy systems are fundamental methodologies to represent and process linguistic information, with mechanisms to deal with uncertainty and imprecision. For instance, the task of modeling a driver parking a car involves greater difficulty in writing down a concise mathematical model as the description becomes more detailed. However, the task is far less difficult when expressed with simple linguistic rules, which are themselves fuzzy. With such remarkable attributes, fuzzy systems have been widely and successfully applied to control, classification and modeling problems (Mamdani, 1974) (Klir and Yuan, 1995) (Pedrycz and Gomide, 1998).
Although simplistic in its design, the identification of a fuzzy system is a rather complex task that comprises the identification
of (a) the input and output variables, (b) the rule base (knowledge base), (c) the membership func
|
https://en.wikipedia.org/wiki/Bully%20algorithm
|
In distributed computing, the bully algorithm is a method for dynamically electing a coordinator or leader from a group of distributed computer processes. The process with the highest process ID number from amongst the non-failed processes is selected as the coordinator.
Assumptions
The algorithm assumes that:
the system is synchronous.
processes may fail at any time, including during execution of the algorithm.
a process fails by stopping and returns from failure by restarting.
there is a failure detector which detects failed processes.
message delivery between processes is reliable.
each process knows its own process id and address, and that of every other process.
Algorithm
The algorithm uses the following message types:
Election Message: Sent to announce election.
Answer (Alive) Message: Responds to the Election message.
Coordinator (Victory) Message: Sent by winner of the election to announce victory.
When a process P recovers from failure, or the failure detector indicates that the current coordinator has failed, P performs the following actions:
If P has the highest process ID, it sends a Victory message to all other processes and becomes the new Coordinator. Otherwise, P broadcasts an Election message to all other processes with higher process IDs than itself.
If P receives no Answer after sending an Election message, then it broadcasts a Victory message to all other processes and becomes the Coordinator.
If P receives an Answer from a process with a highe
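A toy, synchronous rendition in Python can make the message flow concrete (direct method calls stand in for reliable message delivery, and all class and method names are invented for this sketch):

class Process:
    """Toy synchronous bully election; method calls stand in for messages."""
    def __init__(self, pid):
        self.pid = pid
        self.alive = True
        self.coordinator = None
        self.peers = []                      # filled in once all exist

    def election(self):
        higher = [p for p in self.peers if p.pid > self.pid and p.alive]
        answered = False
        for p in higher:
            answered |= p.on_election()      # Election message to each
        if not answered:
            self.announce()                  # no Answer received: we win

    def on_election(self):
        if not self.alive:
            return False                     # a failed process stays silent
        self.election()                      # the higher process takes over
        return True                          # Answer (Alive) message

    def announce(self):
        for p in self.peers:
            if p.alive:
                p.coordinator = self.pid     # Coordinator (Victory) message

procs = [Process(i) for i in range(1, 6)]
for p in procs:
    p.peers = procs
procs[-1].alive = False                      # the highest-ID coordinator fails
procs[0].election()                          # a low-ID process starts an election
print([p.coordinator for p in procs if p.alive])   # [4, 4, 4, 4]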
|
https://en.wikipedia.org/wiki/Ray%20Lewington
|
Raymond Lewington (born 7 September 1956) is an English football manager and former player who is the current first-team coach of Premier League club Crystal Palace.
Born in London, he started his playing career in the city at Chelsea. He went on to play for Vancouver Whitecaps, Wimbledon, Sheffield United, and had two spells at Fulham, for whom he made 234 Football League appearances. In his second spell at Fulham Lewington was player-manager.
Following the end of his time as a player he has spent most of the rest of his career as a coach or assistant manager, with spells at Crystal Palace and Fulham, as well as the England national football team. Outside of positions acting as caretaker, he has also been first team manager at Brentford and Watford.
Playing career
Lewington started his career at Chelsea in the 1970s, and played a season at Vancouver Whitecaps in 1979 where he was part of the Whitecaps' championship squad that won the NASL Soccer Bowl '79, before a loan spell at Wimbledon.
In 1980, he transferred to Fulham, and he was to go on and make over 170 League appearances for them before a season at Sheffield United in 1985–86. After that season he returned to Fulham and went on to play another 60 league matches for them.
Managerial career
Lewington became player-manager of Fulham after they were relegated to the Football League Third Division in July 1986. Lewington, still only 29, was the youngest manager in the Football League at the time. Fulham's budget w
|
https://en.wikipedia.org/wiki/Twilight%20Sentinel
|
Twilight Sentinel is a system in General Motors cars that uses a photoelectric cell in the dashboard to sense outside light and darkness and turn the exterior lights off and on accordingly. The Sentinel also includes a timer, which can be used to keep the headlights on after the key is removed from the ignition switch, usually up to three minutes.
The sensor system turns the lights on or off once a change in ambient light has lasted about 10 seconds.
Twilight Sentinel can be turned off. It also can be bypassed by manually turning on the headlights. Older versions did not turn on the exterior lights for daytime driving conditions such as fog and heavy rain; improvements linked the Sentinel to RainSense, GM's automatic windshield-wiper system, to turn lights on automatically when the wipers are on.
Twilight Sentinel was first offered on the 1964 Cadillac lineup and was later expanded to other GM makes.
A related feature offered on many General Motors models in the 1950s and 1960s was the Autronic Eye headlight dimmer. Cadillac later named the system "Guidematic Headlamp Control", which encompassed Twilight Sentinel and Dimming Sentinel controls.
Ford Motor Company offers a nearly identical exterior lights on/off/delay shut-off system, called "AutoLamp". The Ford system could be had with automatic dimming on Lincoln models. Chrysler offered Twilight Sentinel on the Imperial from 1967 to 1975; other full-size C-body Chrysler products from the mid-sixties to the mid-seventie
|
https://en.wikipedia.org/wiki/Von%20Mises%20distribution
|
In probability theory and directional statistics, the von Mises distribution (also known as the circular normal distribution or Tikhonov distribution) is a continuous probability distribution on the circle. It is a close approximation to the wrapped normal distribution, which is the circular analogue of the normal distribution. A freely diffusing angle on a circle is a wrapped normally distributed random variable with an unwrapped variance that grows linearly in time. On the other hand, the von Mises distribution is the stationary distribution of a drift and diffusion process on the circle in a harmonic potential, i.e. with a preferred orientation. The von Mises distribution is the maximum entropy distribution for circular data when the real and imaginary parts of the first circular moment are specified. The von Mises distribution is a special case of the von Mises–Fisher distribution on the N-dimensional sphere.
Definition
The von Mises probability density function for the angle x is given by:

f(x | μ, κ) = e^{κ cos(x − μ)} / (2π I₀(κ)),

where I₀(κ) is the modified Bessel function of the first kind of order 0, with this scaling constant chosen so that the distribution integrates to unity:

∫_{−π}^{π} f(x | μ, κ) dx = 1.

The parameters μ and 1/κ are analogous to μ and σ² (the mean and variance) in the normal distribution:
μ is a measure of location (the distribution is clustered around μ), and
κ is a measure of concentration (a reciprocal measure of dispersion, so 1/κ is analogous to σ²).
If κ is zero, the distribution is uniform, and for small κ, it is cl
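The density is available in SciPy; a short sketch (the parameter values are arbitrary) evaluates it and checks the normalization numerically:

import numpy as np
from scipy.integrate import quad
from scipy.stats import vonmises

mu, kappa = 0.0, 2.0
rv = vonmises(kappa, loc=mu)            # f(x | mu, kappa) on the circle

x = np.linspace(-np.pi, np.pi, 5)
print(rv.pdf(x))                        # e^{kappa*cos(x-mu)} / (2*pi*I0(kappa))

area, _ = quad(rv.pdf, -np.pi, np.pi)   # integrate over one full period
print(area)                             # ~1.0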
|
https://en.wikipedia.org/wiki/Apostatic%20selection
|
Apostatic selection is a form of negative frequency-dependent selection. It describes the survival of individual prey animals that are different (through mutation) from their species in a way that makes it more likely for them to be ignored by their predators. It operates on polymorphic species, species which have different forms. In apostatic selection, the common forms of a species are preyed on more than the rarer forms, giving the rare forms a selective advantage in the population. It has also been discussed that apostatic selection acts to stabilize prey polymorphisms.
The term "apostatic selection" was introduced in 1962 by Bryan Clarke in reference to predation on polymorphic grove snails and since then it has been used as a synonym for negative frequency-dependent selection. The behavioural basis of apostatic selection was initially neglected, but was eventually established by A.B Bond.
Apostatic selection can also apply to the predator if the predator has various morphs. There are multiple concepts that are closely linked with apostatic selection. One is the idea of prey switching, which is another term used to look at a different aspect of the same phenomenon, as well as the concept of a search image. Search images are relevant to apostatic selection as it is how a predator is able to detect an organism as a possible prey. Apostatic selection is important in evolution because it can sustain a stable equilibrium of morph frequencies, and hence maintains large am
|
https://en.wikipedia.org/wiki/Strike%20rate
|
Strike rate refers to two different statistics in the sport of cricket. Batting strike rate is a measure of how quickly a batter achieves the primary goal of batting, namely scoring runs, measured in runs per 100 balls; higher is better. Bowling strike rate is a measure of how quickly a bowler achieves the primary goal of bowling, namely taking wickets (i.e. getting batters out), measured in balls per wicket; lower is better. For bowlers, economy rate is a more frequently discussed statistic.
Both strike rates are relatively new statistics, having only been invented and considered of importance after the introduction of One Day International cricket in the 1970s.
Batting strike rate
Batting strike rate (s/r) is defined for a batter as the average number of runs scored per 100 balls faced. The higher the strike rate, the more effective a batter is at scoring quickly.
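Both definitions reduce to one-line calculations; a quick Python illustration (the figures are made up):

def batting_strike_rate(runs, balls_faced):
    return 100.0 * runs / balls_faced      # runs per 100 balls; higher is better

def bowling_strike_rate(balls_bowled, wickets):
    return balls_bowled / wickets          # balls per wicket; lower is better

print(batting_strike_rate(75, 50))   # 150.0
print(bowling_strike_rate(300, 5))   # 60.0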
In Test cricket, a batter's strike rate is of secondary importance to ability to score runs without getting out. This means a Test batter's most important statistic is generally considered to be batting average, rather than strike rate.
In limited overs cricket, strike rates are of considerably more importance. Since each team only faces a limited number of balls in an innings, the faster a batter scores, the more runs the team will be able to accumulate. Strike rates of over 150 are becoming common in Twenty20 cricket. Strike rate is probably considered by most as the key factor in a batter in one day cricket
|
https://en.wikipedia.org/wiki/Prosthaphaeresis
|
Prosthaphaeresis (from the Greek προσθαφαίρεσις) was an algorithm used in the late 16th century and early 17th century for approximate multiplication and division using formulas from trigonometry. For the 25 years preceding the invention of the logarithm in 1614, it was the only known generally applicable way of approximating products quickly. Its name comes from the Greek prosthesis (πρόσθεσις) and aphaeresis (ἀφαίρεσις), meaning addition and subtraction, two steps in the process.
History and motivation
In 16th-century Europe, celestial navigation of ships on long voyages relied heavily on ephemerides to determine their position and course. These voluminous charts prepared by astronomers detailed the position of stars and planets at various points in time. The models used to compute these were based on spherical trigonometry, which relates the angles and arc lengths of spherical triangles using formulas such as the spherical law of cosines,

cos a = cos b cos c + sin b sin c cos A,

where a, b and c are the angles subtended at the centre of the sphere by the corresponding arcs.
When one quantity in such a formula is unknown but the others are known, the unknown quantity can be computed using a series of multiplications, divisions, and trigonometric table lookups. Astronomers had to make thousands of such calculations, and because the best method of multiplication available was long multiplication, most of this time was spent taxingly multiplying out products.
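The heart of the later prosthaphaeresis algorithm is a product-to-sum identity such as cos A cos B = [cos(A − B) + cos(A + B)] / 2, which turns one multiplication into two table lookups, an addition and a halving. A Python sketch (the function name and the restriction to factors in (0, 1] are choices made here for simplicity):

import math

def prosthaphaeresis_product(x, y):
    """Approximate x*y for x, y in (0, 1] via
       cos(A)cos(B) = (cos(A - B) + cos(A + B)) / 2,
    the way a 16th-century calculator would, with table lookups."""
    A = math.acos(x)    # inverse lookup in a cosine table
    B = math.acos(y)
    return 0.5 * (math.cos(A - B) + math.cos(A + B))   # two lookups, add, halve

print(prosthaphaeresis_product(0.25, 0.5))   # 0.125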
Mathematicians, particularly those who were also ast
|
https://en.wikipedia.org/wiki/A%20Midsummer%20Night%27s%20Gene
|
A Midsummer Night's Gene is a science fiction parody novel of Shakespeare's play A Midsummer Night's Dream, written by Andrew Harman and published in 1997 by Random House. It reflects the plot of the original play only slightly. It concentrates on two of the fairies and follows their attempts to play with genetic engineering in a modern English town.
1997 British novels
1997 science fiction novels
Comic science fiction novels
British science fiction novels
Parody novels
Novels based on A Midsummer Night's Dream
Random House books
Novels about genetic engineering
Modern adaptations of works by William Shakespeare
|
https://en.wikipedia.org/wiki/Prometaphase
|
Prometaphase is the phase of mitosis following prophase and preceding metaphase in eukaryotic somatic cells. In prometaphase, the nuclear membrane breaks apart into numerous "membrane vesicles," and the chromosomes inside form protein structures called kinetochores. Kinetochore microtubules emerging from the centrosomes at the poles (ends) of the spindle reach the chromosomes and attach to the kinetochores, throwing the chromosomes into agitated motion. Other spindle microtubules make contact with microtubules coming from the opposite pole. Forces exerted by protein "motors" associated with spindle microtubules move the chromosomes toward the centre of the cell.
Prometaphase is not always presented as a distinct part of mitosis. In sources that do not use the term, the events described here are instead assigned to late prophase and early metaphase.
Types of microtubules
The microtubules are composed of two types, kinetochore microtubules and non-kinetochore microtubules.
Kinetochore microtubules begin searching for kinetochores to attach to.
A number of non-kinetochore microtubules, or polar microtubules, find and interact with corresponding non-kinetochore microtubules from the opposite centrosome to form the mitotic spindle.
Transition from prometaphase to metaphase
The role of prometaphase is completed when all of the kinetochore microtubules have attached to their kinetochores, upon which metaphase begins. An unattached kinetochore, and thus a non-aligned chromosome
|
https://en.wikipedia.org/wiki/Municipality%20of%20the%20District%20of%20East%20Hants
|
East Hants, officially named the Municipality of the District of East Hants, is a district municipality in Hants County, Nova Scotia, Canada. Statistics Canada classifies the district municipality as a municipal district.
With its administrative seat in Elmsdale, the district municipality occupies the eastern half of Hants County from the Minas Basin to the boundary with Halifax County, sharing this boundary with the West Hants Regional Municipality. It was created in 1861 from the former townships of Uniacke, Rawdon, Douglas, Walton, Shubenacadie and Maitland. Its most settled area is in the Shubenacadie Valley.
Demographics
In the 2021 Census of Population conducted by Statistics Canada, the Municipality of the District of East Hants had a population of living in of its total private dwellings, a change of from its 2016 population of . With a land area of , it had a population density of in 2021.
Public works
The Public Works division operates two water utility distribution sites and three sewage collection and treatment systems for communities in the serviced areas adjacent to Highway 102 and along the Shubenacadie River. The division also operates an engineered spring which draws additional water from Grand Lake to the Shubenacadie River during low water level events.
Drinking water is distributed across 71.0 kilometres of main distribution lines. Wastewater is collected through 80.5 kilometres of wastewater collection mains.
|
https://en.wikipedia.org/wiki/Meramec%20State%20Park
|
Meramec State Park is a public recreation area located near Sullivan, Missouri, about 60 miles from St. Louis, along the Meramec River. The park has diverse ecosystems such as hardwood forests and glades. There are over 40 caves located throughout the park, which is underlain by dolomite bedrock. The most famous is Fisher Cave, located near the campgrounds. The park borders the Meramec Conservation Area.
History
The park was acquired by the state in 1927, then saw active development by the Civilian Conservation Corps (CCC) between 1933 and 1935. At that time, trails were laid out and numerous buildings constructed including a dining hall, recreation hall, concession building, and shelters.
In the late 1970s, as part of the Meramec Basin Project, the U.S. Army Corps of Engineers began work on a dam in the park to impound the river. The resulting reservoir would have permanently flooded much of the park and imperiled many different species, including the endangered Indiana bat. However, in response to direct citizen action against the dam, the project was halted, marking a victory for the environmental movement.
Historic sites
Three surviving CCC-era structures were added to the National Register of Historic Places in 1985:
Meramec State Park Lookout House/Observation Tower: The rustic-style stone and trussed timber octagonal lookout tower was built about 1934.
Meramec State Park Pump House: The rustic-style stone pump house (well house) on the Lodge Trail has a medium-pitched front-gab
|
https://en.wikipedia.org/wiki/Nutritional%20genomics
|
Nutritional genomics, also known as nutrigenomics, is a science studying the relationship between the human genome, human nutrition and health. People in the field work toward developing an understanding of how the whole body responds to a food via systems biology, as well as single gene/single food compound relationships. The term, denoting the relation between food and inherited genes, was first used in 2001.
Introduction
The term "nutritional genomics" is an umbrella term including several subcategories, such as nutrigenetics, nutrigenomics, and nutritional epigenetics. Each of these subcategories explain some aspect of how genes react to nutrients and express specific phenotypes, like disease risk. There are several applications for nutritional genomics, for example how much nutritional intervention and therapy can successfully be used for disease prevention and treatment.
Background and preventive health
Nutritional science originally emerged as a field that studied individuals lacking certain nutrients and the subsequent effects, such as the disease scurvy, which results from a lack of vitamin C. As other diseases closely related to diet (but not deficiency), such as obesity, became more prevalent, nutritional science expanded to cover these topics as well. Nutritional research typically focuses on preventive measures, trying to identify which nutrients or foods will raise or lower the risk of disease and damage to the human body.
For example
|
https://en.wikipedia.org/wiki/Crank%E2%80%93Nicolson%20method
|
In numerical analysis, the Crank–Nicolson method is a finite difference method used for numerically solving the heat equation and similar partial differential equations. It is a second-order method in time. It is implicit in time, can be written as an implicit Runge–Kutta method, and it is numerically stable. The method was developed by John Crank and Phyllis Nicolson in the mid 20th century.
For diffusion equations (and many other equations), it can be shown that the Crank–Nicolson method is unconditionally stable. However, the approximate solutions can still contain (decaying) spurious oscillations if the ratio of the time step times the thermal diffusivity to the square of the space step, $r = \frac{\alpha\,\Delta t}{\Delta x^2}$, is large (typically, larger than 1/2 per Von Neumann stability analysis). For this reason, whenever large time steps or high spatial resolution are necessary, the less accurate backward Euler method is often used, which is both stable and immune to oscillations.
Principle
The Crank–Nicolson method is based on the trapezoidal rule, giving second-order convergence in time. For linear equations, the trapezoidal rule is equivalent to the implicit midpoint method, the simplest example of a Gauss–Legendre implicit Runge–Kutta method, which also has the property of being a geometric integrator. For example, in one dimension, suppose the partial differential equation is

$$\frac{\partial u}{\partial t} = F\!\left(u, x, t, \frac{\partial u}{\partial x}, \frac{\partial^2 u}{\partial x^2}\right).$$

Letting $u_i^n = u(i\,\Delta x, n\,\Delta t)$ and $F_i^n = F$ evaluated for $i$, $n$ and $u_i^n$, the equation for the Crank–Nicolson method is a combination of the forward Euler method at $n$ and the backward Euler method at $n + 1$.
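As a concrete illustration, below is a minimal sketch of the Crank–Nicolson scheme applied to the 1D heat equation $u_t = \alpha u_{xx}$ with zero Dirichlet boundaries; the grid sizes, diffusivity, and initial condition are arbitrary choices for the example, not values from the source:

```python
import numpy as np

# Crank-Nicolson for u_t = alpha * u_xx on [0, 1] with u = 0 at both ends.
alpha, L, nx, nt, dt = 1.0, 1.0, 51, 200, 1e-4
dx = L / (nx - 1)
r = alpha * dt / dx**2           # the stability/oscillation ratio from the text

x = np.linspace(0.0, L, nx)
u = np.sin(np.pi * x)            # initial condition (decays like exp(-pi^2 t))

# Build tridiagonal systems A u^{n+1} = B u^n over the interior points.
m = nx - 2
main = np.eye(m)
off = np.eye(m, k=1) + np.eye(m, k=-1)
A = (1 + r) * main - (r / 2) * off
B = (1 - r) * main + (r / 2) * off

for _ in range(nt):
    u[1:-1] = np.linalg.solve(A, B @ u[1:-1])

# Compare with the exact solution at t = nt * dt.
t = nt * dt
print(np.max(np.abs(u - np.exp(-np.pi**2 * alpha * t) * np.sin(np.pi * x))))
```

Because the scheme is implicit, each step solves a tridiagonal linear system; a production implementation would use a banded solver rather than the dense `np.linalg.solve` used here for brevity.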
|
https://en.wikipedia.org/wiki/Rice%20distribution
|
In probability theory, the Rice distribution or Rician distribution (or, less commonly, Ricean distribution) is the probability distribution of the magnitude of a circularly-symmetric bivariate normal random variable, possibly with non-zero mean (noncentral). It was named after Stephen O. Rice (1907–1986).
Characterization
The probability density function is

$$f(x \mid \nu, \sigma) = \frac{x}{\sigma^2} \exp\!\left(\frac{-(x^2 + \nu^2)}{2\sigma^2}\right) I_0\!\left(\frac{x\nu}{\sigma^2}\right),$$

where $I_0(z)$ is the modified Bessel function of the first kind with order zero.
In the context of Rician fading, the distribution is often also rewritten using the shape parameter $K = \frac{\nu^2}{2\sigma^2}$, defined as the ratio of the power contributions by line-of-sight path to the remaining multipaths, and the scale parameter $\Omega = \nu^2 + 2\sigma^2$, defined as the total power received in all paths.
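A minimal numerical check of this characterization, assuming SciPy's `scipy.stats.rice` parameterization (shape $b = \nu/\sigma$, scale $\sigma$): magnitudes of a bivariate normal with mean $(\nu, 0)$ and per-component variance $\sigma^2$ should match the Rice distribution's moments.

```python
import numpy as np
from scipy import stats

nu, sigma = 2.0, 1.0
rng = np.random.default_rng(0)

# Magnitude of a circularly-symmetric bivariate normal with mean (nu, 0).
samples = np.hypot(rng.normal(nu, sigma, 100_000),
                   rng.normal(0.0, sigma, 100_000))

# scipy.stats.rice uses shape b = nu / sigma and scale = sigma.
b = nu / sigma
print(samples.mean(), stats.rice.mean(b, scale=sigma))   # should agree closely
print(samples.var(),  stats.rice.var(b, scale=sigma))
```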
The characteristic function of the Rice distribution is given as:
where $\Phi_2$ is one of Horn's confluent hypergeometric functions with two variables, convergent for all finite values of its two arguments. It is given by:
where $(a)_n = a(a+1)(a+2)\cdots(a+n-1)$ is the rising factorial.
Properties
Moments
The first few raw moments are:

$$\mu_1 = \sigma\sqrt{\pi/2}\;L_{1/2}(-\nu^2/2\sigma^2)$$

$$\mu_2 = 2\sigma^2 + \nu^2$$

$$\mu_3 = 3\sigma^3\sqrt{\pi/2}\;L_{3/2}(-\nu^2/2\sigma^2)$$

$$\mu_4 = 8\sigma^4 + 8\sigma^2\nu^2 + \nu^4$$

and, in general, the raw moments are given by

$$\mu_k = \sigma^k\,2^{k/2}\,\Gamma(1 + k/2)\,L_{k/2}(-\nu^2/2\sigma^2).$$

Here $L_q(x)$ denotes a Laguerre polynomial:

$$L_q(x) = L_q^{(0)}(x) = M(-q, 1, x),$$

where $M(a, b, z) = {}_1F_1(a; b; z)$ is the confluent hypergeometric function of the first kind. When $k$ is even, the raw moments become simple polynomials in $\sigma$ and $\nu$, as in the examples above.

For the case $q = 1/2$:

$$L_{1/2}(x) = e^{x/2}\left[(1 - x)\,I_0\!\left(-\frac{x}{2}\right) - x\,I_1\!\left(-\frac{x}{2}\right)\right].$$

The second central moment, the variance, is

$$\mu_2 - \mu_1^2 = 2\sigma^2 + \nu^2 - \frac{\pi\sigma^2}{2}\,L_{1/2}^2\!\left(\frac{-\nu^2}{2\sigma^2}\right).$$

Note that $L_{1/2}^2(\cdot)$ indicates the square of the Laguerre polynomial $L_{1/2}(\cdot)$, not the generalized Laguerre polynomial $L_{1/2}^{(2)}(\cdot)$.
Related distributions
$R \sim \mathrm{Rice}(|\nu|, \sigma)$ if $R = \sqrt{X^2 + Y^2}$, where $X \sim N(\nu\cos\theta, \sigma^2)$ and $Y \sim N(\nu\sin\theta, \sigma^2)$ are statistically independent
|
https://en.wikipedia.org/wiki/Brownian%20bridge
|
A Brownian bridge is a continuous-time stochastic process B(t) whose probability distribution is the conditional probability distribution of a standard Wiener process W(t) (a mathematical model of Brownian motion) subject to the condition (when standardized) that W(T) = 0, so that the process is pinned to the same value at both t = 0 and t = T. More precisely:

$$B_t := (W_t \mid W_T = 0), \quad t \in [0, T].$$

The expected value of the bridge at any t in the interval [0,T] is zero, with variance $\frac{t(T-t)}{T}$, implying that the most uncertainty is in the middle of the bridge, with zero uncertainty at the nodes. The covariance of B(s) and B(t) is $\min(s,t) - \frac{st}{T}$, or $\frac{s(T-t)}{T}$ if $s < t$.
The increments in a Brownian bridge are not independent.
Relation to other stochastic processes
If W(t) is a standard Wiener process (i.e., for t ≥ 0, W(t) is normally distributed with expected value 0 and variance t, and the increments are stationary and independent), then

$$B(t) = W(t) - \frac{t}{T}\,W(T)$$

is a Brownian bridge for t ∈ [0, T]. It is independent of W(T).

Conversely, if B(t) is a Brownian bridge on [0, 1] and Z is a standard normal random variable independent of B, then the process

$$W(t) = B(t) + tZ$$

is a Wiener process for t ∈ [0, 1]. More generally, a Wiener process W(t) for t ∈ [0, T] can be decomposed into

$$W(t) = B(t) + \frac{t}{\sqrt{T}}\,Z.$$

Another representation of the Brownian bridge based on the Brownian motion is, for t ∈ [0, T]:

$$B(t) = \frac{T - t}{T}\,W\!\left(\frac{tT}{T - t}\right).$$

Conversely, for t ∈ [0, ∞):

$$W(t) = \frac{T + t}{T}\,B\!\left(\frac{tT}{T + t}\right).$$

The Brownian bridge may also be represented as a Fourier series with stochastic coefficients, as

$$B_t = \sum_{k=1}^{\infty} Z_k\,\frac{\sqrt{2T}}{k\pi}\,\sin\!\left(\frac{k\pi t}{T}\right),$$

where $Z_1, Z_2, \ldots$ are independent identically distributed standard normal random variables.
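A minimal simulation sketch of the representation $B(t) = W(t) - \tfrac{t}{T}W(T)$, using a simple discretization of the Wiener process (grid size, seed, and path count are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(1)
T, n = 1.0, 1000
t = np.linspace(0.0, T, n + 1)

# Standard Wiener process via cumulative sum of Gaussian increments.
dW = rng.normal(0.0, np.sqrt(T / n), n)
W = np.concatenate([[0.0], np.cumsum(dW)])

# Brownian bridge: B(t) = W(t) - (t/T) * W(T); pinned to 0 at both ends.
B = W - (t / T) * W[-1]
print(B[0], B[-1])                       # both exactly 0.0

# Empirical check of Var B(t) = t(T - t)/T at the midpoint over many paths.
paths = rng.normal(0.0, np.sqrt(T / n), (20_000, n)).cumsum(axis=1)
mid = paths[:, n // 2 - 1] - 0.5 * paths[:, -1]
print(mid.var(), 0.5 * (T - 0.5) / T)    # ~0.25 vs 0.25
```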
|
https://en.wikipedia.org/wiki/Desert%20ecology
|
Desert ecology is the study of interactions between both biotic and abiotic components of desert environments. A desert ecosystem is defined by interactions between organisms, the climate in which they live, and any other non-living influences on the habitat. Deserts are arid regions that are generally associated with warm temperatures; however, cold deserts also exist. Deserts can be found on every continent, with the largest deserts located in Antarctica, the Arctic, Northern Africa, and the Middle East.
Climate
Deserts experience a wide range of temperatures and weather conditions, and can be classified into four types: hot, semiarid, coastal, and cold. Hot deserts experience warm temperatures year round, and low annual precipitation. Low levels of humidity in hot deserts contribute to high daytime temperatures, and extensive night time heat loss. The average annual temperature in hot deserts is approximately 20 to 25 °C; however, extreme weather conditions can lead to temperatures ranging from −18 to 49 °C.
Rainfall generally occurs in short bursts, followed by long periods of dryness. Semiarid deserts experience similar conditions to hot deserts; however, the maximum and minimum temperatures tend to be less extreme, and generally range from 10 to 38 °C. Coastal deserts are cooler than hot and semiarid deserts, with average summer temperatures ranging between 13 and 24 °C. They also feature higher total rainfall values. Cold deserts are similar in temperature to coastal deserts, howev
|
https://en.wikipedia.org/wiki/X-linked%20ichthyosis
|
X-linked ichthyosis (abbreviated XLI) is a skin condition caused by the hereditary deficiency of the steroid sulfatase (STS) enzyme that affects 1 in 2000 to 1 in 6000 males. XLI manifests with dry, scaly skin and is due to deletions or mutations in the STS gene. XLI can also occur in the context of larger deletions causing contiguous gene syndromes. Treatment is largely aimed at alleviating the skin symptoms. The term is from the Ancient Greek 'ichthys' meaning 'fish'.
Signs and symptoms
The major symptoms of XLI include scaling of the skin, particularly on the neck, trunk, and lower extremities. The extensor surfaces are typically the most severely affected areas. The >4 mm diameter scales adhere to the underlying skin and can be dark brown or gray in color. Symptoms may subside during the summer.
Associated medical conditions
Aside from the skin scaling, XLI is not typically associated with other major medical problems. Atrial fibrillation or atrial flutter may affect up to 1 in 10 males with XLI. Heart rhythm abnormalities in individuals with XLI tend to co-occur with disorders of the gastrointestinal tract, and are likely to result from steroid sulfatase deficiency. Corneal opacities may be present but do not affect vision. Cryptorchidism is reported in some individuals. Individuals with XLI appear at increased risk of developmental disorders such as autism and attention deficit hyperactivity disorder, and some affected individuals exhibit mood problems. Mood pr
|
https://en.wikipedia.org/wiki/Variable-frequency%20drive
|
A variable-frequency drive (VFD, also called an adjustable-frequency drive, adjustable-speed drive, variable-speed drive, AC drive, micro drive, inverter drive, or simply drive) is a type of AC motor drive (a system incorporating a motor) that controls speed and torque by varying the frequency of the input electricity. Depending on its topology, it controls the associated voltage or current variation.
VFDs are used in applications ranging from small appliances to large compressors. Systems using VFDs can be more efficient than hydraulic systems, such as in systems with pumps and damper control for fans.
Since the 1980s, power electronics technology has reduced VFD cost and size and has improved performance through advances in semiconductor switching devices, drive topologies, simulation and control techniques, and control hardware and software.
VFDs include low- and medium-voltage AC-AC and DC-AC topologies.
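Why varying the frequency varies the speed: the synchronous speed of an AC motor is fixed by the supply frequency and the motor's pole count, per the standard formula $n_{\text{sync}} = 120 f / p$. A minimal sketch (the four-pole motor and frequency values are illustrative assumptions):

```python
# Synchronous speed of an AC motor: n_sync (rpm) = 120 * f / p,
# where f is the supply frequency in Hz and p is the number of poles.
def synchronous_speed_rpm(frequency_hz: float, poles: int) -> float:
    return 120.0 * frequency_hz / poles

# Sweeping the drive's output frequency sweeps the motor speed.
for f in (10, 30, 50, 60):
    print(f"{f:>3} Hz -> {synchronous_speed_rpm(f, poles=4):7.1f} rpm")
# 10 Hz -> 300.0 rpm ... 60 Hz -> 1800.0 rpm (for a four-pole motor)
```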
History
The pulse-width modulated (PWM) variable-frequency drive project started in the 1960s at Strömberg in Finland. Martti Harmoinen is regarded as the inventor of this technology. Strömberg managed to sell the idea of the PWM drive to the Helsinki metro in 1973, and in 1982 the first PWM drives, SAMI10, became operational.
System description and operation
A variable-frequency drive is a device used in a drive system consisting of the following three main sub-systems: AC motor, main drive controller assembly, and drive/operator interface.
AC motor
The AC electric motor used in a VFD system is usu
|
https://en.wikipedia.org/wiki/Dilatancy
|
Dilatancy may refer to:
Dilatancy (granular material): an increase in volume under shear
Dilatancy (viscous material): the solidification of viscous fluids under pressure
|
https://en.wikipedia.org/wiki/Supramolecular%20assembly
|
In chemistry, a supramolecular assembly is a complex of molecules held together by noncovalent bonds. While a supramolecular assembly can be simply composed of two molecules (e.g., a DNA double helix or an inclusion compound), or a defined number of stoichiometrically interacting molecules within a quaternary complex, it is more often used to denote larger complexes composed of indefinite numbers of molecules that form sphere-, rod-, or sheet-like species. Colloids, liquid crystals, biomolecular condensates, micelles, liposomes and biological membranes are examples of supramolecular assemblies, and their realm of study is known as supramolecular chemistry. The dimensions of supramolecular assemblies can range from nanometers to micrometers. Thus they allow access to nanoscale objects using a bottom-up approach in far fewer steps than a single molecule of similar dimensions.
The process by which a supramolecular assembly forms is called molecular self-assembly. Some try to distinguish self-assembly as the process by which individual molecules form the defined aggregate. Self-organization, then, is the process by which those aggregates create higher-order structures. This can become useful when talking about liquid crystals and block copolymers.
Templating reactions
As studied in coordination chemistry, metal ions (usually transition metal ions) exist in solution bound to ligands. In many cases, the coordination sphere defines geometries conducive to reactions either betwee
|
https://en.wikipedia.org/wiki/Dopamine%20transporter
|
The dopamine transporter (DAT, also sodium-dependent dopamine transporter) is a membrane-spanning protein encoded in humans by the SLC6A3 gene (also known as DAT1) that pumps the neurotransmitter dopamine out of the synaptic cleft back into the cytosol. In the cytosol, other transporters sequester the dopamine into vesicles for storage and later release. Dopamine reuptake via DAT provides the primary mechanism through which dopamine is cleared from synapses, although there may be an exception in the prefrontal cortex, where evidence points to a possibly larger role of the norepinephrine transporter.
DAT is implicated in a number of dopamine-related disorders, including attention deficit hyperactivity disorder, bipolar disorder, clinical depression, eating disorders, and substance use disorders. The gene that encodes the DAT protein is located on chromosome 5, consists of 15 coding exons, and is roughly 64 kbp long. Evidence for the associations between DAT and dopamine related disorders has come from a type of genetic polymorphism, known as a variable number tandem repeat, in the SLC6A3 gene, which influences the amount of protein expressed.
Function
DAT is an integral membrane protein that removes dopamine from the synaptic cleft and deposits it into surrounding cells, thus terminating the signal of the neurotransmitter. Dopamine underlies several aspects of cognition, including reward, and DAT facilitates regulation of that signal.
Mechanism
DAT is a symporter that
|
https://en.wikipedia.org/wiki/MBTA%20key%20bus%20routes
|
Key bus routes of the Massachusetts Bay Transportation Authority (MBTA) system are 15 routes that have high ridership and higher frequency standards than other bus lines, according to the 2004 MBTA Service Policy. Together, they account for roughly 40% of the MBTA's total bus ridership. These key bus routes ensure basic geographic coverage with frequent service in the densest areas of Boston, and connect to other MBTA services to give access to other areas throughout the region.
In recognition of their function as part of the backbone MBTA service, the key bus routes have been added to newer basic route maps installed in subway stations and other public locations. These schematic route maps show the rail rapid transit routes, bus rapid transit routes, commuter rail services, and key bus routes. The key routes have been treated as a distinct category for the purpose of service improvement, such as trial runs of late-night service, and due to the high volume of passenger traffic they carry, both individual routes and the category as a whole have been the subjects of urban planning and transportation engineering studies.
History
In November 2006, the MBTA launched a concerted effort to improve service quality on key bus routes. The 2008 Service Plan recommended improvements for various lines, including upgrading the 31 bus to key route standards. A second round of upgrades, entitled the Key Routes Improvement Project and costing $10 million in all, was supported by the Americ
|
https://en.wikipedia.org/wiki/The%20Source%20%28retailer%29
|
The Source (Bell) Electronics Inc., doing business as The Source, is a Canadian consumer electronics and cell phone retail chain. The chain goes back over 40 years in Canada, initially as Radio Shack and later as The Source by Circuit City. The Source is now owned by BCE Inc., which purchased the assets of InterTAN from its parent, American retailer Circuit City, in 2009. The Source is a unit of 4458729 Canada Inc. and is based in Barrie, Ontario.
Background
The Source began as the Canadian branch of Radio Shack (later "RadioShack"). The chain was originally owned by Radio Shack's American parent company Tandy Corporation, but was spun off in June 1986, along with the rest of Tandy's international operations, as InterTAN. A licensing agreement with what became RadioShack Corporation allowed InterTAN to continue to use the chain's name and logo. InterTAN abandoned its non-profitable West German stores in 1987, left Belgium and France in 1993, sold its British stores to Carphone Warehouse in 1999 and sold its Australian stores to Woolworth subsidiary Dick Smith Electronics in 2002, leaving just the Canadian Radio Shack, Battery Plus, and Rogers Plus stores.
In May 2004, InterTAN was acquired by Circuit City. One week after the acquisition was completed, RadioShack Corporation filed a lawsuit in the 352nd Judicial District Court in Tarrant County, Texas, to end the licensing agreement. RadioShack Corporation claimed that InterTAN had breached the terms of their agreement.
|
https://en.wikipedia.org/wiki/STDM
|
STDM may refer to:
Statistical time-division multiplexing
Sociedade de Turismo e Diversões de Macau
Spread-Transform Dither Modulation
Spatio-temporal Data Mining
Société de Transports Départementaux de la Marne
STEINMETZDEMEYER architectes urbanistes
|
https://en.wikipedia.org/wiki/List%20of%20Dewey%20Decimal%20classes
|
The Dewey Decimal Classification (DDC) is structured around ten main classes covering the entire world of knowledge; each main class is further structured into ten hierarchical divisions, each having ten divisions of increasing specificity. As a system of library classification the DDC is "arranged by discipline, not subject", so a topic like clothing is classed based on its disciplinary treatment (psychological influence of clothing at 155.95, customs associated with clothing at 391, and fashion design of clothing at 746.92) within the conceptual framework. The list below presents the ten main classes, hundred divisions, and thousand sections.
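The hierarchy can be read directly off the digits of a Dewey number: the hundreds digit names the main class, the tens digit the division, and the ones digit the section. A small illustrative sketch (a hypothetical helper, not part of any library), using the clothing examples above:

```python
def dewey_levels(number: float) -> tuple[int, int, int]:
    """Return (main class, division, section) for a Dewey number,
    e.g. 746.92 -> (700, 740, 746)."""
    n = int(number)
    return (n // 100 * 100, n // 10 * 10, n)

print(dewey_levels(155.95))  # (100, 150, 155): psychological influence
print(dewey_levels(391))     # (300, 390, 391): customs
print(dewey_levels(746.92))  # (700, 740, 746): fashion design
```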
Class 000 – Computer science, information, and general works
000 Computer science, knowledge, and systems
000 Computer science, information and general works
001 Knowledge
002 The book (writing, libraries, and book-related topics)
003 Systems
004 Data processing and computer science
005 Computer programming, programs, and data
006 Special computer methods (e.g. AI, multimedia, VR)
007–009 [Unassigned]
010 Bibliographies
010 Bibliography
011 Bibliographies
012 Bibliographies of individuals
013 [Unassigned]
014 Bibliographies of anonymous and pseudonymous works
015 Bibliographies of works from specific places
016 Bibliographies of works on specific subjects
017 General subject catalogs
018 Catalogs arranged by author, date, etc.
019 Dictionary catalogs
020 Library and information sciences
020 Library and information sciences
021 Lib
|
https://en.wikipedia.org/wiki/Intrathecal%20administration
|
Intrathecal administration is a route of administration for drugs via an injection into the spinal canal, or into the subarachnoid space, so that it reaches the cerebrospinal fluid (CSF); it is useful in spinal anesthesia, chemotherapy, and pain management applications. This route is also used to introduce drugs that fight certain infections, particularly post-neurosurgical. The drug needs to be given this way to avoid being stopped by the blood–brain barrier: the same drug given orally must enter the bloodstream and may not be able to cross from there into the brain. Drugs given by the intrathecal route often have to be compounded specially by a pharmacist or technician because they cannot contain any preservative or other potentially harmful inactive ingredients that are sometimes found in standard injectable drug preparations.
The route of administration is sometimes simply referred to as "intrathecal"; however, the term is also an adjective that refers to something occurring in or introduced into the anatomic space or potential space inside a sheath, most commonly the arachnoid membrane of the brain or spinal cord (under which is the subarachnoid space). For example, intrathecal immunoglobulin production is production of antibodies in the spinal cord. The abbreviation "IT" is best not used; instead, "intrathecal" is spelled out to avoid medical mistakes.
Intrathecal administration of analgesic agents
Very popular for a single 24-hour dose of analgesia (opioid with local an
|
https://en.wikipedia.org/wiki/The%20Return%20of%20the%20Condor%20Heroes%20%28Singaporean%20TV%20series%29
|
The Return of the Condor Heroes is a Singaporean television series adapted from Louis Cha's novel of the same title. It was broadcast from 2 June to 27 July 1998 on TCS Eighth Frequency in Singapore and was later released on other Asian television networks.
Cast
Christopher Lee as Yang Guo
Cai Yiwei as Yang Guo (young)
Fann Wong as Xiaolongnü
Zhu Houren as Guo Jing
Shirley Ho as Huang Rong
Florence Tan as Guo Fu
He Sixian as Guo Fu (young)
Yvonne Lim as Guo Xiang
Constance Song as Cheng Ying
Chen Xiaoqing as Cheng Ying (young)
Evelyn Tan as Lu Wushuang
Zheng Xin'er as Lu Wushuang (young)
Zheng Geping as Jinlun Fawang
Pan Lingling as Li Mochou
Chen Shucheng as Huang Yaoshi
Yang Yue as Hong Qigong
Richard Low as Ouyang Feng
Liang Tian as Yideng
Mak Hiu-wai as Zhou Botong
Huang Shinan as Gongsun Zhi
Deborah Sim as Gongsun Lü'e
Zhu Xiufeng as Qiu Qianchi
Li Nanxing as Lu Liding
Zoe Tay as Lu Erniang
Ix Shen as Huodu
Yang Tianfu as Da'erba
Yao Wenlong as Yelü Qi
Joey Swee as Yelü Yan
Lau Leng Leng as Wanyan Ping
Andi Lim as Yin Zhiping
Liang Weidong as Zhao Zhijing
Ye Shipin as Qiu Qianren / Ci'en
Yan Bingliang as Wu Santong
Ding Lan as Wu Sanniang
Huang Guoliang as Wu Xiuwen
Lin Bingjun as Wu Xiuwen (young)
Thomas Ng as Wu Dunru
Wang Ruixian as Wu Dunru (young)
Christie Wong as Shagu
Chen Guohua as Lu Youjiao
Aric Ho as Guo Polu
Wu Kaishen as Kublai Khan
Xu Meiluan as Hong Lingbo
Zheng Wen as Xiaoxiangzi
Zhong Shurong as Nimoxing
Fu
|