https://en.wikipedia.org/wiki/Ark%20Two%20Shelter
|
The Ark Two Shelter is a nuclear fallout shelter built by Bruce Beach (14 April 1934 – 10 May 2021) in the village of Horning's Mills (north of Toronto, Ontario). The shelter first became habitable in 1980 and has been continuously expanded and improved since then. The shelter is composed of 42 school buses, which were buried underground as forms for concrete that was then poured over them to provide the main structure, onto which up to 14 feet (about 4.3 meters) of earth was piled to provide fallout protection.
With construction beginning in the early 1980s (during the Cold War), the shelter was designed to accommodate as many as five hundred people for the length of time required to allow the widespread nuclear fallout to decay to a level allowing a safe return to the surface after a cataclysmic nuclear event.
Powered by redundant diesel generators, the heavily fortified ("virtually impenetrable to anything short of a direct nuclear strike") shelter includes two commercial kitchens, full plumbing (including a private well for potable water and a motel-sized septic tank), three months' worth of diesel, a radio-based communications centre, a chapel, and a decontamination room.
Ark Two is equipped with a communications room capable of broadcasting locally on the FM broadcast band and throughout Canada and the United States on the AM and Shortwave bands. A particularly novel feature is a collapsible, weather-balloon-deployed antenna, capable of being launched from within the shelter. All Ark Two communication equipment is EMP-hardened and generator-powered so as to be able to transmit survival information to the general public in the event of nuclear war.
Beach did not charge money for admission to the shelter, instead guaranteeing individuals admission in return for sweat equity and active involvement in the Ark Two community's various activities. In addition, "Everyone is welcome here, regardless of religion, race, nationality, political views..." In return for the pro
|
https://en.wikipedia.org/wiki/Bluebird%20of%20happiness
|
The symbol of a bluebird as the harbinger of happiness is found in many cultures and may date back thousands of years.
Origins of idiom
Chinese mythology
One of the oldest examples of a blue bird in myth (found on oracle bone inscriptions of the Shang dynasty, 1766–1122 BC) is from pre-modern China, where a blue or green bird (qingniao) was the messenger bird of Xi Wangmu (the 'Queen Mother of the West'), who began life as a fearsome goddess and immortal. By the Tang dynasty (618–906 AD), she had evolved into a Daoist fairy queen and the protector/patron of "singing girls, dead women, novices, nuns, adepts and priestesses...women [who] stood outside the roles prescribed for women in the traditional Chinese family". Depictions of Xi Wangmu often include a bird—the birds in the earliest depictions are difficult to identify, and by the Tang dynasty, most of the birds appear in a circle, often with three legs, as a symbol of the sun.
Native American folklore
Among some Native Americans, the bluebird has mythological or literary significance.
According to the Cochiti tribe, the firstborn son of Sun was named Bluebird. In the tale "The Sun's Children", from Tales of the Cochiti Indians (1932) by Ruth Benedict, the male child of the sun is named Bluebird (Culutiwa).
The Navajo identify the mountain bluebird as a spirit in animal form, associated with the rising sun. The "Bluebird Song" is sung to remind tribe members to wake at dawn and rise to greet the sun:
The "Bluebird Song" is still performed in social settings, including the nine-day Ye'iibicheii winter Nightway ceremony, where it is the final song, performed just before sunrise of the ceremony's last day.
Most O'odham lore associated with the "bluebird" likely refers not to the bluebirds (Sialia) but to the blue grosbeak.
European folklore
In Russian fairy tales, the blue bird is a symbol of hope. More recently, Anton Denikin has characterized the Ice March of the defeated Volunteer Army in the Russian Civi
|
https://en.wikipedia.org/wiki/Biotic%20material
|
Biotic material or biologically derived material is any material that originates from living organisms. Most such materials contain carbon and are capable of decay.
The earliest life on Earth arose at least 3.5 billion years ago. Early physical evidence of life includes graphite, a biogenic substance, in 3.7 billion-year-old metasedimentary rocks discovered in southwestern Greenland, as well as "remains of biotic life" found in 4.1 billion-year-old rocks in Western Australia. Earth's biodiversity has expanded continually except when interrupted by mass extinctions. Although scholars estimate that over 99 percent of all species of life (over five billion) that ever lived on Earth are extinct, there are still an estimated 10–14 million extant species, of which about 1.2 million have been documented and over 86% have not yet been described.
Examples of biotic materials are wood, straw, humus, manure, bark, crude oil, cotton, spider silk, chitin, fibrin, and bone.
The use of biotic materials, and processed biotic materials (bio-based material) as alternative natural materials, over synthetics is popular with those who are environmentally conscious because such materials are usually biodegradable, renewable, and the processing is commonly understood and has minimal environmental impact. However, not all biotic materials are used in an environmentally friendly way, such as those that require high levels of processing, are harvested unsustainably, or are used to produce carbon emissions.
When the source of the recently living material has little importance to the product produced, such as in the production of biofuels, biotic material is simply called biomass. Many fuel sources may have biological sources, and may be divided roughly into fossil fuels, and biofuel.
In soil science, biotic material is often referred to as organic matter. Biotic materials in soil include glomalin, Dopplerite and humic acid. Some biotic material may not be considered to be organic matte
|
https://en.wikipedia.org/wiki/PELP-1
|
Proline-, glutamic acid- and leucine-rich protein 1 (PELP1), also known as modulator of non-genomic activity of estrogen receptor (MNAR) and transcription factor HMX3, is a protein that in humans is encoded by the PELP1 gene. It is a transcriptional corepressor for nuclear receptors such as glucocorticoid receptors and a coactivator for estrogen receptors.
Proline-, glutamic acid-, and leucine-rich protein 1 (PELP1) is a transcription coregulator and modulates the functions of several hormonal receptors and transcription factors. PELP1 plays essential roles in hormonal signaling, cell cycle progression, and ribosomal biogenesis. PELP1 expression is upregulated in several cancers; its deregulation contributes to hormonal therapy resistance and metastasis; therefore, PELP1 represents a novel therapeutic target for many cancers.
Gene
The PELP1 gene is located on chromosome 17p13.2 and is expressed in a wide variety of tissues; its highest expression levels are found in the brain, testes, ovaries, and uterus. Currently, there are two known isoforms (a long 3.8 kb and a short 3.4 kb transcript), and the short isoform is widely expressed in cancer cells.
Structure
The PELP1 gene encodes a protein of 1,130 amino acids, which exhibits both cytoplasmic and nuclear localization depending on the tissue. PELP1 lacks known enzymatic activity and functions as a scaffolding protein. It contains 10 NR-interacting boxes (LXXLL motifs) and, via these motifs, functions as a coregulator of several nuclear receptors, including ESR1, ESR2, ERR-alpha, PR, GR, AR, and RXR. PELP1 also functions as a coregulator of several other transcription factors, including AP1, SP1, NFkB, STAT3, and FHL2.
PELP1 has a histone binding domain and interacts with chromatin-modifying complexes, including CBP/p300, histone deacetylase 2, histones, SUMO2, lysine-specific demethylase 1 (KDM1), PRMT6, and CARM1. PELP1 also interacts with cell cycle regulators such as pRb, E2F1, and p53.
PELP1 is phosphorylated by hormonal and growth
|
https://en.wikipedia.org/wiki/Trace%20amine
|
Trace amines are an endogenous group of trace amine-associated receptor 1 (TAAR1) agonists – and hence, monoaminergic neuromodulators – that are structurally and metabolically related to classical monoamine neurotransmitters. Compared to the classical monoamines, they are present in trace concentrations. They are distributed heterogeneously throughout the mammalian brain and peripheral nervous tissues and exhibit high rates of metabolism. Although they can be synthesized within parent monoamine neurotransmitter systems, there is evidence that suggests that some of them may comprise their own independent neurotransmitter systems.
Trace amines play significant roles in regulating the quantity of monoamine neurotransmitters in the synaptic cleft of monoamine neurons. They have well-characterized presynaptic amphetamine-like effects on these monoamine neurons via TAAR1 activation; specifically, by activating TAAR1 in neurons they promote the release and prevent reuptake of monoamine neurotransmitters from the synaptic cleft, as well as inhibit neuronal firing. Phenethylamine and amphetamine possess analogous pharmacodynamics in human dopamine neurons, as both compounds induce efflux from vesicular monoamine transporter 2 (VMAT2) and activate TAAR1 with comparable efficacy.
Like dopamine, norepinephrine, and serotonin, the trace amines have been implicated in a vast array of human disorders of affect and cognition, such as ADHD, depression and schizophrenia, among others. Trace aminergic hypo-function is particularly relevant to ADHD, since urinary and plasma phenethylamine concentrations are significantly lower in individuals with ADHD relative to controls and the two most commonly prescribed drugs for ADHD, amphetamine and methylphenidate, increase phenethylamine biosynthesis in treatment-responsive individuals with ADHD. A systematic review of ADHD biomarkers also indicated that urinary phenethylamine levels could be a diagnostic biomarker for ADHD.
List of
|
https://en.wikipedia.org/wiki/Bradytroph
|
A bradytroph is a strain of an organism that exhibits slow growth in the absence of an external source of a particular metabolite. This is usually due to a defect in an enzyme required in the metabolic pathway producing this chemical. Such defects are the result of mutations in the genes encoding these enzymes. As the organism can still produce small amounts of the chemical, the mutation is not lethal. In these bradytroph strains, rapid growth occurs when the chemical is present in the cell's growth media and the missing metabolite can be transported into the cell from the external environment. A bradytroph may also be referred to as a "leaky auxotroph".
The first usage of "bradytroph" was to describe Escherichia coli mutants partially defective in arginine biosynthesis. Among many other examples of bradytrophic strains of microorganisms are Bacillus subtilis strains with mutations affecting thiamine production and Saccharomyces cerevisiae strains with mutations that impair arginine biosynthesis.
See also
Autotroph
Auxotrophy
|
https://en.wikipedia.org/wiki/Basal%20shoot
|
Basal shoots, root sprouts, adventitious shoots, and suckers are words for various kinds of shoots that grow from adventitious buds on the base of a tree or shrub, or from adventitious buds on its roots. Shoots that grow from buds on the base of a tree or shrub are called basal shoots; these are distinguished from shoots that grow from adventitious buds on the roots of a tree or shrub, which may be called root sprouts or suckers. A plant that produces root sprouts or runners is described as surculose. Water sprouts produced by adventitious buds may occur on the above-ground stem, branches or both of trees and shrubs. Suckers are shoots arising underground from the roots some distance from the base of a tree or shrub.
In botany and ecology
In botany, a root sprout or sucker is a severable plant that grows not from a seed but from the meristem of a root at the base of or a certain distance from the original tree or shrub. Root sprouts may emerge a substantial distance from the base of the originating plant, are a form of vegetative dispersal, and may form a patch that constitutes a habitat in which that surculose plant is the dominant species. Root sprouts also may grow from the roots of trees that have been felled. Tree roots ordinarily grow outward from their trunks a distance of 1.5 to 2 times their heights, and therefore root sprouts can emerge a substantial distance from the trunk.
This is a phenomenon of natural "asexual reproduction", also denominated "vegetative reproduction". It is a strategy of plant propagation. The complex of clonal individuals and the originating plant comprise a single genetic individual, i.e., a genet. The individual root sprouts are clones of the original plant, and each has a genome that is identical to that of the originating plant from which it grew. Many species of plants reproduce through vegetative reproduction, e.g., Canada thistle, cherry, apple, guava, privet, hazel, lilac, tree of heaven, and Asimina triloba.
The root s
|
https://en.wikipedia.org/wiki/Leaning%20toothpick%20syndrome
|
In computer programming, leaning toothpick syndrome (LTS) is the situation in which a quoted expression becomes unreadable because it contains a large number of escape characters, usually backslashes ("\"), to avoid delimiter collision.
The official Perl documentation introduced the term to wider usage; there, the phrase is used to describe regular expressions that match Unix-style paths, in which the elements are separated by slashes /. The slash is also used as the default regular expression delimiter, so to be used literally in the expression, it must be escaped with a backslash \, leading to frequent escaped slashes represented as \/. If doubled, as in URLs, this yields \/\/ for an escaped //. A similar phenomenon occurs for DOS/Windows paths, where the backslash is used as a path separator, requiring a doubled backslash \\ – this can then be re-escaped for a regular expression inside an escaped string, requiring \\\\ to match a single backslash. In extreme cases, such as a regular expression in an escaped string, matching a Uniform Naming Convention path (which begins \\) requires 8 backslashes \\\\\\\\ due to 2 backslashes each being double-escaped.
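The same doubling occurs in any language with C-style string escapes. The following is a minimal Python sketch (Python rather than Perl, purely for illustration; the pattern and sample path are made up for the example): matching the two literal backslashes that open a UNC path takes eight backslashes in an ordinary string literal, while a raw string removes one level of escaping.
import re
# Regex to match two literal backslashes (the UNC prefix). Each regex-level
# backslash must itself be escaped in the string literal, so the two-character
# target becomes eight backslashes here:
unc_prefix_plain = re.compile("\\\\\\\\")
# A raw string removes the string-literal level of escaping, halving the toothpicks:
unc_prefix_raw = re.compile(r"\\\\")
sample = r"\\server\share\file.txt"   # hypothetical UNC path
assert unc_prefix_plain.match(sample)
assert unc_prefix_raw.match(sample)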
LTS appears in many programming languages and in many situations, including in patterns that match Uniform Resource Identifiers (URIs) and in programs that output quoted text. Many quines fall into the latter category.
Pattern example
Consider the following Perl regular expression intended to match URIs that identify files under the pub directory of an FTP site:
m/ftp:\/\/[^\/]*\/pub\//
Perl, like sed before it, solves this problem by allowing many other characters to be delimiters for a regular expression. For example, the following three examples are equivalent to the expression given above:
m{ftp://[^/]*/pub/}
m#ftp://[^/]*/pub/#
m!ftp://[^/]*/pub/!
Or this common translation to convert backslashes to forward slashes:
tr/\\/\//
may be easier to understand when written like this:
tr{\\}{/}
Quoted-text
|
https://en.wikipedia.org/wiki/Soil-plant-atmosphere%20continuum
|
The soil-plant-atmosphere continuum (SPAC) is the pathway for water moving from soil through plants to the atmosphere. Continuum in the description highlights the continuous nature of water connection through the pathway. The low water potential of the atmosphere, and relatively higher (i.e. less negative) water potential inside leaves, leads to a diffusion gradient across the stomatal pores of leaves, drawing water out of the leaves as vapour. As water vapour transpires out of the leaf, further water molecules evaporate off the surface of mesophyll cells to replace the lost molecules since water in the air inside leaves is maintained at saturation vapour pressure. Water lost at the surface of cells is replaced by water from the xylem, which due to the cohesion-tension properties of water in the xylem of plants pulls additional water molecules through the xylem from the roots toward the leaf.
Components
The transport of water along this pathway occurs in components, variously defined among scientific disciplines:
Soil physics characterizes water in soil in terms of tension,
Physiology of plants and animals characterizes water in organisms in terms of diffusion pressure deficit, and
Meteorology uses vapour pressure or relative humidity to characterize atmospheric water.
SPAC integrates these components and is defined as a:
...concept recognising that the field with all its components (soil, plant, animals and the ambient atmosphere taken together) constitutes a physically integrated, dynamic system in which the various flow processes involving energy and matter occur simultaneously and interdependently, like links in a chain.
This characterises the state of water in different components of the SPAC as expressions of the energy level or water potential of each. Modelling of water transport between components relies on SPAC, as do studies of water potential gradients between segments.
See also
Ecohydrology
Evapotranspiration
Hydraulic redistribution; a p
|
https://en.wikipedia.org/wiki/Optimum%20water%20content%20for%20tillage
|
The optimum water content for tillage (OPT) is defined as the moisture content of soil at which tillage produces the largest number of small aggregates.
Overview
The optimum water content of a soil is the water content at which a maximum dry unit weight can be achieved for a given compaction effort; the maximum dry unit weight corresponds to the smallest achievable volume of voids in the soil. A hard, dry soil is easier to compact to a higher density if it is first wetted, and the OPT is the water content at which the soil can be compacted the most. If there is too much water, excess pore water pressure develops during compression and prevents further compaction. If there is too little water, the soil resists compaction through its shear strength, friction and effective stress. Determining the OPT is important because tillage carried out on fields that are wetter or drier than the OPT can cause many problems, including soil structural damage through the production of large clods and an increase in the content of readily dispersible clay, which is an indicator of soil structural stability.
The OPT can be determined in relation to the volumetric water content at the lower Plastic Limit of the soil (PL).
Some examples of suggested OPT:
On a lateritic sandy loam : 0.77 PL
On a sandy loam : 0.9 PL
For several soils the OPT has been found to equal 0.9 PL, although there are a number of limitations with the use of the lower plastic limit in determining the optimal moisture content. Firstly it is a property of a moulded soil and not an undisturbed soil in the field and secondly, many sandy soils are not plastic and do not have a lower plastic limit.
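As a rough illustration of the rule of thumb above, a minimal Python sketch (the function name, the default 0.9 factor and the plastic-limit value in the example call are illustrative assumptions; a lateritic sandy loam would use 0.77 instead):
def optimum_tillage_water_content(plastic_limit, factor=0.9):
    """Estimate OPT as a fraction of the lower plastic limit (PL).
    factor = 0.9 is the value suggested for several soils; 0.77 has been
    suggested for a lateritic sandy loam."""
    return factor * plastic_limit

# Hypothetical sandy loam with a volumetric water content of 0.25 at PL:
print(optimum_tillage_water_content(0.25))   # approximately 0.225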
Relationships between water content at field capacity (FC) and Plastic Limit(PL)
When FC < PL
Soil will drain to a water content at which no excessive structural damage will occur on tillage.
When FC > PL
Soil will never drain to a water content ideal for tillage
Many clay soils drain very slowly, and as a result they are
|
https://en.wikipedia.org/wiki/Runoff%20curve%20number
|
The runoff curve number (also called a curve number or simply CN) is an empirical parameter used in hydrology for predicting direct runoff or infiltration from rainfall excess. The curve number method was developed by the USDA Natural Resources Conservation Service, which was formerly called the Soil Conservation Service or SCS; the number is still popularly known as an "SCS runoff curve number" in the literature. The runoff curve number was developed from an empirical analysis of runoff from small catchments and hillslope plots monitored by the USDA. It is widely used and is an efficient method for determining the approximate amount of direct runoff from a rainfall event in a particular area.
Definition
The runoff curve number is based on the area's hydrologic soil group, land use, treatment and hydrologic condition. References, such as those from the USDA, indicate the runoff curve numbers for characteristic land cover descriptions and a hydrologic soil group.
The runoff equation is:
Q = (P − Ia)² / ((P − Ia) + S) for P > Ia, and Q = 0 for P ≤ Ia,
where
Q is runoff ([L]; in)
P is rainfall ([L]; in)
S is the potential maximum soil moisture retention after runoff begins ([L]; in)
Ia is the initial abstraction ([L]; in), or the amount of water before runoff, such as infiltration, or rainfall interception by vegetation; historically, it has generally been assumed that Ia = 0.2S, although more recent research has found that Ia = 0.05S may be a more appropriate relationship in urbanized watersheds where the CN is updated to reflect developed conditions.
The runoff curve number, CN, is then related to S (in inches) by S = 1000/CN − 10.
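A minimal Python sketch of the calculation above (the function name is my own, and the storm depth and curve number in the example call are illustrative, not from the source):
def scs_runoff(P, CN, ia_ratio=0.2):
    """Direct runoff Q (inches) from rainfall P (inches) for a given curve number.
    ia_ratio is the initial-abstraction ratio: 0.2 is the historical assumption,
    0.05 the value suggested for urbanized watersheds."""
    S = 1000.0 / CN - 10.0      # potential maximum retention (inches)
    Ia = ia_ratio * S           # initial abstraction (inches)
    if P <= Ia:                 # runoff cannot begin until Ia has been met
        return 0.0
    return (P - Ia) ** 2 / ((P - Ia) + S)

# Example: a 3-inch storm on a catchment with CN = 80 yields 1.25 inches of runoff.
print(scs_runoff(3.0, 80))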
has a range from 30 to 100; lower numbers indicate low runoff potential while larger numbers are for increasing runoff potential. The lower the curve number, the more permeable the soil is. As can be seen in the curve number equation, runoff cannot begin until the initial abstraction has been met. It is important to note that the curve number methodology is an event-based calculation, and should not be used for a single annual rainfall value, as this will incorrectly m
|
https://en.wikipedia.org/wiki/Sense%20strand
|
In genetics, a sense strand, or coding strand, is the segment within double-stranded DNA that carries the translatable code in the 5′ to 3′ direction. It is complementary to the antisense strand of DNA, or template strand, which does not carry the translatable code in the 5′ to 3′ direction. The sense strand is the strand of DNA that has the same sequence as the mRNA, which takes the antisense strand as its template during transcription and eventually undergoes (typically, though not always) translation into a protein. The antisense strand is thus the template for the RNA that is later translated into protein, while the sense strand possesses a nearly identical makeup to that of the mRNA.
mRNA and "sense"
Note that for each segment of double-stranded DNA, there may be two sets of sense and antisense, depending on which direction one reads (since sense and antisense are relative to perspective). It is ultimately the gene product, or mRNA, that dictates which strand of one segment of dsDNA we call sense or antisense. Keep in mind, however, that sometimes, such as in prokaryotes, overlapping genes on opposite strands mean that the sense strand for one mRNA can be the antisense strand for another mRNA.
The immediate product of this transcription is an initial RNA transcript whose nucleotide sequence is identical to that of the sense strand, with the exception that RNA contains uracil in place of thymine.
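A minimal Python sketch of this relationship (the nine-base sequences are hypothetical fragments, written left to right in the orientations noted in the comments):
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def mrna_from_sense(sense_5to3):
    # The transcript matches the sense (coding) strand, with U in place of T.
    return sense_5to3.replace("T", "U")

def mrna_from_template(template_3to5):
    # RNA polymerase reads the template (antisense) strand 3'->5' and builds
    # the complementary transcript 5'->3'.
    return template_3to5.translate(COMPLEMENT).replace("T", "U")

sense = "ATGGCATTC"       # hypothetical coding strand, 5'->3'
template = "TACCGTAAG"    # its complement, written 3'->5'
assert mrna_from_sense(sense) == mrna_from_template(template) == "AUGGCAUUC"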
Most eukaryotic RNA transcripts undergo additional editing prior to being translated for protein synthesis. This process typically involves the addition of a methylated guanine nucleotide cap at the 5' end, the addition of a poly-A tail at the 3' end, and the removal of introns from the initial RNA transcript (RNA splicing). The end product is known as a mature mRNA. Prokaryotic mRNA does not undergo the same process.
Strictly speaking, only the mRNA makes "sense" with the genetic code, as the translat
|
https://en.wikipedia.org/wiki/Tai%20tou
|
Tai tou () is a typographical East Asian expression of honor that can be divided into two forms, Nuo tai and Ping tai.
Nuo tai
Nuo tai (, literally "move and shift") is a typographical device used in written Chinese to denote respect for the person being mentioned. It leaves a full-width (one character wide) space before the first character of the person's name; it can be represented by a full-width space character. This is often used in formal writing before honorific pronouns such as guì (literally "precious, expensive", or "noble") to show respect. This is also sometimes still used in Taiwan for important officials, such as Chiang Kai-shek and Sun Yat-sen, although this practice has gradually fallen out of favor.
Examples
(reading left to right)
國父 孫中山先生 - Father of the Nation (space) Mr. Sun Yat-sen
先總統 蔣公 - The Late President (space) Honorable Chiang
起初 神創造天地 - In the beginning (space) God created the heaven and the earth. (In Chinese translations of the Bible, "God" is rendered in competing ways by different Christians: either with a term of two characters, or as 神, with only one. In order to avoid a complete re-typesetting of the entire text for this discrepancy, the publishers commonly shifted 神 so that it also took two character spaces, and the rest of the text could be typeset identically with either rendering.)
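A minimal Python sketch of nuo tai as a text-processing operation (the honoured name and the use of U+3000, the full-width ideographic space, are illustrative assumptions):
FULL_WIDTH_SPACE = "\u3000"   # ideographic space, one character cell wide

def nuo_tai(text, honoured_name):
    """Insert a full-width space before each occurrence of the honoured name."""
    return text.replace(honoured_name, FULL_WIDTH_SPACE + honoured_name)

print(nuo_tai("國父孫中山先生", "孫中山"))   # 國父　孫中山先生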
Ping tai
Ping tai (, literally "level shift") is another form. Respect is expressed by shifting the name of the person directly to the head of the next line. This is now considered old-fashioned; when it was used, it was usually seen in documents sent between the emperor and his ministers when a minister mentioned the emperor.
Dan tai
Dan tai (, literally "single shift") is an archaic form where the shifted phrase is moved to a new line and begins one character above a normal line. Traditionally, this is used when the recipient of the letter is addressed.
Shuang tai
Shuang tai (, literally "double shift") is as above, but the shifted phrase begins two characters above a normal line. This is used to denote
|
https://en.wikipedia.org/wiki/Trp%20operon
|
The trp operon is a group of genes that are transcribed together, encoding the enzymes that produce the amino acid tryptophan in bacteria. The trp operon was first characterized in Escherichia coli, and it has since been discovered in many other bacteria. The operon is regulated so that, when tryptophan is present in the environment, the genes for tryptophan synthesis are repressed.
The trp operon contains five structural genes: trpE, trpD, trpC, trpB, and trpA, which encode the enzymes needed to synthesize tryptophan. It also contains a repressive regulator gene called trpR. When tryptophan is present, the trpR protein binds to the operator, blocking transcription of the trp operon by RNA polymerase.
This operon is an example of repressible negative regulation of gene expression. The repressor protein binds to the operator in the presence of tryptophan (repressing transcription) and is released from the operon when tryptophan is absent (allowing transcription to proceed). The trp operon additionally uses attenuation to control expression of the operon, a second negative feedback control mechanism.
The trp operon is well-studied and is commonly used as an example of gene regulation in bacteria alongside the lac operon.
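The repressible logic described above can be summarised in a small Python sketch (a deliberately simplified model of my own; it ignores attenuation and treats repression as all-or-nothing):
def trp_operon_transcribed(tryptophan_present, repressor_functional=True):
    """Repressible negative control: the TrpR repressor binds the operator only
    when its corepressor (tryptophan) is present, which blocks transcription."""
    repressor_bound_to_operator = repressor_functional and tryptophan_present
    return not repressor_bound_to_operator

print(trp_operon_transcribed(tryptophan_present=True))    # False: operon repressed
print(trp_operon_transcribed(tryptophan_present=False))   # True: transcription proceeds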
Genes
The trp operon contains five structural genes. The roles of their products are:
TrpE (): Anthranilate synthase produces anthranilate.
TrpD (): Cooperates with TrpE.
TrpC (): Phosphoribosylanthranilate isomerase domain first turns N-(5-phospho-β-D-ribosyl)anthranilate into 1-(2-carboxyphenylamino)-1-deoxy-D-ribulose 5-phosphate. The Indole-3-glycerol-phosphate synthase on the same protein then turns the product into (1S,2R)-1-C-(indol-3-yl)glycerol 3-phosphate.
TrpA (), TrpB (): the two subunits of tryptophan synthetase, which combines TrpC's product with serine to produce tryptophan.
Repression
The operon operates by a negative repressible feedback mechanism. The repressor for the trp operon is produced upstream by the trpR gene, which is cons
|
https://en.wikipedia.org/wiki/Digital%20Himalaya
|
The Digital Himalaya project was established in December 2000 by Mark Turin, Alan Macfarlane, Sara Shneiderman, and Sarah Harrison. The project's principal goal is to collect and preserve historical multimedia materials relating to the Himalaya, such as photographs, recordings, and journals, and make those resources available over the internet and offline, on external storage media. The project team have digitized older ethnographic collections and data sets that were deteriorating in their analogue formats, so as to protect them from deterioration and make them available and accessible to originating communities in the Himalayan region and a global community of scholars.
The project was founded at the Department of Anthropology of the University of Cambridge, moved to Cornell University in 2002 (when a collaboration with the University of Virginia was initiated), and then back to the University of Cambridge in 2005. From 2011 to 2014, the project was jointly hosted between the University of Cambridge and Yale University. In 2014, the project moved to the University of British Columbia, where it is presently located, and maintains a distant collaboration with Sichuan University.
Project Team
Digital Himalaya has a team of 9 individuals who work together to develop user-friendly and accessible online resources:
Sarah Harrison
Daniel Ho
Hikmat Khadka
Wachiraporn Klungthanaboon
Alan Macfarlane
Pragyajan Rai (Yalamber)
Sara Shneiderman
Komintal Thami
Mark Turin
The project is supported by an active international Advisory Board, including the following individuals:
General Sir Sam Cowan
Richard Feldman
Martin Gaenszle
Ann Gammie
David Germano
Mark Goodridge
David Holmberg
Michael Hutt
Kathryn March
Christina Monson
Since its establishment, the Digital Himalaya project has benefited from skilled student interns and research assistants in Canada, Nepal, the United Kingdom, and the United States.
Funding
For the first five years of active developme
|
https://en.wikipedia.org/wiki/Pindone
|
Pindone is an anticoagulant drug for agricultural use. It is commonly used as a rodenticide in the management of rat and rabbit populations.
It is pharmacologically analogous to warfarin and inhibits the synthesis of Vitamin K-dependent clotting factors.
See also
1,3-Indandione
|
https://en.wikipedia.org/wiki/Zellweger%20off-peak
|
Zellweger is the brand name of an electric switching device, also known as a Ripple Control Receiver, used to control off-peak electrical loads such as water heaters by switching these loads OFF over peak energy-use times of the day and switching them ON after peak energy-use times of the day, hence the term 'off peak' control. It is an example of carrier current signaling. The Ripple Control Signal is generated at substations owned by Electricity Supply Authorities (as distinct from Electricity Generating Authorities) connected to the High Voltage transmission grid and injected into the Medium Voltage transmission grid at 11 kV, 22 kV, 33 kV and 66 kV through a Coupling Cell consisting of a tuned L-C circuit (Tuning Coil - Capacitor). The Coupling Cell enables the Ripple Control Frequency to be superimposed on the 50 hertz (Hz) mains frequency, from which it propagates into the 415 V 3-phase power distribution lines providing energy to industrial and domestic customers of the Electricity Supply Authority. To avoid problems with other equipment connected to the distribution system, i.e. industrial machinery and domestic appliances, the ripple frequency is selected to be offset from the third harmonic and its multiples, typically starting at 167 Hz and including 217, 317, 425, 750, 1050 and 1650 Hz. The choice of frequency depends upon the density of the load into which the ripple frequency is to be injected and the length of the distribution
Power stations transmit a ripple signal on the main transmission lines when off-peak rates start (often around 10 pm). This ripple noise is picked up by the Zellweger, which after a random delay turns the hot water heater on. The noise is often picked up by other equipment, especially audio amplifiers and stereos, and can cause problems with other electrical devices. It is especially audible from ceiling fans running at low speed. Even some telephone lines can pick up the noise. The noise can be particularly obtrusive from some fluorescent ligh
|
https://en.wikipedia.org/wiki/Sialadenitis
|
Sialadenitis (sialoadenitis) is inflammation of salivary glands, usually the major ones, the most common being the parotid gland, followed by submandibular and sublingual glands. It should not be confused with sialadenosis (sialosis) which is a non-inflammatory enlargement of the major salivary glands.
Sialadenitis can be further classed as acute or chronic. Acute sialadenitis is an acute inflammation of a salivary gland which may present itself as a red, painful swelling that is tender to touch. Chronic sialadenitis is typically less painful but presents as recurrent swellings, usually after meals, without redness.
Causes of sialadenitis are varied, including bacterial (most commonly Staphylococcus aureus), viral and autoimmune conditions.
Types
Acute
Predisposing factors
sialolithiasis
decreased flow (dehydration, post-operative, drugs)
poor oral hygiene
exacerbation of low grade chronic sialoadenitis
Clinical features
painful swelling
reddened skin
edema of the cheek, periorbital region and neck
low grade fever
malaise
raised ESR, CRP, leucocytosis
purulent exudate from duct punctum
Chronic
Clinical features
unilateral
mild pain / swelling
common after meals
duct orifice is reddened and flow decreases
may or may not have visible/palpable stone.
Parotid gland
recurrent painful swellings
Submandibular gland
usually secondary to sialolithiasis or stricture
Signs and Symptoms
Sialadenitis is swelling and inflammation of the parotid, submandibular, or sublingual major salivary glands. It may be acute or chronic, infective or autoimmune.
Acute
Acute sialadenitis secondary to obstruction (sialolithiasis) is characterised by an increasingly painful swelling developing over 24–72 hours, purulent discharge and systemic manifestations.
Chronic
Chronic sialadenitis causes intermittent, recurrent periods of tender swellings. Chronic sclerosing sialadenitis is commonly unilateral and can mimic a tumour.
Autoimmune
Autoimmune sialadenitis (i.e Sjo
|
https://en.wikipedia.org/wiki/Knowledge%20Interchange%20Format
|
Knowledge Interchange Format (KIF) is a computer language designed to enable systems to share and re-use information from knowledge-based systems. KIF is similar to frame languages such as KL-One and LOOM, but unlike such languages its primary role is not intended as a framework for the expression or use of knowledge but rather for the interchange of knowledge between systems. The designers of KIF likened it to PostScript. PostScript was not designed primarily as a language to store and manipulate documents but rather as an interchange format for systems and devices to share documents. In the same way, KIF is meant to facilitate the sharing of knowledge across different systems that use different languages, formalisms, platforms, etc.
KIF has a declarative semantics. It is meant to describe facts about the world rather than processes or procedures. Knowledge can be described as objects, functions, relations, and rules. It is a formal language, i.e., it can express arbitrary statements in first-order logic and can support reasoners that can prove the consistency of a set of KIF statements. KIF also supports non-monotonic reasoning. KIF was created by Michael Genesereth, Richard Fikes and others participating in the DARPA Knowledge Sharing Effort.
Although the original KIF group intended to submit to a formal standards body, that did not occur. A later version called Common Logic has since been developed for submission to ISO and has been approved and published. A variant called SUO-KIF is the language in which the Suggested Upper Merged Ontology is written.
A practical application of the Knowledge interchange format is an agent communication language in a multi-agent system.
See also
Knowledge Query and Manipulation Language
|
https://en.wikipedia.org/wiki/VOMS
|
VOMS is an acronym used for Virtual Organization Membership Service in grid computing. It is structured as a simple account database with fixed formats for the information exchange and features single login, expiration time, backward compatibility, and multiple virtual organizations.
The database is manipulated by authorization data that defines specific capabilities and roles for users. Administrative tools can be used by administrators to assign roles and capability information in the database. A command-line tool allows users to generate a local proxy credential based on the contents of the VOMS database. This credential includes the basic authentication information that standard Grid proxy credentials contain, but it also includes role and capability information from the VOMS server.
VOMS-aware applications can use the VOMS data to make authentication decisions regarding user requests. VOMS was originally developed by the European DataGrid and Enabling Grids for E-sciencE projects and is now maintained by the Italian National Institute for Nuclear Physics (INFN).
VOMS is also an acronym for VOucher Management System used for providing recharge management services for Prepaid Systems of Telecom Service Providers.
Typically external Voucher Management Systems are used with Intelligent Network based prepaid systems.
See also
Shibboleth
|
https://en.wikipedia.org/wiki/Polarization%20of%20an%20algebraic%20form
|
In mathematics, in particular in algebra, polarization is a technique for expressing a homogeneous polynomial in a simpler fashion by adjoining more variables. Specifically, given a homogeneous polynomial, polarization produces a unique symmetric multilinear form from which the original polynomial can be recovered by evaluating along a certain diagonal.
Although the technique is deceptively simple, it has applications in many areas of abstract mathematics: in particular to algebraic geometry, invariant theory, and representation theory. Polarization and related techniques form the foundations for Weyl's invariant theory.
The technique
The fundamental ideas are as follows. Let f(u) be a polynomial in n variables u = (u_1, u_2, ..., u_n). Suppose that f is homogeneous of degree d, which means that
f(λu) = λ^d f(u) for every scalar λ.
Let u^(1), u^(2), ..., u^(d) be a collection of indeterminates with u^(i) = (u^(i)_1, ..., u^(i)_n), so that there are dn variables altogether. The polar form of f is a polynomial
F(u^(1), u^(2), ..., u^(d))
which is linear separately in each u^(i) (that is, F is multilinear), symmetric in the u^(i), and such that
F(u, u, ..., u) = f(u).
The polar form of f is given by the following construction:
F(u^(1), ..., u^(d)) = (1/d!) ∂/∂λ_1 ∂/∂λ_2 ⋯ ∂/∂λ_d f(λ_1 u^(1) + λ_2 u^(2) + ⋯ + λ_d u^(d)).
In other words, F is a constant multiple of the coefficient of λ_1 λ_2 ⋯ λ_d in the expansion of f(λ_1 u^(1) + ⋯ + λ_d u^(d)).
Examples
A quadratic example. Suppose that x = (x, y) and f(x) is a quadratic form, which can be written f(x) = x^T A x for a symmetric 2 × 2 matrix A. Then the polarization of f is a function in x^(1) = (x_1, y_1) and x^(2) = (x_2, y_2) given by
F(x^(1), x^(2)) = (x^(1))^T A x^(2).
More generally, if f is any quadratic form then the polarization of f agrees with the conclusion of the polarization identity.
A cubic example. For a cubic form f, the same construction yields the unique symmetric trilinear form F with F(x, x, x) = f(x).
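A small worked instance of the construction (chosen for illustration; not necessarily the example used in the original article), written as a LaTeX math block:
\[
  f(x,y) = x^{2} + 3xy, \qquad
  F\bigl((x_1,y_1),(x_2,y_2)\bigr) = x_1 x_2 + \tfrac{3}{2}\,(x_1 y_2 + x_2 y_1),
\]
\[
  F\bigl((x,y),(x,y)\bigr) = x^{2} + 3xy = f(x,y).
\]
\[
  \text{For the cubic } f(x,y) = x^{3}: \quad
  F\bigl((x_1,y_1),(x_2,y_2),(x_3,y_3)\bigr) = x_1 x_2 x_3 .
\]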
Mathematical details and consequences
The polarization of a homogeneous polynomial of degree d is valid over any commutative ring in which d! is a unit. In particular, it holds over any field of characteristic zero or whose characteristic is strictly greater than d.
The polarization isomorphism (by degree)
For simplicity, let k be a field of characteristic zero and let A = k[x] be the polynomial ring in n variables over k. Then A is graded by degree, so that A is the direct sum of its degree-d components A_d.
The polarization of algebraic forms then induces an isomorphism of vector spaces in
|
https://en.wikipedia.org/wiki/Milk%20allergy
|
Milk allergy is an adverse immune reaction to one or more proteins in cow's milk. Symptoms may take hours to days to manifest and include atopic dermatitis, inflammation of the esophagus, enteropathy involving the small intestine, and proctocolitis involving the rectum and colon. However, rapid anaphylaxis is possible, a potentially life-threatening condition that requires treatment with epinephrine, among other measures.
In the United States, 90% of allergic responses to foods are caused by eight foods, and cow's milk is the most common. Recognition that a small number of foods are responsible for the majority of food allergies has led to requirements to prominently list these common allergens, including dairy, on food labels. One function of the immune system is to defend against infections by recognizing foreign proteins, but it should not overreact to food proteins. Heating milk proteins can cause them to become denatured, losing their three-dimensional configuration and allergenicity, so baked goods containing dairy products may be tolerated while fresh milk triggers an allergic reaction.
The condition may be managed by avoiding consumption of any dairy products or foods that contain dairy ingredients. For people subject to rapid reactions (IgE-mediated milk allergy), the dose capable of provoking an allergic response can be as low as a few milligrams, so such people must strictly avoid dairy. The declaration of the presence of trace amounts of milk or dairy in foods is not mandatory in any country, with the exception of Brazil.
Milk allergy affects between 2% and 3% of babies and young children. To reduce risk, recommendations are that babies should be exclusively breastfed for at least four months, preferably six months, before introducing cow's milk. If there is a family history of dairy allergy, then soy infant formula can be considered, but about 10 to 15% of babies allergic to cow's milk will also react to soy. The majority of children outg
|
https://en.wikipedia.org/wiki/Shapiro%20inequality
|
In mathematics, the Shapiro inequality is an inequality proposed by Harold S. Shapiro in 1954.
Statement of the inequality
Suppose n is a natural number and x_1, x_2, ..., x_n are positive numbers and:
n is even and less than or equal to 12, or
n is odd and less than or equal to 23.
Then the Shapiro inequality states that
x_1/(x_2 + x_3) + x_2/(x_3 + x_4) + ⋯ + x_n/(x_1 + x_2) ≥ n/2,
where x_{n+1} = x_1 and x_{n+2} = x_2 (the indices are taken cyclically).
For greater values of n the inequality does not hold, and the strict lower bound is γ·n/2 with γ ≈ 0.9891.
The initial proofs of the inequality in the pivotal cases n = 12 (Godunova and Levin, 1976) and n = 23 (Troesch, 1989) rely on numerical computations. In 2002, P.J. Bushell and J.B. McLeod published an analytical proof for n = 12.
The value of γ was determined in 1971 by Vladimir Drinfeld. Specifically, he proved that the strict lower bound is given by (1/2)ψ(0)·n, where the function ψ is the convex hull of f(x) = e^(−x) and g(x) = 2/(e^x + e^(x/2)). (That is, the region above the graph of ψ is the convex hull of the union of the regions above the graphs of f and g.)
Interior local minima of the left-hand side are always at least n/2 (Nowosad, 1968).
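A small Python spot-check of the bound for a value of n where the inequality holds (a random-sampling sketch, not a proof; the choice of n = 10 and the sample count are arbitrary):
import random

def shapiro_lhs(x):
    # Cyclic sum x_i / (x_{i+1} + x_{i+2}) with indices taken modulo n.
    n = len(x)
    return sum(x[i] / (x[(i + 1) % n] + x[(i + 2) % n]) for i in range(n))

n = 10
for _ in range(10000):
    x = [random.uniform(0.01, 1.0) for _ in range(n)]
    assert shapiro_lhs(x) >= n / 2 - 1e-9   # never falls below n/2 in the samples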
Counter-examples for higher n
The first counter-example was found by Lighthill in 1956, for n = 20, using a sequence that depends on a parameter ε close to 0.
The left-hand side is then lower than 10 = n/2 when ε is small enough.
The following counter-example, for another value of n, is by Troesch (1985):
(Troesch, 1985)
|
https://en.wikipedia.org/wiki/Transcription%20coregulator
|
In molecular biology and genetics, transcription coregulators are proteins that interact with transcription factors to either activate or repress the transcription of specific genes. Transcription coregulators that activate gene transcription are referred to as coactivators, while those that repress are known as corepressors. The mechanism of action of transcription coregulators is to modify chromatin structure and thereby make the associated DNA more or less accessible to transcription. In humans, several dozen to several hundred coregulators are known, depending on the level of confidence with which the characterisation of a protein as a coregulator can be made. One class of transcription coregulators modifies chromatin structure through covalent modification of histones. A second, ATP-dependent class modifies the conformation of chromatin.
Histone acetyltransferases
Nuclear DNA is normally tightly wrapped around histones rendering the DNA inaccessible to the general transcription machinery and hence this tight association prevents transcription of DNA. At physiological pH, the phosphate component of the DNA backbone is deprotonated which gives DNA a net negative charge. Histones are rich in lysine residues which at physiological pH are protonated and therefore positively charged. The electrostatic attraction between these opposite charges is largely responsible for the tight binding of DNA to histones.
Many coactivator proteins have intrinsic histone acetyltransferase (HAT) catalytic activity or recruit other proteins with this activity to promoters. These HAT proteins are able to acetylate the amine group in the sidechain of histone lysine residues which makes lysine much less basic, not protonated at physiological pH, and therefore neutralizes the positive charges in the histone proteins. This charge neutralization weakens the binding of DNA to histones causing the DNA to unwind from the histone proteins and thereby significantly increases the rate of tr
|
https://en.wikipedia.org/wiki/BLOSUM
|
In bioinformatics, the BLOSUM (BLOcks SUbstitution Matrix) matrix is a substitution matrix used for sequence alignment of proteins. BLOSUM matrices are used to score alignments between evolutionarily divergent protein sequences. They are based on local alignments. BLOSUM matrices were first introduced in a paper by Steven Henikoff and Jorja Henikoff. They scanned the BLOCKS database for very conserved regions of protein families (that do not have gaps in the sequence alignment) and then counted the relative frequencies of amino acids and their substitution probabilities. Then, they calculated a log-odds score for each of the 210 possible substitution pairs of the 20 standard amino acids. All BLOSUM matrices are based on observed alignments; they are not extrapolated from comparisons of closely related proteins like the PAM Matrices.
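A minimal Python sketch of the log-odds scoring idea described above (the function name and all frequencies are hypothetical illustrations, not values from the BLOCKS database; scale = 2 reflects the half-bit units commonly used for BLOSUM-style scores):
import math

def blosum_style_score(p_ij, q_i, q_j, same_residue=False, scale=2.0):
    """Log-odds substitution score: observed pair frequency p_ij against the
    frequency expected by chance from background frequencies q_i and q_j,
    rounded to the nearest integer in half-bit units."""
    expected = q_i * q_j if same_residue else 2 * q_i * q_j
    return round(scale * math.log2(p_ij / expected))

# Hypothetical illustrative frequencies:
print(blosum_style_score(p_ij=0.0050, q_i=0.05, q_j=0.03))   # positive: seen more often than chance
print(blosum_style_score(p_ij=0.0004, q_i=0.05, q_j=0.03))   # negative: seen less often than chance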
Biological background
The genetic instructions of every replicating cell in a living organism are contained within its DNA. Throughout the cell's lifetime, this information is transcribed and replicated by cellular mechanisms to produce proteins or to provide instructions for daughter cells during cell division, and the possibility exists that the DNA may be altered during these processes. This is known as a mutation. At the molecular level, there are regulatory systems that correct most — but not all — of these changes to the DNA before it is replicated.
The functionality of a protein is highly dependent on its structure. Changing a single amino acid in a protein may reduce its ability to carry out this function, or the mutation may even change the function that the protein carries out. Changes like these may severely impact a crucial function in a cell, potentially causing the cell — and in extreme cases, the organism — to die. Conversely, the change may allow the cell to continue functioning albeit differently, and the mutation can be passed on to the organism's offspring. If this change does not result in any significant physical
|
https://en.wikipedia.org/wiki/Canjica
|
Canjica is a white variety of corn typical of Brazilian cuisine. It is mostly used in a special kind of sweet popcorn and in a sweet dish also named "canjica", a popular Festa Junina dish.
See also
List of Brazilian dishes
List of Brazilian sweets and desserts
|
https://en.wikipedia.org/wiki/Substitution%20tiling
|
In geometry, a tile substitution is a method for constructing highly ordered tilings. Most importantly, some tile substitutions generate aperiodic tilings, which are tilings whose prototiles do not admit any tiling with translational symmetry. The most famous of these are the Penrose tilings. Substitution tilings are special cases of finite subdivision rules, which do not require the tiles to be geometrically rigid.
Introduction
A tile substitution is described by a set of prototiles (tile shapes), an expanding map and a dissection rule showing how to dissect the expanded prototiles to form copies of some prototiles. Intuitively, higher and higher iterations of tile substitution produce a tiling of the plane called a substitution tiling. Some substitution tilings are periodic, defined as having translational symmetry.
Every substitution tiling (up to mild conditions) can be "enforced by matching rules"; that is, there exists a set of marked tiles that can only form exactly the substitution tilings generated by the system. The tilings by these marked tiles are necessarily aperiodic.
A simple example that produces a periodic tiling has only one prototile, namely a square:
By iterating this tile substitution, larger and larger regions of the plane are covered with a square grid. A more sophisticated example with two prototiles is shown below, with the two steps of blowing up and dissecting merged into one step.
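A minimal Python sketch of the single-square substitution just described (the coordinate representation and function name are my own; each square is stored as a lower-left corner plus a size):
def substitute(squares, factor=2):
    """One step of the square tile substitution: scale every square up by
    `factor`, then dissect each scaled square into factor x factor copies
    of the original prototile. Squares are (x, y, size) triples."""
    out = []
    for (x, y, size) in squares:
        for i in range(factor):
            for j in range(factor):
                out.append((factor * x + i * size, factor * y + j * size, size))
    return out

patch = [(0, 0, 1)]               # start from a single unit-square prototile
for _ in range(3):                # three iterations give an 8 x 8 block of unit squares
    patch = substitute(patch)
print(len(patch))                 # 64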
One may intuitively get an idea how this procedure yields a substitution tiling of the entire plane. A mathematically rigorous definition is given below. Substitution tilings are notably useful as ways of defining aperiodic tilings, which are objects of interest in many fields of mathematics, including automata theory, combinatorics, discrete geometry, dynamical systems, group theory, harmonic analysis and number theory, as well as crystallography and chemistry. In particular, the celebrated Penrose tiling is an example of an aperiodic substitution
|
https://en.wikipedia.org/wiki/Cultigen
|
A cultigen () or cultivated plant is a plant that has been deliberately altered or selected by humans. Cultigens result from artificial selection. These plants have commercial value in horticulture, agriculture or forestry. Because cultigens are defined by their mode of origin and not by where they grow, plants meeting this definition remain cultigens whether they are naturalised, deliberately planted in the wild, or grown in cultivation.
Cultigens arise in the following ways:
through the selection of variants from the wild or cultivation, including vegetative sports (aberrant growth that can be reproduced reliably in cultivation);
from plants that are the result of plant breeding and selection programs;
from genetically modified plants (plants modified by the deliberate implantation of genetic material);
or from graft-chimaeras (plants grafted to produce mixed tissue with graft material from wild plants, special selections, or hybrids).
Naming
Cultigens may be named in any of a number of ways. The traditional method of scientific naming is under the International Code of Nomenclature for algae, fungi, and plants, and many of the most important cultigens, like maize (Zea mays) and banana (Musa acuminata), are so named. Although it is perfectly in order to give a cultigen a botanical name, in any rank desired, now or at any other time, these days it is more common for cultigens to be given names in accordance with the principles, rules and recommendations laid down in the International Code of Nomenclature for Cultivated Plants (ICNCP) which provides for the names of cultigens in three classification categories, the cultivar, the Group (formerly cultivar-group), and the grex. With this, one could say that there is a separate discipline of cultivated plant taxonomy, which forms one of the ways to look at cultigens. The ICNCP does not recognize the use of trade designations and other marketing devices as scientifically acceptable names, but does provide advice o
|
https://en.wikipedia.org/wiki/Between%20Silk%20and%20Cyanide
|
Between Silk and Cyanide: A Codemaker's War 1941–1945 is a memoir of public interest by former Special Operations Executive (SOE) cryptographer Leo Marks, describing his work including memorable events, actions and omissions of his colleagues during the Second World War. It was first published in 1998.
Date
The book was written in the early 1980s. It was published on UK Government approval in 1998.
Title
The title is derived from an incident related in the book, when Marks was asked why agents in occupied Europe should have their cryptographic material printed on silk (which was in very short supply). He summed his reply up by saying that it was "between silk and cyanide", meaning that it was a choice between the agent's surviving by making reliable coded radio transmissions with the help of the printed silk, and having to take a suicide pill to avoid being tortured into revealing the code and other secret information. Unlike paper, given away by rustling, silk is not detected by a casual or typical body search if concealed in the lining of clothing.
SOE
A major theme is Marks's inability to convince his superiors in the Special Operations Executive (SOE) that apparent mistakes made in radio transmissions from agents working with, or alongside, the Dutch resistance were their prearranged duress codes; it later transpired that they were, as he had alleged, a fact that haunted him. SOE management, unwilling to face the possibility that their Dutch network was compromised, insisted that the errors were attributable to poor operation by the recently trained Morse code operators and continued to parachute in new agents to sites prearranged with the compromised network, leading to their immediate capture and later execution on the orders of the Nazi German command.
Marks' interest in cryptography arose from reading Edgar Allan Poe's The Gold-Bug as a child. Furthermore, his father Benjamin was a partner in bookshop Marks & Co at 84 Charing Cross Road. As a boy,
|
https://en.wikipedia.org/wiki/Stuart%20Kingsley
|
Stuart Kingsley (born May 15, 1948, in Stoke Newington, London, England) is considered a pioneer in the Optical Search For Extraterrestrial Intelligence, also known as Optical SETI (OSETI).
While traditional SETI efforts survey the sky in hopes of finding radio transmissions from a nearby civilization, the optical approach to SETI seeks to detect pulsed and continuous-wave laser beacon signals in the visible and infrared spectra. In other words, instead of "listening" for extraterrestrial intelligence, Optical SETI "looks" for it.
Kingsley received a B.Sc. Honors and Ph.D. in electronic and electrical engineering from The City University (London, England), and University College London, England in 1972 and 1984, respectively. He moved to the United States in 1981 and went to work for Battelle Columbus Division as a principal research scientist, becoming a senior research scientist in 1985. He left Battelle in 1987 and established his own photonics consultancy business, Fiberdyne Optoelectronics.
Kingsley is the Director of The Columbus Optical SETI Observatory, which is currently working to achieve nonprofit status. From 1992 he was the VP for Engineering at SRICO, Inc. Kingsley retired from SRICO and returned to England in 2008, meaning that the Columbus Optical SETI Observatory has effectively moved to Bournemouth.
External links
Introduction to the COSETI
The SETI League's Optical SETI Committee Chair
The Bournemouth Optical SETI Observatory
|
https://en.wikipedia.org/wiki/StarFire%20%28navigation%20system%29
|
StarFire is a wide-area differential GPS developed by John Deere's NavCom and precision farming groups. StarFire broadcasts additional "correction information" over satellite L-band frequencies around the world, allowing a StarFire-equipped receiver to produce position measurements accurate to well under one meter, with typical accuracy over a 24-hour period being under 4.5 cm. StarFire is similar to the FAA's differential GPS Wide Area Augmentation System (WAAS), but considerably more accurate due to a number of techniques that improve its receiver-end processing.
Background
StarFire came about after a meeting in 1994 among John Deere engineers who were attempting to chart a course for future developments. At the time, a number of smaller companies were attempting to introduce yield-mapping systems combining a GPS receiver with a grain counter, which produced maps of a field showing its yield. The engineers felt this was one of the most interesting developments in the industry, but the accuracy of GPS, then still using Selective Availability, was simply too low to produce a useful map. The various providers went bankrupt over the next few years.
In 1997, a team was formed to solve the problem of providing a more accurate GPS fix. Along with members of John Deere's engineering team, a small project at Stanford University also took part, along with NASA engineers at the Jet Propulsion Laboratory. They decided to produce a dGPS system that differed fairly dramatically from similar systems like WAAS.
Addressing GPS Inaccuracy
In theory the GPS signal with Selective Availability turned off offers accuracy on the order of 3 m. In practice, typical accuracy is about 15 m.
Of this additional 12 m of error, about 5 m is due to distortion from "billows" in the ionosphere, which introduce propagation delays that make the satellite appear farther away than it really is. Another 3 to 4 m is accounted for by errors in the satellite ephemeris data, which is used to calculate the positions of t
|
https://en.wikipedia.org/wiki/Sambucus%20nigra
|
Sambucus nigra is a species complex of flowering plants in the family Adoxaceae native to most of Europe. Common names include elder, elderberry, black elder, European elder, European elderberry, and European black elderberry. It grows in a variety of conditions including both wet and dry fertile soils, primarily in sunny locations. The plant is widely grown as an ornamental shrub or small tree. Both the flowers and the berries have a long tradition of culinary use, primarily for cordial and wine.
Although elderberry is commonly used in dietary supplements and traditional medicine, there is no scientific evidence that it provides any benefit for maintaining health or treating diseases.
Description
Elderberry is a deciduous shrub or small tree growing to tall and wide, rarely reaching tall. The bark, light gray when young, changes to a coarse gray outer bark with lengthwise furrowing, lenticels prominent. The leaves are arranged in opposite pairs, long, pinnate with five to seven (rarely nine) leaflets, the leaflets long and broad, with a serrated margin. The young stems are hollow.
The hermaphroditic flowers have five stamens, which are borne in large, flat corymbs 10–25 cm in diameter in late spring to mid-summer. The individual flowers are ivory white, in diameter, with five petals, and are pollinated by flies.
The fruit is a glossy, dark purple to black berry 3–5 mm diameter, produced in drooping clusters in late autumn. The dark color of elderberry fruit occurs from its rich phenolic content, particularly from anthocyanins.
Taxonomy
Subspecies
There are several other closely related species, native to Asia and North America, which are similar, and sometimes treated as subspecies of Sambucus nigra, including S. nigra subsp. canadensis and S. nigra subsp. cerulea.
Etymology
The Latin specific epithet nigra means "black", and refers to the deeply dark colour of the berries. The English term for the tree is not believed to come from the word "old",
|
https://en.wikipedia.org/wiki/Relaxometry
|
Relaxometry refers to the study and measurement of relaxation variables in nuclear magnetic resonance (NMR) and magnetic resonance imaging (MRI), and is often referred to as time-domain NMR. In NMR, nuclear magnetic moments are used to measure specific physical and chemical properties of materials.
Relaxation of the nuclear spin system is crucial for all NMR applications. The relaxation rate depends strongly on the mobility (fluctuations, diffusion) of the microscopic environment and the strength of the applied magnetic field. As a rule of thumb, strong magnetic fields lead to increased sensitivity on fast dynamics while low fields lead to increased sensitivity on slow dynamics. Thus, the relaxation rate as a function of the magnetic field strength is a fingerprint of the microscopic dynamics.
Key materials science properties are often described in different fields using the terms mobility, dynamics, stiffness, viscosity, or rigidity of the sample. These properties are usually dependent on atomic and molecular motion in the sample, which may be measured using time-domain NMR and fast field cycling relaxometry.
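As a toy illustration of this kind of single-parameter measurement (synthetic numbers only, not instrument data or any vendor API), a mono-exponential echo decay can be fitted to recover a T2 relaxation time:

import math

T2_true = 0.080                                   # seconds, chosen for illustration
times = [i * 0.010 for i in range(1, 11)]         # echo times: 10 ms ... 100 ms
amps = [math.exp(-t / T2_true) for t in times]    # simulated echo amplitudes

# Fit log(amplitude) = -t / T2 by least squares; the slope gives -1/T2.
n = len(times)
mean_t = sum(times) / n
mean_y = sum(math.log(a) for a in amps) / n
num = sum((t - mean_t) * (math.log(a) - mean_y) for t, a in zip(times, amps))
den = sum((t - mean_t) ** 2 for t in times)
slope = num / den
print("fitted T2 = %.3f s" % (-1 / slope))        # recovers ~0.080 s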
Equipment
Apparatus and technological support of the method is constantly developed. An NMR relaxometer is a device for relaxation time measuring. Laboratory NMR relaxometers for NMR signal registration are available in small sizes.
In NMR relaxometry (NMRR) only one specific NMRR parameter is measured, not the whole spectrum (which is not always needed). This helps to save time and resources and makes it possible to use an NMR relaxometer as a portable express analyzer in different branches of industry, science and technology, environmental protection, etc.
|
https://en.wikipedia.org/wiki/Jawaharlal%20Nehru%20Tropical%20Botanic%20Garden%20and%20Research%20Institute
|
"Jawaharlal Nehru Tropical Botanic Garden and Research Institute", (formerly Tropical Botanic Garden and Research Institute) () renamed in the fond memory of visionary Prime Minister of India Shri Pandit Jawaharlal Nehru is an autonomous Institute established by the Government of Kerala on 17 November 1979 at Thiruvananthapuram, the capital city of Kerala. It functions under the umbrella of the Kerala State Council for Science, Technology and Environment (KSCSTE), Government of Kerala. The Royal Botanic Gardens (RBG), Kew played an exemplary and significant role in shaping and designing the lay out of the JNTBGRI garden in its formative stages.
The Institute undertakes research in conservation biology, biotechnology, plant taxonomy, microbiology, phytochemistry, ethnomedicine and ethnopharmacology, which are the main areas considered to have immediate relevance to the development of the garden. While taxonomists prepared a flora of the garden documenting the native plant wealth before the mass introduction and facelift which subsequently followed, the biotechnologists mass-multiplied plants of commercial importance, especially orchids, for cultivation and distribution to the public.
JNTBGRI makes a comprehensive survey of the economic plant wealth of Kerala in order to conserve, preserve and sustainably utilize it. The institute carries out botanical, horticultural and chemical research for plant improvement and utilization, and offers facilities for the improvement and propagation of ornamental plants in the larger context of establishing the nursery and flower trade. The cultivation and culturing of plants from India and other countries with comparable climatic conditions, for the economic benefit of Kerala and India, is also undertaken. Activities to support botanical teaching and to create public understanding of the value of plant research are initiated by JNTBGRI. JNTBGRI gardens medicinal plants, ornamental plants and various introduced plants of economic or aestheti
|
https://en.wikipedia.org/wiki/Spectral%20splatter
|
In radio electronics or acoustics, spectral splatter (also called switch noise) refers to spurious emissions that result from an abrupt change in the transmitted signal, usually when transmission is started or stopped.
For example, a device transmitting a sine wave produces a single peak in the frequency spectrum; however, if the device abruptly starts or stops transmitting this sine wave, it will emit noise at frequencies other than the frequency of the sine wave. This noise is known as spectral splatter.
When the signal is represented in the time domain, an abrupt change may not be visually apparent; in the frequency domain, however, the abrupt change causes the appearance of spikes at various frequencies.
A sharper change in the time domain usually results in more spikes or stronger spikes in the frequency domain. Spectral splatter can thus be reduced by making the change more smooth. Controlling the power ramp shape (i.e. the way in which the signal increases ("power-on ramp") or falls off ("power-down ramp")) can help reduce the splatter. In some cases one can use a filter to remove unwanted emissions. Note that a completely abrupt change (in the mathematical sense) is not possible in physical reality; the change is always somewhat smoothed naturally, for example due to the capacitance (in electronics) or inertia (in acoustics) of the components involved.
In radio electronics, the need to minimize spectral splatter arises because signals are usually required by government regulations to be contained in a particular frequency band, defined by a spectral mask. Spectral splatter can cause emissions that violate this mask.
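An illustrative numerical sketch of the effect and its mitigation (assuming NumPy; the tone frequency, ramp length and measurement band are arbitrary example choices): gating a sine wave on and off abruptly raises the off-channel spectrum, while a raised-cosine power ramp lowers it.

import numpy as np

fs = 8000.0                              # sample rate in Hz (illustrative)
t = np.arange(0, 1.0, 1 / fs)
tone = np.sin(2 * np.pi * 1000.0 * t)    # 1 kHz sine wave

# Abrupt keying: the transmitter is switched on and off instantly.
burst = tone.copy()
start, stop = len(t) // 4, 3 * len(t) // 4
burst[:start] = 0.0
burst[stop:] = 0.0

# Smooth keying: the same burst with 10 ms raised-cosine power ramps.
ramp_len = int(0.010 * fs)
ramp = 0.5 * (1 - np.cos(np.pi * np.arange(ramp_len) / ramp_len))
smooth = burst.copy()
smooth[start:start + ramp_len] *= ramp
smooth[stop - ramp_len:stop] *= ramp[::-1]

def peak_db(x, band):
    """Highest spectral level in a band, in dB relative to the signal's own peak."""
    spec = np.abs(np.fft.rfft(x))
    db = 20 * np.log10(spec / spec.max() + 1e-15)
    return db[band].max()

freqs = np.fft.rfftfreq(len(t), 1 / fs)
off_channel = (freqs > 1500) & (freqs < 2500)   # a band well away from the 1 kHz tone
print("abrupt keying:", round(peak_db(burst, off_channel), 1), "dB")
print("smooth keying:", round(peak_db(smooth, off_channel), 1), "dB")

The smooth ramp yields a markedly lower off-channel level, which is the behaviour spectral-mask regulations rely on.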
See also
Gibbs phenomenon
Radio electronics
Acoustics
|
https://en.wikipedia.org/wiki/234%20%28number%29
|
234 (two hundred [and] thirty-four) is the integer following 233 and preceding 235.
Additionally:
234 is a practical number.
There are 234 ways of grouping six children into rings of at least two children with one child at the center of each ring.
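A small brute-force check of the first claim (an illustration; the helper names are arbitrary): a number n is practical when every positive integer below it is a sum of distinct divisors of n.

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def is_practical(n):
    # Subset-sum over the divisors: reachable[k] is True if k is a sum of distinct divisors.
    reachable = [False] * n
    reachable[0] = True
    for d in divisors(n):
        for k in range(n - 1, d - 1, -1):
            if reachable[k - d]:
                reachable[k] = True
    return all(reachable[1:n])

print(is_practical(234))   # expected: True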
|
https://en.wikipedia.org/wiki/Protein%20A
|
Protein A is a 42 kDa surface protein originally found in the cell wall of the bacterium Staphylococcus aureus. It is encoded by the spa gene and its regulation is controlled by DNA topology, cellular osmolarity, and a two-component system called ArlS-ArlR. It has found use in biochemical research because of its ability to bind immunoglobulins. It is composed of five homologous Ig-binding domains, each of which folds into a three-helix bundle. Each domain is able to bind proteins from many mammalian species, most notably IgGs. It binds the heavy chain within the Fc region of most immunoglobulins and also within the Fab region in the case of the human VH3 family. Through these interactions in serum, where IgG molecules are bound in the wrong orientation (in relation to normal antibody function), the bacterium disrupts opsonization and phagocytosis.
History
As a by-product of his work on type-specific staphylococcus antigens, Verwey reported in 1940 that a protein fraction prepared from extracts of these bacteria non-specifically precipitated rabbit antisera raised against different staphylococcus types. In 1958, Jensen confirmed Verwey’s finding and showed that rabbit pre-immunization sera as well as normal human sera bound to the active component in the staphylococcus extract; he designated this component Antigen A (because it was found in fraction A of the extract) but thought it was a polysaccharide. The misclassification of the protein was the result of faulty tests but it was not long thereafter (1962) that Löfkvist and Sjöquist corrected the error and confirmed that Antigen A was in fact a surface protein on the bacterial wall of certain strains of S. aureus. The Bergen group from Norway named the protein "Protein A" after the antigen fraction isolated by Jensen.
Protein A antibody binding
It has been shown via crystallographic refinement that the primary binding site for protein A is on the Fc region, between the CH2 and CH3 domains. In addition, protein A has been s
|
https://en.wikipedia.org/wiki/LM13700
|
The LM13700 is an integrated circuit consisting of two current-controlled operational transconductance amplifiers (OTAs), each having differential inputs and a push-pull output. Like a standard op-amp, each amplifier has a pair of differential inputs and a single output; unlike an op-amp, however, an OTA is voltage in and current out rather than voltage in and voltage out, and its transconductance is programmable via the IABC pin. Linearizing diodes at the input reduce distortion and allow increased input levels. The Darlington output buffers provided are specifically designed to complement the wide dynamic range of the OTA. This chip is very useful in audio electronics, especially in analog synthesizer circuits such as voltage-controlled oscillators, voltage-controlled filters, and voltage-controlled amplifiers. The Darlington output buffers on the LM13700 differ from those on the LM13600 in that their bias currents (and hence their output DC levels) are independent of the IABC pin. This may result in performance superior to that of the LM13600 in audio applications.
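A rough behavioural sketch of such an OTA (a textbook bipolar differential-pair model, not datasheet-exact values; the bias current is an arbitrary example): the output is a current whose small-signal transconductance is set by the IABC bias current.

import math

V_T = 0.026          # thermal voltage at room temperature, ~26 mV
I_ABC = 0.5e-3       # amplifier bias current, 0.5 mA (example value)

def ota_output_current(v_diff, i_abc=I_ABC):
    """Differential input voltage in, output current out.
    Ideal bipolar pair: I_out = I_ABC * tanh(v_diff / (2 * V_T))."""
    return i_abc * math.tanh(v_diff / (2 * V_T))

# Small-signal transconductance g_m = I_ABC / (2 * V_T); doubling I_ABC doubles g_m,
# which is what makes the part usable as a voltage-controlled amplifier.
g_m = I_ABC / (2 * V_T)
print("g_m ~ %.2f mS" % (g_m * 1e3))
print("I_out at 5 mV input ~ %.1f uA" % (ota_output_current(5e-3) * 1e6))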
See also
Transconductance
Operational amplifier
Transconductance amplifier
Current mirror
|
https://en.wikipedia.org/wiki/Real%20analytic%20Eisenstein%20series
|
In mathematics, the simplest real analytic Eisenstein series is a special function of two variables. It is used in the representation theory of SL(2,R) and in analytic number theory. It is closely related to the Epstein zeta function.
There are many generalizations associated to more complicated groups.
Definition
The Eisenstein series E(z, s) for z = x + iy in the upper half-plane is defined by

E(z, s) = \frac{1}{2} \sum_{\gcd(m,n)=1} \frac{y^s}{|mz + n|^{2s}}

for Re(s) > 1, and by analytic continuation for other values of the complex number s. The sum is over all pairs of coprime integers.
Warning: there are several other slightly different definitions. Some authors omit the factor of ½, and some sum over all pairs of integers that are not both zero; which changes the function by a factor of ζ(2s).
Properties
As a function on z
Viewed as a function of z, E(z,s) is a real-analytic eigenfunction of the Laplace operator on H with the eigenvalue s(s-1). In other words, it satisfies the elliptic partial differential equation

\Delta E(z, s) = s(s - 1)\, E(z, s),

where

\Delta = y^2 \left( \frac{\partial^2}{\partial x^2} + \frac{\partial^2}{\partial y^2} \right).
The function E(z, s) is invariant under the action of SL(2,Z) on z in the upper half plane by fractional linear transformations. Together with the previous property, this means that the Eisenstein series is a Maass form, a real-analytic analogue of a classical elliptic modular function.
Warning: E(z, s) is not a square-integrable function of z with respect to the invariant Riemannian metric on H.
As a function on s
The Eisenstein series converges for Re(s) > 1, but can be analytically continued to a meromorphic function of s on the entire complex plane, with a unique pole in the half-plane Re(s) ≥ 1/2, of residue 3/π at s = 1 (for all z in H), and infinitely many poles in the strip 0 < Re(s) < 1/2 at s = ρ/2, where ρ corresponds to a non-trivial zero of the Riemann zeta-function. The constant term of the pole at s = 1 is described by the Kronecker limit formula.
The modified function

E^*(z, s) = \pi^{-s}\, \Gamma(s)\, \zeta(2s)\, E(z, s)

satisfies the functional equation

E^*(z, s) = E^*(z, 1 - s),

analogous to the functional equation for the Riemann zeta function ζ(s).
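For a concrete feel for the definition, the following sketch (an illustration with an arbitrary truncation limit and test point, using the factor of 1/2 convention above) evaluates the truncated sum over coprime pairs and checks the SL(2,Z) invariance numerically; agreement is only approximate because of the truncation.

from math import gcd

def eisenstein(z, s, N=200):
    """Truncated sum over coprime pairs (m, n) with |m|, |n| <= N."""
    y = z.imag
    total = 0.0
    for m in range(-N, N + 1):
        for n in range(-N, N + 1):
            if (m, n) == (0, 0) or gcd(abs(m), abs(n)) != 1:
                continue
            total += y ** s / abs(m * z + n) ** (2 * s)
    return 0.5 * total

z = complex(0.3, 1.2)
s = 2.0
print(eisenstein(z, s))
print(eisenstein(z + 1, s))       # invariance under z -> z + 1
print(eisenstein(-1 / z, s))      # invariance under z -> -1/z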
Scalar produ
|
https://en.wikipedia.org/wiki/Miscellaneous%20Technical
|
Miscellaneous Technical is a Unicode block ranging from U+2300 to U+23FF, which contains various common symbols which are related to and used in the various technical, programming language, and academic professions. For example:
Symbol ⌂ (HTML hexadecimal code is ⌂) represents a house or a home.
Symbol ⌘ (⌘) is a "place of interest" sign. It may be used to represent the Command key on a Mac keyboard.
Symbol ⌚ (⌚) is a watch (or clock).
Symbol ⏏ (⏏) is the "Eject" button symbol found on electronic equipment.
Symbol ⏚ (⏚) is the "Earth Ground" symbol found on electrical or electronic manual, tag and equipment.
It also includes most of the uncommon symbols used by the APL programming language.
Miscellaneous Technical (2300–23FF) in Unicode
In Unicode, Miscellaneous Technical symbols are placed in the hexadecimal range 0x2300–0x23FF (decimal 8960–9215), as described below.
(2300–233F)
1. Unicode code points U+2329 & U+232A are deprecated.
(2340–237F)
(2380–23BF)
(23C0–23FF)
Block
Emoji
The Miscellaneous Technical block contains eighteen emoji: U+231A–U+231B, U+2328, U+23CF, U+23E9–U+23F3 and U+23F8–U+23FA.
All of these characters have standardized variants defined, to specify emoji-style (U+FE0F VS16) or text presentation (U+FE0E VS15) for each character, for a total of 36 variants.
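A short illustrative sketch (using only the standard library's unicodedata module) showing how the presentation of one of these emoji can be requested with the variation selectors, and how characters in the block can be enumerated:

import unicodedata

WATCH = "\u231A"                     # one of the eighteen emoji in this block
VS15, VS16 = "\uFE0E", "\uFE0F"      # text-style and emoji-style variation selectors

print(unicodedata.name(WATCH))       # 'WATCH'
print(WATCH + VS15)                  # text presentation requested
print(WATCH + VS16)                  # emoji presentation requested

# List the names of the first few characters in the Miscellaneous Technical block.
for cp in range(0x2300, 0x2310):
    print("U+%04X  %s" % (cp, unicodedata.name(chr(cp), "<unassigned>")))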
History
The following Unicode-related documents record the purpose and process of defining specific characters in the Miscellaneous Technical block:
See also
Unicode mathematical operators and symbols
Unicode symbols
Media control symbols
|
https://en.wikipedia.org/wiki/Eclipse%20process%20framework
|
The Eclipse process framework (EPF) is an open source project that is managed by the Eclipse Foundation. It lies under the top-level Eclipse Technology Project, and has two goals:
To provide an extensible framework and exemplary tools for software process engineering - method and process authoring, library management, configuring and publishing a process.
To provide exemplary and extensible process content for a range of software development and management processes supporting iterative, agile, and incremental development, and applicable to a broad set of development platforms and applications. For instance, EPF provides the OpenUP, an agile software development process optimized for small projects.
By using EPF Composer, engineers can create their own software development process by structuring it using a predefined schema. This schema is an evolution of the SPEM 1.1 OMG specification referred to as the unified method architecture (UMA). Major parts of UMA went into the adopted revision of SPEM, SPEM 2.0. EPF is aiming to fully support SPEM 2.0 in the near future. The UMA and SPEM schemata support the organization of large amounts of descriptions for development methods and processes. Such method content and processes do not have to be limited to software engineering, but can also cover other design and engineering disciplines, such as mechanical engineering, business transformation, and sales cycles.
IBM supplies a commercial version, IBM Rational Method Composer.
Limitations
The "content variability" capability severely limits users to one-to-one mappings. Processes trying to integrate various aspects may require block-copy-paste style clones to get around this limitation. This may be a limitation of the SPEM model and might be based on presumption that agile methods are being described as these methods tend not to have deep dependencies.
See also
Meta-process modeling
|
https://en.wikipedia.org/wiki/Honda%20P%20series
|
The P series is a series of prototype humanoid robots developed by Honda between 1993 and 2000. They were preceded by the Honda E series (whose development was not revealed to the public at the time) and followed by the ASIMO series, then the world's most advanced humanoid robots. Honda Motor's President and CEO Hiroyuki Yoshino, at the time, described Honda's humanoid robotics program as consistent with its direction to enhance human mobility.
History
Work to develop an advanced humanoid robot began in 1986, when Honda established a research center focused on fundamental technologies, including humanoid robotics.
Honda engineers had to research how humans walk, using the human skeleton for reference to create a replica and have it function like a human being. In 1986, the first two-legged robot was made to walk, used by Honda engineers to establish stable walking technology, including steps and sloped surfaces.
In 1993, Honda began developing "Prototype" models (the "P" series), attaching the legs to a torso with arms that could perform basic tasks. P2, the second prototype model, debuted in December 1996; its use of wireless techniques made it the first self-regulating, two-legged walking robot. P2 weighed 463 pounds and stood six feet tall. In September 1997, P3 was introduced as the first completely independent bipedal humanoid walking robot, standing five feet, four inches tall and weighing 287 pounds.
Features
Honda engineers determined a robot should be easy to operate and small in size, enabling it to help people—for instance, to look eye to eye with someone sitting in a chair.
ASIMO can be controlled by a portable controller whereas P3 was controlled from a workstation.
P1 developed in 1993
P2 unveiled in 1996
P3 unveiled in 1997
P4 developed in 2000
Notes:
1. – The P1 was developed in 1993 but was not unveiled and Honda kept its existence a secret until the announcement of the P2 in 1996.
2. – The P4 was developed in 2000 and originally unveiled
|
https://en.wikipedia.org/wiki/Foliar%20feeding
|
Foliar feeding is a technique of feeding plants by applying liquid fertilizer directly to the leaves. Plants are able to absorb essential elements through their leaves. The absorption takes place through their stomata and also through their epidermis. Transport is usually faster through the stomata, but total absorption may be as great through the epidermis. Plants are also able to absorb nutrients through their bark.
Foliar feeding was earlier thought to damage tomatoes, but has become standard practice.
Effectiveness
H. B. Tukey was head of Michigan State University (MSU) Department of Horticulture in the 1950s. Working with S. H. Wittwer, they demonstrated that foliar feeding is effective. Radioactive phosphorus and potassium were applied to foliage. A Geiger counter was used to observe absorption, movement and nutrient utilization. The nutrients were transported at the rate of about one foot per hour to all parts of the plants.
A spray enhancer, called a surfactant, can help nutrients stick to the leaf and then penetrate the leaves' cuticle.
Foliar application has been shown to avoid the problem of leaching-out in soils and prompts a quick reaction in the plant. Foliar application of phosphorus, zinc and iron brings the greatest benefit in comparison with addition to soil where phosphorus becomes fixed in a form inaccessible to the plant and where zinc and iron are less available.
Use
Foliar feeding is generally done in the early morning or late evening, preferably at temperatures below , because heat causes the pores on some species' leaves to close.
|
https://en.wikipedia.org/wiki/Hypervariable%20region
|
A hypervariable region (HVR) is a location within nuclear DNA or the D-loop of mitochondrial DNA in which base pairs of nucleotides repeat (in the case of nuclear DNA) or have substitutions (in the case of mitochondrial DNA). Changes or repeats in the hypervariable region are highly polymorphic.
Mitochondrial
There are two mitochondrial hypervariable regions used in human mitochondrial genealogical DNA testing. HVR1 is considered a "low resolution" region and HVR2 is considered a "high resolution" region. Getting HVR1 and HVR2 DNA tests can help determine one's haplogroup. In the revised Cambridge Reference Sequence of the human mitogenome, the most variable sites of HVR1 are numbered 16024-16383 (this subsequence is called HVR-I), and the most variable sites of HVR2 are numbered 57-372 (i.e., HVR-II) and 438-574 (i.e., HVR-III).
In some bony fishes, for example certain Protacanthopterygii and Gadidae, the mitochondrial control region evolves remarkably slowly. Even functional mitochondrial genes accumulate mutations faster and more freely. It is not known whether such hypovariable control regions are more widespread. In the Ayu (Plecoglossus altivelis), an East Asian protacanthopterygian, control region mutation rate is not markedly lowered, but sequence differences between subspecies are far lower in the control region than elsewhere. This phenomenon completely defies explanation at present.
Antibodies
In antibodies, hypervariable regions form the antigen-binding site and are found on both light and heavy chains. They also contribute to the specificity of each antibody. In a variable domain, the 3 HV segments of each heavy or light chain fold together at the N-terminus to form an antigen binding pocket.
See also
Cambridge Reference Sequence
Genealogical DNA test
Human mitochondrial DNA haplogroup
mtDNA control region
|
https://en.wikipedia.org/wiki/Squircle
|
A squircle is a shape intermediate between a square and a circle. There are at least two definitions of "squircle" in use, the most common of which is based on the superellipse. The word "squircle" is a portmanteau of the words "square" and "circle". Squircles have been applied in design and optics.
Superellipse-based squircle
In a Cartesian coordinate system, the superellipse is defined by the equation

\left| \frac{x - a}{r_a} \right|^n + \left| \frac{y - b}{r_b} \right|^n = 1,

where r_a and r_b are the semi-major and semi-minor axes, a and b are the x and y coordinates of the centre of the ellipse, and n is a positive number. The squircle is then defined as the superellipse with r_a = r_b and n = 4. Its equation is:

(x - a)^4 + (y - b)^4 = r^4,

where r is the minor radius of the squircle. Compare this to the equation of a circle. When the squircle is centred at the origin, then a = b = 0, and it is called Lamé's special quartic.
The area inside the squircle can be expressed in terms of the gamma function Γ as

A = 4 r^2 \frac{\Gamma(1 + \tfrac{1}{4})^2}{\Gamma(1 + \tfrac{1}{2})} = \sqrt{2}\, \varpi\, r^2 \approx 3.708\, r^2,

where r is the minor radius of the squircle, and \varpi is the lemniscate constant.
p-norm notation
In terms of the p-norm ‖·‖_p on R^2, the squircle can be expressed as:

\lVert \mathbf{x} - \mathbf{x}_c \rVert_4 = r,

where p = 4, \mathbf{x}_c = (a, b) is the vector denoting the centre of the squircle, and \mathbf{x} = (x, y). Effectively, this is still a "circle" of points at a distance r from the centre, but distance is defined differently. For comparison, the usual circle is the case p = 2, whereas the square is given by the case p = ∞ (the supremum norm), and a rotated square is given by p = 1 (the taxicab norm). This allows a straightforward generalization to a spherical cube, or sphube, in R^3, or hypersphube in higher dimensions.
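A small sketch of this p-norm point of view (an illustration, with an arbitrary sampling scheme): boundary points of a squircle of minor radius r are obtained by rescaling unit directions so that their 4-norm equals r.

import math

def squircle_points(r=1.0, n=16, p=4):
    """Points (x, y) with |x|^p + |y|^p = r^p, sampled by angle."""
    pts = []
    for k in range(n):
        t = 2 * math.pi * k / n
        c, s = math.cos(t), math.sin(t)
        # Scale the unit direction (c, s) so that its p-norm equals r.
        scale = r / (abs(c) ** p + abs(s) ** p) ** (1 / p)
        pts.append((scale * c, scale * s))
    return pts

for x, y in squircle_points(n=8):
    print("%.4f  %.4f  ->  |x|^4 + |y|^4 = %.4f" % (x, y, abs(x) ** 4 + abs(y) ** 4))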
Fernández-Guasti squircle
Another squircle comes from work in optics. It may be called the Fernández-Guasti squircle, after one of its authors, to distinguish it from the superellipse-related squircle above. This kind of squircle, centred at the origin, can be defined by the equation:

x^2 + y^2 - \frac{s^2}{r^2} x^2 y^2 = r^2,

where r is the minor radius of the squircle, s is the squareness parameter, and x and y are in the interval [−r, r]. If s = 0, the equation is a circle; if s = 1, this is a square. This equation allows a smooth paramet
|
https://en.wikipedia.org/wiki/Hyperbolic%20tree
|
A hyperbolic tree (often shortened as hypertree) is an information visualization and graph drawing method inspired by hyperbolic geometry.
Displaying hierarchical data as a tree suffers from visual clutter as the number of nodes per level can grow exponentially. For a simple binary tree, the maximum number of nodes at a level n is 2^n, while the number of nodes for trees with more branching grows much more quickly. Drawing the tree as a node-link diagram thus requires exponential amounts of space to be displayed.
One approach is to use a hyperbolic tree, first introduced by Lamping et al. Hyperbolic trees employ hyperbolic space, which intrinsically has "more room" than Euclidean space. For instance, linearly increasing the radius of a circle in Euclidean space increases its circumference linearly, while the same circle in hyperbolic space would have its circumference increase exponentially. Exploiting this property allows laying out the tree in hyperbolic space in an uncluttered manner: placing a node far enough from its parent gives the node almost the same amount of space as its parent for laying out its own children.
Displaying a hyperbolic tree commonly utilizes the Poincaré disk model of hyperbolic geometry, though the Klein-Beltrami model can also be used. Both display the entire hyperbolic plane within a unit disk, making the entire tree visible at once. The unit disk gives a fish-eye lens view of the plane, giving more emphasis to nodes which are in focus and displaying nodes further out of focus closer to the boundary of the disk. Traversing the hyperbolic tree requires Möbius transformations of the space, bringing new nodes into focus and moving higher levels of the hierarchy out of view.
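A minimal sketch of the disk-translation step (an illustration; the node coordinates are made up): the Möbius transformation below maps the Poincaré disk to itself and brings a chosen node to the centre of the display.

def mobius_to_center(a):
    """Return the disk automorphism z -> (z - a) / (1 - conj(a) * z),
    which sends the point a to the origin while keeping |z| < 1 inside the disk."""
    def transform(z):
        return (z - a) / (1 - a.conjugate() * z)
    return transform

focus = 0.6 + 0.2j            # node to bring into focus
T = mobius_to_center(focus)

print(T(focus))               # ~0: the focused node is now at the centre
for node in (0.0 + 0.0j, 0.9 + 0.0j, -0.5 + 0.4j):
    w = T(node)
    print(node, "->", w, "|w| =", abs(w))   # all images stay inside the unit disk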
Hyperbolic trees were patented in the U.S. by Xerox in 1996, but the patent has since expired.
See also
Hyperbolic geometry
Binary tiling
Information visualization
Radial tree – is also circular, but uses linear geometry.
Tree (data structure)
Tree (gr
|
https://en.wikipedia.org/wiki/Gregor%20and%20the%20Curse%20of%20the%20Warmbloods
|
Gregor and the Curse of the Warmbloods is an epic fantasy children's novel by Suzanne Collins. It is the third book in The Underland Chronicles, and was first published by Scholastic in 2005. The novel takes place a few months after the events of the preceding book, in the same subterranean world known as the Underland. In this installment, the young protagonist Gregor is once again recruited by the Underland's inhabitants, this time to help cure a rapidly-spreading plague.
Gregor and the Curse of the Warmbloods has been published as stand-alone hardcovers and paperbacks, as well as part of a boxed set. It was released as an audiobook on December 13, 2005, read by Paul Boehmer. In August 2010, it was released in ebook form. It has been lauded for "[addressing] a number of political issues ... in a manner accessible to upper elementary and middle school readers".
Development
Collins has listed two main sources of influence in her writing of The Underland Chronicles. First is her M.F.A. in dramatic writing and her experience as a screenwriter. This writing experience resulted in her structuring books "like a three-act play", and paying close attention to the plot's pacing. Gregor and the Curse of the Warmbloods came third in "a series of narratives that are interrelated yet can stand on their own", a fact not missed by reviewers. Collins' other source of inspiration was her father Michael Collins, a lieutenant colonel in the United States Air Force, who provided her with advice about the war tactics used in her books, and also instilled in her an "impulse to educate young people about the realities of war".
Plot summary
Despite the difficulties it has caused for his family, Gregor finds it hard to distance himself from the Underland. When he receives word that a plague has broken out and his bond Ares is one of the victims, he heads down to help with yet another of Bartholomew of Sandwich's prophecies. His mother, however, hates the Underland and only allows Boots
|
https://en.wikipedia.org/wiki/Specific%20physical%20preparedness
|
Specific Physical Preparedness (abbreviated SPP), also referred to as sport-specific physical preparedness, is the state of being prepared for the movements in a specific activity (usually a sport).
Specific training includes movements specific to a sport that can only be learned through repetition of those movements. For instance, shooting a free throw, running a marathon, and performing a handstand all require dedicated work on those skills. An SPP phase generally follows a phase of General Physical Preparedness, or GPP, which lays out an athletic base from which to build.
Related movements that mimic certain aspects of the movement which can be specialized in and put together to form it are also part of specific training.
External links
Clubbell mention of SPP
Exercise physiology
|
https://en.wikipedia.org/wiki/General%20physical%20preparedness
|
General Physical Preparation, also known as GPP, lays the groundwork for later Specific Physical Preparation, or SPP. In the GPP phase, athletes work on general conditioning to improve strength, speed, endurance, flexibility, structure and skill. GPP is generally performed in the off-season, with a lower level of GPP-maintenance during the season, when SPP is being pursued. GPP helps prevent imbalances and boredom with both specific and non-specific exercises by conditioning the body to work.
Purpose
GPP is the initial stage of training. It starts every cycle of training from the macro-, meso- and microcycle after restoration and recovery. It consists primarily of general preparatory and some specialized conditioning exercises to work all the major muscles and joints. This preparation prepares the athlete for the more intense training such as explosive plyometrics. This period is also used for rehabilitation of injured muscles and joints, strengthening or bringing up to par the lagging muscles and improvement of technique.
Specific example
For high-level and elite athletes in endurance sports such as cycling or long-distance trail running, GPP accounts for roughly 70–80% of training time, through Long Slow Distance.
|
https://en.wikipedia.org/wiki/Ecoprovince
|
An ecoprovince is a biogeographic unit smaller than an ecozone that contains one or more ecoregions. According to Demarchi (1996), an ecoprovince encompasses areas of uniform climate, geological history and physiography (i.e. mountain ranges, large valleys, plateaus). Their size and broad internal uniformity make them ideal units for the implementation of natural resource policies.
See also
Bioregion
Ecological land classification
|
https://en.wikipedia.org/wiki/Microsoft%20Exchange%20Hosted%20Services
|
Microsoft Exchange Hosted Services, also known as FrontBridge, is an email filtering system owned by Microsoft. It was acquired in 2005 from Frontbridge Inc. FrontBridge Technologies began in 2000 as Bigfish Communications in Marina del Rey, California. The service is sold directly and through partnership with Sprint Nextel.
On 30 March 2006, Microsoft announced new branding, a new licensing model and the road map for Microsoft Exchange Hosted Services (EHS), formerly known as FrontBridge Technologies Inc. With Microsoft Exchange Hosted Services (EHS), four new products were introduced.
EHS Filtering
The Filter was to actively help protect inbound and outbound e-mail from spam, viruses, phishing scams and e-mail policy violations.
EHS Archive
Message archiving system for e-mail and instant messages.
EHS Continuity
Security-enhanced Web interface that allowed ongoing access to e-mail during and after unplanned outages of an on-premises e-mail environment.
EHS Encryption
Preserve e-mail confidentiality by allowing users to send and receive encrypted e-mail
See also
Microsoft Forefront Online Protection for Exchange
Hosted desktop
|
https://en.wikipedia.org/wiki/First-order%20hold
|
First-order hold (FOH) is a mathematical model of the practical reconstruction of sampled signals that could be done by a conventional digital-to-analog converter (DAC) and an analog circuit called an integrator. For FOH, the signal is reconstructed as a piecewise linear approximation to the original signal that was sampled. A mathematical model such as FOH (or, more commonly, the zero-order hold) is necessary because, in the sampling and reconstruction theorem, a sequence of Dirac impulses, xs(t), representing the discrete samples, x(nT), is low-pass filtered to recover the original signal that was sampled, x(t). However, outputting a sequence of Dirac impulses is impractical. Devices can be implemented, using a conventional DAC and some linear analog circuitry, to reconstruct the piecewise linear output for either predictive or delayed FOH.
Even though this is not what is physically done, an identical output can be generated by applying the hypothetical sequence of Dirac impulses, xs(t), to a linear time-invariant system, otherwise known as a linear filter with such characteristics (which, for an LTI system, are fully described by the impulse response) so that each input impulse results in the correct piecewise linear function in the output.
Basic first-order hold
First-order hold is the hypothetical filter or LTI system that converts the ideally sampled signal
x_s(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \delta(t - nT)
to the piecewise linear signal

x_{\mathrm{FOH}}(t) = \sum_{n=-\infty}^{\infty} x(nT)\, \mathrm{tri}\!\left( \frac{t - nT}{T} \right),

resulting in an effective impulse response of

h_{\mathrm{FOH}}(t) = \frac{1}{T}\, \mathrm{tri}\!\left( \frac{t}{T} \right),

where tri(·) is the triangular function.
The effective frequency response is the continuous Fourier transform of the impulse response.
H_{\mathrm{FOH}}(f) = \mathcal{F}\{ h_{\mathrm{FOH}}(t) \} = \frac{1}{T} \int_{-\infty}^{\infty} \mathrm{tri}\!\left( \frac{t}{T} \right) e^{-i 2 \pi f t}\, dt = \mathrm{sinc}^2(fT)
where sinc(x) = sin(πx)/(πx) is the normalized sinc function.
The Laplace transform transfer function of FOH is found by substituting s = i 2 π f:
H_{\mathrm{FOH}}(s) = \mathcal{L}\{ h_{\mathrm{FOH}}(t) \} = e^{sT} \left( \frac{1 - e^{-sT}}{sT} \right)^2
This is an acausal system in that the linear interpolation function moves toward the value of the next sample before such sample is applied to the hypothetical FOH filter.
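As a numerical illustration of the reconstruction formula above (a sketch assuming NumPy; the sample values are arbitrary), summing shifted triangular pulses reproduces the samples exactly at the sample instants and interpolates linearly in between:

import numpy as np

T = 0.25                                   # sampling period
n = np.arange(0, 9)
samples = np.sin(2 * np.pi * 0.4 * n * T)  # x(nT) for an example signal

def tri(t):
    """Unit triangular function: 1 - |t| on [-1, 1], zero elsewhere."""
    return np.clip(1 - np.abs(t), 0, None)

t_fine = np.linspace(0, n[-1] * T, 201)
x_foh = sum(x_n * tri((t_fine - k * T) / T) for k, x_n in zip(n, samples))

# At the sample instants the FOH output equals the samples exactly;
# between them it interpolates linearly.
print(np.allclose(x_foh[::25], samples))   # t_fine[::25] are exactly the sample instants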
Delayed first-order ho
|
https://en.wikipedia.org/wiki/Isomorph
|
An isomorph is an organism that does not change in shape during growth. The implication is that its volume is proportional to its cubed length, and its surface area to its squared length. This holds for any shape it might have; the actual shape determines the proportionality constants.
The reason why the concept is important in the context of the Dynamic Energy Budget (DEB) theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. Since volume grows faster than surface area, this controls the ultimate size of the organism. Alfred Russel Wallace wrote this in a letter to E. B. Poulton in 1865. The surface area that is of importance is the part that is involved in substrate uptake (e.g. the gut surface), which is typically a fixed fraction of the total surface area in an isomorph. The DEB theory explains why isomorphs grow according to the von Bertalanffy curve if food availability is constant.
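As a small illustration of that statement (with invented parameter values), the von Bertalanffy curve has length approaching an ultimate value L_inf:

import math

L_inf = 30.0   # ultimate length (e.g. cm); example value
L_0 = 2.0      # length at birth; example value
r_B = 0.15     # von Bertalanffy growth rate (per unit time); example value

def length(t):
    """L(t) = L_inf - (L_inf - L_0) * exp(-r_B * t)."""
    return L_inf - (L_inf - L_0) * math.exp(-r_B * t)

for t in (0, 5, 10, 20, 40):
    print("t = %4.1f  L = %.2f" % (t, length(t)))
# Length approaches L_inf: surface-area-limited intake eventually just covers
# volume-proportional maintenance, which is the size control described above.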
Organisms can also change in shape during growth, which affects the growth curve and the ultimate size; see for instance V0-morphs and V1-morphs. Isomorphs can also be called V2/3-morphs.
Most animals approximate isomorphy, but plants in a vegetation typically start as V1-morphs, then convert to isomorphs, and end up as V0-morphs (if neighbouring plants affect their uptake).
See also
Dynamic energy budget
V0-morph
V1-morph
shape correction function
|
https://en.wikipedia.org/wiki/Definable%20set
|
In mathematical logic, a definable set is an n-ary relation on the domain of a structure whose elements satisfy some formula in the first-order language of that structure. A set can be defined with or without parameters, which are elements of the domain that can be referenced in the formula defining the relation.
Definition
Let ℒ be a first-order language, ℳ an ℒ-structure with domain M, X a fixed subset of M, and m a natural number. Then:
A set A ⊆ M^m is definable in ℳ with parameters from X if and only if there exists a formula φ[x_1, …, x_m, y_1, …, y_n] and elements b_1, …, b_n ∈ X such that for all a_1, …, a_m ∈ M,

(a_1, …, a_m) ∈ A if and only if ℳ ⊨ φ[a_1, …, a_m, b_1, …, b_n].
The bracket notation here indicates the semantic evaluation of the free variables in the formula.
A set A is definable in ℳ without parameters if it is definable in ℳ with parameters from the empty set (that is, with no parameters in the defining formula).
A function is definable in ℳ (with parameters) if its graph is definable (with those parameters) in ℳ.
An element a is definable in ℳ (with parameters) if the singleton set {a} is definable in ℳ (with those parameters).
Examples
The natural numbers with only the order relation
Let ℳ = (ℕ, <) be the structure consisting of the natural numbers with the usual ordering. Then every natural number is definable in ℳ without parameters. The number 0 is defined by the formula φ_0(x) stating that there exist no elements less than x:

φ_0 = ¬∃y (y < x),

and a natural number n > 0 is defined by the formula φ_n(x) stating that there exist exactly n elements less than x:

φ_n = ∃y_1 ⋯ ∃y_n ( y_1 < y_2 ∧ ⋯ ∧ y_{n−1} < y_n ∧ y_n < x ∧ ∀z (z < x → (z = y_1 ∨ ⋯ ∨ z = y_n)) ).
In contrast, one cannot define any specific integer without parameters in the structure consisting of the integers with the usual ordering (see the section on automorphisms below).
The natural numbers with their arithmetical operations
Let be the first-order structure consisting of the natural numbers and their usual arithmetic operations and order relation. The sets definable in this structure are known as the arithmetical sets, and are classified in the arithmetical hierarchy. If the structure is considered in second-order logic instead o
|
https://en.wikipedia.org/wiki/Logic%20Control
|
Logic Control is a control surface originally designed by Emagic in cooperation with Mackie.
History
Logic Control was designed by Emagic as a dedicated control surface for their Logic digital audio workstation software. It was manufactured by Mackie, but distributed by Emagic.
About 6 months later, Mackie introduced a physically identical product called "Mackie Control" which included support for most major DAW applications, but not Logic. The Emagic Logic Control was still available and would only work with Logic.
Later, Mackie Control's firmware was revised to include compatibility with Logic, combining together Mackie Control, Logic Control and Human User Interface (HUI) into a single protocol. As a result, the name was changed to "Mackie Control Universal" (MCU). Out of the box, MCU included Lexan overlays with different button legends to support control of other DAWs such as Pro Tools and Cubase.
Description
Logic Control (and now MCU) allows control of almost all Logic parameters with hardware faders, buttons and "V-Pots" (rotary knobs). Its touch-sensitive, motorized faders react to track automation. All transport functions and wheel scrubbing are also available. The unit also controls plug-in parameters. Visual feedback including current parameters being edited, parameter values, project location (SMPTE time code or bars/beats/divisions/ticks) are conveyed by a two-line LCD and red 7-segment LED displays.
See also
Logic Pro
Mackie
|
https://en.wikipedia.org/wiki/V1-morph
|
A V1-morph is an organism that changes in shape during growth such that its surface area is proportional to its volume. In most cases, both volume and surface area are proportional to length.
The reason the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake. Since uptake is proportional to maintenance for V1-morphs, there is no size control, and an organism grows exponentially at constant food (substrate) availability.
Filaments, such as fungi that form hyphae growing in length, but not in diameter, are examples of V1-morphs. Sheets that extend, but do not change in thickness, like some colonial bacteria and algae, are another example.
An important property of V1-morphs is that the distinction between the individual and the population level disappears; a single long filament grows as fast as many small ones of the same diameter and the same total length.
See also
Dynamic Energy Budget
V0-morph
isomorph
shape correction function
|
https://en.wikipedia.org/wiki/V0-morph
|
A V0-morph is an organism whose surface area remains constant as the organism grows.
The reason why the concept is important in the context of the Dynamic Energy Budget theory is that food (substrate) uptake is proportional to surface area, and maintenance to volume. The surface area that is of importance is that part that is involved in substrate uptake.
Biofilms on a flat solid substrate are examples of V0-morphs; they grow in thickness, but not in surface area that is involved in nutrient exchange. Other examples are dinophyta and diatoms that have a cell wall that does not change during the cell cycle. During cell-growth, when the amounts of protein and carbohydrates increase, the vacuole shrinks. The outer membrane that is involved in nutrient uptake remains constant. At cell division, the daughter cells rapidly take up water, complete a new cell wall and the cycle repeats.
Rods (bacteria that have the shape of a rod and grow in length, but not in diameter) are a static mixture between a V0- and a V1-morph, where the caps act as V0-morphs and the cylinder between the caps as V1-morph. The mixture is called static because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function are constant during growth.
Crusts, such as lichens that grow on a solid substrate, are a dynamic mixture between a V0- and a V1-morph, where the inner part acts as V0-morph, and the outer annulus as V1-morph. The mixture is called dynamic because the weight coefficients of the contributions of the V0- and V1-morph terms in the shape correction function change during growth. The Dynamic Energy Budget theory explains why the diameter of crusts grow linearly in time at constant substrate availability.
|
https://en.wikipedia.org/wiki/GRE%20Biochemistry%2C%20Cell%20and%20Molecular%20Biology%20Test
|
GRE Subject Biochemistry, Cell and Molecular Biology was a standardized exam provided by ETS (Educational Testing Service) that was discontinued in December 2016. It was a paper-based exam with no computer-based versions. ETS administered this exam three times per year: once in April, once in October and once in November. Some graduate programs in the United States recommended taking this exam, while others required this exam score as a part of the application to their graduate programs. ETS sent a bulletin with a sample practice test to each candidate after registration for the exam. There were 180 questions within the biochemistry subject test.
Scores are scaled and then reported as a number between 200 and 990; however, in recent versions of the test, the maximum and minimum reported scores have been 760 (corresponding to the 99 percentile) and 320 (1 percentile) respectively. The mean score for all test takers from July, 2009, to July, 2012, was 526 with a standard deviation of 95.
After learning that test content from editions of the GRE® Biochemistry, Cell and Molecular Biology (BCM) Test has been compromised in Israel, ETS made the decision not to administer this test worldwide in 2016–17.
Content specification
Since many students who apply to graduate programs in biochemistry do so during the first half of their fourth year, the scope of most questions is largely that of the first three years of a standard American undergraduate biochemistry curriculum. A sampling of test item content is given below:
Biochemistry (36%)
A Chemical and Physical Foundations
Thermodynamics and kinetics
Redox states
Water, pH, acid-base reactions and buffers
Solutions and equilibria
Solute-solvent interactions
Chemical interactions and bonding
Chemical reaction mechanisms
B Structural Biology: Structure, Assembly, Organization and Dynamics
Small molecules
Macromolecules (e.g., nucleic acids, polysaccharides, proteins and complex lipids)
Supramolecular complexes (e.g.
|
https://en.wikipedia.org/wiki/Terricolous%20lichen
|
A terricolous lichen is a lichen that grows on the soil as a substrate. An example is some members of the genus Peltigera.
|
https://en.wikipedia.org/wiki/Indefinite%20inner%20product%20space
|
In mathematics, in the field of functional analysis, an indefinite inner product space

(K, ⟨·, ·⟩, J)

is an infinite-dimensional complex vector space K equipped with both an indefinite inner product

⟨·, ·⟩

and a positive semi-definite inner product

(x, y) ≡ ⟨x, J y⟩,

where the metric operator J is an endomorphism of K obeying

J³ = J.
The indefinite inner product space itself is not necessarily a Hilbert space; but the existence of a positive semi-definite inner product on implies that one can form a quotient space on which there is a positive definite inner product. Given a strong enough topology on this quotient space, it has the structure of a Hilbert space, and many objects of interest in typical applications fall into this quotient space.
An indefinite inner product space is called a Krein space (or J-space) if (·, ·) is positive definite and possesses a majorant topology. Krein spaces are named in honor of the Soviet mathematician Mark Grigorievich Krein (3 April 1907 – 17 October 1989).
Inner products and the metric operator
Consider a complex vector space K equipped with an indefinite hermitian form ⟨·, ·⟩. In the theory of Krein spaces it is common to call such an hermitian form an indefinite inner product. The following subsets are defined in terms of the square norm induced by the indefinite inner product:

K₀ = {x ∈ K : ⟨x, x⟩ = 0}  ("neutral")
K₊₊ = {x ∈ K : ⟨x, x⟩ > 0}  ("positive")
K₋₋ = {x ∈ K : ⟨x, x⟩ < 0}  ("negative")
K₊ = {x ∈ K : ⟨x, x⟩ ≥ 0}  ("non-negative")
K₋ = {x ∈ K : ⟨x, x⟩ ≤ 0}  ("non-positive")
A subspace lying within K₀ is called a neutral subspace. Similarly, a subspace lying within K₊ (K₋) is called positive (negative) semi-definite, and a subspace lying within K₊₊ ∪ {0} (K₋₋ ∪ {0}) is called positive (negative) definite. A subspace in any of the above categories may be called semi-definite, and any subspace that is not semi-definite is called indefinite.
Let our indefinite inner product space also be equipped with a decomposition K = K₊ ⊕ K₋ into a pair of subspaces, called the fundamental decomposition, which respects the complex structure on K. Hence the corresponding linear projection operators coincide with the identity on K₊ and K₋ respectively and annihilate
|
https://en.wikipedia.org/wiki/Instability%20strip
|
The unqualified term instability strip usually refers to a region of the Hertzsprung–Russell diagram largely occupied by several related classes of pulsating variable stars: Delta Scuti variables, SX Phoenicis variables, and rapidly oscillating Ap stars (roAps) near the main sequence; RR Lyrae variables where it intersects the horizontal branch; and the Cepheid variables where it crosses the supergiants.
RV Tauri variables are also often considered to lie on the instability strip, occupying the area to the right of the brighter Cepheids (at lower temperatures), since their stellar pulsations are attributed to the same mechanism.
Position on the HR diagram
The Hertzsprung–Russell diagram plots the real luminosity of stars against their effective temperature (their color, given by the temperature of their photosphere). The instability strip intersects the main sequence (the prominent diagonal band that runs from the upper left to the lower right) in the region of A and F stars (1–2 solar masses) and extends to G and early K bright supergiants (early M if RV Tauri stars at minimum are included). Above the main sequence, the vast majority of stars in the instability strip are variable. Where the instability strip intersects the main sequence, the vast majority of stars are stable, but there are some variables, including the roAp stars.
Pulsations
Stars in the instability strip pulsate due to He III (doubly ionized helium), in a process based on the Kappa–mechanism. In normal A-F-G class stars, He in the stellar photosphere is neutral. Deeper below the photosphere, where the temperature reaches 25,000–, begins the He II layer (first He ionization). Second ionization of helium (He III) starts at depths where the temperature is 35,000–.
When the star contracts, the density and temperature of the He II layer increases. The increased energy is sufficient to remove the lone remaining electron in the He II, transforming it into He III (second ionization). This causes
|
https://en.wikipedia.org/wiki/Hertzsprung%20gap
|
The Hertzsprung gap is a feature of the Hertzsprung–Russell diagram for a star cluster. This diagram is a plot of effective temperature versus luminosity for a population of stars. The gap is named after Ejnar Hertzsprung, who first noticed the absence of stars in the region of the Hertzsprung–Russell diagram between A5 and G0 spectral type and between +1 and −3 absolute magnitudes. This gap lies between the top of the main sequence and the base of the red giants for stars above roughly 1.5 solar masses. When a star crosses the Hertzsprung gap during its evolution, it has finished core hydrogen burning.
Stars do exist in the Hertzsprung gap region, but because they move through this section of the Hertzsprung–Russell diagram very quickly in comparison to the lifetime of the star (thousands of years, compared to millions or billions of years for the lifetime of the star), that portion of the diagram is less densely populated.
Full Hertzsprung–Russell diagrams of the 11,000 Hipparcos mission targets show a handful of stars in that region.
See also
Subgiant
|
https://en.wikipedia.org/wiki/NeXTdimension
|
The NeXTdimension (ND) is an accelerated 32-bit color board manufactured and sold by NeXT from 1991 that gave the NeXTcube color capabilities with PostScript planned. The NeXTBus (NuBus-like) card was a full size card for the NeXTcube, filling one of four slots, another one being filled with the main board itself. The NeXTdimension featured S-Video input and output, RGB output, an Intel i860 64-bit RISC processor at 33 MHz for Postscript acceleration, 8 MB main memory (expandable to 64 MB via eight 72-pin SIMM slots) and 4 MB VRAM for a resolution of 1120x832 at 24-bit color plus 8-bit alpha channel. An onboard C-Cube CL550 chip for MJPEG video compression was announced, but never shipped. A handful of engineering prototypes for the MJPEG daughterboard exist.
A stripped down Mach kernel was used as the operating system for the card. Due to the supporting processor, 32-bit color on the NeXTdimension was faster than 2-bit greyscale Display PostScript on the NeXTcube. Display PostScript never actually ran on the board so the Intel i860 never did much more than move blocks of color data around. The Motorola 68040 did the crunching and the board, while fast for its time, never lived up to the hype. Since the main board always included the greyscale video logic, each NeXTdimension allowed the simultaneous use of an additional monitor. List price for a NeXTdimension sold as an add-on to the NeXTcube was , and for the MegaPixel Color Display.
See also
NeXT character set
NeXTcube
|
https://en.wikipedia.org/wiki/Universal%20wavefunction
|
The universal wavefunction or the wavefunction of the universe is the wavefunction or quantum state of the entire universe. It is regarded as the basic physical entity in the many-worlds interpretation of quantum mechanics, and finds applications in quantum cosmology. It evolves deterministically according to a wave equation.
The concept of universal wavefunction was introduced by Hugh Everett in his 1956 PhD thesis draft The Theory of the Universal Wave Function. It later received investigation from James Hartle and Stephen Hawking, who derived the Hartle–Hawking solution to the Wheeler–DeWitt equation to explain the initial conditions of the Big Bang cosmology.
Role of observers
Hugh Everett's universal wavefunction supports the idea that observed and observer are all mixed together:
Eugene Wigner and John Archibald Wheeler take issue with this stance. Wigner writes
Wheeler says:
See also
Heisenberg cut
|
https://en.wikipedia.org/wiki/Capital%20recovery%20factor
|
A capital recovery factor is the ratio of a constant annuity to the present value of receiving that annuity for a given length of time. Using an interest rate i, the capital recovery factor is:

CRF = \frac{i (1 + i)^n}{(1 + i)^n - 1},

where n is the number of annuities received.
This is related to the annuity formula, which gives the present value in terms of the annuity, the interest rate, and the number of annuities.
If n = 1, the CRF reduces to 1 + i. Also, as n approaches infinity, the CRF approaches i.
Example
With an interest rate of i = 10%, and n = 10 years, the CRF = 0.163. This means that a loan of $1,000 at 10% interest will be paid back with 10 annual payments of $163.
Another reading that can be obtained is that the net present value of 10 annual payments of $163 at 10% discount rate is $1,000.
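A short sketch reproducing the example above (an illustration of the formula, not a financial library):

def capital_recovery_factor(i, n):
    """CRF = i * (1 + i)**n / ((1 + i)**n - 1)."""
    return i * (1 + i) ** n / ((1 + i) ** n - 1)

i, n = 0.10, 10
crf = capital_recovery_factor(i, n)
print("CRF = %.3f" % crf)                                  # ~0.163
print("annual payment on $1,000: $%.0f" % (1000 * crf))    # ~$163

# Sanity check: the present value of those 10 payments at 10% is the loan amount.
pv = sum(1000 * crf / (1 + i) ** k for k in range(1, n + 1))
print("present value = $%.2f" % pv)                        # ~$1,000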
|
https://en.wikipedia.org/wiki/Sampson%20%28horse%29
|
Sampson (later renamed Mammoth) was a Shire horse gelding born in 1846 and bred by Thomas Cleaver at Toddington Mills, Bedfordshire, England. According to Guinness World Records (1986) he was the tallest horse ever recorded, by 1850 measuring 21.25 hands in height. His peak weight was estimated at
See also
List of historical horses
|
https://en.wikipedia.org/wiki/Morse%20code%20mnemonics
|
Morse code mnemonics are systems to represent the sound of Morse characters in a way intended to be easy to remember. Since every one of these mnemonics requires a two-step mental translation between sound and character, none of these systems are useful for using manual Morse at practical speeds. Amateur radio clubs can provide resources to learn Morse code.
Cross-linguistic
Visual mnemonic
Visual mnemonic charts have been devised over the ages. Baden-Powell included one in the Girl Guides handbook in 1918.
Here is a more up-to-date version, ca. 1988:
Other visual mnemonic systems have been created for Morse code, mapping the elements of the Morse code characters onto pictures for easy memorization. For instance, "R" () might be represented as a "racecar" seen in a profile view, with the two wheels of the racecar being the dits and the body being the dah.
English
Syllabic mnemonics
Syllabic mnemonics are based on the principle of associating a word or phrase to each Morse code letter, with stressed syllables standing for a dah and unstressed ones for a dit. There is no well-known complete set of syllabic mnemonics for English, but various mnemonics do exist for individual letters.
Slavic languages
In Czech, the mnemonic device to remember Morse codes lies in remembering words that begin with each appropriate letter and has so called long vowel (i.e. á é í ó ú ý) for every dash and short vowel (a e i o u y) for every dot. Additionally, some other theme-related sets of words have been thought out as Czech folklore.
In Polish, which does not distinguish long and short vowels, Morse mnemonics are also words or short phrases that begin with each appropriate letter, but dash is coded as a syllable containing an "o" (or "ó"), while a syllable containing another vowel codes for dot. For some letters, multiple mnemonics are in use; the table shows one example.
Hebrew
Invented in 1922 by Zalman Cohen, a communication soldier in the Haganah organization.
Indone
|
https://en.wikipedia.org/wiki/CygnusEd
|
CygnusEd is a text editor for the AmigaOS and MorphOS. It was first developed in 1986-1987 by Bruce Dawson, Colin Fox and Steve LaRocque who were working for CygnusSoft Software. It was the first Amiga text editor with an undo/redo feature and one of the first Amiga programs that had an AREXX scripting port by which it was possible to integrate the editor with AREXX enabled C compilers and build a semi-integrated development environment. Many Amiga programmers grew up with CygnusEd and a considerable part of the Amiga software library was created with CygnusEd. It is still one of very few text editors that support jerkyless soft scrolling.
It remained popular even after Commodore's bankruptcy in 1994. In 1997, version 4 was developed by Olaf Barthel; it was ported to MorphOS by Ralph Schmidt in 2000 and made available to users who owned the original CygnusEd 4 CD-ROM. In 2007, Olaf Barthel completed version 5, which runs natively on AmigaOS 2 and AmigaOS 4.
|
https://en.wikipedia.org/wiki/Witt%20vector
|
In mathematics, a Witt vector is an infinite sequence of elements of a commutative ring. Ernst Witt showed how to put a ring structure on the set of Witt vectors, in such a way that the ring of Witt vectors over the finite field of order p is isomorphic to Z_p, the ring of p-adic integers. They have a highly non-intuitive structure upon first glance because their additive and multiplicative structure depends on an infinite set of recursive formulas which do not behave like addition and multiplication formulas for standard p-adic integers.
The main idea behind Witt vectors is that, instead of using the standard $p$-adic expansion
$$a = a_0 + a_1 p + a_2 p^2 + \cdots, \qquad a_i \in \{0, 1, \ldots, p-1\},$$
to represent an element in $\mathbb{Z}_p$, we can instead consider an expansion using the Teichmüller character
$$\omega : \mathbb{F}_p^\times \to \mathbb{Z}_p^\times,$$
which sends each element in the solution set of $x^{p-1} - 1 = 0$ in $\mathbb{F}_p$ to an element in the solution set of $x^{p-1} - 1 = 0$ in $\mathbb{Z}_p$. That is, we expand out elements in $\mathbb{Z}_p$ in terms of roots of unity instead of as profinite elements in $\varprojlim \mathbb{Z}/p^n$. We can then express a $p$-adic integer as an infinite sum
$$a = \omega(a_0) + \omega(a_1)\,p + \omega(a_2)\,p^2 + \cdots,$$
which gives a Witt vector
$$(a_0, a_1, a_2, \ldots).$$
Then, the non-trivial additive and multiplicative structure on Witt vectors comes from using this map to give the set of Witt vectors an additive and multiplicative structure such that the correspondence $a \mapsto (a_0, a_1, a_2, \ldots)$ induces a commutative ring morphism.
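As a small computational aside (not from the article; the helper names are invented), the Teichmüller representative of a residue can be obtained by iterating the $p$-th power map, which makes it easy to expand an ordinary integer in the form above modulo $p^k$:

# Illustrative sketch: Teichmüller digits of n modulo p**k.  The Teichmüller
# lift of a residue a mod p is the unique (p-1)-st root of unity (or 0) that
# is congruent to a mod p; iterating x -> x**p modulo p**k converges to it.
def teichmuller_lift(a, p, k):
    x = a % p ** k
    for _ in range(k):               # k Frobenius iterations suffice mod p**k
        x = pow(x, p, p ** k)
    return x

def teichmuller_digits(n, p, k):
    """Return [a_0, ..., a_{k-1}] with n = sum_i omega(a_i) * p**i (mod p**k)."""
    digits, rest, prec = [], n % p ** k, k
    for _ in range(k):
        a = rest % p
        digits.append(a)
        lift = teichmuller_lift(a, p, prec)
        rest = ((rest - lift) % p ** prec) // p   # exact: rest ≡ lift (mod p)
        prec -= 1
    return digits

print(teichmuller_digits(7, 5, 3))   # Teichmüller digits of 7 in Z/125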
History
In the 19th century, Ernst Eduard Kummer studied cyclic extensions of fields as part of his work on Fermat's Last Theorem. This led to the subject now known as Kummer theory. Let $k$ be a field containing a primitive $n$-th root of unity. Kummer theory classifies degree $n$ cyclic field extensions of $k$. Such fields are in bijection with order $n$ cyclic groups $\Delta \subseteq k^\times/(k^\times)^n$, where $\Delta$ corresponds to $k(\sqrt[n]{\Delta})$.
But suppose that $k$ has characteristic $p$. The problem of studying degree $p$ extensions of $k$, or more generally degree $p^n$ extensions, may appear superficially similar to Kummer theory. However, in this situation, $k$ cannot contain a primitive $p$-th root of unity. Indeed, if $x$ is a $p$-th root of unity in $k$, then it satisfies $x^p = 1$. But consider the expression $(x - 1)^p$. By expanding using binomial coefficients we see that the operation of ra
|
https://en.wikipedia.org/wiki/Carboxypeptidase
|
A carboxypeptidase (EC number 3.4.16 - 3.4.18) is a protease enzyme that hydrolyzes (cleaves) a peptide bond at the carboxy-terminal (C-terminal) end of a protein or peptide. This is in contrast to aminopeptidases, which cleave peptide bonds at the N-terminus of proteins. Humans, animals, bacteria and plants contain several types of carboxypeptidases that have diverse functions ranging from catabolism to protein maturation. At least two mechanisms have been discussed.
Functions
Initial studies on carboxypeptidases focused on pancreatic carboxypeptidases A1, A2, and B in the digestion of food. Most carboxypeptidases are not, however, involved in catabolism. Instead, they help to mature proteins, for example through post-translational modification, and they regulate biological processes: the biosynthesis of neuroendocrine peptides such as insulin, for instance, requires a carboxypeptidase. Carboxypeptidases also function in blood clotting, growth factor production, wound healing, reproduction, and many other processes.
Mechanism
Carboxypeptidases hydrolyze peptides at the first amide or polypeptide bond on the C-terminal end of the chain. Carboxypeptidases act by replacing the substrate water with a carbonyl (C=O) group. The carboxypeptidase A hydrolysis reaction has two mechanistic hypotheses, via a nucleophilic water and via an anhydride.
In the first proposed mechanism, a promoted-water pathway is favoured as Glu270 deprotonates the nucleophilic water. The Zn2+ ion, along with positively charged residues, decreases the pKa of the bound water to approximately 7. Glu 270 has a dual role in this mechanism as it acts as a base to allow for the attack at the amide carbonyl group during nucleophilic addition. It acts as an acid during elimination when the water proton is transferred to the leaving nitrogen group. The oxygen on the amide carbonyl group does not coordinate to the Zn2+ until the addition of the water. The deprotonation of the Zn2+ coordinated water by Glu 270 pro
|
https://en.wikipedia.org/wiki/Pancreatic%20elastase
|
Pancreatic elastase is a form of elastase that is produced in the acinar cells of the pancreas, initially produced as an inactive zymogen and later activated in the duodenum by trypsin. Elastases form a subfamily of serine proteases, characterized by a distinctive structure consisting of two beta barrel domains converging at the active site, that hydrolyze amides and esters in many proteins in addition to elastin, a protein of connective tissue that helps hold organs together. Pancreatic elastase 1 is a serine endopeptidase, a specific type of protease that has the amino acid serine at its active site. Although the recommended name is pancreatic elastase, it can also be referred to as elastase-1, pancreatopeptidase, PE, or serine elastase.
The first isozyme, pancreatic elastase 1, was initially thought to be expressed in the pancreas. However, it was later discovered that it was the only chymotrypsin-like elastase that was not expressed in the pancreas. In fact, pancreatic elastase 1 is expressed in the basal layers of the epidermis (at the protein level). Hence pancreatic elastase 1 has been renamed elastase 1 (ELA1) or chymotrypsin-like elastase family, member 1 (CELA1). For a period of time, it was thought that ELA1/CELA1 was not transcribed into a protein. However, it was later discovered that it is expressed in skin keratinocytes.
Clinical literature that describes human elastase 1 activity in the pancreas or fecal material is actually referring to chymotrypsin-like elastase family, member 3B (CELA3B).
Structure
Pancreatic elastase is a compact globular protein with a hydrophobic core. The enzyme is formed by three subunits, each of which binds one calcium ion as a cofactor. There are three important metal-binding sites at amino acids 77, 82, and 87. The catalytic triad, located in the active site, is formed by three hydrogen-bonded amino acid residues (H71, D119, S214) and plays an essential role in the cleaving ability of all proteases. It is composed of a single peptide chai
|
https://en.wikipedia.org/wiki/GPS%C2%B7C
|
GPS·C (GPS Correction) was a Differential GPS data source for most of Canada maintained by the Canadian Active Control System, part of Natural Resources Canada. When used with an appropriate receiver, GPS·C improved real-time accuracy to about 1–2 meters, from a nominal 15 m accuracy.
Real-time data was collected at fourteen permanent ground stations spread across Canada, and forwarded to the central station, "NRC1", in Ottawa for processing.
A visit to the external webpage for this service on 2011-11-04 found only a note saying that the service had been discontinued on 2011-04-01. That page links to a PDF of possible alternatives.
CDGPS
GPS·C information was broadcast Canada-wide on MSAT by the Canada-Wide DGPS Correction Service (CDGPS). CDGPS required a separate MSAT receiver, which output correction information in the RTCM format for input into any suitably equipped GPS receiver. The need for a separate receiver made it less cost-effective than solutions like WAAS or StarFire, which receive their correction information using the same antenna and receiver.
Shutdown
On April 9, 2010, it was announced that the service would be discontinued by March 31, 2011.
The service was decommissioned on March 31, 2011 and finally terminated on April 1, 2011, at 9:00 EDT.
|
https://en.wikipedia.org/wiki/MSAT
|
MSAT (Mobile Satellite) is a satellite-based mobile telephony service developed by the National Research Council Canada (NRC). Supported by a number of companies in the US and Canada, MSAT hosts a number of services, including the broadcast of CDGPS signals. The MSAT satellites were built by Hughes (now owned by Boeing) with a 3 kilowatt solar array power capacity and sufficient fuel for a design life of twelve years. TMI Communications of Canada referred to its MSAT satellite as MSAT-1, while American Mobile Satellite Consortium (now Ligado Networks) referred to its MSAT as AMSC-1, with each satellite providing backup for the other.
History
April 7, 1995 - MSAT-2 (a.k.a. AMSC-1, COSPAR 1995-019A, SATCAT 23553) launched from Cape Canaveral, Launch Complex 36, Pad A, aboard Atlas IIA
May 1995 - testing causes overheating and damage to one of eight hybrid matrix amplifier output ports aboard MSAT-2
April 20, 1996 - MSAT-1 (sometimes AMSC 2, COSPAR 1996-022A, SATCAT 23846) launched from Kourou, French Guiana aboard Ariane 42P
May 15, 1996 - Reported failures of two solid state power amplifiers (SSPAs) and one L-band receiver on separate occasions aboard MSAT-2.
May 4, 2003 - MSAT-1 loses two power amplifiers.
Phaseout
MSAT-1 and MSAT-2 have had their share of problems. Mobile Satellite Ventures placed the AMSC-1 satellite into a 2.5 degree inclined orbit operations mode in November 2004, reducing station-keeping fuel usage and extending the satellite's useful life.
On January 11, 2006, Mobile Satellite Ventures (MSVLP) (which later changed its name to SkyTerra, became LightSquared by acquisition, and after bankruptcy became Ligado Networks) announced plans to launch a new generation of satellites (in a 3-satellite configuration) to replace the MSAT satellites by 2010. MSV said that all old MSAT gear would be compatible with the new satellites.
MSV-1 (U.S.)
MSV-2 (Canada)
MSV-SA (South America)
Services Delivered via MSAT
The following services are singularly dependent
|
https://en.wikipedia.org/wiki/Cathepsin%20C
|
Cathepsin C (CTSC) also known as dipeptidyl peptidase I (DPP-I) is a lysosomal exo-cysteine protease belonging to the peptidase C1 protein family, a subgroup of the cysteine cathepsins. In humans, it is encoded by the CTSC gene.
Function
Cathepsin C appears to be a central coordinator for activation of many serine proteases in immune/inflammatory cells.
Cathepsin C catalyses excision of dipeptides from the N-terminus of protein and peptide substrates, except if (i) the amino group of the N-terminus is blocked, (ii) the site of cleavage is on either side of a proline residue, (iii) the N-terminal residue is lysine or arginine, or (iv) the structure of the peptide or protein prevents further digestion from the N-terminus.
Structure
The cDNAs encoding rat, human, murine, bovine, dog and two Schistosome cathepsin Cs have been cloned and sequenced and show that the enzyme is highly conserved. The human and rat cathepsin C cDNAs encode precursors (prepro-cathepsin C) comprising signal peptides of 24 residues, pro-regions of 205 (rat cathepsin C) or 206 (human cathepsin C) residues and catalytic domains of 233 residues which contain the catalytic residues and are 30-40% identical to the mature amino acid sequences of papain and a number of other cathepsins, including cathepsins B, H, K, L, and S.
The translated prepro-cathepsin C is processed into the mature form by at least four cleavages of the polypeptide chain. The signal peptide is removed during translocation or secretion of the pro-enzyme (pro-cathepsin C) and a large N-terminal proregion fragment (also known as the exclusion domain), which is retained in the mature enzyme, is separated from the catalytic domain by excision of a minor C-terminal part of the pro-region, called the activation peptide. A heavy chain of about 164 residues and a light chain of about 69 residues are generated by cleavage of the catalytic domain.
Unlike the other members of the papain family, mature cathepsin C consists of four su
|
https://en.wikipedia.org/wiki/Parallel%20Virtual%20File%20System
|
The Parallel Virtual File System (PVFS) is an open-source parallel file system. A parallel file system is a type of distributed file system that distributes file data across multiple servers and provides for concurrent access by multiple tasks of a parallel application. PVFS was designed for use in large scale cluster computing and focuses on high performance access to large data sets. It consists of a server process and a client library, both of which are written entirely as user-level code. A Linux kernel module and pvfs-client process allow the file system to be mounted and used with standard utilities. The client library provides for high performance access via the message passing interface (MPI). PVFS is being jointly developed by The Parallel Architecture Research Laboratory at Clemson University, the Mathematics and Computer Science Division at Argonne National Laboratory, and the Ohio Supercomputer Center. PVFS development has been funded by NASA Goddard Space Flight Center, the DOE Office of Science Advanced Scientific Computing Research program, NSF PACI and HECURA programs, and other government and private agencies. PVFS is now known as OrangeFS in its newest development branch.
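As a hedged illustration of the kind of concurrent access such a file system serves (a generic MPI-IO sketch using mpi4py, not a PVFS-specific API; the file name and script name are placeholders), each rank of a parallel job can write its own slice of one shared file:

# Run with e.g.: mpiexec -n 4 python write_shared.py
from mpi4py import MPI
import numpy as np

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

# Each rank produces a block of data and writes it at a non-overlapping offset.
block = np.full(1024, rank, dtype=np.int32)
fh = MPI.File.Open(comm, "shared.dat", MPI.MODE_CREATE | MPI.MODE_WRONLY)
fh.Write_at_all(rank * block.nbytes, block)   # collective, concurrent write
fh.Close()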
History
PVFS was first developed in 1993 by Walt Ligon and Eric Blumer as a parallel file system for Parallel Virtual Machine (PVM) as part of a NASA grant to study the I/O patterns of parallel programs. PVFS version 0 was based on Vesta, a parallel file system developed at IBM T. J. Watson Research Center. Starting in 1994 Rob Ross re-wrote PVFS to use TCP/IP and departed from many of the original Vesta design points. PVFS version 1 was targeted to a cluster of DEC Alpha workstations networked using switched FDDI. Like Vesta, PVFS striped data across multiple servers and allowed I/O requests based on a file view that described a strided access pattern. Unlike Vesta, the striping and view were not dependent on a common record size. Ross' research focused on scheduling
|
https://en.wikipedia.org/wiki/Minix%203
|
Minix 3 is a small, Unix-like operating system. It is published under a BSD-3-Clause license and is a successor project to the earlier versions, Minix 1 and 2.
The project's main goal is for the system to be fault-tolerant by detecting and repairing its faults on the fly, with no user intervention. The main uses of the system are envisaged to be embedded systems and education.
Minix 3 supports IA-32 and ARM architecture processors. It can also run on emulators or virtual machines, such as Bochs, VMware Workstation, Microsoft Virtual PC, Oracle VirtualBox, and QEMU. A port to the PowerPC architecture is in development.
The distribution comes on a live CD and does not support live USB installation.
Minix 3 is believed to have inspired the Intel Management Engine (ME) OS found in Intel's Platform Controller Hub, starting with the introduction of ME 11, which is used with Skylake and Kaby Lake processors.
It was debated that Minix could have been the most widely used OS on x86/AMD64 processors, with more installations than Microsoft Windows, Linux, or macOS, because of its use in the Intel ME.
The project has been dormant since 2018, and the latest release is 3.4.0 rc6 from 2017, although the Minix 3 discussion group is still active.
Goals of the project
Reflecting on the nature of monolithic kernel based systems, where a driver (which has, according to Minix creator Tanenbaum, approximately 3–7 times as many bugs as a usual program) can bring down the whole system, Minix 3 aims to create an operating system that is a "reliable, self-healing, multiserver Unix clone".
To achieve that, the code running in kernel must be minimal, with the file server, process server, and each device driver running as separate user-mode processes. Each driver is carefully monitored by a part of the system named the reincarnation server. If a driver fails to respond to pings from this server, it is shut down and replaced by a fresh copy of the driver.
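The supervision pattern described above can be sketched in ordinary user-space Python (purely an analogy, not Minix code; the process name, message format, and timeouts are invented for the demo):

import multiprocessing as mp
import queue
import time

def driver(inbox, outbox):
    # A stand-in "driver" process that answers pings until it is killed.
    while True:
        if inbox.get() == "ping":
            outbox.put("pong")

def spawn(name):
    inbox, outbox = mp.Queue(), mp.Queue()
    proc = mp.Process(target=driver, args=(inbox, outbox), name=name, daemon=True)
    proc.start()
    return proc, inbox, outbox

if __name__ == "__main__":
    proc, inbox, outbox = spawn("disk-driver")
    for cycle in range(3):
        inbox.put("ping")
        try:
            outbox.get(timeout=1.0)      # driver answered: all is well
        except queue.Empty:              # no reply: replace it with a fresh copy
            proc.kill()
            proc, inbox, outbox = spawn("disk-driver")
            print("driver restarted")
        if cycle == 0:
            proc.kill()                  # simulate a crashed driver
        time.sleep(0.1)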
In a monolithic system, a bug i
|
https://en.wikipedia.org/wiki/WURFL
|
WURFL (Wireless Universal Resource FiLe) is a set of proprietary application programming interfaces (APIs) and an XML configuration file which contains information about device capabilities and features for a variety of mobile devices, focused on mobile device detection. Until version 2.2, WURFL was released under an "open source / public domain" license. Prior to version 2.2, device information was contributed by developers around the world and the WURFL was updated frequently, reflecting new wireless devices coming on the market. In June 2011, the founder of the WURFL project, Luca Passani, and Steve Kamerman, the author of Tera-WURFL, a popular PHP WURFL API, formed ScientiaMobile, Inc to provide commercial mobile device detection support and services using WURFL. As of August 30, 2011, the ScientiaMobile WURFL APIs are licensed under a dual-license model, using the AGPL license for non-commercial use and a proprietary commercial license. The current version of the WURFL database itself is no longer open source.
Solution approaches
There have been several approaches to this problem, including developing very primitive content and hoping it works on a variety of devices, limiting support to a small subset of devices or bypassing the browser solution altogether and developing a Java ME or BREW client application.
WURFL solves this by allowing development of content pages using abstractions of page elements (buttons, links and textboxes for example). At run time, these are converted to the appropriate, specific markup types for each device. In addition, the developer can specify other content decisions be made at runtime based on device specific capabilities and features (which are all in the WURFL).
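The capability/fall-back idea can be illustrated with a simplified, hypothetical XML layout (this is not the official WURFL schema or API, just a stand-in showing how a device record can inherit capabilities from a more generic one):

import xml.etree.ElementTree as ET

SAMPLE = """
<devices>
  <device id="generic" fall_back="">
    <capability name="xhtml_support_level" value="1"/>
  </device>
  <device id="acme_phone" fall_back="generic">
    <capability name="resolution_width" value="240"/>
  </device>
</devices>
"""

def load(xml_text):
    devices = {}
    for dev in ET.fromstring(xml_text).iter("device"):
        caps = {c.get("name"): c.get("value") for c in dev.iter("capability")}
        devices[dev.get("id")] = {"fall_back": dev.get("fall_back"), "caps": caps}
    return devices

def capability(devices, device_id, name):
    # Walk the fall_back chain until the capability is found.
    while device_id:
        dev = devices[device_id]
        if name in dev["caps"]:
            return dev["caps"][name]
        device_id = dev["fall_back"]
    return None

devices = load(SAMPLE)
print(capability(devices, "acme_phone", "xhtml_support_level"))  # inherited: "1"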
WURFL Cloud
In March 2012, ScientiaMobile announced the launch of the WURFL Cloud. While the WURFL Cloud is a paid service, a free offer is made available to hobbyists and micro-companies for use on mobile sites with limited traffic. Currently, the WURFL
|
https://en.wikipedia.org/wiki/Gfarm%20file%20system
|
Gfarm file system is an open-source distributed file system, generally used for large-scale cluster computing and wide-area data sharing, and provides features to manage replica location explicitly. The name is derived from the Grid Data Farm architecture it implements.
Grid Datafarm is a petascale data-intensive computing project initiated in Japan. The project is a collaboration among High Energy Accelerator Research Organization (KEK), National Institute of Advanced Industrial Science and Technology (AIST), the University of Tokyo, Tokyo Institute of Technology and University of Tsukuba. The challenge involves construction of a Peta- to Exascale parallel filesystem exploiting local storage of PCs spread over the worldwide Grid.
See also
Distributed file system
List of file systems, the distributed parallel fault-tolerant file system section
|
https://en.wikipedia.org/wiki/Social%20psychology%20%28sociology%29
|
In sociology, social psychology (also known as sociological social psychology) studies the relationship between the individual and society. Although studying many of the same substantive topics as its counterpart in the field of psychology, sociological social psychology places relatively more emphasis on the influence of social structure and culture on individual outcomes, such as personality, behavior, and one's position in social hierarchies. Researchers broadly focus on higher levels of analysis, directing attention mainly to groups and the arrangement of relationships among people. This subfield of sociology is broadly recognized as having three major perspectives: Symbolic interactionism, social structure and personality, and structural social psychology.
Some of the major topics in this field include social status, structural
power, sociocultural change, social inequality and prejudice, leadership and intra-group behavior, social exchange, group conflict, impression formation and management, conversation structures, socialization, social constructionism, social norms and deviance, identity and roles, and emotional labor.
The primary methods of data collection are sample surveys, field observations, vignette studies, field experiments, and controlled experiments.
History
Sociological social psychology is understood to have emerged in 1902 with a landmark study by sociologist Charles Cooley, entitled Human Nature and the Social Order, in which he introduced the concept of the looking-glass self. Sociologist Edward Alsworth Ross subsequently published the first sociological textbook in social psychology, Social Psychology, in 1908. A few decades later, in 1937, Jacob L. Moreno founded the field's major academic journal, Sociometry; its name changed to Social Psychology in 1978 and to its current title, Social Psychology Quarterly, the year after.
Foundational concepts
Symbolic interactionism
In th
|
https://en.wikipedia.org/wiki/Symbolic%20simulation
|
In computer science, a simulation is a computation of the execution of some appropriately modelled state-transition system. Typically this process models the complete state of the system at individual points in a discrete linear time frame, computing each state sequentially from its predecessor. Models for computer programs or VLSI logic designs can be very easily simulated, as they often have an operational semantics which can be used directly for simulation.
Symbolic simulation is a form of simulation where many possible executions of a system are considered simultaneously. This is typically achieved by augmenting the domain over which the simulation takes place. A symbolic variable can be used in the simulation state representation in order to index multiple executions of the system. For each possible valuation of these variables, there is a concrete system state that is being indirectly simulated.
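For instance (an illustrative sketch using sympy's boolean algebra, not tied to any particular symbolic-simulation tool), a two-to-one multiplexer can be simulated once with symbolic inputs, covering every concrete input assignment at the same time:

from sympy import symbols
from sympy.logic.boolalg import And, Or, Not, simplify_logic

a, b, sel = symbols("a b sel")

def mux(sel_v, a_v, b_v):
    # Gate-level model: out = (sel & a) | (~sel & b)
    return Or(And(sel_v, a_v), And(Not(sel_v), b_v))

out = mux(sel, a, b)            # one symbolic "execution" indexes all inputs
print(simplify_logic(out))      # (a & sel) | (b & ~sel)
print(out.subs({sel: True}))    # specialise to one concrete execution: a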
Because symbolic simulation can cover many system executions in a single simulation, it can greatly reduce the size of verification problems. Techniques such as symbolic trajectory evaluation (STE) and generalized symbolic trajectory evaluation (GSTE) are based on this idea of symbolic simulation.
See also
Symbolic execution
Symbolic computation
Electronic design automation
Formal methods
|
https://en.wikipedia.org/wiki/Electrojet
|
An electrojet is an electric current which travels around the E region of the Earth's ionosphere. There are three electrojets: one above the magnetic equator (the equatorial electrojet), and one each near the Northern and Southern Polar Circles (the Auroral Electrojets). Electrojets are Hall currents carried primarily by electrons at altitudes from 100 to 150 km. In this region the electron gyro frequency (Larmor frequency) is much greater than the electron-neutral collision frequency. In contrast, the principal E region ions (O2+ and NO+) have gyrofrequencies much lower than the ion-neutral collision frequency.
Kristian Birkeland was the first to suggest that polar electric currents (or auroral electrojets) are connected to a system of filaments (now called "Birkeland currents") that flow along geomagnetic field lines into and away from the polar region.
Equatorial Electrojet
The worldwide solar-driven wind results in the so-called Sq (solar quiet) current system in the E region of the Earth's ionosphere (100–130 km altitude). Resulting from this current is an electrostatic field directed E-W (dawn-dusk) in the equatorial day side of the ionosphere. At the magnetic dip equator, where the geomagnetic field is horizontal, this electric field results in an enhanced eastward current within ± 3 degrees of the magnetic equator, known as the equatorial electrojet.
Auroral Electrojet
The term 'auroral electrojet' is the name given to the large horizontal currents that flow in the D and E regions of the auroral ionosphere. Although horizontal ionospheric currents can be expected to flow at any latitude where horizontal ionospheric electric fields are present, the auroral electrojet currents are remarkable for their strength and persistence. There are two main factors in the production of the electrojet. First of all, the conductivity of the auroral ionosphere is generally larger than that at lower latitudes. Secondly, the horizontal electric field in the auroral iono
|
https://en.wikipedia.org/wiki/Cathar%20yellow%20cross
|
In the Middle Ages, the Cathar yellow cross was a distinguishing mark worn by repentant Cathars, who were ordered to wear it by the Roman Catholic Church.
Background
Catharism was a religious movement with dualistic and Gnostic elements that appeared in the Languedoc region of France around the middle of the 12th century. Cathars were dualist in their beliefs, and the Catholic symbol of the crucifix was, to the Cathars, a negative symbol. In the words of one 14th-century Cathar Perfect, Pierre Authié:
...just as a man should with an axe break the gallows on which his father was hanged, so you ought to try and break crucifixes, because Christ was suspended from it, albeit only in seeming.
The Albigensian Heresy and the Inquisition
The office of the Inquisition was formulated in response to Catharism, and a crusade was ultimately declared against Catharism.
Repentant first offenders (who admitted to having been Cathars), when released on licence by the inquisition were ordered to:
...carry from now on and forever two yellow crosses on all their clothes except their shirts and one arm shall be two palms long while the other transversal arm shall be a palm and a half long and each shall be three digits wide with one to be worn in front on the chest and the other between the shoulders.
In addition they were ordered "...not to move about either inside or outside" their houses and were required to "...redo or renew the crosses if they are torn or are destroyed by age."
At the time these crosses were known locally as "las debanadoras" - which in Occitan literally meant reels or winding machines. It is thought that this name is derived from the fact that the Cathars compared the cross to a reel and line to which the wearer was tied, and by which the wearer could be reeled in at any time, for a second offense meant the death penalty.
Montaillou
An example of this type of punishment is to be found in the French village of Montaillou, one of the last bastions of the
|
https://en.wikipedia.org/wiki/Index%20arbitrage
|
Index arbitrage is a subset of statistical arbitrage focusing on index components.
An index (such as S&P 500) is made up of several components (in the case of the S&P 500, 500 large US stocks picked by S&P to represent the US market), and the value of the index is typically computed as a linear function of the component prices, where the details of the computation (such as the weights of the linear function) are determined in accordance with the index methodology.
The idea of index arbitrage is to exploit discrepancies between the market price of a product that tracks the index (such as a stock market index future or exchange-traded fund) and the market prices of the underlying index components, which are typically stocks. For example, an arbitrageur could take the current prices of traded stocks, calculate a synthetic index value using the relevant index methodology, and then apply an interest rate and dividend adjustment to calculate the "fair value" of the stock market index future. If the stock market index future is trading above its "fair value", the arbitrageur can buy the component stocks and sell the index future. Likewise, if the stock market index future is trading below its "fair value", the arbitrageur can short the component stocks and buy the index future. In both cases, the arbitrageur would be exposed to basis risk if the interest rate and dividend yield risks are left unhedged.
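A schematic numerical sketch of that comparison follows (the cost-of-carry formula F = S·e^((r−q)T), the threshold, and the prices are illustrative assumptions, not a trading prescription):

import math

def futures_fair_value(index_level, rate, dividend_yield, years_to_expiry):
    # Cost-of-carry fair value of an index future.
    return index_level * math.exp((rate - dividend_yield) * years_to_expiry)

def arbitrage_signal(futures_price, index_level, rate, dividend_yield,
                     years_to_expiry, threshold=0.5):
    fair = futures_fair_value(index_level, rate, dividend_yield, years_to_expiry)
    basis = futures_price - fair
    if basis > threshold:
        return "future rich: sell future, buy component stocks", basis
    if basis < -threshold:
        return "future cheap: buy future, short component stocks", basis
    return "no trade", basis

# Hypothetical inputs: index at 5000, future at 5025, 3% rate, 1.5% dividend
# yield, three months to expiry.
print(arbitrage_signal(5025.0, 5000.0, 0.03, 0.015, 0.25))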
In a different example, the arbitrageur can take the current prices of traded stocks, calculate the "fair value" of an ETF (based on its holdings, which are chosen to track the index) and arbitrage between the market price of the ETF and the market prices of the stock holdings. In this scenario, the arbitrageur would use the ETF creation and redemption process to net-out the offsetting ETF and stock positions.
See also
Algorithmic trading
Complex event processing
Dark pool
Electronic trading
Implementation shortfall
Investment strategy
Quantitative trading
|
https://en.wikipedia.org/wiki/North%20American%20Datum
|
The North American Datum (NAD) is the horizontal datum now used to define the geodetic network in North America. A datum is a formal description of the shape of the Earth along with an "anchor" point for the coordinate system. In surveying, cartography, and land-use planning, two North American Datums are in use for making lateral or "horizontal" measurements: the North American Datum of 1927 (NAD 27) and the North American Datum of 1983 (NAD 83). Both are geodetic reference systems based on slightly different assumptions and measurements.
Vertical measurements, based on distances above or below Mean High Water (MHW), are calculated using the North American Vertical Datum of 1988 (NAVD 88).
NAD 83, along with NAVD 88, is set to be replaced with a new GPS- and gravimetric geoid model-based geometric reference frame and geopotential datum in 2022.
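For a sense of how the two horizontal datums are handled in practice, here is a hedged sketch using pyproj (which identifies NAD 27 as EPSG:4267 and NAD 83 as EPSG:4269); the size of the computed shift depends on which transformation grids are installed locally, so the output is indicative rather than survey-grade, and the coordinates are hypothetical:

from pyproj import Transformer

nad27_to_nad83 = Transformer.from_crs("EPSG:4267", "EPSG:4269", always_xy=True)

# Hypothetical point in Kansas, expressed as (longitude, latitude) in NAD 27.
lon27, lat27 = -98.542, 39.224
lon83, lat83 = nad27_to_nad83.transform(lon27, lat27)
print(f"NAD 27 ({lon27}, {lat27}) -> NAD 83 ({lon83:.6f}, {lat83:.6f})")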
First North American Datum of 1901
In 1901 the United States Coast and Geodetic Survey adopted a national horizontal datum called the United States Standard Datum, based on the Clarke Ellipsoid of 1866. It was fitted to data previously collected for regional datums, which by that time had begun to overlap. In 1913, Canada and Mexico adopted that datum, so it was also renamed the North American Datum.
North American Datum of 1927
As more data were gathered, discrepancies appeared, so the datum was recomputed in 1927, using the same spheroid and origin as its predecessor.
The North American Datum of 1927 (NAD 27) was based on surveys of the entire continent from a common reference point, chosen in 1901 because it was as near the center of the contiguous United States as could then be calculated: a triangulation station at the junction of the transcontinental triangulation arc of 1899 on the 39th parallel north and the triangulation arc along the 98th meridian west.
The datum declares the Meades Ranch Triangulation Station in Osborne C
|
https://en.wikipedia.org/wiki/Vaporific%20effect
|
Vaporific effect is a flash fire resulting from the impact of high velocity projectiles with metallic objects. Impacts produce particulate matter originating from either the projectile, the target, or both. Particles heated by the force of impact can burn in the presence of air (an oxidizer). An explosion can result from the mixture of metal dust and air, the resulting dust explosion causing significant overpressure within metallic enclosures such as aircraft and vehicles. The vaporific effect is particularly pronounced when these enclosures are constructed of pyrophoric metals (metals that react upon contact with air, such as aluminium, magnesium, or their alloys). Depleted uranium is a pyrophoric material used in kinetic penetrator ammunition.
|
https://en.wikipedia.org/wiki/DECmate
|
DECmate was the name of a series of PDP-8-compatible computers produced by the Digital Equipment Corporation in the late 1970s and early 1980s. All of the models used an Intersil 6100 (later known as the Harris 6100) or Harris 6120 (an improved Intersil 6100) microprocessor which emulated the 12-bit DEC PDP-8 CPU. They were text-only and used the OS/78 or OS/278 operating systems, which were extensions of OS/8 for the PDP-8. Aimed at the word processing market, they typically ran the WPS-8 word-processing program. Later models optionally had Intel 8080 or Z80 microprocessors which allowed them to run CP/M. The range was a development of the VT78 which was introduced in July 1977.
VT78
Introduced in July 1977, this machine was built into a VT52 case and had an Intersil 6100 microprocessor running at 2.2 MHz. The standard configuration included an RX02 dual 8-inch floppy disk unit which was housed in the pedestal that the computer rested on.
DECmate
Introduced in 1980, this machine was built into a VT100 case. It had a 10 MHz clock and 32 Kwords of memory. It was also known as the VT278.
DECmate II
As part of a three-pronged strategy against IBM, the company released this model in 1982 at the same time as the PDP-11-based PRO-350 and the Intel 8088-based Rainbow 100. The DECmate II resembles the Rainbow 100 but uses the 6120 processor. Its two operating systems are the WPS-8 word processing system, and the COS-310 Commercial Operating System running DIBOL. Like the others it had a monochrome VR201 (VT220-style) monitor, an LK201 keyboard and dual 400 KB single-sided quad-density 5.25-inch RX50 floppy disk drives. It had 32 Kwords of RAM for use by programs, and a further 32 Kwords containing code which was used for device emulation. Code running in this second bank was nicknamed "slushware", in contrast to firmware since it was loaded from floppy disk as the machine booted. It was also known as the PC278.
The model could be expanded, either by adding anot
|
https://en.wikipedia.org/wiki/Squashed%20entanglement
|
Squashed entanglement, also called CMI entanglement (CMI can be pronounced "see me"), is an information theoretic measure of quantum entanglement for a bipartite quantum system. If $\varrho_{A,B}$ is the density matrix of a system $(A,B)$ composed of two subsystems $A$ and $B$, then the CMI entanglement $E_{CMI}$ of system $(A,B)$ is defined by
$$E_{CMI}(\varrho_{A,B}) = \frac{1}{2}\min_{\varrho_{A,B,\Lambda}\in K} S(A:B|\Lambda), \qquad (1)$$
where $K$ is the set of all density matrices $\varrho_{A,B,\Lambda}$ for a tripartite system $(A,B,\Lambda)$ such that $\varrho_{A,B} = \mathrm{tr}_\Lambda(\varrho_{A,B,\Lambda})$. Thus, CMI entanglement is defined as an extremum of a functional $S(A:B|\Lambda)$ of $\varrho_{A,B,\Lambda}$. We define $S(A:B|\Lambda)$, the quantum conditional mutual information (CMI), below. A more general version of Eq.(1) replaces the "min" (minimum) in Eq.(1) by an "inf" (infimum). When $\varrho_{A,B}$ is a pure state,
$E_{CMI}(\varrho_{A,B}) = S(\varrho_A) = S(\varrho_B)$, in agreement with the definition of entanglement of formation for pure states. Here $S(\varrho)$ is the von Neumann entropy of density matrix $\varrho$.
Motivation for definition of CMI entanglement
CMI entanglement has its roots in classical (non-quantum) information theory, as we explain next.
Given any two random variables $X, Y$, classical information theory defines the mutual information, a measure of correlations, as
$$H(X:Y) = H(X) + H(Y) - H(X,Y). \qquad (2)$$
For three random variables $X, Y, Z$, it defines the CMI as
$$H(X:Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z). \qquad (3)$$
It can be shown that $H(X:Y|Z) \ge 0$.
Now suppose $\varrho_{A,B,\Lambda}$ is the density matrix for a tripartite system $(A,B,\Lambda)$. We will represent the partial trace of $\varrho_{A,B,\Lambda}$ with respect to one or two of its subsystems by $\varrho$ with the symbol for the traced system erased. For example, $\varrho_{A,B} = \mathrm{tr}_\Lambda(\varrho_{A,B,\Lambda})$. One can define a quantum analogue of Eq.(2) by
$$S(A:B) = S(\varrho_A) + S(\varrho_B) - S(\varrho_{A,B}), \qquad (4)$$
and a quantum analogue of Eq.(3) by
$$S(A:B|\Lambda) = S(\varrho_{A,\Lambda}) + S(\varrho_{B,\Lambda}) - S(\varrho_\Lambda) - S(\varrho_{A,B,\Lambda}). \qquad (5)$$
It can be shown that $S(A:B|\Lambda) \ge 0$. This inequality is often called the strong-subadditivity property of quantum entropy.
Consider three random variables $X, Y, Z$ with probability distribution $P_{X,Y,Z}(x,y,z)$, which we will abbreviate as $P(x,y,z)$. For those special $P(x,y,z)$ of the form
$$P(x,y,z) = P(x|z)\,P(y|z)\,P(z), \qquad (6)$$
it can be shown that $H(X:Y|Z) = 0$. Probability distributions of the form Eq.(6) are in fact described by the Bayesian network shown in Fig.1.
One can define a classical CMI entanglement by
$$E_{CMI}(P_{X,Y}) = \min_{P_{X,Y,\Lambda}\in K} H(X:Y|\Lambda), \qquad (7)$$
where $K$ is the set of all probability distributions $P_{X,Y,\Lambda}$ in three random variables $X, Y, \Lambda$ such that $P_{X,Y}(x,y) = \sum_{\lambda} P_{X,Y,\Lambda}(x,y,\lambda)$ for all $x,y$. Because, given a probability distribution $P_{X,Y}$, one can always
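A small numerical aside (illustrative only, using numpy) checks the two classical facts quoted above: the CMI of Eq.(3) is non-negative for a generic distribution and vanishes for distributions of the form Eq.(6):

import numpy as np

def H(p):
    # Shannon entropy (bits) of a probability array, ignoring zero entries.
    p = p[p > 0]
    return float(-(p * np.log2(p)).sum())

def cmi(P):
    # H(X:Y|Z) = H(X,Z) + H(Y,Z) - H(Z) - H(X,Y,Z) for P indexed as P[x, y, z].
    return H(P.sum(axis=1)) + H(P.sum(axis=0)) - H(P.sum(axis=(0, 1))) - H(P)

rng = np.random.default_rng(0)

# Generic joint distribution: CMI is non-negative.
P = rng.random((2, 2, 2))
P /= P.sum()
print(cmi(P) >= -1e-12)                    # True

# Distribution of the form P(x,y,z) = P(x|z) P(y|z) P(z): CMI is (numerically) zero.
pz = np.array([0.3, 0.7])
px_z = np.array([[0.2, 0.8], [0.6, 0.4]])  # rows indexed by z
py_z = np.array([[0.5, 0.5], [0.1, 0.9]])
Q = np.einsum("zx,zy,z->xyz", px_z, py_z, pz)
print(abs(cmi(Q)) < 1e-12)                 # True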
|
https://en.wikipedia.org/wiki/Polar%20surface%20area
|
The polar surface area (PSA) or topological polar surface area (TPSA) of a molecule is defined as the surface sum over all polar atoms or molecules, primarily oxygen and nitrogen, also including their attached hydrogen atoms.
PSA is a commonly used medicinal chemistry metric for the optimization of a drug's ability to permeate cells. Molecules with a polar surface area of greater than 140 angstroms squared (Å2) tend to be poor at permeating cell membranes. For molecules to penetrate the blood–brain barrier (and thus act on receptors in the central nervous system), a PSA less than 90 Å2 is usually needed.
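In practice the descriptor is available in open-source cheminformatics toolkits; for example, with RDKit (assuming RDKit is installed; the molecules below are just illustrations):

from rdkit import Chem
from rdkit.Chem import Descriptors

for name, smiles in [("ethanol", "CCO"),
                     ("aspirin", "CC(=O)Oc1ccccc1C(=O)O")]:
    mol = Chem.MolFromSmiles(smiles)
    print(name, round(Descriptors.TPSA(mol), 1), "Å^2")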
See also
Biopharmaceutics Classification System
Cheminformatics
Chemistry Development Kit
JOELib
Implicit solvation
Lipinski's rule of five
|
https://en.wikipedia.org/wiki/Tracks%20Ahead
|
Tracks Ahead is a television series about railroading, produced by Milwaukee PBS, originally solely for their station WMVS, then syndicated to public television stations, starting in 1990.
In general, the series examines all aspects of railroading, both in the United States and in the rest of the world. Content covers a wide range of railroad-related materials. This includes scenic rail journeys, short-line railroads, layouts (in various gauges of model, tinplate, scale, garden), artists, photographers, and other railroad related material.
Background
At the dawn of cable television, Chuck Zehner, a Milwaukee train enthusiast, began producing and hosting the interview-format show Just Trains on Milwaukee's local access channel on Viacom Cable. Eventually the show was picked up on the cable network around Milwaukee. After 72 shows, Milwaukee's WMVS Channel 10 (PBS) agreed to air a new magazine-format show, On Track, in the Milwaukee market. For its second season it was renamed Tracks Ahead and expanded to the PBS network.
History
The first season (released 1990) was hosted by Charles E. "Chuck" Zehner and the second season (released 1992) by Ward Kimball. Both were later repackaged and re-released with Spencer Christian as the host. All subsequent series have featured Christian.
Women (ages 25–63) and children (ages 3–18) make up 63.4% of the series' primary audience; the remainder consists of railroad interest groups.
Tracks Ahead 7 started airing in January 2009. As with the previous two seasons, it is in high definition, with digital 5.1 surround sound. The 14-part season includes segments from Japan, the Caribbean, Patagonia, and all around the United States.
Tracks Ahead 8 started airing in 2011 and marked the final season produced by series originator David K. Baule.
Tracks Ahead 9 began production in 2012, with the final s
|
https://en.wikipedia.org/wiki/Witt%20group
|
In mathematics, a Witt group of a field, named after Ernst Witt, is an abelian group whose elements are represented by symmetric bilinear forms over the field.
Definition
Fix a field k of characteristic not equal to two. All vector spaces will be assumed to be finite-dimensional. We say that two spaces equipped with symmetric bilinear forms are equivalent if one can be obtained from the other by adding a metabolic quadratic space, that is, zero or more copies of a hyperbolic plane, the non-degenerate two-dimensional symmetric bilinear form with a norm 0 vector. Each class is represented by the core form of a Witt decomposition.
The Witt group of k is the abelian group W(k) of equivalence classes of non-degenerate symmetric bilinear forms, with the group operation corresponding to the orthogonal direct sum of forms. It is additively generated by the classes of one-dimensional forms. Although classes may contain spaces of different dimension, the parity of the dimension is constant across a class and so rk : W(k) → Z/2Z is a homomorphism.
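As a concrete aside (standard textbook examples, not stated in the text above), the rank and signature invariants already determine the Witt groups of the simplest fields:

% LaTeX: worked examples of Witt groups
W(\mathbb{C}) \cong \mathbb{Z}/2\mathbb{Z} \quad (\text{rank mod } 2), \qquad
W(\mathbb{R}) \cong \mathbb{Z} \quad (\text{signature}), \qquad
|W(\mathbb{F}_q)| = 4 \quad (q \text{ odd}).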
The elements of finite order in the Witt group have order a power of 2; the torsion subgroup is the kernel of the functorial map from W(k) to W(k^py), where k^py is the Pythagorean closure of k; it is generated by the Pfister forms ⟨⟨w⟩⟩ for w a non-zero sum of squares. If k is not formally real, then the Witt group is torsion, with exponent a power of 2. The height of the field k is the exponent of the torsion in the Witt group, if this is finite, or ∞ otherwise.
Ring structure
The Witt group of k can be given a commutative ring structure, by using the tensor product of quadratic forms to define the ring product. This is sometimes called the Witt ring W(k), though the term "Witt ring" is often also used for a completely different ring of Witt vectors.
To discuss the structure of this ring we assume that k is of characteristic not equal to 2, so that we may identify symmetric bilinear forms and quadratic forms.
The kernel of the
|
https://en.wikipedia.org/wiki/Ferrier%20Lecture
|
The Ferrier Lecture is a Royal Society lectureship given every three years "on a subject related to the advancement of natural knowledge on the structure and function of the nervous system". It was created in 1928 to honour the memory of Sir David Ferrier, a neurologist who was the first British scientist to electronically stimulate the brain for the purpose of scientific study.
In its 90-year history, the Lecture has been given 30 times, never more than once by the same person. The first woman to be awarded the honour was Prof. Christine Holt in 2017. The first lecture was given in 1929 by Charles Scott Sherrington and was titled "Some functional problems attaching to convergence". The most recent lecture was given by Prof. Christine Holt in 2017, titled "understanding of the key molecular mechanisms involved in nerve growth, guidance and targeting which has revolutionised our knowledge of growing axon tips". In 1971, the lecture was given by two individuals (David Hunter Hubel and Torsten Nils Wiesel) on the same topic, with the title "The function and architecture of the visual cortex".
List of Lecturers
|
https://en.wikipedia.org/wiki/Drinfeld%20module
|
In mathematics, a Drinfeld module (or elliptic module) is roughly a special kind of module over a ring of functions on a curve over a finite field, generalizing the Carlitz module. Loosely speaking, they provide a function field analogue of complex multiplication theory. A shtuka (also called F-sheaf or chtouca) is a sort of generalization of a Drinfeld module, consisting roughly of a vector bundle over a curve, together with some extra structure identifying a "Frobenius twist" of the bundle with a "modification" of it.
Drinfeld modules were introduced by Vladimir Drinfeld (1974), who used them to prove the Langlands conjectures for GL2 of an algebraic function field in some special cases. He later invented shtukas and used shtukas of rank 2 to prove
the remaining cases of the Langlands conjectures for GL2. Laurent Lafforgue proved the Langlands conjectures for GLn of a function field by studying the moduli stack of shtukas of rank n.
"Shtuka" is a Russian word штука meaning "a single copy", which comes from the German noun “Stück”, meaning “piece, item, or unit". In Russian, the word "shtuka" is also used in slang for a thing with known properties, but having no name in a speaker's mind.
Drinfeld modules
The ring of additive polynomials
We let $L$ be a field of characteristic $p > 0$. The ring $L\{\tau\}$ is defined to be the ring of noncommutative (or twisted) polynomials $a_0 + a_1\tau + a_2\tau^2 + \cdots$ over $L$, with the multiplication given by
$$\tau a = a^p \tau \quad \text{for } a \in L.$$
The element $\tau$ can be thought of as a Frobenius element: in fact, $L$ is a left module over $L\{\tau\}$, with elements of $L$ acting as multiplication and $\tau$ acting as the Frobenius endomorphism of $L$. The ring $L\{\tau\}$ can also be thought of as the ring of all (absolutely) additive polynomials
$$a_0 x + a_1 x^p + a_2 x^{p^2} + \cdots$$
in $L[x]$, where a polynomial $f$ is called additive if $f(x + y) = f(x) + f(y)$ (as elements of $L[x,y]$). The ring of additive polynomials is generated as an algebra over $L$ by the polynomial $\tau(x) = x^p$. The multiplication in the ring of additive polynomials is given by composition of polynomials, not by multiplication of commutative polynomials, and is not commutative.
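A tiny computational aside (not from the article; the field F_4 and its bit encoding below are assumptions chosen so the twist is visible, since over the prime field the Frobenius is trivial): twisted polynomials over F_4 = F_2(w), with w^2 = w + 1, multiplied with the rule τa = a^2 τ, form a visibly noncommutative ring:

P = 2  # characteristic

def f4_mul(a, b):
    # Multiply F_4 elements encoded as 2-bit ints (bit k = coefficient of w**k).
    prod = 0
    for k in range(2):
        if (b >> k) & 1:
            prod ^= a << k
    if prod & 4:           # reduce modulo w**2 + w + 1
        prod ^= 0b111
    return prod

def f4_pow(a, n):
    r = 1
    for _ in range(n):
        r = f4_mul(r, a)
    return r

def twisted_mul(f, g):
    # Product of twisted polynomials [a0, a1, ...] = a0 + a1*tau + ... over F_4,
    # using (a*tau**i) * (b*tau**j) = a * b**(2**i) * tau**(i+j).
    result = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            result[i + j] ^= f4_mul(a, f4_pow(b, P ** i))
    return result

tau, w = [0, 1], [2]
print(twisted_mul(tau, w))   # [0, 3]: tau * w = w**2 * tau = (w+1) * tau
print(twisted_mul(w, tau))   # [0, 2]: w * tau, so the ring is noncommutative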
Defin
|
https://en.wikipedia.org/wiki/Ethernet%20in%20the%20first%20mile
|
Ethernet in the first mile (EFM) refers to using one of the Ethernet family of computer network technologies between a telecommunications company and a customer's premises. From the customer's point of view, it is their first mile, although from the access network's point of view it is known as the last mile.
A working group of the Institute of Electrical and Electronics Engineers (IEEE) produced the standards known as IEEE 802.3ah-2004, which were later included in the overall standard IEEE 802.3-2008.
Although it is often used for businesses, it can also be known as Ethernet to the home (ETTH). One family of standards known as Ethernet passive optical network (EPON) uses a passive optical network.
History
With wide, metro, and local area networks using various forms of Ethernet, the goal was to eliminate non-native transport such as Ethernet over Asynchronous Transfer Mode (ATM) from access networks.
One early effort was the EtherLoop technology invented at Nortel Networks in 1996, and then spun off into the company Elastic Networks in 1998. Its principal inventor was Jack Terry. The hope was to combine the packet-based nature of Ethernet with the ability of digital subscriber line (DSL) technology to work over existing telephone access wires. The name comes from local loop, which traditionally describes the wires from a telephone company office to a subscriber. The protocol was half-duplex with control from the provider side of the loop. It adapted to line conditions with a peak of 10 Mbit/s advertised, but 4-6 Mbit/s more typical, at a distance of about . Symbol rates were 1 megabaud or 1.67 megabaud, with 2, 4, or 6 bits per symbol. The EtherLoop product name was registered as a trademark in the US and Canada. The EtherLoop technology was eventually purchased by Paradyne Networks in 2002, which was in turn purchased by Zhone Technologies in 2005.
Another effort was the concept promoted by Michael Silverton of using Ethernet variants that used fiber optic c
|
https://en.wikipedia.org/wiki/Amorphous%20carbonia
|
Amorphous carbonia, also called a-carbonia or a-CO2, is an exotic amorphous solid form of carbon dioxide that is analogous to amorphous silica glass. It was first made in the laboratory in 2006 by subjecting dry ice to high pressures (40-48 gigapascal, or 400,000 to 480,000 atmospheres), in a diamond anvil cell. Amorphous carbonia is not stable at ordinary pressures—it quickly reverts to normal CO2.
While normally carbon dioxide forms molecular crystals, where individual molecules are bound by Van der Waals forces, in amorphous carbonia a covalently bound three-dimensional network of atoms is formed, in a structure analogous to silicon dioxide or germanium dioxide glass.
Mixtures of a-carbonia and a-silica may be a prospective very hard and stiff glass material stable at room temperature. Such glass may serve as protective coatings, e.g. in microelectronics.
The discovery has implications for astrophysics, as interiors of massive planets may contain amorphous solid carbon dioxide.
Notes
|
https://en.wikipedia.org/wiki/Proliferating%20cell%20nuclear%20antigen
|
Proliferating cell nuclear antigen (PCNA) is a DNA clamp that acts as a processivity factor for DNA polymerase δ in eukaryotic cells and is essential for replication. PCNA is a homotrimer and achieves its processivity by encircling the DNA, where it acts as a scaffold to recruit proteins involved in DNA replication, DNA repair, chromatin remodeling and epigenetics.
Many proteins interact with PCNA via the two known PCNA-interacting motifs PCNA-interacting peptide (PIP) box and AlkB homologue 2 PCNA interacting motif (APIM). Proteins binding to PCNA via the PIP-box are mainly involved in DNA replication whereas proteins binding to PCNA via APIM are mainly important in the context of genotoxic stress.
Function
The protein encoded by this gene is found in the nucleus and is a cofactor of DNA polymerase delta. The encoded protein acts as a homotrimer and helps increase the processivity of leading strand synthesis during DNA replication. In response to DNA damage, this protein is ubiquitinated and is involved in the RAD6-dependent DNA repair pathway. Two transcript variants encoding the same protein have been found for this gene. Pseudogenes of this gene have been described on chromosome 4 and on the X chromosome.
PCNA is also found in archaea, as a processivity factor of polD, the single multi-functional DNA polymerase in this domain of life.
Expression in the nucleus during DNA synthesis
PCNA was originally identified as an antigen that is expressed in the nuclei of cells during the DNA synthesis phase of the cell cycle. Part of the protein was sequenced and that sequence was used to allow isolation of a cDNA clone. PCNA helps hold DNA polymerase delta (Pol δ) to DNA. PCNA is clamped to DNA through the action of replication factor C (RFC), which is a heteropentameric member of the AAA+ class of ATPases. Expression of PCNA is under the control of E2F transcription factor-containing complexes.
Role in DNA repair
Since DNA polymerase epsilon is involved in resynth
|
https://en.wikipedia.org/wiki/Penis%20fencing
|
Penis fencing is a mating behavior engaged in by many species of flatworm, such as Pseudobiceros hancockanus. Species which engage in the practice are hermaphroditic; each individual has both egg-producing ovaries and sperm-producing testes.
The flatworms "fence" using extendable two-headed dagger-like stylets. These stylets are pointed (and in some species hooked) in order to pierce their mate's epidermis and inject sperm into the haemocoel in an act known as intradermal hypodermic insemination, or traumatic insemination. Pairs can either compete, with only one individual transferring sperm to the other, or the pair can transfer sperm bilaterally. Both forms of sperm transfer can occur in the same species, depending on various factors.
Unilateral sperm transfer
One organism will inseminate the other, with the inseminating individual acting as the "father". The sperm is absorbed through pores or sometimes wounds in the skin from the partner's stylet, causing fertilization in the other, who becomes the "mother". The battle may last for up to an hour in some species.
Parturition, while necessary for successful offspring production, requires a considerable parental investment in time and energy, and according to Bateman's principle, almost always burdens the "mother". Thus, from an optimality model it is usually preferable for an organism to inseminate than to be inseminated. However, in many species that engage in this form of copulatory competition, each "father" will continue to fence with other partners until it is inseminated. In Alderia modesta, individuals will store sperm from several "fencing matches" before laying their eggs, and smaller individuals will more often inseminate a larger partner, with larger individuals spending more energy on laying eggs when paired with a smaller partner on the occasion that they transfer sperm unilaterally.
In the absence of potential mates, some species such as Neobenedenia melleni are capable of reproducing thr
|
https://en.wikipedia.org/wiki/Henry%20Wilbraham
|
Henry Wilbraham (25 July 1825 – 13 February 1883) was an English mathematician. He is known for discovering and explaining the Gibbs phenomenon nearly fifty years before J. Willard Gibbs did. Gibbs and Maxime Bôcher, as well as nearly everyone else, were unaware of Wilbraham's paper on the Gibbs phenomenon.
Biography
Henry Wilbraham was born to George and Lady Anne Wilbraham at Delamere, Cheshire. His family was privileged, with his father a parliamentarian and his mother the daughter of the Earl Fortescue. He attended Harrow School before being admitted to Trinity College, Cambridge at the age of 16. He received a BA in 1846 and an MA in 1849 from Cambridge. At the age of 22 he published his paper on the Gibbs phenomenon. He remained at Trinity as a Fellow until 1856. In 1864 he married Mary Jane Marriott, and together they had seven children. In the last years of his life, he was the District Registrar of the Chancery Court at Manchester.
|
https://en.wikipedia.org/wiki/Flight%20information%20display%20system
|
A flight information display system (FIDS) is a computer system used in airports to display flight information to passengers: it controls mechanical or electronic display boards or monitors to show arriving and departing flight information in real time. The displays are located inside or around an airport terminal. A virtual version of a FIDS can also be found on most airport websites and teletext systems. In large airports, there are different sets of FIDS for each terminal or even each major airline. FIDS are used to inform passengers of boarding gates, departure/arrival times, destinations, notifications of flight delays and cancellations, partner airlines, and so on.
Each line on an FIDS indicates a different flight number accompanied by:
the airline name/logo and/or its IATA or ICAO airline designator (can also include names/logos of interlining/codesharing airlines or partner airlines, e.g. HX252/BR2898.)
the city of origin or destination, and any intermediate points
the expected arrival or departure time and/or the updated time (reflecting any delays)
the status of the flight, such as "Landed", "Delayed", "Boarding", etc.
And in the case of departing flights:
the check-in counter numbers or the name of the airline handling the check-in
the gate number
Due to code sharing, a flight may be represented by a series of different flight numbers. For example, LH 474 and AC 9099, both partners of Star Alliance, codeshare on a route served by a single aircraft, operated at any given time by either Lufthansa or Air Canada. Lines may be sorted by time, airline name, or city.
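A minimal data-model sketch of one such display line (purely hypothetical field names and values, not drawn from any real FIDS product) might look like this:

from dataclasses import dataclass
from datetime import datetime
from typing import List, Optional

@dataclass
class FlightRow:
    flight_numbers: List[str]        # codeshare numbers, e.g. ["HX252", "BR2898"]
    airline: str
    city: str                        # origin (arrivals) or destination (departures)
    scheduled: datetime
    estimated: Optional[datetime]    # updated time reflecting any delay
    status: str                      # "Landed", "Delayed", "Boarding", ...
    gate: Optional[str] = None       # departures only
    check_in: Optional[str] = None   # departures only

rows = [
    FlightRow(["HX252", "BR2898"], "Hong Kong Airlines", "Taipei",
              datetime(2024, 1, 1, 12, 5), datetime(2024, 1, 1, 12, 40),
              "Delayed", gate="A07", check_in="C01-C04"),
    FlightRow(["LH474", "AC9099"], "Lufthansa", "Vancouver",
              datetime(2024, 1, 1, 13, 50), None, "Boarding", gate="B32"),
]

# Lines may be sorted by time, airline name, or city, as noted above.
for r in sorted(rows, key=lambda r: r.estimated or r.scheduled):
    print(r.scheduled.strftime("%H:%M"), "/".join(r.flight_numbers), r.city, r.status)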
Most FIDS are now displayed on LCD or LED screen, although some airports still use split-flap displays.
Display technology
Airport infrastructure
|
https://en.wikipedia.org/wiki/XenMan
|
XenMan is a Xen hypervisor management tool with a graphical user interface that allows a user to perform the standard set of operations (start, stop, pause, kill, shutdown, reboot, snapshot, etc.), in addition to some higher-level operations, such as creating a guest domain (which includes creating the configuration file, retrieving the appropriate kernel and initial RAM disk, and starting the domain) in a single operation. The goal is to create a graphical management tool that fulfills all the Xen management needs of both novice and advanced users.
The application is developed in the Python programming language, uses the GTK widget set, and is released under the GPL.
External links
XenMan sourceforge project page.
XenMan screenshot.
Virtualization software
|
https://en.wikipedia.org/wiki/Goodput
|
In computer networks, goodput (a portmanteau of good and throughput) is the application-level throughput of a communication; i.e. the number of useful information bits delivered by the network to a certain destination per unit of time. The amount of data considered excludes protocol overhead bits as well as retransmitted data packets. This is related to the amount of time from the first bit of the first packet sent (or delivered) until the last bit of the last packet is delivered.
For example, if a file is transferred, the goodput that the user experiences corresponds to the file size in bits divided by the file transfer time. The goodput is always lower than the throughput (the gross bit rate that is transferred physically), which generally is lower than network access connection speed (the channel capacity or bandwidth).
Examples of factors that cause lower goodput than throughput are:
Protocol overhead: Typically, transport layer, network layer and sometimes datalink layer protocol overhead is included in the throughput, but is excluded from the goodput.
Transport layer flow control and congestion avoidance: For example, TCP slow start may cause a lower goodput than the maximum throughput.
Retransmission of lost or corrupt packets due to transport layer automatic repeat request (ARQ), caused by bit errors or packet dropping in congested switches and routers, is included in the datalink layer or network layer throughput but not in the goodput.
Example
Over Ethernet files are broken down into individual chunks for transmission. These chunks are no larger than the maximum transmission unit of IP over Ethernet, or 1500 bytes. Each packet requires 20 bytes of IPv4 header information and 20 bytes of TCP header information, leaving 1460 bytes per packet for file data (Linux and macOS are further limited to 1448 bytes as they also carry a 12-byte time stamp). The data is transmitted over Ethernet in a frame, which imposes a 26 byte overhead per packet. Given these
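Carrying that arithmetic through (a back-of-the-envelope sketch; it ignores inter-frame gaps beyond the stated 26-byte overhead, TCP/IP options, and any retransmissions):

ETH_OVERHEAD = 26   # Ethernet framing bytes per packet, as stated above
IP_HEADER = 20
TCP_HEADER = 20
MTU = 1500          # IP MTU over Ethernet

payload = MTU - IP_HEADER - TCP_HEADER       # 1460 bytes of file data per packet
frame_on_wire = MTU + ETH_OVERHEAD           # 1526 bytes per frame

efficiency = payload / frame_on_wire
print(f"{payload} / {frame_on_wire} = {efficiency:.1%} of wire bytes are goodput")

# On a fully utilised 100 Mbit/s link this caps the file-transfer goodput at:
print(f"~{100e6 * efficiency / 1e6:.1f} Mbit/s")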
|