source | text
---|---
https://en.wikipedia.org/wiki/EncFS
|
EncFS is a Free (LGPL) FUSE-based cryptographic filesystem. It transparently encrypts files, using an arbitrary directory as storage for the encrypted files.
Two directories are involved in mounting an EncFS filesystem: the source directory, and the mountpoint. Each file in the mountpoint has a specific file in the source directory that corresponds to it. The file in the mountpoint provides the unencrypted view of the one in the source directory. Filenames are encrypted in the source directory.
Files are encrypted using a volume key, which is stored either within or outside the encrypted source directory. A password is used to decrypt this key.
Common uses
In Linux, allows encryption of home folders as an alternative to eCryptfs.
Allows encryption of files and folders saved to cloud storage (Dropbox, Google Drive, OneDrive, etc.).
Allows portable encryption of file folders on removable disks.
Available as a cross-platform folder encryption mechanism.
Increases storage security by adding a second authentication factor (2FA). When the EncFS volume key is stored outside the encrypted source directory, in a location physically separated from the encrypted data, security is significantly increased because decryption then requires two factors. For example, EncFS can store each unique volume key anywhere other than with the encrypted data, such as on a USB flash drive, a network mount, an optical disc or in the cloud. In addition, a password can be required to decrypt this volume key. A minimal example of this setup is sketched below.
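As a rough illustration of this split (a sketch, not an official EncFS recipe): EncFS reads the path of its configuration file, which holds the encrypted volume key, from the ENCFS6_CONFIG environment variable, so that file can live on removable media. All paths below are assumptions.

import os
import subprocess

# Sketch: keep the EncFS volume-key/config file on a removable drive (assumed paths),
# so mounting requires both the USB stick (something you have) and the password
# (something you know).
env = dict(os.environ)
env["ENCFS6_CONFIG"] = "/media/usb-stick/.encfs6.xml"   # key/config stored off the data disk

subprocess.run(
    ["encfs", "/home/user/.encrypted", "/home/user/decrypted"],  # source dir, mountpoint
    env=env,
    check=True,   # encfs prompts for the password on the terminal
)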
Advantages
EncFS offers several advantages over other disk encryption software simply because each file is stored individually as an encrypted file elsewhere in the host's directory tree.
Cross-platform
EncFS is available on multiple platforms, whereas eCryptfs is tied to the Linux kernel
Bitrot detection
EncFS implements bitrot detection on top of any underlying filesystem
Scalable storage
EncFS has no "volumes" that occupy a fixed size — encrypted directories gro
|
https://en.wikipedia.org/wiki/Gravity%20of%20Earth
|
The gravity of Earth, denoted by g, is the net acceleration that is imparted to objects due to the combined effect of gravitation (from mass distribution within Earth) and the centrifugal force (from the Earth's rotation).
It is a vector quantity, whose direction coincides with a plumb bob and whose strength or magnitude is given by the norm ‖g‖.
In SI units this acceleration is expressed in metres per second squared (in symbols, m/s2 or m·s−2) or equivalently in newtons per kilogram (N/kg or N·kg−1). Near Earth's surface, the acceleration due to gravity, accurate to 2 significant figures, is 9.8 m/s2 (32 ft/s2). This means that, ignoring the effects of air resistance, the speed of an object falling freely will increase by about 9.8 metres per second every second. This quantity is sometimes referred to informally as little g (in contrast, the gravitational constant G is referred to as big G).
The precise strength of Earth's gravity varies with location. The agreed-upon value for standard gravity is 9.80665 m/s2 by definition. This quantity is denoted variously as g_n, g_e (though this sometimes means the normal gravity at the equator, 9.78033 m/s2), g_0, or simply g (which is also used for the variable local value).
The weight of an object on Earth's surface is the downwards force on that object, given by Newton's second law of motion, or F = ma (force = mass × acceleration). Gravitational acceleration contributes to the total gravity acceleration, but other factors, such as the rotation of Earth, also contribute, and therefore affect the weight of the object. Gravity does not normally include the gravitational pull of the Moon and Sun, which are accounted for in terms of tidal effects.
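A minimal numerical illustration of the weight relation above (the mass value is chosen arbitrarily):

g_standard = 9.80665      # standard gravity, m/s^2 (exact by definition)
mass = 70.0               # kg, illustrative

weight = mass * g_standard   # downward force in newtons, F = m * g
print(f"{weight:.1f} N")     # about 686.5 N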
Variation in magnitude
A non-rotating perfect sphere of uniform mass density, or whose density varies solely with distance from the centre (spherical symmetry), would produce a gravitational field of uniform magnitude at all points on its surface. The Earth is rotating and is also not spherically symmetric; rather, it is slightly flatter at the poles while bulging at the Equator: an oblate spheroid.
|
https://en.wikipedia.org/wiki/Optical%20mineralogy
|
Optical mineralogy is the study of minerals and rocks by measuring their optical properties. Most commonly, rock and mineral samples are prepared as thin sections or grain mounts for study in the laboratory with a petrographic microscope. Optical mineralogy is used to identify the mineralogical composition of geological materials in order to help reveal their origin and evolution.
Some of the properties and techniques used include:
Refractive index
Birefringence
Michel-Lévy Interference colour chart
Pleochroism
Extinction angle
Conoscopic interference pattern (Interference figure)
Becke line test
Optical relief
Sign of elongation (Length fast vs. length slow)
Wave plate
History
William Nicol, whose name is associated with the creation of the Nicol prism, was likely the first to prepare thin slices of mineral substances, and his methods were applied by Henry Thornton Maire Witham (1831) to the study of plant petrifactions. This method, of significant importance in petrology, was not at once made use of for the systematic investigation of rocks, and it was not until 1858 that Henry Clifton Sorby pointed out its value. Meanwhile, the optical study of sections of crystals had been advanced by Sir David Brewster and other physicists and mineralogists, and it only remained to apply their methods to the minerals visible in rock sections.
Sections
A rock-section should be about one-thousandth of an inch (30 micrometres) in thickness, and is relatively easy to make. A thin splinter of the rock, about 1 centimetre across, may be taken; it should be as fresh as possible and free from obvious cracks. By grinding it on a plate of planed steel or cast iron with a little fine carborundum it is soon rendered flat on one side, and is then transferred to a sheet of plate glass and smoothed with the finest grained emery until all roughness and pits are removed, and the surface is a uniform plane. The rock chip is then washed, and placed on a copper or iron plate which is heated
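To connect the standard 30-micrometre section thickness with the Michel-Lévy interference colour chart listed among the techniques above: the retardation seen between crossed polars is the product of thickness and birefringence. A small sketch, using quartz's birefringence (about 0.009) as an illustrative value:

thickness_um = 30.0            # standard thin-section thickness, micrometres
birefringence = 0.009          # quartz (n_e - n_o), illustrative mineral

retardation_nm = thickness_um * 1000 * birefringence   # path difference in nanometres
print(f"{retardation_nm:.0f} nm")   # ~270 nm: first-order grey/white on a Michel-Lévy chart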
|
https://en.wikipedia.org/wiki/Equations%20for%20a%20falling%20body
|
The equations for a falling body are a set of equations describing the trajectories of objects subject to a constant gravitational force under normal Earth-bound conditions. Assuming constant acceleration g due to Earth's gravity, Newton's law of universal gravitation simplifies to F = mg, where F is the force exerted on a mass m by the Earth's gravitational field of strength g. Assuming constant g is reasonable for objects falling to Earth over the relatively short vertical distances of our everyday experience, but it is not valid for the greater distances involved in calculating more distant effects, such as spacecraft trajectories.
History
Galileo was the first to demonstrate and then formulate these equations. He used a ramp to study rolling balls, the ramp slowing the acceleration enough to measure the time taken for the ball to roll a known distance. He measured elapsed time with a water clock, using an "extremely accurate balance" to measure the amount of water.
The equations ignore air resistance, which has a dramatic effect on objects falling an appreciable distance in air, causing them to quickly approach a terminal velocity. The effect of air resistance varies enormously depending on the size and geometry of the falling object—for example, the equations are hopelessly wrong for a feather, which has a low mass but offers a large resistance to the air. (In the absence of an atmosphere all objects fall at the same rate, as astronaut David Scott demonstrated by dropping a hammer and a feather on the surface of the Moon.)
The equations also ignore the rotation of the Earth, failing to describe the Coriolis effect for example. Nevertheless, they are usually accurate enough for dense and compact objects falling over heights not exceeding the tallest man-made structures.
Overview
Near the surface of the Earth, the acceleration due to gravity = 9.807 m/s2 (metres per second squared, which might be thought of as "metres per second, per second"; or 32.18 ft/s2 as "feet per second per second") approxima
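A short numerical sketch of the constant-acceleration relations that follow from F = mg, namely v = g·t and d = ½·g·t² (air resistance ignored, time values chosen arbitrarily):

g = 9.807  # m/s^2, acceleration due to gravity near Earth's surface

for t in (1.0, 2.0, 3.0):                 # seconds of free fall
    v = g * t                             # speed gained: v = g t
    d = 0.5 * g * t**2                    # distance fallen: d = 1/2 g t^2
    print(f"t={t:.0f}s  v={v:.1f} m/s  d={d:.1f} m")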
|
https://en.wikipedia.org/wiki/Structure%E2%80%93activity%20relationship
|
The structure–activity relationship (SAR) is the relationship between the chemical structure of a molecule and its biological activity. This idea was first presented by Crum-Brown and Fraser in 1865.
The analysis of SAR enables the determination of the chemical group responsible for evoking a target biological effect in the organism. This allows modification of the effect or the potency of a bioactive compound (typically a drug) by changing its chemical structure. Medicinal chemists use the techniques of chemical synthesis to insert new chemical groups into the biomedical compound and test the modifications for their biological effects.
This method was refined to build mathematical relationships between the chemical structure and the biological activity, known as quantitative structure–activity relationships (QSAR). A related term is structure affinity relationship (SAFIR).
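As an illustration of the "quantitative" part of QSAR, here is a toy linear model relating made-up molecular descriptors to a made-up activity value; every number below is hypothetical, and only the general regression idea is real:

import numpy as np

# Hypothetical descriptor matrix: [logP, molecular weight/100] for five analogues
X = np.array([[1.2, 1.8], [2.0, 2.1], [2.9, 2.3], [3.5, 2.6], [4.1, 3.0]])
y = np.array([5.1, 5.9, 6.8, 7.2, 7.9])    # hypothetical activities, e.g. pIC50

A = np.column_stack([X, np.ones(len(X))])  # add an intercept column
coef, *_ = np.linalg.lstsq(A, y, rcond=None)
print("fitted coefficients (logP, MW/100, intercept):", coef)

new = np.array([3.0, 2.4, 1.0])            # hypothetical new analogue
print("predicted activity:", new @ coef)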
Structure-biodegradability relationship
The large number of synthetic organic chemicals currently in production presents a major challenge for timely collection of detailed environmental data on each compound. The concept of structure biodegradability relationships (SBR) has been applied to explain variability in persistence among organic chemicals in the environment. Early attempts generally consisted of examining the degradation of a homologous series of structurally related compounds under identical conditions with a complex "universal" inoculum, typically derived from numerous sources. This approach revealed that the nature and positions of substituents affected the apparent biodegradability of several chemical classes, with resulting general themes, such as halogens generally conferring persistence under aerobic conditions. Subsequently, more quantitative approaches have been developed using principles of QSAR and often accounting for the role of sorption (bioavailability) in chemical fate.
See also
Combinatorial chemistry
Congener
Conformation activity relationship
Quantitative struc
|
https://en.wikipedia.org/wiki/Wu%20Wenjun
|
Wu Wenjun (; 12 May 1919 – 7 May 2017), also commonly known as Wu Wen-tsün, was a Chinese mathematician, historian, and writer. He was an academician at the Chinese Academy of Sciences (CAS), best known for Wu class, Wu formula, and Wu's method of characteristic set.
Biography
Wu's ancestral hometown was Jiashan, Zhejiang. He was born in Shanghai and graduated from Shanghai Jiao Tong University in 1940. In 1945, Wu taught several months at Hangchow University (later merged into Zhejiang University) in Hangzhou.
In 1947, he went to France for further study at the University of Strasbourg. In 1949, he received his PhD, for his thesis Sur les classes caractéristiques des structures fibrées sphériques, written under the direction of Charles Ehresmann. Afterwards, he did some work in Paris with René Thom and discovered the Wu class and Wu formula in algebraic topology. In 1951 he was appointed to a post at Peking University. However, Wu may have been among a wave of recalls of Chinese academics working in the West following Chiang Kai-shek's ouster from the mainland in 1949, according to eyewitness testimony by Marcel Berger, as he disappeared from France one day, without saying a word to anyone.
Honors and awards
In 1957, he was elected as an academician of the Chinese Academy of Sciences. In 1986 he was an Invited Speaker of the ICM in Berkeley. In 1990, he was elected as an academician of The World Academy of Sciences (TWAS).
Along with Yuan Longping, he was awarded the State Preeminent Science and Technology Award by President Jiang Zemin in 2000, when this highest scientific and technological prize in China began to be awarded. He also received the TWAS Prize in 1990 and the Shaw Prize in 2006. He was the President of the Chinese society of mathematics. He died on May 7, 2017, 5 days before his 98th birthday.
Research
The research of Wu includes the following fields: algebraic topology, algebraic geometry, game theory, history of mathematics, automated theor
|
https://en.wikipedia.org/wiki/Astrovirus
|
Astroviruses are a type of virus that was first discovered in 1975 using electron microscopes following an outbreak of diarrhea in humans. In addition to humans, astroviruses have now been isolated from numerous mammalian animal species (and are classified as genus Mamastrovirus) and from avian species such as ducks, chickens, and turkey poults (classified as genus Avastrovirus). Astroviruses are 28–35 nm diameter, icosahedral viruses that have a characteristic five- or six-pointed star-like surface structure when viewed by electron microscopy. Along with the Picornaviridae and the Caliciviridae, the Astroviridae comprise a third family of nonenveloped viruses whose genome is composed of plus-sense, single-stranded RNA. Astrovirus has a non-segmented, single stranded, positive sense RNA genome within a non-enveloped icosahedral capsid. Human astroviruses have been shown in numerous studies to be an important cause of gastroenteritis in young children worldwide. In animals, Astroviruses also cause infection of the gastrointestinal tract but may also result in encephalitis (humans and cattle), hepatitis (avian) and nephritis (avian).
Microbiology
Taxonomy
The International Committee on Taxonomy of Viruses (ICTV) established Astroviridae as a viral family in 1995. There have been over 50 astroviruses reported, although the ICTV officially recognizes 22 species. The genus Avastrovirus comprises three species: Chicken astrovirus (Avian nephritis virus types 1–3), Duck astrovirus (Duck astrovirus C-NGB), and Turkey astrovirus (Turkey astrovirus 1). The genus Mamastrovirus includes Bovine astroviruses 1 and 2, Human astrovirus (types 1–8), Feline astrovirus 1, Porcine astrovirus 1, Mink astrovirus 1 and Ovine astrovirus 1.
Structure
Astroviruses have a star-like appearance with five or six points. Their name is derived from the Greek word "astron" meaning star. They are non-enveloped RNA viruses with cubic capsids, approximately 28–35 nm in diameter with T=3 symmetry.
|
https://en.wikipedia.org/wiki/TreeDL
|
Tree Description Language (TreeDL) is a computer language for the description of strictly-typed tree data structures and of operations on them. The main use of TreeDL is in the development of language-oriented tools (compilers, translators, etc.), where it is used to describe the structure of abstract syntax trees.
Tree description can be used as
a documentation of interface between parser and other subsystems;
a source for generation of data types representing a tree in target programming languages;
a source for generation of various support code: visitors, walkers, factories, etc.
TreeDL can be used with any parser generator that allows custom actions during parsing (for example, ANTLR, JavaCC).
Language overview
Tree description lists the node types allowed in a tree. Node types support single inheritance. Node types have children and attributes. Children must be of defined node type. Attributes may be of primitive type (numeric, string, boolean), enum type or node type. Attributes are used to store literals during tree construction and additional information gathered during tree analysis (for example, links between reference and definition, to represent higher-order abstract syntax).
Operations over a tree are defined as multimethods. Advantages of this approach are described in the article Treecc: An Aspect-Oriented Approach to Writing Compilers
Tree descriptions support inheritance to allow modularity and reuse of base language tree descriptions for language extensions.
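TreeDL itself generates node types and support code in a target language; the snippet below is only an illustrative sketch (in Python, not actual TreeDL output) of the kind of strictly-typed node hierarchy with children, attributes and a visitor-style operation that such a description maps to:

from dataclasses import dataclass

@dataclass
class Node:                 # common base node type
    pass

@dataclass
class Expr(Node):           # abstract expression node
    pass

@dataclass
class IntLiteral(Expr):
    value: int              # attribute of primitive type

@dataclass
class Add(Expr):
    left: Expr              # children must be of a declared node type
    right: Expr

class Evaluator:            # one "operation over the tree", dispatched per node type
    def visit(self, node: Node) -> int:
        return getattr(self, f"visit_{type(node).__name__}")(node)
    def visit_IntLiteral(self, n: IntLiteral) -> int:
        return n.value
    def visit_Add(self, n: Add) -> int:
        return self.visit(n.left) + self.visit(n.right)

print(Evaluator().visit(Add(IntLiteral(2), IntLiteral(3))))   # prints 5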
See also
ANTLR - parser generator that offers a different approach to tree processing: tree grammars.
SableCC - parser generator that generates strictly-typed abstract syntax trees.
External links
old TreeDL home
|
https://en.wikipedia.org/wiki/Herbert%20Fr%C3%B6hlich
|
Herbert Fröhlich (9 December 1905 – 23 January 1991) FRS was a German-born British physicist.
Career
In 1927, Fröhlich entered Ludwig-Maximilians University in Munich to study physics, and received his doctorate under Arnold Sommerfeld in 1930. His first position was as Privatdozent at the University of Freiburg. Due to rising anti-Semitism and the Deutsche Physik movement under Adolf Hitler, and at the invitation of Yakov Frenkel, Fröhlich went to the Soviet Union, in 1933, to work at the Ioffe Physico-Technical Institute in Leningrad. During the Great Purge following the murder of Sergei Kirov, he fled to England in 1935. Except for a short visit to the Netherlands and a brief internment during World War II, he worked in Nevill Francis Mott's department, at the University of Bristol, until 1948, rising to the position of Reader. At the invitation of James Chadwick, he took the Chair for Theoretical Physics at the University of Liverpool.
In 1950, Bell Telephone Laboratories offered Fröhlich their endowed professorial position at Princeton University. However, at Liverpool he had a purely research post which was attractive to him. He was then newly married to an American, Fanchon Angst, who was studying linguistic philosophy at Somerville College, Oxford under P. F. Strawson, and who did not want to return to the United States at that time.
From 1973, he was Professor of Solid State Physics at the University of Salford, all the while maintaining an office at the University of Liverpool, where he gained emeritus status in 1976 and remained until his death. During 1981, he was a visiting professor at Purdue University. He was nominated for the Nobel Prize in Physics in 1963 and in 1964.
Fröhlich, who pursued theoretical research notably in the fields of superconductivity and bioelectrodynamics, proposed a theory of coherent excitations in biological systems known as Fröhlich coherence. A system that attains this coherent state is known as a Fröhli
|
https://en.wikipedia.org/wiki/Glossary%20of%20wine%20terms
|
The glossary of wine terms lists the definitions of many general terms used within the wine industry. For terms specific to viticulture, winemaking, grape varieties, and wine tasting, see the topic specific list in the "See also" section below.
A
Abboccato
An Italian term for full-bodied wines with medium-level sweetness
ABC
Initials for "Anything but Chardonnay" or "Anything but Cabernet". A term conceived by Bonny Doon's Randall Grahm to denote wine drinkers' interest in grape varieties.
Abfüllung (Erzeugerabfüllung)
Bottled by the proprietor. Will be on the label followed by relevant information concerning the bottler.
ABV
Abbreviation of alcohol by volume, generally listed on a wine label.
AC
Abbreviation for "Agricultural Cooperative" on Greek wine labels and for Adega Cooperativa on Portuguese labels.
Acescence
Wine with a sharp, sweet-and-sour tang. The acescent character frequently recalls a vinegary smell.
Adamado
Portuguese term for a medium-sweet wine
Adega
Portuguese wine term for a winery or wine cellar.
Almacenista
Spanish term for a Sherry producer who ferments and matures the wine before selling it to a merchant
Altar wine
The wine used by the Catholic Church in celebrations of the Eucharist.
Alte Reben
German term for old vine
Amabile
Italian term for a medium-sweet wine
AOC
Abbreviation for Appellation d'Origine Contrôlée, as specified under French law. The AOC laws specify and delimit the geography from which a particular wine (or other food product) may originate and methods by which it may be made. The regulations are administered by the Institut National des Appellations d'Origine (INAO).
A.P. number
Abbreviation for Amtliche Prüfungsnummer, the official testing number displayed on a German wine label that shows that the wine was tasted and passed government quality control standards.
ATTTB
Abbreviation for the Alcohol and Tobacco Tax and Trade Bureau, a United States government agency that is primarily responsible
|
https://en.wikipedia.org/wiki/Dragon%27s%20Egg
|
Dragon's Egg is a 1980 hard science fiction novel by American writer Robert L. Forward. In the story, Dragon's Egg is a neutron star with a surface gravity 67 billion times that of Earth, and inhabited by cheela, intelligent creatures the size of a sesame seed who live, think and develop a million times faster than humans. Most of the novel, from May to June 2050, chronicles the cheela civilization from its discovery of agriculture to advanced technology and its first face-to-face contact with humans, who are observing the hyper-rapid evolution of the cheela civilization from orbit around Dragon's Egg.
As is typical of the genre, Dragon's Egg attempts to communicate unfamiliar ideas and imaginative scenes while giving adequate attention to the known scientific principles involved.
Plot summary
The neutron star
Half a million years ago and 50 light-years from Earth, a star in the constellation Draco turns supernova, and the star's remnant becomes a neutron star. The radiation from the explosion causes mutations in many Earth organisms, including a group of hominina that become the ancestors of Homo sapiens. The star's short-lived plasma jets are lop-sided because of anomalies in its magnetic field, and set it on a course passing within 250 astronomical units of the Sun. In 2020 AD, human astronomers detect the neutron star, call it "Dragon's Egg", and in 2050 they send an expedition to explore it.
The star contains about half of a solar mass of matter, compressed into a diameter of about , making its surface gravity 67 billion times that of Earth. Its outer crust, compressed to about 7,000 kg per cubic centimeter, is mainly iron nuclei with a high concentration of neutrons, overlaid with about of white dwarf star material. The atmosphere, mostly iron vapor, is about thick. The star shrinks slightly as it cools, causes the crust to crack and produce mountains high. Large volcanoes, formed by liquid material oozing from deep cracks, can be many centim
|
https://en.wikipedia.org/wiki/Factorial%20moment%20generating%20function
|
In probability theory and statistics, the factorial moment generating function (FMGF) of the probability distribution of a real-valued random variable X is defined as
M_X(t) = E[t^X]
for all complex numbers t for which this expected value exists. This is the case at least for all t on the unit circle |t| = 1; see characteristic function. If X is a discrete random variable taking values only in the set {0, 1, ...} of non-negative integers, then M_X(t) is also called the probability-generating function (PGF) of X and is well-defined at least for all t on the closed unit disk |t| ≤ 1.
The factorial moment generating function generates the factorial moments of the probability distribution.
Provided M_X(t) exists in a neighbourhood of t = 1, the nth factorial moment is given by
E[(X)_n] = d^n/dt^n M_X(t) evaluated at t = 1,
where the Pochhammer symbol (x)_n is the falling factorial
(x)_n = x (x − 1) (x − 2) ··· (x − n + 1).
(Many mathematicians, especially in the field of special functions, use the same notation to represent the rising factorial.)
Examples
Poisson distribution
Suppose X has a Poisson distribution with expected value λ; then its factorial moment generating function is
M_X(t) = E[t^X] = Σ_{k=0}^{∞} t^k e^{−λ} λ^k / k! = e^{λ(t − 1)}
(use the definition of the exponential function) and thus we have
E[(X)_n] = λ^n.
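A quick symbolic check of the Poisson result above (sympy is used only to carry out the differentiation):

import sympy as sp

t, lam = sp.symbols("t lambda", positive=True)
M = sp.exp(lam * (t - 1))           # factorial moment generating function of Poisson(lambda)

for n in range(1, 5):
    factorial_moment = sp.diff(M, t, n).subs(t, 1)
    print(n, sp.simplify(factorial_moment))   # prints lambda**n for each n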
See also
Moment (mathematics)
Moment-generating function
Cumulant-generating function
|
https://en.wikipedia.org/wiki/Bore%20%28wind%20instruments%29
|
In music, the bore of a wind instrument (including woodwind and brass) is its interior chamber. This defines a flow path through which air travels, which is set into vibration to produce sounds. The shape of the bore has a strong influence on the instrument's timbre.
Bore shapes
The cone and the cylinder are the two idealized shapes used to describe the bores of wind instruments. Other shapes are not generally used, as they tend to produce dissonant, anharmonic overtones and an unmusical sound. Instruments may consist of a primarily conical or cylindrical tube, but begin in a mouthpiece, and end in a rapidly-expanding "flare" or "bell". This flare reduces the acoustic impedance mismatch between the instrument and the air, allowing the instrument to transmit sound to the air more effectively.
These shapes affect the prominence of harmonics associated with the timbre of the instrument. A bore that flares from the mouthpiece reduces resistance to the breath, while a bore that narrows from the mouth increases it, compared to a cylinder.
Cylindrical bore
The diameter of a cylindrical bore remains constant along its length. The acoustic behavior depends on whether the instrument is stopped (closed at one end and open at the other), or open (at both ends). For an open pipe, the wavelength produced by the first normal mode (the fundamental note) is approximately twice the length of the pipe. The wavelength produced by the second normal mode is half that, that is, the length of the pipe, so its pitch is an octave higher; thus an open cylindrical bore instrument overblows at the octave. This corresponds to the second harmonic, and generally the harmonic spectrum of an open cylindrical bore instrument is strong in both even and odd harmonics. For a stopped pipe, the wavelength produced by the first normal mode is approximately four times the length of the pipe. The wavelength produced by the second normal mode is one third that, i.e. the 4/3 length of the pipe, so it
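A small numerical sketch of the open versus stopped cylindrical pipe behaviour described above (pipe length and speed of sound are illustrative values):

v = 343.0      # speed of sound in air, m/s (approximate, room temperature)
L = 0.6        # pipe length in metres, illustrative

f_open = v / (2 * L)      # open pipe: fundamental wavelength ~ 2L
f_stopped = v / (4 * L)   # stopped pipe: fundamental wavelength ~ 4L
print(f"open: {f_open:.0f} Hz, overblows to {2 * f_open:.0f} Hz (an octave up)")
print(f"stopped: {f_stopped:.0f} Hz, overblows to {3 * f_stopped:.0f} Hz (a twelfth up)")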
|
https://en.wikipedia.org/wiki/Abrikosov%20vortex
|
In superconductivity, a fluxon (also called an Abrikosov vortex or quantum vortex) is a vortex of supercurrent in a type-II superconductor, used by Alexei Abrikosov to explain magnetic behavior of type-II superconductors. Abrikosov vortices occur generically in the Ginzburg–Landau theory of superconductivity.
Overview
The solution is a combination of the fluxoid solution by Fritz London with the concept of the core of a quantum vortex by Lars Onsager.
In the quantum vortex, supercurrent circulates around the normal (i.e. non-superconducting) core of the vortex. The core has a size ξ, the superconducting coherence length (a parameter of Ginzburg–Landau theory). The supercurrents decay over a distance of about λ (the London penetration depth) from the core. Note that in type-II superconductors λ > ξ. The circulating supercurrents induce magnetic fields with a total flux equal to a single flux quantum Φ_0. Therefore, an Abrikosov vortex is often called a fluxon.
The magnetic field distribution of a single vortex far from its core can be described by the same equation as in London's fluxoid:
B(r) ≈ (Φ_0 / (2π λ²)) K_0(r/λ),
where K_0(z) is a zeroth-order modified Bessel function. Note that, according to the above formula, at r → 0 the magnetic field B(r) ∝ ln(λ/r), i.e. it diverges logarithmically. In reality, for r ≲ ξ the field is simply given by
B(0) ≈ (Φ_0 / (2π λ²)) ln κ,
where κ = λ/ξ is known as the Ginzburg–Landau parameter, which must be κ > 1/√2 in type-II superconductors.
Abrikosov vortices can be trapped in a type-II superconductor by chance, on defects, etc. Even if initially the type-II superconductor contains no vortices, when one applies a magnetic field larger than the lower critical field H_c1 (but smaller than the upper critical field H_c2), the field penetrates into the superconductor in the form of Abrikosov vortices. Each vortex obeys London's magnetic flux quantization and carries one quantum of magnetic flux Φ_0. Abrikosov vortices form a lattice, usually triangular, with the average vortex density (flux density) approximately equal to the externally applied magnetic field. As wit
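A small numerical sketch of the London-model field profile quoted above, using scipy's modified Bessel function; the penetration depth and coherence length are illustrative values, not data for any particular material:

import numpy as np
from scipy.special import k0    # modified Bessel function of the second kind, order 0

phi0 = 2.067833848e-15          # magnetic flux quantum, Wb
lam = 100e-9                    # illustrative London penetration depth, m
xi = 10e-9                      # illustrative coherence length, m (kappa = lam/xi = 10)

r = np.linspace(xi, 5 * lam, 5)                           # a few radii outside the core
B_far = phi0 / (2 * np.pi * lam**2) * k0(r / lam)         # single-vortex field for r >> xi
B_core = phi0 / (2 * np.pi * lam**2) * np.log(lam / xi)   # rough field value for r < xi

for ri, Bi in zip(r, B_far):
    print(f"r = {ri * 1e9:6.1f} nm   B = {Bi * 1e3:.3f} mT")
print(f"core (r < xi): about {B_core * 1e3:.1f} mT")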
|
https://en.wikipedia.org/wiki/Sacral%20architecture
|
Sacral architecture (also known as sacred architecture or religious architecture) is a religious architectural practice concerned with the design and construction of places of worship or sacred or intentional space, such as churches, mosques, stupas, synagogues, and temples. Many cultures devoted considerable resources to their sacred architecture and places of worship. Religious and sacred spaces are amongst the most impressive and permanent monolithic buildings created by humanity. Conversely, sacred architecture as a locale for meta-intimacy may also be non-monolithic, ephemeral and intensely private, personal and non-public.
Sacred, religious and holy structures often evolved over centuries and were the largest buildings in the world, prior to the modern skyscraper. While the various styles employed in sacred architecture sometimes reflected trends in other structures, these styles also remained unique from the contemporary architecture used in other structures. With the rise of Christianity and Islam, religious buildings increasingly became centres of worship, prayer and meditation.
The Western scholarly discipline of the history of architecture itself closely follows the history of religious architecture from ancient times until the Baroque period, at least. Sacred geometry, iconography, and the use of sophisticated semiotics such as signs, symbols and religious motifs are endemic to sacred architecture.
Spiritual aspects of religious architecture
Sacred or religious architecture is sometimes called sacred space.
Architect Norman L. Koonce has suggested that the goal of sacred architecture is to make "transparent the boundary between matter and mind, flesh and the spirit." In discussing sacred architecture, Protestant minister Robert Schuller suggested that "to be psychologically healthy, human beings need to experience their natural setting—the setting we were designed for, which is the garden." Meanwhile, Richard Kieckhefer suggests that entering into
|
https://en.wikipedia.org/wiki/Stokes%20relations
|
In physical optics, the Stokes relations, named after Sir George Gabriel Stokes, describe the relative phase of light reflected at a boundary between materials of different refractive indices. They also relate the transmission and reflection coefficients for the interaction. Their derivation relies on a time-reversal argument, so they only work when there is no absorption in the system.
The incoming field (E) is partly reflected and partly transmitted at the dielectric boundary to give rE and tE (where r and t are the amplitude reflection and transmission coefficients, respectively). Since there is no absorption, this system is reversible, as shown in the second picture (where the direction of the beams has been reversed). If this reversed process were actually taking place, there would be parts of the incoming fields (rE and tE) that are themselves transmitted and reflected at the boundary. In the third picture, this is shown by the coefficients r′ and t′ (for reflection and transmission of the reversed fields). Everything must interfere so that the second and third pictures agree: beam x has amplitude E and beam y has amplitude 0, providing the Stokes relations
t t′ = 1 − r²  and  r′ = −r.
The most interesting result here is that r′ = −r. Thus, whatever phase is associated with reflection on one side of the interface, it is 180 degrees different on the other side of the interface. For example, if r has a phase of 0, r′ has a phase of 180 degrees.
Explicit values for the transmission and reflection coefficients are provided by the Fresnel equations
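A quick numerical check of these relations using the normal-incidence Fresnel amplitude coefficients for a boundary between two lossless media (the refractive indices are chosen arbitrarily):

n1, n2 = 1.0, 1.5        # illustrative refractive indices (e.g. air to glass)

# Normal-incidence Fresnel amplitude coefficients, forward and reversed directions
r = (n1 - n2) / (n1 + n2)
t = 2 * n1 / (n1 + n2)
r_rev = (n2 - n1) / (n2 + n1)
t_rev = 2 * n2 / (n1 + n2)

print(r_rev == -r)                        # Stokes relation r' = -r
print(abs(t * t_rev + r**2 - 1) < 1e-12)  # Stokes relation t t' = 1 - r^2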
|
https://en.wikipedia.org/wiki/Superconducting%20coherence%20length
|
In superconductivity, the superconducting coherence length, usually denoted as ξ (Greek lowercase xi), is the characteristic exponent of the variations of the density of the superconducting component.
The superconducting coherence length is one of two parameters in the Ginzburg–Landau theory of superconductivity. It is given by:
ξ = √( ħ² / (2 m |α|) ),
where α is a constant in the Ginzburg–Landau equation for ψ, with the form α = α_0 (T − T_c).
In Landau mean-field theory, at temperatures T near the superconducting critical temperature T_c, ξ(T) ∝ (1 − T/T_c)^(−1/2). Up to a factor of √2, it is equivalent to the characteristic exponent describing a recovery of the order parameter away from a perturbation in the theory of second-order phase transitions.
In some special limiting cases, for example in the weak-coupling BCS theory of an isotropic s-wave superconductor, it is related to the characteristic Cooper pair size:
ξ_BCS = ħ v_F / (π Δ),
where ħ is the reduced Planck constant, m is the mass of a Cooper pair (twice the electron mass), v_F is the Fermi velocity, and Δ is the superconducting energy gap. The superconducting coherence length is a measure of the size of a Cooper pair (distance between the two electrons) and is of the order of 10⁻⁴ cm. The electron near or at the Fermi surface moving through the lattice of a metal produces behind itself an attractive potential of range of the order of 10⁻⁶ cm, the lattice distance being of order 10⁻⁸ cm. For a very authoritative explanation based on physical intuition, see the CERN article by V.F. Weisskopf.
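As a rough order-of-magnitude sketch of the weak-coupling BCS estimate ξ ≈ ħ v_F/(π Δ), with illustrative (not material-specific) values for the Fermi velocity and the gap:

import math

hbar = 1.054571817e-34             # reduced Planck constant, J*s
v_F = 1.0e6                        # Fermi velocity, m/s (illustrative)
delta = 1.0e-3 * 1.602176634e-19   # superconducting gap of 1 meV, in joules (illustrative)

xi = hbar * v_F / (math.pi * delta)    # BCS coherence length estimate
print(f"xi ~ {xi * 1e9:.0f} nm ({xi * 1e2:.1e} cm)")   # about 210 nm for these inputs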
The ratio κ = λ/ξ, where λ is the London penetration depth, is known as the Ginzburg–Landau parameter. Type-I superconductors are those with 0 < κ < 1/√2, and type-II superconductors are those with κ > 1/√2.
In strong-coupling, anisotropic and multi-component theories these expressions are modified.
|
https://en.wikipedia.org/wiki/Reactivity%E2%80%93selectivity%20principle
|
In chemistry the reactivity–selectivity principle or RSP states that a more reactive chemical compound or reactive intermediate is less selective in chemical reactions. In this context selectivity represents the ratio of reaction rates.
This principle was generally accepted until the 1970s when too many exceptions started to appear. The principle is now considered obsolete.
A classic example of perceived RSP found in older organic chemistry textbooks concerns the free-radical halogenation of simple alkanes. Whereas the relatively unreactive bromine reacts with 2-methylbutane to give predominantly 2-bromo-2-methylbutane, the reaction with the much more reactive chlorine results in a mixture of all four regioisomers.
Another example of RSP can be found in the selectivity of the reaction of certain carbocations with azides and water. The very stable triphenylmethyl carbocation derived from solvolysis of the corresponding triphenylmethyl chloride reacts 100 times faster with the azide anion than with water. When the carbocation is the very reactive tertiary adamantane carbocation (as judged from diminished rate of solvolysis) this difference is only a factor of 10.
Constant or inverse relationships are just as frequent. For example, a group of 3- and 4-substituted pyridines in their reactivity quantified by their pKa show the same selectivity in their reactions with a group of alkylating reagents.
The reason for the early success of RSP was that the experiments involved very reactive intermediates with reactivities close to kinetic diffusion control and as a result the more reactive intermediate appeared to react slower with the faster substrate.
General relationships between reactivity and selectivity in chemical reactions can successfully be explained by Hammond's postulate.
When reactivity-selectivity relationships do exist they signify different reaction modes. In one study the reactivity of two different free radical species (A, sulfur, B carbon) towards addition to
|
https://en.wikipedia.org/wiki/Body%20plan
|
A body plan, Bauplan (German plural Baupläne), or ground plan is a set of morphological features common to many members of a phylum of animals. The vertebrates share one body plan, while invertebrates have many.
This term, usually applied to animals, envisages a "blueprint" encompassing aspects such as symmetry, layers, segmentation, nerve, limb, and gut disposition. Evolutionary developmental biology seeks to explain the origins of diverse body plans.
Body plans have historically been considered to have evolved in a flash in the Ediacaran biota, filling the Cambrian explosion with the results; however, a more nuanced understanding of animal evolution suggests gradual development of body plans throughout the early Palaeozoic. Recent studies in animals and plants have started to investigate whether evolutionary constraints on body-plan structures can explain the presence of developmental constraints during embryogenesis, such as the phenomenon referred to as the phylotypic stage.
History
Among the pioneering zoologists, Linnaeus identified two body plans outside the vertebrates; Cuvier identified three; and Haeckel had four, as well as the Protista with eight more, for a total of twelve. For comparison, the number of phyla recognised by modern zoologists has risen to 36.
Linnaeus, 1735
In his 1735 book Systema Naturæ, Swedish botanist Linnaeus grouped the animals into quadrupeds, birds, "amphibians" (including tortoises, lizards and snakes), fish, "insects" (Insecta, in which he included arachnids, crustaceans and centipedes) and "worms" (Vermes). Linnaeus's Vermes included effectively all other groups of animals, not only tapeworms, earthworms and leeches but molluscs, sea urchins and starfish, jellyfish, squid and cuttlefish.
Cuvier, 1817
In his 1817 work, Le Règne Animal, French zoologist Georges Cuvier combined evidence from comparative anatomy and palaeontology to divide the animal kingdom into four body plans. Taking the central nervous system as the main organ system which controlled all the othe
|
https://en.wikipedia.org/wiki/Peanut%20sauce
|
Peanut sauce, satay sauce (saté sauce), bumbu kacang, sambal kacang, or pecel, is an Indonesian sauce made from ground roasted or fried peanuts, widely used in Indonesian cuisine and in many other dishes throughout the world.
Peanut sauce is used with meat and vegetables, with grilled skewered meat, such as satays, poured over vegetables as salad dressing such as in gado-gado, or as a dipping sauce.
Ingredients
The main ingredient is ground roasted peanuts, for which peanut butter can act as a substitute. Several different recipes for making peanut sauces exist, resulting in a variety of flavours, textures and consistency. A typical recipe usually contains ground roasted peanuts or peanut butter (smooth or crunchy), coconut milk, soy sauce, tamarind, galangal, garlic, and spices (such as coriander seed or cumin). Other possible ingredients are chili peppers, sugar, fried onion, and lemongrass. The texture and consistency (thin or thick) of a peanut sauce corresponds to the amount of water being mixed in it.
In Western countries, the readily and widely available peanut butter is often used as a substitute ingredient to make peanut sauce. To achieve authenticity, some recipes might insist on making roasted ground peanuts from scratch, using traditional stone mortar and pestle for grinding to achieve desired texture, graininess and earthy flavour of peanut sauce. This sauce is popularly applied on chicken skewers, beef satay or warm noodles.
Regional
Indonesia
One of the main characteristics of Indonesian cuisine is the wide applications of bumbu kacang (peanut sauce) in many Indonesian signature dishes, such as satay, gado-gado, karedok, ketoprak, rujak and pecel, or Chinese-influenced dishes such as siomay. It is usually added to main ingredients (meat or vegetable) to add taste, used as dipping sauce such as sambal kacang (a mixture of ground chilli and fried peanuts) for otak-otak or ketan or as a dressing on vegetables. Satays are commonly served with peanut
|
https://en.wikipedia.org/wiki/Siliceous%20sponge
|
The siliceous sponges form a major group of the phylum Porifera, consisting of classes Demospongiae and Hexactinellida. They are characterized by spicules made out of silicon dioxide, unlike calcareous sponges.
Individual siliachoates (silica skeleton scaffolding) can be arranged tightly within the sponginocyte or crosshatched and fused together. Siliceous spicules come in two sizes called megascleres and microscleres.
Systematics
Most studies support the monophyly of siliceous sponges.
The group, as a part of the phylum Porifera, has been named Silicispongia Schmidt, 1862 and Silicea Bowerbank, 1864. Silicarea is a proposed new phylum based on molecular studies of the phylum Porifera. It consists of the Poriferan classes Demospongiae and Hexactinellida. Some scientists believe that Porifera is polyphyletic/paraphyletic, and that some sponges, the Calcarea, are a separate phylum which was the first to diverge from the main line of kingdom Animalia. Silicarea is considered the next phylum to diverge from the primary animal lineage.
Ecology
Siliceous sponges are usually found in the marine ecosystem but they are occasionally found in freshwater.
During the Triassic, siliceous sponges grew reefs similar to calcarea of the modern era. During the Cretaceous period, diatoms became so successful that they significantly decreased the amount of silica present in sea water, after which "siliceous sponges could never again form reefs."
|
https://en.wikipedia.org/wiki/Earth%27s%20rotation
|
Earth's rotation or Earth's spin is the rotation of planet Earth around its own axis, as well as changes in the orientation of the rotation axis in space. Earth rotates eastward, in prograde motion. As viewed from the northern polar star Polaris, Earth turns counterclockwise.
The North Pole, also known as the Geographic North Pole or Terrestrial North Pole, is the point in the Northern Hemisphere where Earth's axis of rotation meets its surface. This point is distinct from Earth's North Magnetic Pole. The South Pole is the other point where Earth's axis of rotation intersects its surface, in Antarctica.
Earth rotates once in about 24 hours with respect to the Sun, but once every 23 hours, 56 minutes and 4 seconds with respect to other distant stars (see below). Earth's rotation is slowing slightly with time; thus, a day was shorter in the past. This is due to the tidal effects the Moon has on Earth's rotation. Atomic clocks show that the modern day is longer by about 1.7 milliseconds than a century ago, slowly increasing the rate at which UTC is adjusted by leap seconds. Analysis of historical astronomical records shows a slowing trend; the length of a day increased by about 2.3 milliseconds per century since the 8th century BCE.
Scientists reported that in 2020 Earth had started spinning faster, after consistently spinning slower than 86,400 seconds per day in the decades before. On June 29, 2022, Earth completed its spin in 1.59 milliseconds less than 24 hours, setting a new record. Because of that trend, engineers worldwide are discussing a 'negative leap second' and other possible timekeeping measures.
This increase in speed is thought to be due to various factors, including the complex motion of its molten core, oceans, and atmosphere, the effect of celestial bodies such as the Moon, and possibly climate change, which is causing the ice at Earth's poles to melt. The masses of ice account for the Earth's shape being that of an oblate spheroid, bulging around t
|
https://en.wikipedia.org/wiki/British%20Atomic%20Scientists%20Association
|
The British Atomic Scientists Association (ASA or BASA), was founded by Joseph Rotblat in 1946.
It was a politically neutral group, composed of eminent physicists and other scientists and was concerned with matters of British public policy regarding applications and dangers of nuclear physics (including nuclear weapons and nuclear power).
In so doing it also sought to inform fellow scientists and the public of the essential facts, usually via published papers and other documents.
Members
The vice-president (VP) was the executive head, while the president (P) was an honorary position.
Kathleen Lonsdale (VP, P 1967)
Harrie Massey
Nevill Mott
Joseph Rotblat (VP 1946)
Basil Schonland
See also
Atomic Energy Research Establishment
Nuclear physics
Pugwash group
Science policy
Franco-British Nuclear Forum
External links
Founding, activities and fall of BASA
|
https://en.wikipedia.org/wiki/The%20Ground%20of%20Arts
|
Robert Recorde's Arithmetic: or, The Ground of Arts was one of the first printed English textbooks on arithmetic and the most popular of its time. The Ground of Arts appeared in London in 1543 and went through around 45 more editions until 1700. Editors and contributors of new sections included John Dee, John Mellis, Robert Hartwell, Thomas Willsford, and finally Edward Hatton.
The text is in the format of a dialogue between master and student to facilitate learning arithmetic without a teacher.
|
https://en.wikipedia.org/wiki/P%C3%B3lya%20enumeration%20theorem
|
The Pólya enumeration theorem, also known as the Redfield–Pólya theorem and Pólya counting, is a theorem in combinatorics that both follows from and ultimately generalizes Burnside's lemma on the number of orbits of a group action on a set. The theorem was first published by J. Howard Redfield in 1927. In 1937 it was independently rediscovered by George Pólya, who then greatly popularized the result by applying it to many counting problems, in particular to the enumeration of chemical compounds.
The Pólya enumeration theorem has been incorporated into symbolic combinatorics and the theory of combinatorial species.
Simplified, unweighted version
Let X be a finite set and let G be a group of permutations of X (or a finite symmetry group that acts on X). The set X may represent a finite set of beads, and G may be a chosen group of permutations of the beads. For example, if X is a necklace of n beads in a circle, then rotational symmetry is relevant so G is the cyclic group Cn, while if X is a bracelet of n beads in a circle, rotations and reflections are relevant so G is the dihedral group Dn of order 2n. Suppose further that Y is a finite set of colors — the colors of the beads — so that Y^X is the set of colored arrangements of beads (more formally: Y^X is the set of functions f : X → Y). Then the group G acts on Y^X. The Pólya enumeration theorem counts the number of orbits under G of colored arrangements of beads by the following formula:
|Y^X / G| = (1/|G|) Σ_{g ∈ G} m^{c(g)},
where m is the number of colors and c(g) is the number of cycles of the group element g when considered as a permutation of X.
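As a concrete illustration of the unweighted formula, here is a short sketch counting necklaces of n beads in m colors under the cyclic group C_n, using the fact that rotation by k positions has gcd(n, k) cycles:

from math import gcd

def count_necklaces(n: int, m: int) -> int:
    """Orbits of m-colorings of n beads under the cyclic group C_n (Polya/Burnside)."""
    total = sum(m ** gcd(n, k) for k in range(n))   # m^{c(g)} summed over all n rotations g
    return total // n                               # divide by |G| = n

print(count_necklaces(3, 2))   # 4 distinct 2-colored necklaces of 3 beads
print(count_necklaces(6, 3))   # 130 distinct 3-colored necklaces of 6 beads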
Full, weighted version
In the more general and more important version of the theorem, the colors are also weighted in one or more ways, and there could be an infinite number of colors provided that the set of colors has a generating function with finite coefficients. In the univariate case, suppose that
f(t) = f_0 + f_1 t + f_2 t² + ···
is the generating function of the set of colors, so that there are f_w colors of weight w for each integer w ≥ 0
|
https://en.wikipedia.org/wiki/Coconut%20milk%20powder
|
Coconut milk powder is a fine, white powder used in Southeast Asian and other cuisines. Coconut milk powder is manufactured through the spray drying process of raw unsweetened coconut cream and is reconstituted with water for use in recipes that call for coconut milk. Many commercially available coconut milk powders list milk or casein among their ingredients.
See also
Powdered milk
|
https://en.wikipedia.org/wiki/Card%20reader
|
A card reader is a data input device that reads data from a card-shaped storage medium. The first were punched card readers, which read the paper or cardboard punched cards that were used during the first several decades of the computer industry to store information and programs for computer systems. Modern card readers are electronic devices that can read plastic cards embedded with either a barcode, magnetic strip, computer chip or another storage medium.
A memory card reader is a device used for communication with a smart card or a memory card.
A magnetic card reader is a device used to read magnetic stripe cards, such as credit cards.
A business card reader is a device used to scan and electronically save printed business cards.
Smart card readers
A smart card reader is an electronic device that reads smart cards and can be found in the following forms:
Keyboards with a built-in card reader
External devices and internal drive bay card reader devices for personal computers (PC)
Laptop models containing a built-in smart card reader and/or using flash upgradeable firmware.
External devices that can read a Personal identification number (PIN) or other information may also be connected to a keyboard (usually called "card readers with PIN pad"). This model works by supplying the integrated circuit on the smart card with electricity and communicating via protocols, thereby enabling the user to read and write to a fixed address on the card.
If the card does not use any standard transmission protocol, but uses a custom/proprietary protocol, it has the communication protocol designation T=14.
The latest PC/SC CCID specifications define a new smart card framework. This framework works with USB devices with the specific device class 0x0B. Readers with this class do not need device drivers when used with PC/SC-compliant operating systems, because the operating system supplies the driver by default.
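As a rough illustration of talking to a PC/SC smart card reader from application code, here is a minimal sketch using the third-party pyscard library; the reader index, the APDU bytes and the presence of an inserted card are all assumptions:

from smartcard.System import readers          # pyscard: PC/SC bindings for Python
from smartcard.util import toHexString

r = readers()                                  # list the PC/SC readers known to the system
print("Available readers:", r)

connection = r[0].createConnection()           # assume at least one reader is attached
connection.connect()                           # assume a card is inserted

SELECT = [0x00, 0xA4, 0x04, 0x00, 0x00]        # SELECT command APDU (illustrative bytes only)
data, sw1, sw2 = connection.transmit(SELECT)
print("Response:", toHexString(data), "Status: %02X %02X" % (sw1, sw2))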
PKCS#11 is an API designed to be platform-independent, defining a
|
https://en.wikipedia.org/wiki/Schuss
|
Schuss (, German for 'shot') was the first (then unofficial) mascot of the 1968 Winter Olympics in Grenoble, France, featuring a stylized cartoon character wearing skis. Schuss was seen on pins and small toys. Afterwards, every Olympic Games has had a mascot (excluding the 1972 Winter Olympics in Sapporo, Japan)
In alpine skiing, a schuss or schussboom is a straight downhill run at high speed, contrasting with a slalom, mogul, or ski jumping.
|
https://en.wikipedia.org/wiki/Scanned%20synthesis
|
Scanned synthesis represents a powerful and efficient technique for animating wave tables and controlling them in real time. Developed by Bill Verplank, Rob Shaw, and Max Mathews between 1998 and 1999 at Interval Research, Inc., it is based on the psychoacoustics of how we hear and appreciate timbres and on our motor control (haptic) abilities to manipulate timbres during live performance.
Scanned synthesis involves a slow dynamic system whose frequencies of vibration are below about 15 Hz. The ear cannot hear the low frequencies of the dynamic system. So, to make audible frequencies, the "shape" of the dynamic system, along a closed path, is scanned periodically. The "shape" is converted to a sound wave whose pitch is determined by the speed of the scanning function. Pitch control is completely separate from the dynamic system control. Thus timbre and pitch are independent. This system can be looked upon as a dynamic wave table. The model can be compared to a slowly vibrating string, or a two-dimensional surface obeying the wave equation.
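A minimal sketch of this idea (all parameters are illustrative, and the string model is deliberately crude): a slowly evolving damped "string" serves as the wave table, its shape sets the timbre, and the rate at which the closed path is scanned sets the pitch.

import numpy as np

sr = 44100                 # audio sample rate, Hz
n = 128                    # number of points on the slow dynamic system
pos = np.hanning(n)        # initial "shape" given to the string
vel = np.zeros(n)          # initial velocities
damping, stiffness = 0.999, 0.05

def step_string():
    # Advance the slow mass-spring string by one low-rate update (closed path via wrap-around).
    lap = np.roll(pos, 1) - 2 * pos + np.roll(pos, -1)   # discrete Laplacian
    vel[:] = damping * (vel + stiffness * lap)
    pos[:] = pos + vel

pitch = 220.0              # scan frequency in Hz, i.e. the perceived pitch
out = np.zeros(sr)         # one second of audio
phase = 0.0
for i in range(sr):
    if i % 512 == 0:       # update the slow system far less often than the audio rate
        step_string()
    out[i] = np.interp(phase * n, np.arange(n + 1), np.append(pos, pos[0]))
    phase = (phase + pitch / sr) % 1.0
# 'out' now holds one second of the scanned waveform; write it to a WAV file to listen.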
The following implementations of scanned synthesis are freely available:
Csound features the scanu and scans opcodes developed by Paris Smaragdis. This was the first publicly available implementation of scanned synthesis.
Pure Data features the 'pdp_scan~' and 'pdp_scanxy~' objects of the PDP extension.
Common Lisp Music in circular-scanned.clm
Scanned Synth VST from Humanoid Sound Systems was the first VST implementation of scanned synthesis, first released in March 2006 and still being actively developed. It is available from the Humanoid Sound Systems web site.
ScanSynthGL is another VST implementation of scanned synthesis by mdsp of Smartelectronix, also first released in March 2006. It is available from the KVRAudio forum. There is an unreleased beta version, some audio samples and a screenshot but no public version has been released yet.
|
https://en.wikipedia.org/wiki/DDObjects
|
DDObjects is a remoting framework for Borland Delphi and C++ Builder. A main goal in developing DDObjects has been not only to keep the code one has to implement in order to utilize DDObjects as simple as possible, but also to keep it very close to Delphi's usual style of event-driven programming.
DDObjects supports remote method calls, server callbacks, asynchronous calls, asynchronous callbacks, stateful and stateless objects and other features. DDObjects doesn't mimic other implementations such as DCOM or CORBA, which are generalized to a least common denominator, but makes use of Delphi's rich type system including Objects, Exceptions, Records, Sets and Enumerations.
DDObjects uses plain XML and HTTP as protocol, contains a broker component, a sourcecode generator as well as some new visual controls. DDObjects supports Delphi 5 to 7, 2005-XE2 (currently 32bit only) as well as C++ Builder 6, 2006 and 2009.
External links
Inter-process communication
|
https://en.wikipedia.org/wiki/Tobacco%20virtovirus%201
|
Tobacco virtovirus 1, informally called Tobacco mosaic satellite virus, Satellite tobacco mosaic virus (STMV), or tobacco mosaic satellite virus, is a satellite virus first reported in Nicotiana glauca from southern California, U.S. Its genome consists of linear positive-sense single-stranded RNA.
Tobacco virtovirus 1 is a small, icosahedral plant virus which worsens the symptoms of infection by Tobacco mosaic virus (TMV). Satellite viruses are some of the smallest possible reproducing units in nature; they achieve this by relying on both the host cell and a host-virus (in this case, TMV) for the machinery necessary for them to reproduce. The entire Tobacco virtovirus 1 particle consists of 60 identical copies of a single protein (CP) that make up the viral capsid (coating), and a 1063-nucleotide single-stranded RNA genome which codes for the capsid and one other protein of unknown function.
In a broader sense, the tobacco mosaic virus holds distinctive properties, which primarily include how it is distributed and the range of its hosts. It can be found within Nicotiana glauca plants, which are typically located in warmer areas, such as California in the United States and the South American regions of Bolivia and Argentina. Satellite viruses like Tobacco virtovirus 1 tend to be commonly located in the same tobacco tree plant (N. glauca), which can be described as a tall shrub with small leaves that shows signs of viral infection through its mosaic and yellow complexion. The satellite tobacco mosaic virus also has a variety of alternative helper viruses, which include those of tomatoes, tobacco, and peppers, but it has yet to be found in alternate crop plants.
Additionally, the tobacco mosaic virus has distinctive features in cells, particularly instances where virus crystals may form, as well as other protein bodies within unit membrane-bound structures. The membrane that surrounds these crystals contains many vesicles, which allows for genome r
|
https://en.wikipedia.org/wiki/Lochlainn%20O%27Raifeartaigh
|
Lochlainn O'Raifeartaigh (; 11 March 1933 – 18 November 2000) was an Irish physicist in the field of theoretical particle physics. He is best known for the O'Raifeartaigh Theorem, a result in unification theory, and the O'Raifeartaigh Model of supersymmetry breaking.
O'Raifeartaigh was born in Clontarf, Dublin in 1933, and attended St. Joseph's C.B.S. in Fairview and Castleknock College.
Most of his scientific career was centred on that city, where he obtained his first degrees at University College Dublin (BA in 1953 and MSc in Mathematical Physics in 1956), and spent from 1968 until his death as Senior Professor at the Dublin Institute for Advanced Studies. He obtained his doctorate from the University of Zurich in 1960, under Walter Heitler. He also visited many institutions, notably Madras, IHES Bures, and the Institute for Advanced Study in Princeton, New Jersey, but it was during an extended stay at Syracuse University (1964-8) that he made the discovery that established his reputation. This result, which became known as O'Raifeartaigh's no-go theorem, showed that it was impossible to combine internal and relativistic symmetries other than in a trivial fashion, thus ending a widespread quest by the particle physics community to achieve this fusion. The O'Raifeartaigh theorem was later generalized to a result known as the Coleman–Mandula theorem.
O'Raifeartaigh's prolific career in theoretical physics was manifested by many fundamental contributions to the application of symmetries in particle physics. In the 1970s he showed that the new supersymmetries could provide a mechanism (O'Raifeartaigh's mechanism) for circumventing his no-go theorem which had assumed only classical Lie group symmetries. In the 1980s he applied non-Abelian gauge theory to the analysis of magnetic monopoles. His interests encompassed the spin-statistics theorem, Kac–Moody and W-algebras, and included early contributions to the theory of non-invariance (dynamical) groups, among much el
|
https://en.wikipedia.org/wiki/Motty
|
Motty (11 July – 21 July 1978) was the only proven hybrid between an Asian and an African elephant. The male calf was born in Chester Zoo, to Asian mother Sheba and African father Jumbolino. He was named after George Mottershead, who founded the Chester Zoo in 1931.
Appearance
Motty's head and ears were morphologically like Loxodonta (African), while the toenail numbers, with five on the front feet and four on the hind, were those of Elephas (Asian). The trunk had a single trunk finger as seen in Elephas, but the trunk length was more similar to Loxodonta. His vertebral column showed a Loxodonta profile above the shoulders, transitioning to the convex hump profile of Elephas below the shoulders.
Cause of death
Because he was born six weeks premature, Motty was considered underweight. Despite intensive human care, Motty died of an umbilical infection 10 days after his birth, on 21 July. The necropsy revealed the cause of death to be necrotizing enterocolitis and E. coli septicaemia, present in both his colon and the umbilical cord.
Preservation
His body was preserved by a private company, and is a mounted specimen at the Natural History Museum in London.
Other hybrids
The straight-tusked elephant, an extinct elephant whose closest extant relative is the African forest elephant, interbred with the Asian elephant, as recovered DNA has shown.
Although the Asian elephant Elephas maximus and the African elephant Loxodonta africana belong to different genera, they share the same number of chromosomes, thus making hybridisation possible.
See also
List of individual elephants
|
https://en.wikipedia.org/wiki/Push-IMAP
|
Push-IMAP, which is otherwise known as P-IMAP or Push extensions for Internet Message Access Protocol, is an email protocol designed as a faster way to synchronise a mobile device like a PDA or smartphone to an email server.
It was developed by Oracle and other partners, and based on IMAP with additional enhancements for optimization in a mobile setting. It was submitted as input to the Lemonade Profile IETF Working Group - but was not included in the resulting RFC 4550.
The protocol
The protocol was designed to provide for a secure way to automatically keep communicating new messages between a server and a mobile device like a PDA or Smartphone. It should reduce the time and effort needed to synchronize messages between the two by using an open connection that is kept alive by some kind of heartbeat. To reduce necessary bandwidth, it uses compression and command macros.
Additionally, P-IMAP features a mechanism for sending email that is derived from (but not identical to) SMTP, and so a rich email service is provided using a single connection.
P-IMAP should not be viewed as an alternative to the IMAP IDLE command (RFC 2177). In fact, IDLE is one of the required mechanisms for a P-IMAP server to notify the client (optional notifications are SMS or WAP Push).
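For readers unfamiliar with the mechanism, the following is a minimal, illustrative sketch of plain IMAP IDLE (RFC 2177), the required notification channel mentioned above, written against a hypothetical server; the host name, account and tag strings are placeholders, and nothing here is specific to the P-IMAP extensions themselves.

```python
# Minimal illustration of IMAP IDLE (RFC 2177), the server-notification
# mechanism every P-IMAP server is required to support. Host name and
# credentials are placeholders; real P-IMAP additionally layers compression,
# command macros and an SMTP-derived submission path on top of this.
import socket
import ssl

HOST = "imap.example.com"      # hypothetical server
USER = "user@example.com"      # hypothetical account
PASSWORD = "secret"

context = ssl.create_default_context()
with socket.create_connection((HOST, 993)) as raw_sock:
    with context.wrap_socket(raw_sock, server_hostname=HOST) as tls:
        reader = tls.makefile("rb")

        def send(line):
            tls.sendall((line + "\r\n").encode())

        def wait_for(tag):
            # Print untagged responses until the tagged completion arrives.
            while True:
                line = reader.readline().decode(errors="replace").rstrip()
                print("S:", line)
                if line.startswith(tag + " "):
                    return

        print("S:", reader.readline().decode().rstrip())   # server greeting
        send(f"a1 LOGIN {USER} {PASSWORD}")
        wait_for("a1")
        send("a2 SELECT INBOX")
        wait_for("a2")

        # Enter IDLE: the connection stays open and the server pushes
        # untagged updates (e.g. "* 23 EXISTS") as new mail arrives.
        send("a3 IDLE")
        print("S:", reader.readline().decode().rstrip())    # "+ idling"
        tls.settimeout(25 * 60)        # clients re-issue IDLE periodically
        try:
            print("push:", reader.readline().decode(errors="replace").rstrip())
        except socket.timeout:
            print("no new mail before the timeout")

        send("DONE")
        wait_for("a3")
        send("a4 LOGOUT")
        wait_for("a4")
```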
Other mobile technologies
Although they are both based on IMAP, the Yahoo! Mail and iCloud push email services for iPhone do not use a standard form of P-IMAP. Yahoo! Mail uses a special UDP message to trigger an email synchronization, while Apple's iCloud push email uses a variant of XMPP.
See also
IMAP
Push email
Lemonade Profile
SyncML
|
https://en.wikipedia.org/wiki/Rydberg%20molecule
|
A Rydberg molecule is an electronically excited chemical species. Electronically excited molecular states are generally quite different in character from electronically excited atomic states. However, particularly for highly electronically excited molecular systems, the ionic core interaction with an excited electron can take on the general aspects of the interaction between the proton and the electron in the hydrogen atom. The spectroscopic assignment of these states follows the Rydberg formula, named after the Swedish physicist Johannes Rydberg, and they are called Rydberg states of molecules. Rydberg series are associated with partially removing an electron from the ionic core.
Each Rydberg series of energies converges on an ionization energy threshold associated with a particular ionic core configuration. These quantized Rydberg energy levels can be associated with the quasiclassical Bohr atomic picture. The closer the energy lies to the ionization threshold, the higher the principal quantum number and the smaller the energy difference between near-threshold Rydberg states. As the electron is promoted to higher energy levels in a Rydberg series, its spatial excursion from the ionic core increases and the system comes closer to the quasiclassical Bohr picture.
The Rydberg states of molecules with low principal quantum numbers can interact with the other excited electronic states of the molecule. This can cause shifts in energy. The assignment of molecular Rydberg states often involves following a Rydberg series from intermediate to high principal quantum numbers. The energy of Rydberg states can be refined by including a correction called the quantum defect in the Rydberg formula. The quantum defect correction can be associated with the presence of a distributed ionic core.
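Schematically, the term energies of such a series are usually written with the quantum defect folded into the standard Rydberg formula; the notation below is the conventional one and is not taken from any particular source cited here.

```latex
% Rydberg series converging on an ionization threshold E_\infty,
% with the quantum defect \delta accounting for the distributed ionic core.
E_n \;=\; E_{\infty} \;-\; \frac{R_M}{\left(n - \delta\right)^{2}},
\qquad n = n_0,\, n_0 + 1,\, \dots
% R_M : mass-corrected Rydberg constant;  n : principal quantum number.
```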
The experimental study of molecular Rydberg states has been conducted with traditional methods for generations. However, the development of laser-based techniques such as Reson
|
https://en.wikipedia.org/wiki/Surface%20runoff
|
Surface runoff (also known as overland flow or terrestrial runoff) is the unconfined flow of water over the ground surface, in contrast to channel runoff (or stream flow). It occurs when excess rainwater, stormwater, meltwater, or water from other sources can no longer infiltrate into the soil quickly enough. This can occur when the soil is saturated with water to its full capacity and the rain arrives more quickly than the soil can absorb it. Surface runoff often occurs because impervious areas (such as roofs and pavement) do not allow water to soak into the ground. Furthermore, runoff can occur through either natural or human-made processes.
Surface runoff is a major component of the water cycle. It is the primary agent of soil erosion by water. The land area producing runoff that drains to a common point is called a drainage basin.
Runoff that occurs on the ground surface before reaching a channel can be a nonpoint source of pollution, as it can carry human-made contaminants or natural forms of pollution (such as rotting leaves). Human-made contaminants in runoff include petroleum, pesticides, fertilizers and others. Much agricultural pollution is exacerbated by surface runoff, leading to a number of downstream impacts, including nutrient pollution that causes eutrophication.
In addition to causing water erosion and pollution, surface runoff in urban areas is a primary cause of urban flooding, which can result in property damage, damp and mold in basements, and street flooding.
Generation
Surface runoff is defined as precipitation (rain, snow, sleet, or hail) that reaches a surface stream without ever passing below the soil surface. It is distinct from direct runoff, which is runoff that reaches surface streams immediately after rainfall or melting snowfall and excludes runoff generated by the melting of snowpack or glaciers.
Snow and glacier melt occur only in areas cold enough for these to form permanently. Typically snowmelt will peak in the spring and glacie
|
https://en.wikipedia.org/wiki/BioGRID
|
The Biological General Repository for Interaction Datasets (BioGRID) is a curated biological database of protein-protein interactions, genetic interactions, chemical interactions, and post-translational modifications created in 2003 (originally referred to simply as the General Repository for Interaction Datasets (GRID)) by Mike Tyers, Bobby-Joe Breitkreutz, and Chris Stark at the Lunenfeld-Tanenbaum Research Institute at Mount Sinai Hospital. It strives to provide a comprehensive curated resource for all major model organism species while attempting to remove redundancy to create a single mapping of data. Users of The BioGRID can search for their protein, chemical or publication of interest and retrieve annotation as well as curated data, as reported by the primary literature and compiled by in-house large-scale curation efforts. The BioGRID is hosted in Toronto, Ontario, Canada and Dallas, Texas, United States and is partnered with the Saccharomyces Genome Database, FlyBase, WormBase, PomBase, and the Alliance of Genome Resources. The BioGRID is funded by the NIH and CIHR. BioGRID is an observer member of the International Molecular Exchange Consortium (IMEx).
History
The BioGRID was originally published and released as simply the General Repository for Interaction Datasets but was later renamed to the BioGRID in order to more concisely describe the project, and help distinguish it from several GRID Computing projects with a similar name. Originally separated into organism specific databases, the newest version now provides a unified front end allowing for searches across several organisms simultaneously. The BioGRID was developed initially as a project at the Lunenfeld-Tanenbaum Research Institute at Mount Sinai Hospital but has since expanded to include teams at the Institut de Recherche en Immunologie et en Cancérologie at the Université de Montréal and the Lewis-Sigler Institute for Integrative Genomics at Princeton University. The BioGRID's original focus wa
|
https://en.wikipedia.org/wiki/Aromatic%20ring%20current
|
An aromatic ring current is an effect observed in aromatic molecules such as benzene and naphthalene. If a magnetic field is directed perpendicular to the plane of the aromatic system, a ring current is induced in the delocalized π electrons of the aromatic ring. This is a direct consequence of Ampère's law; since the electrons involved are free to circulate, rather than being localized in bonds as they would be in most non-aromatic molecules, they respond much more strongly to the magnetic field.
The ring current creates its own magnetic field. Outside the ring, this field is in the same direction as the externally applied magnetic field; inside the ring, the field counteracts the externally applied field. As a result, the net magnetic field outside the ring is greater than the externally applied field alone, and is less inside the ring.
Relevance to NMR spectroscopy
Aromatic ring currents are relevant to NMR spectroscopy, as they dramatically influence the chemical shifts of 1H nuclei ("protons") in aromatic molecules. The effect helps distinguish these nuclear environments and is therefore of great use in molecular structure determination. In benzene, the ring protons experience deshielding because the induced magnetic field has the same direction outside the ring as the external field; their chemical shift is 7.3 parts per million (ppm), compared with 5.6 ppm for the vinylic proton in cyclohexene. In contrast, any proton inside the aromatic ring experiences shielding, because there the two fields point in opposite directions. This effect can be observed in cyclooctadecanonaene ([18]annulene), whose 6 inner protons resonate at −3 ppm.
The situation is reversed in antiaromatic compounds. In the dianion of [18]annulene the inner protons are strongly deshielded at 20.8 ppm and 29.5 ppm with the outer protons significantly shielded (with respect to the reference) at −1.1 ppm. Hence a diamagnetic ring current or diatropic ring current is associated with aromaticity whereas a paratropic ring
|
https://en.wikipedia.org/wiki/Zfone
|
Zfone is software for secure voice communication over the Internet (VoIP), using the ZRTP protocol. It was created by Phil Zimmermann, the creator of the PGP encryption software. Zfone runs on top of existing SIP and RTP programs and should work with any SIP- and RTP-compliant VoIP program.
Zfone turns many existing VoIP clients into secure phones. It runs in the Internet Protocol stack on any Windows XP, Mac OS X, or Linux PC, and intercepts and filters all the VoIP packets as they go in and out of the machine, and secures the call on the fly. A variety of different software VoIP clients can be used to make a VoIP call. The Zfone software detects when the call starts, and initiates a cryptographic key agreement between the two parties, and then proceeds to encrypt and decrypt the voice packets on the fly. It has its own separate GUI, telling the user if the call is secure. Zfone describes itself to end-users as a "bump on the wire" between the VoIP client and the Internet, which acts upon the protocol stack.
Zfone's libZRTP SDK libraries are released under either the Affero General Public License (AGPL) or a commercial license. Note that only the libZRTP SDK libraries are provided under the AGPL. The parts of Zfone that are not part of the libZRTP SDK libraries are not licensed under the AGPL or any other open source license. Although the source code of those components is published for peer review, they remain proprietary. The Zfone proprietary license also contains a time bomb provision.
Zfone development appears to have stagnated, however, as the most recent version was released on 22 March 2009. In addition, since 29 January 2011 it has not been possible to download Zfone from the developer's website, because the download server has gone offline.
Platforms and specification
Availability – Mac OS X, Linux, and Windows as compiled programs as well as an SDK.
Encryption standards – Based on ZRTP, which uses 128- or 256-bit AES together with a 3072-bit key ex
|
https://en.wikipedia.org/wiki/Principal%20type
|
In type theory, a type system is said to have the principal type property if, given a term and an environment, there exists a principal type for this term in this environment, i.e. a type such that all other types for this term in this environment are an instance of the principal type.
The principal type property is a desirable one for a type system, as it provides a way to type expressions in a given environment with a type which encompasses all of the expressions' possible types, instead of having several incomparable possible types. Type inference for systems with the principal type property will usually attempt to infer the principal type.
For instance, the ML system has the principal type property and principal types for an expression can be computed by Robinson's unification algorithm, which is used by the Hindley–Milner type inference algorithm. However, many extensions to the type system of ML, such as polymorphic recursion, can make the inference of the principal type undecidable. Other extensions, such as Haskell's generalized algebraic data types, destroy the principal type property of the language, requiring the use of type annotations or the compiler to "guess" the intended type from among several options.
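As a rough sketch of the unification step mentioned above (not ML's actual implementation), the following implements Robinson-style first-order unification over simple type terms, the operation Hindley–Milner inference uses when computing principal types; the type representation and names are invented for the example.

```python
# Minimal sketch of Robinson-style first-order unification over type terms,
# the operation Hindley-Milner inference uses to compute principal types.
# Type variables are strings beginning with a quote (ML-style: "'a");
# other strings are base types; ("->", arg, result) is a function type.

def is_var(ty):
    return isinstance(ty, str) and ty.startswith("'")

def resolve(ty, subst):
    """Follow substitution links for a type variable."""
    while is_var(ty) and ty in subst:
        ty = subst[ty]
    return ty

def occurs(var, ty, subst):
    """Occurs check: does `var` appear inside `ty` under `subst`?"""
    ty = resolve(ty, subst)
    if ty == var:
        return True
    if isinstance(ty, tuple):
        return any(occurs(var, part, subst) for part in ty[1:])
    return False

def unify(a, b, subst=None):
    """Return a substitution making `a` and `b` equal, or raise TypeError."""
    subst = dict(subst or {})
    a, b = resolve(a, subst), resolve(b, subst)
    if a == b:
        return subst
    if is_var(a):
        if occurs(a, b, subst):
            raise TypeError("occurs check failed")
        subst[a] = b
        return subst
    if is_var(b):
        return unify(b, a, subst)
    if isinstance(a, tuple) and isinstance(b, tuple) and a[0] == b[0] and len(a) == len(b):
        for x, y in zip(a[1:], b[1:]):
            subst = unify(x, y, subst)
        return subst
    raise TypeError(f"cannot unify {a} with {b}")

# The identity function has principal type 'a -> 'a; unifying it against
# int -> 'b instantiates it, giving 'a = int and 'b = int.
print(unify(("->", "'a", "'a"), ("->", "int", "'b")))
# {"'a": 'int', "'b": 'int'}
```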
The principal typing property requires that, given a term, there exist a typing (i.e. a pair with a context and a type) which is an instance of all possible typings of the term. The principal typing property can be confused with the principal type property but is distinct. The principal type property relies on the context as an input to determine the type, but the principal typing property outputs the context as a result.
|
https://en.wikipedia.org/wiki/No%20Time%20Like%20the%20Past
|
"No Time Like the Past" is episode 112 of the American television anthology series The Twilight Zone. In this episode a man tries to escape the troubles of the 20th century by taking up residence in an idyllic small town in the 19th century.
Opening narration
Plot
Disgusted with 20th century problems such as world wars, atomic weapons and radioactive poisoning, Paul Driscoll solicits the help of his colleague Harvey and uses a time machine, intending to remake the present by altering past events.
Paul first travels to Hiroshima on August 6, 1945, and attempts to warn a Hiroshima police captain about the atomic bomb, but the captain dismisses him as insane. Paul then travels to a Berlin hotel room to assassinate Adolf Hitler in August 1939 (immediately before the outbreak of World War II the following month), but is interrupted when a housekeeper knocks on his door and later calls two SS guards to his room. On his third stop, Paul tries to change the course of the Lusitania on May 6, 1915, to prevent it from being torpedoed by a German U-boat, but the ship's captain questions his credibility.
Paul accepts the hypothesis that the past cannot be changed. He then uses the time machine to go to the town of Homeville, Indiana in 1881, resolving not to make any changes, but just to live out his life free of the problems of the modern age. Upon his arrival, he realizes that President James A. Garfield will be shot the next day, but resists the temptation to intervene. He stays at a boarding house in town and meets Abigail Sloan, a teacher. At one of the boarding house's dinners, a boarder named Hanford vehemently espouses American imperialism. Paul delivers an angry rebuttal in which he accuses Hanford of speaking from ignorance of war and a certainty that he himself will not have to take part in any fighting, while dropping numerous allusions to wars that have yet to take place. Abigail is impressed and privately tells him that she shares his views, having lost her father and two b
|
https://en.wikipedia.org/wiki/Memory%20and%20aging
|
Age-related memory loss, sometimes described as "normal aging" (also spelled "ageing" in British English), is qualitatively different from memory loss associated with types of dementia such as Alzheimer's disease, and is believed to have a different brain mechanism.
Mild cognitive impairment
Mild cognitive impairment (MCI) is a condition in which people face memory problems more often than the average person of their age. These symptoms, however, do not prevent them from carrying out normal activities and are not as severe as the symptoms of Alzheimer's disease (AD). Symptoms often include misplacing items, forgetting events or appointments, and having trouble finding words.
According to recent research, MCI is seen as the transitional state between cognitive changes of normal aging and Alzheimer's disease. Several studies have indicated that individuals with MCI are at an increased risk for developing AD, ranging from one percent to twenty-five percent per year; in one study twenty-four percent of MCI patients progressed to AD in two years and twenty percent more over three years, whereas another study indicated that the progression of MCI subjects was fifty-five percent in four and a half years. Some patients with MCI, however, never progress to AD.
Studies have also indicated patterns that are found in both MCI and AD. Much like patients with Alzheimer's disease, those with mild cognitive impairment have difficulty accurately defining words and using them appropriately in sentences when asked. While MCI patients performed worse on this task than the control group, AD patients performed worse overall. The abilities of MCI patients stood out, however, because of their ability to provide examples to compensate for their difficulties. AD patients failed to use any compensatory strategies, and therefore exhibited a difference in the use of episodic memory and executive functioning.
Normal aging
Normal aging is associated with a decline in various memory abili
|
https://en.wikipedia.org/wiki/Haag%E2%80%93%C5%81opusza%C5%84ski%E2%80%93Sohnius%20theorem
|
In theoretical physics, the Haag–Łopuszański–Sohnius theorem states that if both commuting and anticommuting generators are considered, then the only way to nontrivially mix spacetime and internal symmetries is through supersymmetry. The anticommuting generators must be spin-1/2 spinors which can additionally admit their own internal symmetry known as R-symmetry. The theorem is a generalization of the Coleman–Mandula theorem to Lie superalgebras. It was proved in 1975 by Rudolf Haag, Jan Łopuszański, and Martin Sohnius as a response to the development of the first supersymmetric field theories by Julius Wess and Bruno Zumino in 1974.
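Schematically, and using the conventional form of four-dimensional N = 1 supersymmetry rather than anything specific to the original paper, the nontrivial mixing the theorem allows consists of spinorial generators whose anticommutator closes on spacetime translations:

```latex
% Schematic N = 1 super-Poincare algebra in four dimensions (Weyl-spinor
% conventions); Q_\alpha are the anticommuting spin-1/2 generators and
% P_\mu the translation generators.
\{\, Q_\alpha, \bar{Q}_{\dot{\beta}} \,\} = 2\,(\sigma^\mu)_{\alpha\dot{\beta}}\, P_\mu,
\qquad
\{\, Q_\alpha, Q_\beta \,\} = 0,
\qquad
[\, Q_\alpha, P_\mu \,] = 0 .
```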
History
During the 1960s, a set of theorems investigating how internal symmetries can be combined with spacetime symmetries was proved, the most general being the Coleman–Mandula theorem. It showed that the Lie group symmetry of an interacting theory must necessarily be a direct product of the Poincaré group with some compact internal group. Unaware of this theorem, during the early 1970s a number of authors independently came up with supersymmetry, seemingly in contradiction with the theorem, since some of its generators transform non-trivially under spacetime transformations.
In 1974 Jan Łopuszański visited Karlsruhe from Wrocław shortly after Julius Wess and Bruno Zumino constructed the first supersymmetric quantum field theory, the Wess–Zumino model. Speaking to Wess, Łopuszański was interested in figuring out how these new theories managed to overcome the Coleman–Mandula theorem. While Wess was too busy to work with Łopuszański, his doctoral student Martin Sohnius was available. Over the next few weeks they devised a proof of their theorem after which Łopuszański went on to CERN where he worked with Rudolf Haag to significantly refine the argument and also extend it to the massless case. Later, after Łopuszański went back to Wrocław, Sohnius went to CERN to finish the paper with Haag, which was published in 1975.
The
|
https://en.wikipedia.org/wiki/Uncle%20Petros%20and%20Goldbach%27s%20Conjecture
|
Uncle Petros and Goldbach's Conjecture is a 1992 novel by Greek author Apostolos Doxiadis. It concerns a young man's interaction with his reclusive uncle, who sought to prove a famous unsolved mathematics problem, called Goldbach's Conjecture, that every even number greater than two is the sum of two primes. The novel discusses mathematical problems and some recent history of mathematics.
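For readers unfamiliar with the conjecture, a small self-contained sketch (not connected to the novel) that verifies it for even numbers up to a modest bound:

```python
# Brute-force check of Goldbach's conjecture -- every even number greater
# than 2 is the sum of two primes -- for even numbers up to a small bound.

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def goldbach_pair(n: int):
    """Return a pair of primes summing to the even number n, or None."""
    for p in range(2, n // 2 + 1):
        if is_prime(p) and is_prime(n - p):
            return p, n - p
    return None

for n in range(4, 10_001, 2):
    assert goldbach_pair(n) is not None, f"counterexample at {n}"
print("verified up to 10000; e.g. 10000 =", goldbach_pair(10_000))
```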
Plot
Petros Papachristos, a child prodigy, is brought by his father, a Greek businessman, to the University of Munich to have his genius verified by Constantin Caratheodory, a Greek-German mathematician. The boy immediately shows an excellent aptitude for mathematics and soon graduates from the University of Berlin. He later works as a postdoctoral researcher at the University of Cambridge, where he collaborates with the mathematicians Godfrey Harold Hardy, John Edensor Littlewood and Srinivasa Ramanujan. He is then offered a professorship in Munich, which he accepts because it is far from the great mathematical centres of the time and is therefore the ideal place to live in isolation while tackling the Goldbach conjecture.
After years of fruitless work, Petros arrives at an important intermediate result, which he prefers not to disclose so as not to reveal the object of his research and involuntarily help someone else working on the same problem. Later he comes to an even more important result and finally decides to publish it. He sends it to Hardy, whose answer, however, is disappointing: the same discovery has already been published by a young Austrian mathematician. Petros then falls into a deep depression, haunted by mental exhaustion and the fear that his genius might vanish. Mathematics also begins to enter his dreams, which often turn into nightmares. During a research visit at Trinity College, however, he learns from a young mathematician named Alan Turing of the existence of Gödel's incompleteness theorem.
Returning to Munich, he resumes his work w
|
https://en.wikipedia.org/wiki/Primordium
|
A primordium (; : primordia; synonym: anlage) in embryology, is an organ or tissue in its earliest recognizable stage of development. Cells of the primordium are called primordial cells. A primordium is the simplest set of cells capable of triggering growth of the would-be organ and the initial foundation from which an organ is able to grow. In flowering plants, a floral primordium gives rise to a flower.
Although it is a frequently used term in plant biology, the word is used in describing the biology of all multicellular organisms (for example: a tooth primordium in animals, a leaf primordium in plants or a sporophore primordium in fungi.)
Primordium development in plants
Plants produce both leaf and flower primordia cells at the shoot apical meristem (SAM). Primordium development in plants is critical to the proper positioning and development of plant organs and cells. The process of primordium development is intricately regulated by a set of genes that affect the positioning, growth and differentiation of the primordium. Genes including STM (shoot meristemless) and CUC (cup-shaped cotyledon) are involved in defining the borders of the newly formed primordium.
The plant hormone auxin has also been implicated in this process, with the new primordium being initiated at the placenta, where the auxin concentration is highest. There is still much to understand about the genes involved in primordium development.
Leaf primordia are groups of cells that will form into new leaves. These new leaves form near the top of the shoot and resemble knobby outgrowths or inverted cones. Flower primordia are the small buds at the ends of stems from which flowers will develop. Flower primordia start off as a crease or indentation and later form into a bulge. This bulging is caused by slower and less anisotropic, or directionally dependent, growth.
Primordium initiation
Primordia initiation is the precursor for the start of a primordium, and typically confers new growt
|
https://en.wikipedia.org/wiki/Connectedness%20locus
|
In one-dimensional complex dynamics, the connectedness locus of a parameterized family of one-variable holomorphic functions is a subset of the parameter space which consists of those parameters for which the corresponding Julia set is connected.
Examples
Without doubt, the most famous connectedness locus is the Mandelbrot set, which arises from the family of complex quadratic polynomials f_c(z) = z^2 + c.
The connectedness loci of the higher-degree unicritical families f_c(z) = z^d + c (where d ≥ 2) are often called 'Multibrot sets'.
For these families, the bifurcation locus is the boundary of the connectedness locus. This is no longer true in settings, such as the full parameter space of cubic polynomials, where there is more than one free critical point. For these families, even maps with disconnected Julia sets may display nontrivial dynamics. Hence here the connectedness locus is generally of less interest.
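Concretely, for the unicritical families above the Julia set of f_c is connected exactly when the orbit of the critical point 0 stays bounded, so membership in the connectedness locus can be tested numerically up to a cutoff; the following is a minimal escape-time sketch with arbitrarily chosen parameters.

```python
# Numerical test of membership in the connectedness locus of f_c(z) = z^d + c
# (the Mandelbrot set for d = 2, a Multibrot set for d > 2): the Julia set of
# f_c is connected iff the orbit of the critical point z = 0 stays bounded.
# The iteration cutoff and escape radius are arbitrary sketch parameters.

def in_connectedness_locus(c: complex, d: int = 2,
                           max_iter: int = 500,
                           escape_radius: float = 2.0) -> bool:
    z = 0j
    for _ in range(max_iter):
        z = z ** d + c
        if abs(z) > escape_radius:   # orbit escapes -> Julia set disconnected
            return False
    return True                      # bounded up to the cutoff -> assume connected

print(in_connectedness_locus(-1.0 + 0j))    # True: c = -1 lies in the Mandelbrot set
print(in_connectedness_locus(0.5 + 0.5j))   # False: the critical orbit escapes
print(in_connectedness_locus(0j, d=4))      # True: centre of the degree-4 Multibrot set
```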
External links
Complex analysis
Fractals
|
https://en.wikipedia.org/wiki/Sheep%E2%80%93goat%20hybrid
|
A sheep–goat hybrid (called a geep in popular media or sometimes a shoat) is the offspring of a sheep and a goat. While sheep and goats are similar and can be mated, they belong to different genera in the subfamily Caprinae of the family Bovidae. Sheep belong to the genus Ovis and have 54 chromosomes, while goats belong to the genus Capra and have 60 chromosomes. The offspring of a sheep–goat pairing is generally stillborn. Despite widespread shared pasturing of goats and sheep, hybrids are very rare, demonstrating the genetic distance between the two species. They are not to be confused with sheep–goat chimera, which are artificially created by combining the embryos of a goat and a sheep.
Characteristics
There is a long-standing belief in sheep–goat hybrids, which is presumably due to the animals' resemblance to each other. Some primitive varieties of sheep may be misidentified as goats. In Darwinism – An Exposition of the Theory of Natural Selection with Some of Its Applications (1889), Alfred Russel Wallace wrote:
[...] the following statement of Mr. Low: "It has been long known to shepherds, though questioned by naturalists, that the progeny of the cross between the sheep and goat is fertile. Breeds of this mixed race are numerous in the north of Europe." Nothing appears to be known of such hybrids either in Scandinavia or in Italy; but Professor Giglioli of Florence has kindly given me some useful references to works in which they are described. The following extract from his letter is very interesting: "I need not tell you that there being such hybrids is now generally accepted as a fact. Buffon (Supplements, tom. iii. p. 7, 1756) obtained one such hybrid in 1751 and eight in 1752. Sanson (La Culture, vol. vi. p. 372, 1865) mentions a case observed in the Vosges, France. Geoff. St. Hilaire (Hist. Nat. Gén. des reg. org., vol. iii. p. 163) was the first to mention, I believe, that in different parts of South America the ram is more usually crossed with th
|
https://en.wikipedia.org/wiki/Profunctor
|
In category theory, a branch of mathematics, profunctors are a generalization of relations and also of bimodules.
Definition
A profunctor (also named distributor by the French school and module by the Sydney school) φ from a category C to a category D, written
φ : C ↛ D,
is defined to be a functor
φ : D^op × C → Set,
where D^op denotes the opposite category of D and Set denotes the category of sets. Given morphisms f : d → d′, g : c → c′ respectively in D, C and an element x ∈ φ(d′, c), we write xf ∈ φ(d, c) and gx ∈ φ(d′, c′) to denote the actions.
Using the cartesian closure of Cat, the category of small categories, the profunctor φ can be seen as a functor
φ̂ : C → D̂,
where D̂ denotes the category Set^(D^op) of presheaves over D.
A correspondence from C to D is a profunctor D ↛ C.
Profunctors as categories
An equivalent definition of a profunctor φ : C ↛ D is a category whose objects are the disjoint union of the objects of C and the objects of D, and whose morphisms are the morphisms of C and the morphisms of D, plus zero or more additional morphisms from objects of D to objects of C. The sets φ(d, c) in the formal definition above are the hom-sets between objects of D and objects of C. (These are also known as het-sets, since the corresponding morphisms can be called heteromorphisms.) The previous definition can be recovered by the restriction of the hom-functor of this category to D^op × C.
This also makes it clear that a profunctor can be thought of as a relation between the objects of C and the objects of D, where each member of the relation is associated with a set of morphisms. A functor is a special case of a profunctor in the same way that a function is a special case of a relation.
Composition of profunctors
The composite ψφ of two profunctors
φ : C ↛ D and ψ : D ↛ E
is given by
ψφ = Lan_{Y_D}(ψ̂) ∘ φ̂,
where Lan_{Y_D}(ψ̂) is the left Kan extension of the functor ψ̂ along the Yoneda functor Y_D : D → D̂ of D (which to every object d of D associates the functor D(−, d) : D^op → Set).
It can be shown that
(ψφ)(e, c) = (∐_{d ∈ D} ψ(e, d) × φ(d, c)) / ∼,
where ∼ is the least equivalence relation such that (yv, x) ∼ (y, vx) whenever there exists a morphism v in D for which both actions are defined.
Equivalently, profunctor composition can be written using a coend:
(ψφ)(e, c) = ∫^{d ∈ D} ψ(e, d) × φ(d, c).
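As a toy illustration (restricted to discrete categories, where the coend reduces to a plain disjoint union because there are no non-identity morphisms to quotient by), a profunctor can be modelled as a family of het-sets and composition as a set-valued matrix product; all object and morphism names below are invented.

```python
# Toy model of profunctors between *discrete* categories (just sets of
# objects, no non-identity morphisms). A profunctor C -|-> D assigns a set
# of "heteromorphisms" to each pair (d, c); for discrete categories the
# coend above reduces to a plain disjoint union, so composition is a
# set-valued matrix product. Object and het names are invented.

C = ["c1", "c2"]
D = ["d1", "d2"]
E = ["e1"]

phi = {("d1", "c1"): {"f"}, ("d2", "c1"): {"g"}, ("d2", "c2"): {"h"}}   # C -|-> D
psi = {("e1", "d1"): {"p"}, ("e1", "d2"): {"q", "r"}}                   # D -|-> E

def compose(psi, phi, objs_E, objs_D, objs_C):
    """(psi . phi)(e, c) = disjoint union over d of psi(e, d) x phi(d, c)."""
    return {
        (e, c): {
            (d, y, x)                # tagged by d to keep the union disjoint
            for d in objs_D
            for y in psi.get((e, d), set())
            for x in phi.get((d, c), set())
        }
        for e in objs_E
        for c in objs_C
    }

print(compose(psi, phi, E, D, C))
# {('e1', 'c1'): {('d1', 'p', 'f'), ('d2', 'q', 'g'), ('d2', 'r', 'g')},
#  ('e1', 'c2'): {('d2', 'q', 'h'), ('d2', 'r', 'h')}}   (set order may vary)
```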
The bicategory of profunctors
Composit
|
https://en.wikipedia.org/wiki/Joan%20Roughgarden
|
Joan Roughgarden (born 13 March 1946) is an American ecologist and evolutionary biologist. She has engaged in theory and observation of coevolution and competition in Anolis lizards of the Caribbean, and recruitment limitation in the rocky intertidal zones of California and Oregon. She has more recently become known for her rejection of sexual selection, her theistic evolutionism, and her work on holobiont evolution.
Personal life and education
Roughgarden was born in Paterson, New Jersey, United States. She received a Bachelor of Science in biology (with Distinction and Phi Beta Kappa) and a Bachelor of Arts in Philosophy with highest honors from University of Rochester in 1968 and later a Ph.D. in biology from Harvard University in 1971. In 1998, Roughgarden came out as transgender and changed her name to Joan, making a coming out post on her website on her 52nd birthday.
Career
Roughgarden worked as an instructor and Assistant Professor of Biology at the University of Massachusetts Boston from 1970 to 1972. In 1972 she joined the faculty of the Department of Biology at Stanford University. After becoming full professor she retired in 2011, and became Emeritus Professor. She founded and directed the Earth Systems Program at Stanford and has received awards for service to undergraduate education. In 2012 she moved to Hawaii, where she became an adjunct professor at the Hawaiʻi Institute of Marine Biology. In her academic career, Roughgarden advised 20 Ph.D. students and 15 postdoctoral fellows.
Roughgarden has authored books and over 180 scientific articles. In addition to a textbook on ecological and evolutionary theory in 1979, Roughgarden has carried out ecological field studies with Caribbean lizards and with barnacles and their larvae along the California coast. In 2015, she wrote the novel Ram-2050, a science-fiction retelling of the Ramayana.
Research
Caribbean Anoles & Interspecific Competition
Roughgarden's early work in the 1970s and 80s h
|
https://en.wikipedia.org/wiki/Home%20Work%20Convention
|
Home Work Convention, created in 1996, is an International Labour Organization (ILO) Convention, which came into force in 2000. It offers protection to workers who are employed in their own homes.
Overview
It was established in 1996, with the preamble stating:
The Convention provides protection for home workers, giving them equal rights with regard to workplace health and safety, social security rights, access to training, remuneration, minimum age of employment, maternity protection, and other rights.
Objectives of the Home Work Convention
The term home work means remote work done by a person in a place other than the workplace of the employer. The term employer describes a person, who, either directly or through an intermediary, provides home work in pursuance of his or her business.
Each member of the Convention aims at continuously improving the situation of homeworkers. The intention of the Convention is to strengthen the principle of equal treatment, in particular to guarantee the establishment of the rights of homeworkers.
In addition, the convention has the specific purpose of protecting against discrimination in the following areas of employment: occupational safety, remuneration, social security protection, access to training, minimum age for taking up employment and maternity benefits.
Safety and health at work
National laws and regulations on safety and health at work also apply to home work. For work done at home, certain conditions must be adapted so that a safe and healthy working environment is ensured.
Ratifications
The convention has been ratified by 13 countries as of 2022:
|
https://en.wikipedia.org/wiki/Fifth%20metacarpal%20bone
|
The fifth metacarpal bone (metacarpal bone of the little finger or pinky finger) is the most medial and second-shortest of the metacarpal bones.
Surfaces
It presents on its base one facet on its superior surface, which is concavo-convex and articulates with the hamate, and one on its radial side, which articulates with the fourth metacarpal.
On its ulnar side is a prominent tubercle for the insertion of the tendon of the extensor carpi ulnaris muscle.
The dorsal surface of the body is divided by an oblique ridge, which extends from near the ulnar side of the base to the radial side of the head. The lateral part of this surface serves for the attachment of the fourth Interosseus dorsalis; the medial part is smooth, triangular, and covered by the extensor tendons of the little finger.
The palmar surface is similarly divided: Its lateral side (facing the fourth metacarpal) provides the origin for the third palmar interosseus, its medial side contains the insertion of opponens digiti quinti.
Clinical significance
A fracture of the transverse neck of the fourth and/or fifth metacarpal bone, secondary to axial loading, is known as a boxer's fracture. The fifth metacarpal bone is the most common bone to be injured when throwing a punch.
Ossification
The ossification process begins in the shaft during prenatal life, and in the head between the 11th and 37th months.
Additional images
See also
Metacarpus
First metacarpal bone
Second metacarpal bone
Third metacarpal bone
Fourth metacarpal bone
|
https://en.wikipedia.org/wiki/Fourth%20metacarpal%20bone
|
The fourth metacarpal bone (metacarpal bone of the ring finger) is shorter and smaller than the third.
The base is small and quadrilateral; its superior surface presents two facets, a large one medially for articulation with the hamate, and a small one laterally for the capitate.
On the radial side are two oval facets, for articulation with the third metacarpal; and on the ulnar side a single concave facet, for the fifth metacarpal.
Clinical relevance
A shortened fourth metacarpal bone can be a symptom of Kallmann syndrome, a genetic condition which results in the failure to commence or the non-completion of puberty.
A short fourth metacarpal bone can also be found in Turner syndrome, a disorder involving sex chromosomes.
A fracture of the transverse neck of the fourth and/or fifth metacarpal bone, secondary to axial loading, is known as a boxer's fracture.
Ossification
The ossification process begins in the shaft during prenatal life, and in the head between the 11th and 37th months.
Additional images
See also
Metacarpus
First metacarpal bone
Second metacarpal bone
Third metacarpal bone
Fifth metacarpal bone
|
https://en.wikipedia.org/wiki/Third%20metacarpal%20bone
|
The third metacarpal bone (metacarpal bone of the middle finger) is a little smaller than the second.
The dorsal aspect of its base presents on its radial side a pyramidal eminence, the styloid process, which extends upward behind the capitate; immediately distal to this is a rough surface for the attachment of the extensor carpi radialis brevis muscle.
The carpal articular facet is concave behind, flat in front, and articulates with the capitate.
On the radial side is a smooth, concave facet for articulation with the second metacarpal, and on the ulnar side two small oval facets for the fourth metacarpal.
Ossification
The ossification process begins in the shaft during prenatal life, and in the head between the 11th and 27th months.
Additional images
See also
Metacarpus
First metacarpal bone
Second metacarpal bone
Fourth metacarpal bone
Fifth metacarpal bone
|
https://en.wikipedia.org/wiki/Second%20metacarpal%20bone
|
The second metacarpal bone (metacarpal bone of the index finger) is the longest, and its base the largest, of all the metacarpal bones.
Human anatomy
Its base is prolonged upward and medialward, forming a prominent ridge.
It presents four articular facets, three on the upper surface and one on the ulnar side:
Of the facets on the upper surface:
the intermediate is the largest and is concave from side to side, convex from before backward for articulation with the lesser multangular;
the lateral is small, flat and oval for articulation with the greater multangular;
the medial, on the summit of the ridge, is long and narrow for articulation with the capitate.
The facet on the ulnar side articulates with the third metacarpal.
The extensor carpi radialis longus muscle is inserted on the dorsal surface and the flexor carpi radialis muscle on the volar surface of the base. The shaft gives origin to the first palmar interosseus and the first and second dorsal interossei.
This bone is often the most prone to damage from fast bowlers in cricket, as it is furthest down the bat handle on both left- and right-handers, and as such is in danger of being struck by balls that are pitched short.
Evolution
The articulation between the second metacarpal and the capitate is considered uniquely specialized in hominids. On the second metacarpal, the facet for the capitate is directed proximally, almost perpendicular to the facet for the third metacarpal, while the corresponding facet on the capitate is oriented distally. This is to receive compressive forces generated by the pad-to-pad opposition between the thumb and the index finger. In contrast, in apes, including fossil apes such as Dryopithecus and Proconsul, these facets are oriented in a sagittal plane. In quadrupedal monkeys these facets are oriented slightly differently due to their locomotor behaviour.
In Oreopithecus, a Miocene hominid that became extinct , the orientation of the facet on the second metacarpal i
|
https://en.wikipedia.org/wiki/HP%20Xpander
|
The HP Xpander (F1903A), also known as "Endeavour", was to be Hewlett-Packard's newest graphing calculator in 2002, but the project was cancelled in November 2001, months before it was scheduled to go into production. It had both a keyboard and a pen-based interface and a large grayscale screen, measured 162.6 mm by 88.9 mm by 22.9 mm, and ran on two rechargeable AA batteries. It had a semi-translucent green cover on a gray case and an expansion slot.
The underlying operating system was Windows CE 3.0. It had 8 MB RAM, 16 MB ROM, a geometry application, a 240×320 display, a Hitachi SH3 processor, and e-lessons. One of the obvious omissions in the Xpander was the lack of a computer algebra system (CAS).
Math Xpander
After discontinuing the Xpander, HP decided to release the Xpander software, named the Math Xpander, as a free-of-charge application that ran on Windows CE-based Pocket PC devices. It was hosted by Saltire Software, who had been involved in its design.
See also
List of Hewlett-Packard products: Pocket calculators
HP calculators
Casio ClassPad 300 — a similar device by Casio
TI PLT SHH1
HP Jornada X25
|
https://en.wikipedia.org/wiki/Chronology%20of%20computation%20of%20%CF%80
|
The table below is a brief chronology of computed numerical values of, or bounds on, the mathematical constant pi (π). For more detailed explanations for some of these calculations, see Approximations of π.
The last 100 decimal digits of the latest 2022 world record computation are:
4658718895 1242883556 4671544483 9873493812 1206904813 2656719174 5255431487 2142102057 7077336434 3095295560
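For illustration, a minimal sketch of how such digits can be computed with exact integer arithmetic, using Machin's 1706 arctangent formula from the chronology below; modern record computations instead rely on the Chudnovsky formula and FFT-based multiplication, and the digit count here is arbitrary.

```python
# Compute pi to a given number of decimal digits with integer arithmetic,
# using Machin's 1706 formula: pi/4 = 4*arctan(1/5) - arctan(1/239).
# Record computations use far faster series (e.g. Chudnovsky) and
# FFT-based multiplication; this is only a small illustration.

def arctan_inverse(x: int, digits: int) -> int:
    """arctan(1/x) scaled by 10**(digits+10), via the alternating Taylor series."""
    scale = 10 ** (digits + 10)          # extra guard digits absorb truncation
    term = scale // x
    total, n, sign = term, 1, 1
    while term:
        n += 2
        term //= x * x
        sign = -sign
        total += sign * (term // n)
    return total

def machin_pi(digits: int) -> str:
    pi = 4 * (4 * arctan_inverse(5, digits) - arctan_inverse(239, digits))
    pi //= 10 ** 10                      # drop the guard digits
    s = str(pi)
    return s[0] + "." + s[1:digits + 1]

print(machin_pi(50))
# 3.14159265358979323846264338327950288419716939937510
```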
Before 1400
1400–1949
1949–2009
2009–present
See also
History of pi
Approximations of π
|
https://en.wikipedia.org/wiki/Sun%20SPOT
|
Sun SPOT (Sun Small Programmable Object Technology) was a sensor node for a wireless sensor network, developed by Sun Microsystems and announced in 2007. The device used the IEEE 802.15.4 standard for its networking, and unlike other available sensor nodes, used the Squawk Java virtual machine.
After the acquisition of Sun Microsystems by Oracle Corporation, the SunSPOT platform was supported but its forum was shut down in 2012. A mirror of the old site is maintained for posterity.
Hardware
The completely assembled device fit in the palm of a hand.
Its first processor board included an ARM architecture 32 bit CPU with ARM920T core running at 180 MHz. It had 512 KB RAM and 4 MB flash memory. A 2.4 GHz IEEE 802.15.4 radio had an integrated antenna and a USB interface was included.
A sensor board included a three-axis accelerometer (with 2G and 6G range settings), temperature sensor, light sensor, 8 tri-color LEDs, analog and digital inputs, two momentary switches, and 4 high current output pins.
The unit used a 3.7V rechargeable 750 mAh lithium-ion battery, had a 30 uA deep sleep mode, and battery management provided by software.
Software
The device's use of Java device drivers was unusual, since Java is generally hardware-independent. The Sun SPOT used a small Java ME virtual machine, Squawk, which ran directly on the processor without an operating system. Both the Squawk VM and the Sun SPOT code are open source.
Standard Java development environments such as NetBeans can be used to create SunSPOT applications.
The management and deployment of applications were handled by ant scripts, which could be called from a development environment, the command line, or the tool provided with the SPOT SDK, "solarium".
The nodes communicate using the IEEE 802.15.4 standard including the base-station approach to sensor networking. Protocols such as Zigbee can be built on 802.15.4.
Sun Labs reported implementations of RSA and elliptic curve cryptography (ECC) optimized for small embedded devices.
Availabi
|
https://en.wikipedia.org/wiki/McCumber%20cube
|
In 1991, John McCumber created a model framework for establishing and evaluating information security (information assurance) programs, now known as The McCumber Cube.
This security model is depicted as a three-dimensional Rubik's Cube-like grid.
The concept of this model is that, in developing information assurance systems, organizations must consider the interconnectedness of all the different factors that impact them. To devise a robust information assurance program, one must consider not only the security goals of the program (see below), but also how these goals relate specifically to the various states in which information can reside in a system and the full range of available security safeguards that must be considered in the design. The McCumber model helps one to remember to consider all important design aspects without becoming too focused on any one in particular (i.e., relying exclusively on technical controls at the expense of requisite policies and end-user training).
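To make the grid concrete, here is a tiny sketch that enumerates the 27 cells a designer is asked to walk through; the goal and state names follow the lists below, while the safeguard names beyond "policy and practices" follow the commonly cited form of the model rather than this text.

```python
# Enumerate the 3 x 3 x 3 cells of the McCumber cube: every combination of a
# security goal, an information state, and a safeguard class that a design
# review is expected to consider. Safeguard names beyond "policy and
# practices" follow the commonly cited form of the model (assumption).
from itertools import product

goals = ["confidentiality", "integrity", "availability"]
states = ["storage", "transmission", "processing"]
safeguards = ["policy and practices", "human factors (training)", "technology"]

cells = list(product(goals, states, safeguards))
print(len(cells))            # 27 design questions in total
for goal, state, safeguard in cells[:3]:
    print(f"How does {safeguard} protect the {goal} of data in {state}?")
```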
Dimensions and attributes
Desired goals
Confidentiality: assurance that sensitive information is not intentionally or accidentally disclosed to unauthorized individuals.
Integrity: assurance that information is not intentionally or accidentally modified in such a way as to call into question its reliability.
Availability: ensuring that authorized individuals have both timely and reliable access to data and other resources when needed.
Information states
Storage: Data at rest (DAR) in an information system, such as that stored in memory or on a magnetic tape or disk.
Transmission: transferring data between information systems - also known as data in transit (DIT).
Processing: performing operations on data in order to achieve the desired objective.
Safeguards
Policy and practices: administrative controls, such as management directives, that provide a foundation for how information assurance is to be implemented within an organization. (examples: acceptable use policies or inci
|
https://en.wikipedia.org/wiki/Extreme%20Networks
|
Extreme Networks is an American networking company based in Morrisville, North Carolina. Extreme Networks designs, develops, and manufactures wired and wireless network infrastructure equipment and develops the software for network management, policy, analytics, security and access controls.
History
Extreme Networks was established by co-founders Gordon Stitt, Herb Schneider, and Stephen Haddock in 1996 in California, United States, with its first offices located in Cupertino, which later moved to Santa Clara, and later to San Jose. Early investors included Norwest Venture Partners, AVI Capital Management, Trinity Ventures, and Kleiner Perkins Caufield & Byers. Gordon Stitt was a co-founder and served as chief executive officer until August 2006, when he retired and became chairman of the board of directors.
The initial public offering in April 1999 was listed on the NASDAQ stock exchange as ticker "EXTR."
In April 2013, Charles W. Berger, who joined from ParAccel following its acquisition by Actian, replaced Oscar Rodriguez as CEO.
In November 2014, Extreme Networks was named the first Official Wi-Fi solutions provider of the NFL.
On April 19, 2015, Charles W. Berger resigned as CEO, and was replaced by Board Chairman Ed Meyercord.
In September 2020, analyst firm Omdia named Extreme Networks the fastest-growing vendor in cloud-managed networking.
In November 2021, Extreme Networks was named a Leader in the 2021 Gartner Magic Quadrant for Wired and Wireless LAN Access Infrastructure for the fourth consecutive year by Gartner analysts.
Acquisitions
In October 1996, Extreme Networks acquired Mammoth Technology.
Extreme Networks acquired Optranet in February 2001 and Webstacks in March 2001. Extreme had invested in both companies, which were purchased for about $73 million and $74 million respectively.
On September 12, 2013, Extreme Networks announced it would acquire Enterasys Networks for about $180 million.
On October 31, 2016, Extreme Networks announced that it completed th
|
https://en.wikipedia.org/wiki/Arthus%20reaction
|
In immunology, the Arthus reaction () is a type of local type III hypersensitivity reaction. Type III hypersensitivity reactions are immune complex-mediated, and involve the deposition of antigen/antibody complexes mainly in the vascular walls, serosa (pleura, pericardium, synovium), and glomeruli. This reaction is usually encountered in experimental settings following the injection of antigens.
History
The Arthus reaction was discovered by Nicolas Maurice Arthus in 1903. Arthus repeatedly injected horse serum subcutaneously into rabbits. After four injections, he found that there was edema and that the serum was absorbed slowly. Further injections eventually led to gangrene.
Process
The Arthus reaction involves the in situ formation of antigen/antibody complexes after the intradermal injection of an antigen. If the individual has circulating antibody either from passive immunity or because of prior encounter with the antigen, an Arthus reaction may occur. Typical of most mechanisms of the type III hypersensitivity, Arthus manifests as local vasculitis due to deposition of IgG-based immune complexes in dermal blood vessels. The pathogenesis of the Arthus reaction is often erroneously described to be the result of complement activation, which subsequently results in neutrophil infiltration along with the other hallmarks of inflammation. However, complement in and of itself likely has a minor role in the actual process of the Arthus reaction and other type III hypersensitivities. Specifically, mice lacking the common gamma chain subunit of the Fc receptors that is required for signaling by CD64 (FcγRI) and CD16A (FcγRIIIA) as well as FcεRI have a drastic reduction in their Arthus reaction severity. Furthermore, mice with intact Fc signaling whose complement is depleted through the use of cobra venom have only a minor reduction in their Arthus reaction scores. The reaction as a whole is driven by mast cell degranulation. Subsequent investigation demonstrated that
|
https://en.wikipedia.org/wiki/Cross-fostering
|
Cross-fostering is a technique used in animal husbandry, animal science, genetic and nature versus nurture studies, and conservation, whereby offspring are removed from their biological parents at birth and raised by surrogates, typically of a different species, hence 'cross.' This can also occasionally occur in nature.
Animal husbandry
Cross-fostering young animals is usually done to equalize litter size. Individual animals born in large litters are faced with much more competition for resources, such as breast milk, food and space, than individuals born in smaller litters. Herd managers will typically move some individuals from a large litter to a smaller litter where they will be raised by a non-biological parent. This is typically done in pig farming because litters with up to 15 piglets are common. A sow with a large litter may have difficulty producing enough milk for all piglets, or the sow may not have enough functional teats to feed all piglets simultaneously. When this occurs, smaller or weaker piglets are at risk of starving to death. Herd managers will often transfer some piglets from a large litter to another lactating sow which either has a smaller litter or has had her own biological piglets recently weaned. Herd managers will typically try to equalize litters by number and also weight of individuals. When done successfully, cross-fostering reduces piglet mortality.
In research
Cross-fostering can be used to study the impact of postnatal environment on genetic-linked diseases as well as on behavioural pattern. In behavioral studies, if cross-fostered offspring show a behavioral trait similar to their biological parents and dissimilar from their foster parents, a behavior can be shown to have a genetic basis. Similarly if the offspring develops traits dissimilar to their biological parents and similar to their foster parents environmental factors are shown to be dominant. In many cases there is a blend of the two, which shows both genes and environme
|
https://en.wikipedia.org/wiki/Optical%20lens%20design
|
Optical lens design is the process of designing a lens to meet a set of performance requirements and constraints, including cost and manufacturing limitations. Parameters include surface profile types (spherical, aspheric, holographic, diffractive, etc.), as well as radius of curvature, distance to the next surface, material type and optionally tilt and decenter. The process is computationally intensive, using ray tracing or other techniques to model how the lens affects light that passes through it.
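As a deliberately simplified illustration of the kind of modelling involved, the following traces a paraxial ray through a thin lens using ray transfer matrices; real lens-design codes trace exact skew rays through every surface and optimize against the metrics listed below, and the focal length and spacing values here are made up.

```python
# Simplified paraxial (first-order) ray trace using ray transfer matrices:
# a ray is (height y, angle u); free space and a thin lens are 2x2 matrices.
# This is only a toy sketch with invented values, not the exact ray tracing
# performed by real lens-design software.
import numpy as np

def free_space(d):            # propagation over a distance d
    return np.array([[1.0, d],
                     [0.0, 1.0]])

def thin_lens(f):             # refraction by a thin lens of focal length f
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A collimated ray 5 mm above the axis through a 100 mm thin lens,
# followed by 100 mm of air: it should cross the axis at the focal plane.
ray = np.array([5.0, 0.0])                 # [height (mm), angle (rad)]
system = free_space(100.0) @ thin_lens(100.0)
print(system @ ray)                        # ~[0.0, -0.05]: focused on axis
```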
Design requirements
Performance requirements can include:
Optical performance (image quality): This is quantified by various metrics, including encircled energy, modulation transfer function, Strehl ratio, ghost reflection control, and pupil performance (size, location and aberration control); the choice of the image quality metric is application specific.
Physical requirements such as weight, static volume, dynamic volume, center of gravity and overall configuration requirements.
Environmental requirements: ranges for temperature, pressure, vibration and electromagnetic shielding.
Design constraints can include realistic lens element center and edge thicknesses, minimum and maximum air-spaces between lenses, maximum constraints on entrance and exit angles, physically realizable glass index of refraction and dispersion properties.
Manufacturing costs and delivery schedules are also a major part of optical design. The price of an optical glass blank of given dimensions can vary by a factor of fifty or more, depending on the size, glass type, index homogeneity quality, and availability, with BK7 usually being the cheapest. Costs for larger and/or thicker optical blanks of a given material, above 100–150 mm, usually increase faster than the physical volume due to increased blank annealing time required to achieve acceptable index homogeneity and internal stress birefringence levels throughout the blank volume. Availability of glass blanks is driven by how frequently
|
https://en.wikipedia.org/wiki/FKM
|
FKM is a family of fluorocarbon-based fluoroelastomer materials defined by ASTM International standard D1418, and ISO standard 1629. It is commonly called fluorine rubber or fluoro-rubber. FKM is an abbreviation of Fluorine Kautschuk Material. All FKMs contain vinylidene fluoride as a monomer. Originally developed by DuPont (under the brand name Viton, now owned by Chemours), FKMs are today also produced by many companies, including: Daikin (Dai-El), 3M (Dyneon), Solvay S.A. (Tecnoflon), HaloPolymer (Elaftor), Gujarat Fluorochemicals (Fluonox), and several Chinese manufacturers. Fluoroelastomers are more expensive than neoprene or nitrile rubber elastomers. They provide additional heat and chemical resistance. FKMs can be divided into different classes on the basis of either their chemical composition, their fluorine content, or their cross-linking mechanism.
Types
On the basis of their chemical composition FKMs can be divided into the following types:
Type-1 FKMs are composed of vinylidene fluoride (VDF) and hexafluoropropylene (HFP). Copolymers are the standard type of FKMs showing a good overall performance. Their fluorine content is approximately 66 weight percent.
Type-2 FKMs are composed of VDF, HFP, and tetrafluoroethylene (TFE). Terpolymers have a higher fluorine content compared to copolymers (typically between 68 and 69 weight percent fluorine), which results in better chemical and heat resistance. Compression set and low temperature flexibility may be affected negatively.
Type-3 FKMs are composed of VDF, TFE, and perfluoromethylvinylether (PMVE). The addition of PMVE provides better low temperature flexibility compared to copolymers and terpolymers. Typically, the fluorine content of type-3 FKMs ranges from 62 to 68 weight percent.
Type-4 FKMs are composed of propylene, TFE, and VDF. While base resistance is increased in type-4 FKMs, their swelling properties, especially in hydrocarbons, are worsened. Typically, they have a fluorine content of about
|
https://en.wikipedia.org/wiki/Buckets%20of%20Rain
|
"Buckets of Rain" is a song by Bob Dylan, recorded on September 19, 1974 in New York City and released in 1975 on Dylan's critically acclaimed album Blood on the Tracks.
A September 18, 1974 outtake of the song was released in 2018 on the single-CD and 2-LP versions of The Bootleg Series Vol. 14: More Blood, More Tracks, with the complete recording sessions released on the deluxe edition of that album.
Background
In the officially released studio recording, "Buckets of Rain" is played in the key of E major. There are only two instruments: acoustic guitar and bass guitar. The guitar is not in standard tuning; rather, it is in "Open E" tuning.
Lyrically, "Buckets of Rain" is relatively simple, with five short verses addressing a lover. Oliver Trager describes the song thus:
Closing an otherwise desperate album with a light reappraisal of commitment, "Buckets of Rain" is a final, Sinatra-like tip of the hat sung with the playfulness of an old Piedmont songster. Though Dylan seems to liken the relationship he describes here with the ferocity of a deluge, he plaintively sings to his love, describing in light, sensual brushstrokes why he still finds her special. (88)
The melody is in fact virtually identical to that of "Seaside Shuffle", a 1972 song written by English musician Jona Lewie and recorded that year under the band name "Terry Dactyl and the Dinosaurs", although the mood and style of the two songs are very different.
Reception and legacy
Spectrum Culture included "Buckets of Rain" on a list of "Bob Dylan's 20 Greatest Songs of the 1970s". In an article accompanying the list, critic Jacob Nierenberg notes that, in spite of being known as the "spokesman of a generation", Dylan sounds on the song "like a man who wants to be nothing more than a man, to process his emotions and remind himself of the things he loves about the woman he loves. 'I like your smile/ And your fingertips/ I like the way that you move your hips/ I like the cool way you look at me', Dyla
|
https://en.wikipedia.org/wiki/Lavasoft
|
Adaware, formerly known as Lavasoft, is a software development company that produces spyware and malware detection software, including Adaware. It operates as a subsidiary of Avanquest, a division of Claranova.
The company offers products Adaware Antivirus, Adaware Protect, Adaware Safe Browser, Adaware Privacy, Adaware AdBlock, Adaware PC Cleaner and Adaware Driver Manager.
Adaware's headquarters are in Montreal, Canada, having previously been located in Gothenburg, Sweden, since 2002. Nicolas Stark and Ann-Christine Åkerlund established the company in Germany in 1999 with its flagship Adaware antivirus product. In 2011, Adaware was acquired by the Solaria Fund, a private equity fund front for entrepreneurs Daniel Assouline and Michael Dadoun, who have been accused of selling software that is available for free, including Adaware antivirus, prior to acquiring the company itself.
Adaware antivirus
Adaware Antivirus is an anti-spyware and anti-virus software program that, according to its developer, detects and removes malware, spyware and adware, computer viruses, dialers, Trojans, bots, rootkits, data miners, parasites, browser hijackers and tracking components. Adaware Web Companion, a component of Adaware antivirus, is frequently packaged alongside potentially unwanted programs. Adaware accomplishes this by striking deals with malware operators and site owners to distribute its software in exchange for money. Adaware Web Companion is known to collect user data and send it back to remote servers.
History
Adaware antivirus was originally developed, as Ad-Aware, in 1999 to highlight web beacons inside of Internet Explorer. On many websites, users would see a tiny pixelated square next to each web beacon, warning the user that the computer's IP address and other non-essential information was being tracked by this website. Over time, Ad-Aware added the ability to block those beacons, or ads.
In the 2008 Edition, Lavasoft bundled Ad-Aware Pro and Plus for
|
https://en.wikipedia.org/wiki/Adinkra%20symbols
|
Adinkra are symbols from Ghana that represent concepts or aphorisms. Adinkra are used extensively in fabrics, logos and pottery. They are incorporated into walls and other architectural features. Adinkra symbols appear on some traditional Akan goldweights. The symbols are also carved on stools for domestic and ritual use. Tourism has led to new departures in the use of the symbols in items such as T-shirts and jewellery.
The symbols have a decorative function but also represent objects that encapsulate evocative messages conveying traditional wisdom, aspects of life, or the environment. There are many symbols with distinct meanings, often linked with proverbs. In the words of Kwame Anthony Appiah, they were one of the means for "supporting the transmission of a complex and nuanced body of practice and belief".
History
Adinkra symbols were originally created by the Bono people of Gyaman. The Gyaman king, Nana Kwadwo Agyemang Adinkra, originally created or designed these symbols, naming them after himself. The Adinkra symbols were largely used on pottery, stools and other objects by the people of Bono. Adinkra cloth was worn by the king of Gyaman, and its usage spread from Bono Gyaman to Asante and other Akan kingdoms following Gyaman's defeat. It is said that the guild designers who made this cloth for the kings were forced to teach the Asantes the craft. Gyaman king Nana Kwadwo Agyemang Adinkra's first son, Apau, who was said to be well versed in the Adinkra craft, was forced to teach more about Adinkra cloths. Oral accounts attest that Adinkra Apau taught the process to a man named Kwaku Dwaku in a town near Kumasi. Over time, all Akan people, including the Fante, Akuapem and Akyem, made Adinkra symbols a major part of their culture, as they all originated from the ancient Bono Kingdom.
The oldest surviving adinkra cloth was made in 1817. The cloth features 15 stamped symbols, including nsroma (stars), dono ntoasuo (double Dono drums), and diamonds. The p
|
https://en.wikipedia.org/wiki/Palais%E2%80%93Smale%20compactness%20condition
|
The Palais–Smale compactness condition, named after Richard Palais and Stephen Smale, is a hypothesis for some theorems of the calculus of variations. It is useful for guaranteeing the existence of certain kinds of critical points, in particular saddle points. The Palais–Smale condition is a condition on the functional that one is trying to extremize.
In finite-dimensional spaces, the Palais–Smale condition for a continuously differentiable real-valued function is satisfied automatically for proper maps: functions which do not take unbounded sets into bounded sets. In the calculus of variations, where one is typically interested in infinite-dimensional function spaces, the condition is necessary because some extra notion of compactness beyond simple boundedness is needed. See, for example, the proof of the mountain pass theorem in section 8.5 of Evans.
Strong formulation
A continuously Fréchet differentiable functional F from a Hilbert space H to the reals satisfies the Palais–Smale condition if every sequence {u_n} ⊂ H such that:
{F(u_n)} is bounded, and
F′(u_n) → 0 in H
has a convergent subsequence in H.
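As a simple one-dimensional illustration (added here, not part of the original article), the function F(x) = arctan x shows how the condition can fail even on the real line: along the sequence u_n = n the values of F stay bounded and the derivative tends to zero, yet the sequence has no convergent subsequence.

```latex
% Sketch: F(x) = \arctan x violates the Palais–Smale condition on H = \mathbb{R}.
\[
  F(u_n) = \arctan n \;\to\; \tfrac{\pi}{2} \quad\text{(bounded)}, \qquad
  F'(u_n) = \frac{1}{1 + n^{2}} \;\to\; 0,
\]
\[
  \text{yet } u_n = n \text{ escapes to infinity and has no convergent subsequence.}
\]
```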
Weak formulation
Let X be a Banach space and be a Gateaux differentiable functional. The functional is said to satisfy the weak Palais–Smale condition if for each sequence such that
,
in ,
for all ,
there exists a critical point of with
|
https://en.wikipedia.org/wiki/Mountain%20pass%20theorem
|
The mountain pass theorem is an existence theorem from the calculus of variations, originally due to Antonio Ambrosetti and Paul Rabinowitz. Given certain conditions on a function, the theorem demonstrates the existence of a saddle point. The theorem is unusual in that there are many other theorems regarding the existence of extrema, but few regarding saddle points.
Statement
The assumptions of the theorem are:
I ∈ C¹(H; ℝ) is a functional from a Hilbert space H to the reals,
its derivative I′ is Lipschitz continuous on bounded subsets of H,
I satisfies the Palais–Smale compactness condition,
I[0] = 0,
there exist positive constants r and a such that I[u] ≥ a if ‖u‖ = r, and
there exists an element v in H with ‖v‖ > r such that I[v] ≤ 0.
If we define:
Γ = { g ∈ C([0, 1]; H) : g(0) = 0, g(1) = v }
and:
c = inf_{g ∈ Γ} max_{0 ≤ t ≤ 1} I[g(t)],
then the conclusion of the theorem is that c is a critical value of I.
Visualization
The intuition behind the theorem is in the name "mountain pass." Consider I as describing elevation. Then we know two low spots in the landscape: the origin, because I[0] = 0, and a far-off spot v where I[v] ≤ 0. In between the two lies a range of mountains (at ‖u‖ = r) where the elevation is high (higher than a > 0). In order to travel along a path g from the origin to v, we must pass over the mountains—that is, we must go up and then down. Since I is somewhat smooth, there must be a critical point somewhere in between. (Think along the lines of the mean-value theorem.) The mountain pass lies along the path that passes at the lowest elevation through the mountains. Note that this mountain pass is almost always a saddle point.
For a proof, see section 8.5 of Evans.
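A one-dimensional toy example (added for illustration, not taken from the article) makes the geometry concrete; in one dimension the "pass" is simply a local maximum.

```latex
% Toy example on H = \mathbb{R}: I[u] = u^2 - u^3.
%   I[0] = 0;  for |u| = r = 1/2 one has I[u] \ge a = 1/8 > 0;  and I[2] = -4 \le 0.
% Every path from 0 to v = 2 crosses the ridge near u = 2/3, so
\[
  I[u] = u^{2} - u^{3}, \qquad I'[u] = 2u - 3u^{2}, \qquad
  c = \inf_{g \in \Gamma}\,\max_{0 \le t \le 1} I[g(t)] = I\!\left[\tfrac{2}{3}\right] = \tfrac{4}{27},
\]
% and c is indeed a critical value, attained at the critical point u = 2/3.
```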
Weaker formulation
Let X be a Banach space. The assumptions of the theorem are:
I ∈ C(X; ℝ) and has a Gateaux derivative I′ : X → X* which is continuous when X and X* are endowed with the strong topology and the weak* topology respectively.
There exists such that one can find certain with
.
satisfies weak Palais–Smale condition on .
In this case there is a critical point of satisfying . Moreover, if we define
then
For a proof, see section 5.5 of Aubin and Ekeland.
|
https://en.wikipedia.org/wiki/Staring
|
Staring is a prolonged gaze or fixed look. In staring, one subject or person is the continual focus of visual interest for an amount of time. Staring can be interpreted as being either hostile, such as disapproval of another's behavior, or the result of intense concentration, interest or affection. Staring behavior can be considered a form of aggression, such as when it is an invasion of an individual's privacy in certain contexts, or a nonverbal cue to convey feelings of attraction in a social setting. The resultant behavior or action defines whether it is aggressive in nature (e.g. leering that results in street harassment), a passive or active expression of attraction, etc. However, to some extent staring often occurs accidentally; a person may simply be staring into space to maintain general awareness, or be lost in thought, stupefied, or unable to see. As such, the meaning of a person's staring behavior depends upon the attributions made by the observer.
In a staring contest, a mutual staring can take the form of a battle of wills. When eye contact is reciprocated, it could be an aggressive-dominating game where the loser is the person who looks away first.
Staring conceptually also implies confronting the inevitable – 'staring death in the face', or 'staring into the abyss'. Group staring evokes and emphasizes paranoia; such as the archetypal stranger walking into a saloon in a Western to be greeted by the stares of all the regulars. The fear of being stared at is called scopophobia.
Social factors
Children have to be socialised into learning acceptable staring behaviour. This is often difficult because children have different sensitivities to self-esteem. Staring is also sometimes used as a technique of flirting with an object of affection. However, being stared at, especially for a prolonged amount of time or very frequently by one person in particular, can cause discomfort to those subjected to it.
Jean-Paul Sartre discusses "The look" in Being an
|
https://en.wikipedia.org/wiki/Calbindin
|
Calbindins are three different calcium-binding proteins: calbindin, calretinin and S100G. They were originally described as vitamin D-dependent calcium-binding proteins in the intestine and kidney of chicks and mammals. They are now classified in different subfamilies as they differ in the number of Ca2+ binding EF hands.
Calbindin 1
Calbindin 1 or simply calbindin was first shown to be present in the intestine in birds and then found in the mammalian kidney. It is also expressed in a number of neuronal and endocrine cells, particularly in the cerebellum. It is a 28 kDa protein encoded in humans by the CALB1 gene.
Calbindin contains 4 active calcium-binding domains, and 2 modified domains that have lost their calcium-binding capacity. Calbindin acts as a calcium buffer and calcium sensor and can hold four Ca2+ in the EF-hands of loops EF1, EF3, EF4 and EF5. The structure of rat calbindin was originally solved by nuclear magnetic resonance and was one of the largest proteins then to be determined by this technique. The sequence of calbindin is 263 residues in length and has only one chain. The sequence consists mostly of alpha helices but beta sheets are not absent. According to the NMR PDB (PDB entry 2G9B) it is 44% helical with 14 helices containing 117 residues, and 4% beta sheet with 9 strands containing 13 residues. In 2018 the X-ray crystal structure of human calbindin was published (PDB entry 6FIE). There were differences observed between the nuclear magnetic resonance and crystal structure despite 98% sequence identity between the rat and human isoforms. Small angle X-ray scattering indicates that the crystal structure better predicts the properties of calbindin in solution compared with the structure determined by nuclear magnetic resonance.
Calbindin is a vitamin D–responsive gene in many tissues, in particular the chick intestine, where it has a clear function in mediating calcium absorption. In the brain, its synthesis is independent of vitamin-D.
|
https://en.wikipedia.org/wiki/Cardamine%20hirsuta
|
Cardamine hirsuta, commonly called hairy bittercress, is an annual or biennial species of plant in the family Brassicaceae, and is edible as a salad green. It is common in moist areas around the world.
Description
Depending on the climate, C. hirsuta may complete two generations in a year, one in the spring and one in the fall; also depending on the climate, the seeds may germinate in the fall and the plants may remain green throughout the winter before flowering in the spring. It often grows a rosette of leaves at the base of the stem; while there may be leaves on the upright stem, most of the leaves are part of the basal rosette. The leaves in this rosette are pinnately divided into 8–15 leaflets which have short stems connecting them to the petiole. These basal leaves are often 3.5–15 cm long. The leaflets are round to ovate in shape and may have smooth or dentate edges. The leaflet at the tip of the leaf (the terminal leaflet) is larger than the other leaflets and round to reniform in shape. The cauline leaves (attached to the upright stem) are also pinnately divided, with fewer leaflets, and generally smaller than the basal leaves; these leaves are borne on a petiole and are 1.2–5.5 cm long. The stems, petioles, and upper surfaces of the cauline leaves are sparsely hairy.
Plants of this species are usually erect and grow to no more than about from a stem which is either unbranched or branched near the base. The small white flowers are borne in a raceme without any bracts, soon followed by the seeds and often continuing to flower as the first seeds ripen. The flowers have (4) white petals (which may be lacking but are mostly present) which are 1.5–4.5 mm long and spatulate shaped. The flowers also have (4) stamens of equal height instead of the 6 which are found in most closely related plants. Pollens are elongated, approximately 32 microns in length.
Below the flowers there are 4 sepals, which are oblong in shape, 1.5–2.5 mm long and 0.3–0.7 mm wide.
|
https://en.wikipedia.org/wiki/French%20video%20game%20policy
|
French video game policy refers to the strategy and set of measures laid out by France since 2002 to maintain and develop a local video game development industry in order to preserve European market diversity.
History
Proposals for government support
The French game developer trade group, known as Association des Producteurs d'Oeuvres Multimedia (APOM, now "Syndicat National du Jeu Video"), was founded in 2001 by Eden Studios' Stéphane Baudet, Kalisto's Nicolas Gaume, former cabinet member and author Alain Le Diberder, financier and former journalist Romain Poirot-Lellig and Darkworks' Antoine Villette. APOM was established for game developers only, since game publishers were already grouped under the umbrella of the Syndicat des Editeurs de Logiciels de Loisirs (SELL).
In November 2002, the Prime Minister Jean-Pierre Raffarin visited Darkworks, and formally asked game developers to submit him a set of proposals, promising to meet again in Spring 2003 to give his feedback.
Confronted by the bankruptcies or difficulties of many studios such as Cryo, Kalisto, Arxel Tribe, APOM had to propose short term solutions as well as long term, growth-oriented measures to the French government. Video game professionals responded in March 2003 with a set of proposals, including several options to set up a long term financing system to develop quality video games for the European and international market.
Era of government support
On April 19, 2003, the Prime Minister announced the creation of the Ecole Nationale du Jeu Video et des Medias Interactifs, a national school dedicated to the education of game development executives and project managers. He also announced the creation of a 4 million euro prototyping fund for games managed by the Centre National de la Cinematographie, the "Fonds d'Aide pour l'Edition Multimédia" ("FAEM"), and that he would order a report to be drafted in order to determine and to answer the needs of the game development industry with regards to
|
https://en.wikipedia.org/wiki/SWAR
|
SIMD within a register (SWAR), also known by the name "packed SIMD" is a technique for performing parallel operations on data contained in a processor register. SIMD stands for single instruction, multiple data. Flynn's 1972 taxonomy categorises SWAR as "pipelined processing".
Many modern general-purpose computer processors have some provisions for SIMD, in the form of a group of registers and instructions to make use of them. SWAR refers to the use of those registers and instructions, as opposed to using specialized processing engines designed to be better at SIMD operations. It also refers to the use of SIMD with general-purpose registers and instructions that were not meant to do it at the time, by way of various novel software tricks.
SWAR architectures
A SWAR architecture is one that includes instructions explicitly intended to perform parallel operations across data that is stored in the independent subwords or fields of a register. A SWAR-capable architecture is one that includes a set of instructions that is sufficient to allow data stored in these fields to be treated independently even though the architecture does not include instructions that are explicitly intended for that purpose.
An early example of a SWAR architecture was the Intel Pentium with MMX, which implemented the MMX extension set. The original Intel Pentium, by contrast, did not include such instructions, but could still act as a SWAR architecture through careful hand-coding or compiler techniques.
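As a rough illustration of the software trick (not taken from the article, and written in Python purely for readability), the following sketch adds four packed 8-bit lanes held in one 32-bit integer while masking off the bits that would otherwise carry across lane boundaries:

```python
def swar_add_u8x4(a: int, b: int) -> int:
    """Lane-wise (mod 256) addition of four 8-bit values packed into 32 bits."""
    H = 0x80808080  # the top bit of each 8-bit lane
    L = 0x7F7F7F7F  # the low 7 bits of each lane
    # Adding only the low 7 bits keeps every carry inside its own lane;
    # the top bit of each lane is then patched back in with an XOR.
    partial = (a & L) + (b & L)
    return partial ^ ((a ^ b) & H)

# Example: lanes (0xFF, 0x01, 0x02, 0x03) + (0x01, 0x02, 0x03, 0x04)
# give (0x00, 0x03, 0x05, 0x07) with no carry leaking between lanes.
assert swar_add_u8x4(0xFF010203, 0x01020304) == 0x00030507
```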
Early SWAR architectures include DEC Alpha MVI, Hewlett-Packard's PA-RISC MAX, Silicon Graphics Incorporated's MIPS MDMX, and Sun's SPARC V9 VIS. Like MMX, many of the SWAR instruction sets are intended for faster video coding.
History of the SWAR programming model
Wesley A. Clark introduced partitioned subword data operations in the 1950s. This can be seen as a very early predecessor to SWAR. Leslie Lamport presented SWAR techniques in his paper titled "Multiple byte processing with full-word instruct
|
https://en.wikipedia.org/wiki/Any-source%20multicast
|
Any-source multicast (ASM) is the older and more usual form of multicast where multiple senders can be on the same group/channel, as opposed to source-specific multicast where a single particular source is specified.
Any-source multicast allows hosts to send traffic to a group identified by a single multicast IP address and to receive the traffic sent to that group. This method of multicasting allows hosts to transmit to and receive from groups without any restriction on the location of end-user computers, because any receiving member of the group may also become a transmission source. Bandwidth usage is nominal, allowing video conferencing to be used extensively. However, this type of multicast is vulnerable in that it allows unauthorized traffic and denial-of-service attacks.
Commonly, any-source multicast is used with IGMP version 2; however, it can also be used with PIM-SM, MSDP, and MBGP. ASM utilizes IPv4 in association with the previously stated protocols; in addition, the MLDv1 protocol is used for IPv6 addresses.
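A minimal receiver sketch (not from the article; the group address and port are made-up examples) shows the any-source model in practice: the socket joins a group with no source filter, so datagrams from any sender to that group are delivered.

```python
import socket
import struct

GROUP = "239.1.2.3"   # hypothetical administratively scoped IPv4 group
PORT = 5004           # hypothetical port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM, socket.IPPROTO_UDP)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
sock.bind(("", PORT))

# IP_ADD_MEMBERSHIP makes the host signal group membership (via IGMP on IPv4)
# without naming any particular source -- i.e. any-source multicast.
mreq = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, mreq)

data, sender = sock.recvfrom(1500)  # traffic may arrive from any sender
print(sender, len(data))
```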
Benefits
Scalability for large tasks
The reduction of group management
Ability to use existing technologies
See also
IP multicast
Internet Group Management Protocol
RTCP
Xcast
|
https://en.wikipedia.org/wiki/M82%20X-1
|
M82 X-1 is an ultra-luminous X-ray source located in the galaxy M82. It is a candidate intermediate-mass black hole, with the exact mass estimate varying from around 100 to 1000 solar masses. One of the most luminous ULXs ever known, its luminosity exceeds the Eddington limit for a stellar-mass object.
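For context (a standard astrophysics reference formula, not stated in the article), the Eddington luminosity of an object of mass M accreting hydrogen is

```latex
% Eddington limit: radiation pressure on electrons balances gravity.
\[
  L_{\mathrm{Edd}} \;=\; \frac{4\pi G M m_{\mathrm{p}} c}{\sigma_{\mathrm{T}}}
  \;\approx\; 1.26\times10^{38}\,\left(\frac{M}{M_{\odot}}\right)\ \mathrm{erg\,s^{-1}},
\]
% so a ~10 solar-mass (stellar-mass) accretor is capped near 1e39 erg/s, while a
% 100-1000 solar-mass black hole can radiate 10-100 times more without exceeding it.
```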
See also
M82 X-2
|
https://en.wikipedia.org/wiki/Institute%20Vienna%20Circle%20/%20Vienna%20Circle%20Society
|
The Institute Vienna Circle (IVC) ("Society for the Advancement of the Scientific World Conception") was founded in October 1991 as an international nonprofit organization dedicated to the work and influence of the Vienna Circle of Logical Empiricism. Since 2011 the IVC was established as a subunit (Department) of the Faculty of Philosophy and Education at the University of Vienna. In 2016 the title of the co-existing society was changed to "Vienna Circle Society" (VCS), which entertains a close co-operation with the IVC. The Institute’s founder and scientific director of the VCS is Friedrich Stadler, who serves as a permanent fellow of the IVC in parallel.
Objectives
Its goal is the documentation and continued development of the Vienna Circle's work in science and public education, areas that have been neglected until now, as well as the maintenance and application of logical-empirical, critical-rational and linguistic analytical thought and construction of a scientific philosophy and world view in conjunction with general socio-cultural trends. One of the Institute's main objectives is to democratize knowledge and science as a process of enlightenment, counteracting all forms of irrational, dogmatic or fundamentalist thought, in a societal context and taking into account the latest developments in international research.
Activities
The organisation of a large number of international workshops, conferences and seminars on the Vienna Circle, the philosophy of science and related topics.
Publication of a number of books within book series in German and English: Vienna Circle Institute Yearbook, the book series Vienna Circle Institute Library, the book series Wissenschaftliche Weltauffassung und Kunst, the book series Veröffentlichungen des Instituts Wiener Kreis.
Research projects: Moritz Schlick edition project in cooperation with the University of Rostock, Ernst Mach edition project. Numerous completed and ongoing research projects.
The IVC/VCS is involved in
|
https://en.wikipedia.org/wiki/Ecological%20stoichiometry
|
Ecological stoichiometry (more broadly referred to as biological stoichiometry) considers how the balance of energy and elements influences living systems. Similar to chemical stoichiometry, ecological stoichiometry is founded on constraints of mass balance as they apply to organisms and their interactions in ecosystems. Specifically, how does the balance of energy and elements affect, and how is it affected by, organisms and their interactions? Concepts of ecological stoichiometry have a long history in ecology, with early references to the constraints of mass balance made by Liebig, Lotka, and Redfield. These earlier concepts have been extended to explicitly link the elemental physiology of organisms to their food web interactions and ecosystem function.
Most work in ecological stoichiometry focuses on the interface between an organism and its resources. This interface, whether it is between plants and their nutrient resources or large herbivores and grasses, is often characterized by dramatic differences in the elemental composition of each part. The difference, or mismatch, between the elemental demands of organisms and the elemental composition of resources leads to an elemental imbalance. Consider termites, which have a tissue carbon:nitrogen ratio (C:N) of about 5 yet consume wood with a C:N ratio of 300–1000. Ecological stoichiometry primarily asks:
why do elemental imbalances arise in nature?
how is consumer physiology and life-history affected by elemental imbalances? and
what are the subsequent effects on ecosystem processes?
Elemental imbalances arise for a number of physiological and evolutionary reasons related to the differences in the biological make up of organisms, such as differences in types and amounts of macromolecules, organelles, and tissues. Organisms differ in the flexibility of their biological make up and therefore in the degree to which organisms can maintain a constant chemical composition in the face of variations in their
|
https://en.wikipedia.org/wiki/SUMO%20protein
|
In molecular biology, SUMO (Small Ubiquitin-like Modifier) proteins are a family of small proteins that are covalently attached to and detached from other proteins in cells to modify their function. This process is called SUMOylation (sometimes written sumoylation). SUMOylation is a post-translational modification involved in various cellular processes, such as nuclear-cytosolic transport, transcriptional regulation, apoptosis, protein stability, response to stress, and progression through the cell cycle.
SUMO proteins are similar to ubiquitin and are considered members of the ubiquitin-like protein family. SUMOylation is directed by an enzymatic cascade analogous to that involved in ubiquitination. In contrast to ubiquitin, SUMO is not used to tag proteins for degradation. Mature SUMO is produced when the last four amino acids of the C-terminus have been cleaved off to allow formation of an isopeptide bond between the C-terminal glycine residue of SUMO and an acceptor lysine on the target protein.
SUMO family members often have dissimilar names; the SUMO homologue in yeast, for example, is called SMT3 (suppressor of mif two 3). Several pseudogenes have been reported for SUMO genes in the human genome.
Function
SUMO modification of proteins has many functions. Among the most frequent and best studied are protein stability, nuclear-cytosolic transport, and transcriptional regulation. Typically, only a small fraction of a given protein is SUMOylated and this modification is rapidly reversed by the action of deSUMOylating enzymes. SUMOylation of target proteins has been shown to cause a number of different outcomes including altered localization and binding partners. The SUMO-1 modification of RanGAP1 (the first identified SUMO substrate) leads to its trafficking from cytosol to nuclear pore complex. The SUMO modification of ninein leads to its movement from the centrosome to the nucleus. In many cases, SUMO modification of transcriptional regulators correlates wi
|
https://en.wikipedia.org/wiki/Lemote
|
Jiangsu Lemote Tech Co., Ltd or Lemote () is a computer company established as a joint venture between the Jiangsu Menglan Group and the Chinese Institute of Computing Technology, involved in computer hardware and software products, services, and projects.
History
In June 2006, shortly after the Institute of Computing Technology of the Chinese Academy of Sciences developed the Loongson 2E, it needed a company to build end products, so the Jiangsu Menglan Group began a joint venture with the Institute of Computing Technology of the Chinese Academy of Sciences. The venture was named Jiangsu Lemote Tech Co., Ltd.
A computer was announced by Fuxin Zhang, an ICT researcher and Lemote staff member, who said the purpose of this project was to "provide everyone with a personal computer". The device is intended for low-income groups and rural students.
Hardware
Lemote builds small form factor computers including network computers and netbooks with Loongson Processors.
Netbook computers
The Yeeloong netbook computer is intended to be built on free software from the BIOS upwards, and for this reason it was used and recommended by the founder of the Free Software Foundation, Richard Stallman, as of September 2008 and 23 January 2010.
The specifications are:
Loongson 3A laptop
Loongson insiders revealed a new model based on the Loongson 3A quad-core laptop has been developed and was expected to launch in August 2011. With a similar design to the MacBook Pro from Apple Inc., it will carry a Linux operating system by default.
In September 2011, Lemote announced the Yeeloong-8133 13.3" laptop, featuring a 900 MHz quad-core Loongson-3A/2GQ CPU.
Desktop computers
Lynloong, all-in-one desktop computer, combined computer and monitor, without keyboard.
Myloong, desktop diskless network computer (NC), without monitor or keyboard.
Fuloong, see below.
Products in development
Hiloong, SOHO and family storage center.
Fuloong 2 series of small desktop computers
The Fuloong 2 series is a desktop com
|
https://en.wikipedia.org/wiki/Boojum%20%28superfluidity%29
|
In the physics of superfluidity, a boojum is a geometric pattern on the surface of one of the phases of superfluid helium-3, whose motion can result in the decay of a supercurrent. A boojum can result from a monopole singularity in the bulk of the liquid being drawn to, and then "pinned" on a surface. Although superfluid helium-3 only exists within a few thousandths of a degree of absolute zero, boojums have also been observed forming in various liquid crystals, which exist at a far broader range of temperatures.
The boojum was named by N. David Mermin of Cornell University in 1976. He was inspired by Lewis Carroll's poem The Hunting of the Snark. As in the poem, the appearance of a boojum can cause something (in this case, the supercurrent) to "softly and suddenly vanish away". Other, less whimsical names had already been suggested for the phenomenon, but Mermin was persistent. After an exchange of letters that Mermin describes as both "lengthy and hilarious", the editors of Physical Review Letters agreed to his terminology. Research using the term "boojum" in a superfluid context was first published in 1977, and the term has since gained widespread acceptance in broader areas of physics. Its Russian phonetic equivalent is "budzhum", which is also well accepted by physicists.
The plural of the term is "boojums", a word initially disliked by Mermin (who at first used "booja") but one which is defined unambiguously by Carroll in his poem.
|
https://en.wikipedia.org/wiki/Friability
|
In materials science, friability ( ), the condition of being friable, describes the tendency of a solid substance to break into smaller pieces under duress or contact, especially by rubbing. The opposite of friable is indurate.
Substances that are designated hazardous, such as asbestos or crystalline silica, are often said to be friable if small particles are easily dislodged and become airborne, and hence respirable (able to enter human lungs), thereby posing a health hazard.
Tougher substances, such as concrete, may also be mechanically ground down and reduced to finely divided mineral dust. However, such substances are not generally considered friable because of the degree of difficulty involved in breaking the substance's chemical bonds through mechanical means. Some substances, such as polyurethane foams, show an increase in friability with exposure to ultraviolet radiation, as in sunlight.
Friable is sometimes used metaphorically to describe "brittle" personalities who can be "rubbed" by seemingly-minor stimuli to produce extreme emotional responses.
General
A friable substance is any substance that can be reduced to fibers or finer particles by the action of a small amount of pressure or friction, such as rubbing or inadvertently brushing up against the substance. The term could also apply to any material that exhibits these properties, such as:
Ionically bound substances that are less than 1 kg/L in density
Clay tablets
Crackers
Mineral fibers
Polyurethane (foam)
Aerogel
Geological
Friable and indurated are terms used commonly in soft-rock geology, especially with sandstones, mudstones, and shales to describe how well the component rock fragments are held together.
Examples:
Clumps of dried clay
Chalk
Perlite
Medical
The term friable is also used to describe tumors in medicine. This is an important determination because tumors that are easily torn apart have a higher risk of malignancy and metastasis.
Examples:
Some forms of cancer, such
|
https://en.wikipedia.org/wiki/Renal%20blood%20flow
|
In the physiology of the kidney, renal blood flow (RBF) is the volume of blood delivered to the kidneys per unit time. In humans, the kidneys together receive roughly 25% of cardiac output, amounting to 1.2–1.3 L/min in a 70-kg adult male.
About 94% of this flow passes to the cortex. RBF is closely related to renal plasma flow (RPF), which is the volume of blood plasma delivered to the kidneys per unit time.
While the terms generally apply to arterial blood delivered to the kidneys, both RBF and RPF can be used to quantify the volume of venous blood exiting the kidneys per unit time. In this context, the terms are commonly given subscripts to refer to arterial or venous blood or plasma flow, as in RBFa, RBFv, RPFa, and RPFv. Physiologically, however, the differences in these values are negligible so that arterial flow and venous flow are often assumed equal.
Renal plasma flow
Renal plasma flow is the volume of plasma that reaches the kidneys per unit time. Renal plasma flow is given by the Fick principle:
RPF = (Ux × V) / (Pa − Pv)
This is essentially a conservation of mass equation which balances the renal inputs (the renal artery) and the renal outputs (the renal vein and ureter). Put simply, a non-metabolizable solute entering the kidney via the renal artery has two points of exit, the renal vein and the ureter. The mass entering through the artery per unit time must equal the mass exiting through the vein and ureter per unit time:
RPFa × Pa = RPFv × Pv + Ux × V
where Pa is the arterial plasma concentration of the substance, Pv is its venous plasma concentration, Ux is its urine concentration, and V is the urine flow rate. The product of flow and concentration gives mass per unit time.
As mentioned previously, the difference between arterial and venous blood flow is negligible, so RPFa is assumed to be equal to RPFv, thus
RPF × Pa = RPF × Pv + Ux × V
Rearranging yields the previous equation for RPF:
RPF = (Ux × V) / (Pa − Pv)
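A worked example with illustrative (made-up) numbers shows the units working out:

```latex
% Hypothetical values: U_x = 12 mg/mL, V = 1 mL/min,
% P_a = 0.020 mg/mL, P_v = 0.002 mg/mL.
\[
  \mathrm{RPF}
  = \frac{U_x \, V}{P_a - P_v}
  = \frac{12\ \mathrm{mg/mL} \times 1\ \mathrm{mL/min}}
         {0.020\ \mathrm{mg/mL} - 0.002\ \mathrm{mg/mL}}
  \approx 667\ \mathrm{mL/min}.
\]
```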
Measuring
Values of Pv are difficult to obtain in patients. In practice, PAH clearance is used instead to calculate the effective renal plasma flow (
|
https://en.wikipedia.org/wiki/Effective%20renal%20plasma%20flow
|
Effective renal plasma flow (eRPF) is a measure used in renal physiology to calculate renal plasma flow (RPF) and hence estimate renal function.
Because the extraction ratio of PAH is high, it has become commonplace to estimate the RPF by dividing the amount of PAH in the urine by the plasma PAH level, ignoring the level in renal venous blood. The value obtained in this way is called the effective renal plasma flow (eRPF) to indicate that the level in renal venous plasma was not measured.
The actual RPF can be calculated from eRPF as follows:
RPF = eRPF / extraction ratio
where the extraction ratio is the fraction of the compound entering the kidney that is excreted into the final urine.
When using a compound with an extraction ratio near 1, such as para-aminohippurate (PAH), eRPF approximates RPF. Therefore, PAH clearance can be used to estimate RPF.
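As an illustration (the numbers are chosen for the example, not stated in the article), PAH has an extraction ratio of roughly 0.9, so a measured eRPF of 600 mL/min corresponds to

```latex
\[
  \mathrm{RPF} \;=\; \frac{\mathrm{eRPF}}{\text{extraction ratio}}
  \;\approx\; \frac{600\ \mathrm{mL/min}}{0.9}
  \;\approx\; 667\ \mathrm{mL/min}.
\]
```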
|
https://en.wikipedia.org/wiki/Receiver%20%28information%20theory%29
|
The receiver in information theory is the receiving end of a communication channel. It receives encoded messages/information from the sender, who encoded them before transmission. Sometimes the receiver is modeled so as to include the decoder. Real-world receivers like radio receivers or telephones cannot be expected to receive as much information as predicted by the noisy channel coding theorem.
|
https://en.wikipedia.org/wiki/Conformational%20change
|
In biochemistry, a conformational change is a change in the shape of a macromolecule, often induced by environmental factors.
A macromolecule is usually flexible and dynamic. Its shape can change in response to changes in its environment or other factors; each possible shape is called a conformation, and a transition between them is called a conformational change. Factors that may induce such changes include temperature, pH, voltage, light in chromophores, concentration of ions, phosphorylation, or the binding of a ligand. Transitions between these states occur on a variety of length scales (tenths of Å to nm) and time scales (ns to s),
and have been linked to functionally relevant phenomena such as allosteric signaling and enzyme catalysis.
Laboratory analysis
Many biophysical techniques such as crystallography, NMR, electron paramagnetic resonance (EPR) using spin label techniques, circular dichroism (CD), hydrogen exchange, and FRET can be used to study macromolecular conformational change. Dual-polarization interferometry is a benchtop technique capable of providing information about conformational changes in biomolecules.
A specific nonlinear optical technique called second-harmonic generation (SHG) has been recently applied to the study of conformational change in proteins. In this method, a second-harmonic-active probe is placed at a site that undergoes motion in the protein by mutagenesis or non-site-specific attachment, and the protein is adsorbed or specifically immobilized to a surface. A change in protein conformation produces a change in the net orientation of the dye relative to the surface plane and therefore the intensity of the second harmonic beam. In a protein sample with a well-defined orientation, the tilt angle of the probe can be quantitatively determined, in real space and real time. Second-harmonic-active unnatural amino acids can also be used as probes.
Another method applies electro-switchable biosurfaces where proteins are place
|
https://en.wikipedia.org/wiki/Gravitational%20energy
|
Gravitational energy or gravitational potential energy is the potential energy a massive object has in relation to another massive object due to gravity. It is the potential energy associated with the gravitational field, which is released (converted into kinetic energy) when the objects fall towards each other. Gravitational potential energy increases when two objects are brought further apart.
Formulation
For two pairwise interacting point particles, the gravitational potential energy is given by
U = −G m1 m2 / r,
where m1 and m2 are the masses of the two particles, r is the distance between them, and G is the gravitational constant.
Close to the Earth's surface, the gravitational field is approximately constant, and the gravitational potential energy of an object reduces to
U = m g h,
where m is the object's mass, g is the gravity of Earth, and h is the height of the object's center of mass above a chosen reference level.
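The second expression follows from the first; a short derivation (added here for completeness, using the usual convention that the potential energy vanishes at infinite separation) shows how −GMm/r reduces to mgh near the surface of a body of mass M and radius R:

```latex
% Raising an object of mass m from the surface (r = R) to a height h << R:
\[
  \Delta U
  \;=\; -\frac{GMm}{R+h} - \left(-\frac{GMm}{R}\right)
  \;=\; \frac{GMm\,h}{R\,(R+h)}
  \;\approx\; \frac{GM}{R^{2}}\, m\, h
  \;=\; m g h,
  \qquad g \equiv \frac{GM}{R^{2}} .
\]
```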
Newtonian mechanics
In classical mechanics, two or more masses always have a gravitational potential. Conservation of energy requires that this gravitational field energy is always negative, so that it is zero when the objects are infinitely far apart. The gravitational potential energy is the potential energy an object has because it is within a gravitational field.
The force between a point mass, M, and another point mass, m, is given by Newton's law of gravitation:
F = G M m / r²
To get the total work done by an external force to bring the point mass m from infinity to the final distance R (for example, the radius of Earth) between the two mass points, the force is integrated with respect to displacement:
W = ∫_∞^R (G M m / r²) dr = [−G M m / r]_∞^R = −G M m / R
Because the potential energy is defined to vanish at infinite separation, the total work done on the object can be written as the gravitational potential energy:
U = −G M m / R
In the common situation where a much smaller mass is moving near the surface of a much larger object with mass , the gravitational field is nearly constant and so the expression for gravitational energy can be considerably simplified. The change in potential energy moving from the surface (a distance from the center) to a height
|
https://en.wikipedia.org/wiki/Dynamical%20genetics
|
Dynamical genetics concerns the study and the interpretation of those phenomena in which physiological enzymatic protein complexes alter the DNA, in a more or less sophisticated way.
The study of such mechanisms is important firstly since they promote useful functions, as for example the immune system recombination (on individual scale) and the crossing-over (on evolutionary scale); secondly since they may sometimes become harmful because of some malfunctioning, causing for example neurodegenerative disorders.
Typical examples of dynamical genetics subjects are:
dynamic mutations, term introduced by Robert I. Richards and Grant R. Sutherland to indicate mutations caused by other mutations; this phenomenon often involves the variable number tandem repeats, closely related to many neurodegenerative diseases, as the trinucleotide repeat disorders (interpreted by Anita Harding).
dynamic genome, term introduced by Nina Fedoroff and David Botstein to indicate the transposition discovered by Barbara McClintock.
immune V(D)J recombination (discovered by Susumu Tonegawa) and isotype class switching, terms introduced to indicate two kinds of immune system recombinations, which are the main cause of the enormous variety of antibodies.
horizontal DNA transfer (discovered by Frederick Griffith) that indicates the DNA transfer between two organisms.
crossing-over (discovered by Thomas Hunt Morgan) mediated by formation and unwinding (by means of peculiar enzymatic complexes such as helicase) of uncommon four-helix DNA structures known as G-quadruplexes (discovered by Martin Gellert, Marie N. Lipsett, and David R. Davies).
|
https://en.wikipedia.org/wiki/Turgor%20pressure
|
Turgor pressure is the force within the cell that pushes the plasma membrane against the cell wall.
It is also called hydrostatic pressure, and is defined as the pressure in a fluid measured at a certain point within itself when at equilibrium. Generally, turgor pressure is caused by the osmotic flow of water and occurs in plants, fungi, and bacteria. The phenomenon is also observed in protists that have cell walls. This system is not seen in animal cells, as the absence of a cell wall would cause the cell to lyse when under too much pressure. The pressure exerted by the osmotic flow of water is called turgidity. It is caused by the osmotic flow of water through a selectively permeable membrane. Movement of water through a semipermeable membrane from a volume with a low solute concentration to one with a higher solute concentration is called osmotic flow. In plants, this entails the water moving from the low concentration solute outside the cell into the cell's vacuole.
Etymology
1610s, from Latin turgidus "swollen, inflated, distended," from turgere "to swell," of unknown origin. Figurative use in reference to prose is from 1725. Related: Turgidly; turgidness.
Mechanism
Osmosis is the process in which water flows from a volume with a low solute concentration (osmolarity) to an adjacent region with a higher solute concentration until equilibrium between the two areas is reached. It is usually accompanied by a favorable increase in the entropy of the solvent. All cells are surrounded by a lipid bilayer cell membrane which permits the flow of water into and out of the cell while limiting the flow of solutes. When the cell is in a hypertonic solution, water flows out of the cell, which decreases the cell's volume. When in a hypotonic solution, water flows across the membrane into the cell and increases the cell's volume, while in an isotonic solution, water flows in and out of the cell at an equal rate.
Turgidity is the point at which the cell's membrane pushes against the cell
|
https://en.wikipedia.org/wiki/Overlap%20extension%20polymerase%20chain%20reaction
|
The overlap extension polymerase chain reaction (or OE-PCR) is a variant of PCR. It is also referred to as Splicing by overlap extension / Splicing by overhang extension (SOE) PCR. It is used to assemble multiple smaller double-stranded DNA fragments into a larger DNA sequence. OE-PCR is widely used to insert mutations at specific points in a sequence or to assemble custom DNA sequences from smaller DNA fragments into a larger polynucleotide.
Splicing of DNA molecules
As in most PCR reactions, two primers—one for each end—are used per sequence. To splice two DNA molecules, special primers are used at the ends that are to be joined. For each molecule, the primer at the end to be joined is constructed such that it has a 5' overhang complementary to the end of the other molecule. Following annealing when replication occurs, the DNA is extended by a new sequence that is complementary to the molecule it is to be joined to. Once both DNA molecules are extended in such a manner, they are mixed and a PCR is carried out with only the primers for the far ends. The overlapping complementary sequences introduced will serve as primers and the two sequences will be fused. This method has an advantage over other gene splicing techniques in not requiring restriction sites.
To get higher yields, some primers are used in excess as in asymmetric PCR.
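The following toy sketch (with made-up sequences and helper names, added for illustration) mimics the primer design described above: the reverse primer for the upstream fragment carries a 5' tail complementary to the start of the downstream fragment, so the extended products share an overlap that lets them prime each other in the final reaction.

```python
def revcomp(seq: str) -> str:
    """Reverse complement of a DNA sequence."""
    return seq.translate(str.maketrans("ACGT", "TGCA"))[::-1]

fragment_a = "ATGGCTAGCTTCAAGGTACC"   # hypothetical upstream fragment
fragment_b = "GGATCCGAATTCTAAGCTTGA"  # hypothetical downstream fragment
overlap = 10                          # length of the engineered overlap

# Chimeric reverse primer for fragment A: a 5' tail complementary to the
# first bases of fragment B, followed by sequence annealing to A's 3' end.
reverse_primer_a = revcomp(fragment_b[:overlap]) + revcomp(fragment_a[-overlap:])

# After the first round of PCR, A's product ends with the start of B; in the
# second round the shared overlap primes extension, and the outer primers
# amplify the fused product.
extended_a = fragment_a + fragment_b[:overlap]
fused = extended_a + fragment_b[overlap:]
assert fused == fragment_a + fragment_b
print(reverse_primer_a, fused)
```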
Introduction of mutations
To insert a mutation into a DNA sequence, a specific primer is designed. The primer may contain a single substitution or contain a new sequence at its 5' end. If a deletion is required, a sequence that is 5' of the deletion is added, because the 3' end of the primer must have complementarity to the template strand so that the primer can sufficiently anneal to the template DNA.
Following annealing of the primer to the template, DNA replication proceeds to the end of the template. The duplex is denatured and the second primer anneals to the newly formed DNA strand, containing sequence from the first primer. Repli
|
https://en.wikipedia.org/wiki/Egg-and-dart
|
Egg-and-dart, also known as egg-and-tongue, egg-and-anchor, or egg-and-star, is an ornamental device adorning the fundamental quarter-round, convex ovolo profile of moulding, consisting of alternating details on the face of the ovolo—typically an egg-shaped object alternating with a V-shaped element (e.g., an arrow, anchor, or dart). The device is carved or otherwise fashioned into ovolos composed of wood, stone, plaster, or other materials.
Egg-and-dart enrichment of the ovolo molding of the Ionic capital was used by ancient Greek builders, and so is found in ancient Greek architecture (e.g., the Erechtheion at the Acropolis of Athens); it was used later by the Romans and continues to adorn capitals of modern buildings built in Classical styles (e.g., the Ionic capitals of the Jefferson Memorial in Washington, D.C., or those of the Romanian Athenaeum in Bucharest). Its ovoid shape (the egg) and serrated leaf (the dart) are believed to represent the opium poppy and its leaves. The moulding design element continues in use in neoclassical architecture. As a mass-produced architectural motif at the turn of the 19th to the 20th century, it can, when seen alongside dentils (tooth-like blocks of wood in rows), be used to date a building to the Edwardian period, which began with the death of Queen Victoria in 1901.
Gallery
See also
Rais-de-cœur
Ionic order
|
https://en.wikipedia.org/wiki/YouOS
|
YouOS was a web desktop and web integrated development environment, developed by Webshaka until June 2008.
From 2006 to 2008 YouOS replicated the desktop environment of a modern operating system on a webpage, using JavaScript to communicate with the remote server. This allowed users to save their current desktop state to return to later, much like the hibernation feature in many true operating systems, and for multiple users to collaborate using a single environment. YouOS featured built-in sharing of music, documents and other files. The software was in alpha stage, and was referred to as a "web operating system" by WebShaka.
An application programming interface and an IDE (integrated development environment) were in development.
Over 700 applications were created using this API.
In 2006, YouOS was listed on the 7th position of PC World's list of "The 20 Most Innovative Products of the Year".
YouOS was shut down on July 30, 2008 because the developers had not actively developed it since November 2006. They have since moved on to other projects.
The youos.com domain name was acquired by a German startup company, Dynacrowd, in May 2015. The project name YouOS now represents a mobile platform for hyperlocal interaction used to operate the German refugee assistance system AngelaApp.
Parent Company
Webshaka was a messaging company most notable for making YouOS. It was founded by Samuel Hsiung, Jeff Mullen, Srini Panguluri and Joseph Wong.
|
https://en.wikipedia.org/wiki/Wold%27s%20theorem
|
In statistics, Wold's decomposition or the Wold representation theorem (not to be confused with the Wold theorem that is the discrete-time analog of the Wiener–Khinchin theorem), named after Herman Wold, says that every covariance-stationary time series can be written as the sum of two time series, one deterministic and one stochastic.
Formally,
Y_t = Σ_{j=0}^{∞} b_j ε_{t−j} + η_t
where:
Y_t is the time series being considered,
ε_t is an uncorrelated sequence which is the innovation process to the process – that is, a white noise process that is input to the linear filter {b_j},
{b_j} is the possibly infinite vector of moving average weights (coefficients or parameters), and
η_t is a deterministic time series, such as one represented by a sine wave.
The moving average coefficients have these properties:
Stable, that is, square summable: Σ_{j=1}^{∞} |b_j|² < ∞
Causal (i.e. there are no terms with j < 0)
Minimum delay
Constant (b_j independent of t)
It is conventional to define b_0 = 1.
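As a quick numerical illustration (not part of the article; the AR(1) model and parameter values are chosen for the example), a stationary AR(1) process y_t = φ y_{t−1} + ε_t has Wold coefficients b_j = φ^j, which can be recovered empirically from the cross-covariance between the series and its innovations:

```python
import numpy as np

rng = np.random.default_rng(0)
phi, n = 0.6, 200_000          # illustrative AR(1) parameter and sample size
eps = rng.standard_normal(n)   # unit-variance white-noise innovations

# Simulate the stationary AR(1) process y_t = phi * y_{t-1} + eps_t.
y = np.zeros(n)
for t in range(1, n):
    y[t] = phi * y[t - 1] + eps[t]

# Since Var(eps) = 1, b_j = Cov(y_t, eps_{t-j}); compare with phi**j.
for j in range(5):
    b_j = np.mean(y[j:] * eps[: n - j])
    print(f"b_{j}: estimated {b_j:+.3f}, theoretical {phi**j:+.3f}")
```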
This theorem can be considered as an existence theorem: any stationary process has this seemingly special representation. Not only is the existence of such a simple linear and exact representation remarkable, but even more so is the special nature of the moving average model. Imagine creating a process that is a moving average but not satisfying these properties 1–4. For example, the coefficients could define an acausal and non-minimum delay model. Nevertheless, the theorem assures the existence of a causal minimum delay moving average that exactly represents this process. How this all works for the case of causality and the minimum delay property is discussed in Scargle (1981), where an extension of the Wold decomposition is discussed.
The usefulness of the Wold Theorem is that it allows the dynamic evolution of a variable to be approximated by a linear model. If the innovations are independent, then the linear model is the only possible representation relating the observed value of to its past evolution. However, when is merely an uncorrelated but not independent sequence, then the linear mod
|
https://en.wikipedia.org/wiki/Wold%27s%20decomposition
|
In mathematics, particularly in operator theory, Wold decomposition or Wold–von Neumann decomposition, named after Herman Wold and John von Neumann, is a classification theorem for isometric linear operators on a given Hilbert space. It states that every isometry is a direct sum of copies of the unilateral shift and a unitary operator.
In time series analysis, the theorem implies that any stationary discrete-time stochastic process can be decomposed into a pair of uncorrelated processes, one deterministic, and the other being a moving average process.
Details
Let H be a Hilbert space, L(H) be the bounded operators on H, and V ∈ L(H) be an isometry. The Wold decomposition states that every isometry V takes the form
V = (⊕_{α ∈ A} S) ⊕ U
for some index set A, where S is the unilateral shift on a Hilbert space Hα, and U is a unitary operator (possibly vacuous). The family {Hα} consists of isomorphic Hilbert spaces.
A proof can be sketched as follows. Successive applications of V give a descending sequence of copies of H isomorphically embedded in itself:
where V(H) denotes the range of V. The above defined Hi = Vi(H). If one defines
then
It is clear that K1 and K2 are invariant subspaces of V.
So V(K2) = K2. In other words, V restricted to K2 is a surjective isometry, i.e., a unitary operator U.
Furthermore, each Mi is isomorphic to another, with V being an isomorphism between Mi and Mi+1: V "shifts" Mi to Mi+1. Suppose the dimension of each Mi is some cardinal number α. We see that K1 can be written as a direct sum of Hilbert spaces
where each Hα is an invariant subspace of V and V restricted to each Hα is the unilateral shift S. Therefore
which is a Wold decomposition of V.
Remarks
It is immediate from the Wold decomposition that the spectrum of any proper, i.e. non-unitary, isometry is the unit disk in the complex plane.
An isometry V is said to be pure if, in the notation of the above proof, ∩i≥0 Hi = {0}. The multiplicity of a pure isometry V is the dimension of the ke
|
https://en.wikipedia.org/wiki/Linear%20canonical%20transformation
|
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space.
The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group.
The basic properties of the transformations mentioned above, such as scaling, shift, and coordinate multiplication, are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time–frequency or position–momentum coordinates.
Definition
The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix with ad − bc = 1, the corresponding integral transform from a function to is defined as
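The integral kernel itself did not survive extraction here; one common parameterization (conventions vary between authors, so treat the constants as indicative rather than as the article's exact formula) is, for b ≠ 0,

```latex
\[
  X_{(a,b,c,d)}(u)
  \;=\; \sqrt{\frac{1}{i\,b}}\;
        e^{\,i\pi \frac{d}{b} u^{2}}
        \int_{-\infty}^{\infty}
        e^{-\,i 2\pi \frac{u t}{b}}\,
        e^{\,i\pi \frac{a}{b} t^{2}}\,
        x(t)\, dt ,
\]
% and for b = 0 the transform degenerates to a chirp-multiplied scaling,
% X_{(a,0,c,d)}(u) = \sqrt{d}\, e^{\,i\pi c d\, u^{2}}\, x(d\,u).
```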
Special cases
Many classical transforms are special cases of the linear canonical transform:
Scaling
Scaling, , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks):
Fourier transform
The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix with (a, b, c, d) = (0, 1, −1, 0).
Fractional Fourier transform
The fractional Fourier transform
|
https://en.wikipedia.org/wiki/Georges%20Matheron
|
Georges François Paul Marie Matheron (2 December 1930 – 7 August 2000) was a French mathematician and civil engineer of mines, known as the founder of geostatistics and a co-founder (together with Jean Serra) of mathematical morphology. In 1968, he created the Centre de Géostatistique et de Morphologie Mathématique at the Paris School of Mines in Fontainebleau. He is known for his contributions on Kriging and mathematical morphology. His seminal work is posted for study and review to the Online Library of the Centre de Géostatistique, Fontainebleau, France.
Early career
Matheron graduated from École Polytechnique and later Ecole des Mines de Paris, where he studied mathematics, physics and probability theory (as a student of Paul Lévy).
From 1954 to 1963, he worked with the French Geological Survey in Algeria and France, and was influenced by the works of Krige, Sichel, and de Wijs, from the South African school, on the gold deposits of the Witwatersrand. This influence led him to develop the major concepts of the theory for estimating resources he named Geostatistics.
Geostatistics
Matheron's Formule des Minerais Connexes became his Note Statistique No 1. In this paper of 25 November 1954, Matheron derived the degree of associative dependence between lead and silver grades of core samples. In his Rectificatif of 13 January 1955, he revised the arithmetic mean lead and silver grades because his core samples varied in length. He did derive the length-weighted average lead and silver grades but failed to derive the variances of his weighted averages. Neither did he derive the degree of associative dependence between metal grades of ordered core samples as a measure of spatial dependence between ordered core samples. He did not disclose his primary data set and worked mostly with symbols rather than real measured values, such as test results for lead and silver in Matheron's core samples. Matheron's Interprétations des corrélations entre variables aléatoires lognor
|
https://en.wikipedia.org/wiki/Relaxation%20%28NMR%29
|
In MRI and NMR spectroscopy, an observable nuclear spin polarization (magnetization) is created by a homogeneous magnetic field. This field makes the magnetic dipole moments of the sample precess at the resonance (Larmor) frequency of the nuclei. At thermal equilibrium, nuclear spins precess randomly about the direction of the applied field. They become abruptly phase coherent when they are hit by radiofrequency (RF) pulses at the resonant frequency, created orthogonal to the field. The RF pulses cause the population of spin-states to be perturbed from their thermal equilibrium value. The generated transverse magnetization can then induce a signal in an RF coil that can be detected and amplified by an RF receiver. The return of the longitudinal component of the magnetization to its equilibrium value is termed spin-lattice relaxation while the loss of phase-coherence of the spins is termed spin-spin relaxation, which is manifest as an observed free induction decay (FID).
For spin-½ nuclei (such as 1H), the polarization due to spins oriented with the field N− relative to the spins oriented against the field N+ is given by the Boltzmann distribution:
$$\frac{N^-}{N^+} = e^{\Delta E / kT}$$
where ΔE is the energy level difference between the two populations of spins, k is the Boltzmann constant, and T is the sample temperature. At room temperature, the number of spins in the lower energy level, N−, slightly outnumbers the number in the upper level, N+. The energy gap between the spin-up and spin-down states in NMR is minute by atomic emission standards at magnetic fields conventionally used in MRI and NMR spectroscopy. Energy emission in NMR must be induced through a direct interaction of a nucleus with its external environment rather than by spontaneous emission. This interaction may be through the electrical or magnetic fields generated by other nuclei, electrons, or molecules. Spontaneous emission of energy is a radiative process involving the release of a photon and typified by phenomena such as fluo
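To get a feel for how small this population difference is, the sketch below evaluates the Boltzmann ratio for ¹H, taking ΔE = hν with the proton Larmor frequency at an assumed field of 1.5 T and room temperature; the field strength is an arbitrary example value, not one fixed by the text.

```python
import math

# Physical constants (SI units).
h = 6.62607015e-34         # Planck constant, J*s
k = 1.380649e-23           # Boltzmann constant, J/K

# Assumed example conditions.
gamma_over_2pi = 42.577e6  # 1H gyromagnetic ratio / 2*pi, Hz per tesla
B0 = 1.5                   # tesla (arbitrary example field)
T = 298.0                  # kelvin (room temperature)

larmor_freq = gamma_over_2pi * B0     # resonance frequency in Hz
delta_E = h * larmor_freq             # energy gap between the two spin states

ratio = math.exp(delta_E / (k * T))   # N-/N+ from the Boltzmann distribution
excess_ppm = (ratio - 1.0) * 1e6

print(f"Larmor frequency: {larmor_freq / 1e6:.1f} MHz")
print(f"N-/N+ = {ratio:.9f}  (about {excess_ppm:.0f} ppm excess in the lower level)")
```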
|
https://en.wikipedia.org/wiki/DNA%20origami
|
DNA origami is the nanoscale folding of DNA to create arbitrary two- and three-dimensional shapes. The specificity of the interactions between complementary base pairs makes DNA a useful construction material, through design of its base sequences. DNA is a well-understood material that is suitable for creating scaffolds that hold other molecules in place or for building structures on its own.
DNA origami was the cover story of Nature on March 16, 2006. Since then, DNA origami has progressed past an art form and has found a number of applications from drug delivery systems to uses as circuitry in plasmonic devices; however, most commercial applications remain in a concept or testing phase.
Overview
The idea of using DNA as a construction material was first introduced in the early 1980s by Nadrian Seeman. The current method of DNA origami was developed by Paul Rothemund at the California Institute of Technology. The process involves the folding of a long single strand of viral DNA (typically the 7,249 bp genomic DNA of the M13 bacteriophage), aided by multiple smaller "staple" strands. These shorter strands bind the longer strand in various places, resulting in the formation of a pre-defined two- or three-dimensional shape. Examples include a smiley face and a coarse map of China and the Americas, along with many three-dimensional structures such as cubes.
To produce a desired shape, images are drawn with a raster fill of a single long DNA molecule. This design is then fed into a computer program that calculates the placement of individual staple strands. Each staple binds to a specific region of the DNA template, and thus due to Watson-Crick base pairing, the necessary sequences of all staple strands are known and displayed. The DNA is mixed, then heated and cooled. As the DNA cools, the various staples pull the long strand into the desired shape. Designs are directly observable via several methods, including electron microscopy, atomic force microscopy, or
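The base-pairing step that fixes each staple's sequence amounts to taking the reverse complement of the scaffold region it must bind. The sketch below shows this for a made-up scaffold fragment; the sequence and region boundaries are illustrative, not taken from the M13 genome.

```python
# Watson-Crick pairing: A-T and G-C.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def staple_for(scaffold: str, start: int, end: int) -> str:
    """Return the staple sequence that binds scaffold[start:end].

    A staple anneals antiparallel to the scaffold, so its sequence is the
    reverse complement of the scaffold region it is meant to pin down.
    """
    region = scaffold[start:end]
    return "".join(COMPLEMENT[base] for base in reversed(region))

# Illustrative scaffold fragment (not actual M13 sequence).
scaffold = "ATGCGTACCTGAAGTCCGATTACG"

print(staple_for(scaffold, 4, 16))  # reverse complement of "GTACCTGAAGTC" -> "GACTTCAGGTAC"
```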
|
https://en.wikipedia.org/wiki/Composite%20application
|
In computing, a composite application is a software application built by combining multiple existing functions into a new application. The technical concept can be compared to mashups. However, composite applications draw on business sources of information (e.g., existing modules or even Web services), while mashups usually rely on web-based, and often free, sources.
Composite applications are not by definition part of a service-oriented architecture (SOA); they can be built using any technology or architecture.
A composite application consists of functionality drawn from several different sources. The components may be individual selected functions from within other applications, or entire systems whose outputs have been packaged as business functions, modules, or web services.
Composite applications often incorporate orchestration of "local" application logic to control how the composed functions interact with each other to produce the new, derived functionality. For composite applications that are based on SOA, WS-CAF is a Web services standard for composite applications.
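As a hedged sketch of this orchestration idea, the Python below treats two pre-existing business functions as black boxes and composes them into a new "customer risk summary" capability; the function names, data shapes, and the risk rule are invented for illustration and do not come from any particular product or standard.

```python
# Two existing "business functions" -- stand-ins for modules or web services
# that already exist elsewhere in the organisation.
def fetch_customer_profile(customer_id: str) -> dict:
    # In a real composite application this might be a SOAP or REST call.
    return {"id": customer_id, "name": "Example Corp", "country": "DE"}

def fetch_open_invoices(customer_id: str) -> list:
    return [{"invoice": "INV-17", "amount": 1200.0, "days_overdue": 45},
            {"invoice": "INV-23", "amount": 300.0,  "days_overdue": 0}]

# The composite application: local orchestration logic that controls how the
# composed functions interact to produce a new, derived capability.
def customer_risk_summary(customer_id: str) -> dict:
    profile = fetch_customer_profile(customer_id)
    invoices = fetch_open_invoices(customer_id)
    overdue_total = sum(i["amount"] for i in invoices if i["days_overdue"] > 30)
    return {
        "customer": profile["name"],
        "open_invoices": len(invoices),
        "overdue_amount": overdue_total,
        "risk_flag": overdue_total > 1000.0,   # invented rule, illustration only
    }

if __name__ == "__main__":
    print(customer_risk_summary("C-042"))
```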
See also
Web 2.0
Composite Application Service Assembly (CASA)
Enterprise service bus (ESB)
Service-oriented architecture (SOA)
Service component architecture (SCA)
Mashup (web application hybrid)
External links
Composite application guidance from patterns & practices
NetBeans SOA Composite Application Project Home
camelse – Running Apache Camel in OpenESB
Eclipse Sirius – open-source Eclipse tooling for building custom graphical modeling tools
Eclipse SCA Tools – open-source tooling for Service Component Architecture composites
Obeo Designer – a modeling workbench built on Eclipse Sirius
|
https://en.wikipedia.org/wiki/Approximations%20of%20%CF%80
|
Approximations for the mathematical constant pi (π) in the history of mathematics reached an accuracy within 0.04% of the true value before the beginning of the Common Era. In Chinese mathematics, this was improved to approximations correct to what corresponds to about seven decimal digits by the 5th century.
Further progress was not made until the 15th century (through the efforts of Jamshīd al-Kāshī). Early modern mathematicians reached an accuracy of 35 digits by the beginning of the 17th century (Ludolph van Ceulen), and 126 digits by the 19th century (Jurij Vega), surpassing the accuracy required for any conceivable application outside of pure mathematics.
The record of manual approximation of π is held by William Shanks, who calculated 527 digits correctly in 1853. Since the middle of the 20th century, the approximation of π has been the task of electronic digital computers (for a comprehensive account, see Chronology of computation of π). On 8 June 2022, the current record was established by Emma Haruka Iwao with Alexander Yee's y-cruncher, reaching 100 trillion (10¹⁴) digits.
Early history
The best known approximations to π dating to before the Common Era were accurate to two decimal places; this was improved upon in Chinese mathematics in particular by the mid-first millennium, to an accuracy of seven decimal places. After this, no further progress was made until the late medieval period.
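These accuracy figures are easy to check. The short script below (illustrative only) compares the Babylonian value 3, the fraction 22⁄7, and the fraction 355⁄113 traditionally associated with the 5th-century Chinese result against math.pi and prints their relative errors.

```python
import math

# A few historical approximations of pi; attributions are discussed in the text.
approximations = {
    "3 (Babylonian)":        3.0,
    "22/7":                  22 / 7,
    "355/113 (Zu Chongzhi)": 355 / 113,
}

for name, value in approximations.items():
    rel_error = abs(value - math.pi) / math.pi
    print(f"{name:<24} {value:.9f}   relative error: {rel_error:.4%}")
```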
Some Egyptologists have claimed that the ancient Egyptians used an approximation of π as 22⁄7 ≈ 3.142857 (about 0.04% too high) from as early as the Old Kingdom. This claim has been met with skepticism.
Babylonian mathematics usually approximated π as 3, sufficient for the architectural projects of the time (notably also reflected in the description of Solomon's Temple in the Hebrew Bible). The Babylonians were aware that this was an approximation, and one Old Babylonian mathematical tablet excavated near Susa in 1936 (dated to between the 19th and 17th centuries BCE) gives a better appr
|
https://en.wikipedia.org/wiki/Tongan%20music%20notation
|
The Tuungafasi or Tongan music notation is a subset of the standard music notation, originally developed by the missionary James Egan Moulton in the 19th century for singing church hymns in Tonga.
The notation
Tongan music from pre-European times was not really music in the current sense but rather a non-tonic recital (like the 'pater noster'), a style still known today as the tau fakaniua. Therefore, when the missionaries started to teach singing, they also had to start teaching music from scratch. They found the do-re-mi-fa-sol-la-si-do scale sufficient for their needs, avoiding the very complex and difficult-to-learn international music notation. But due to the limited number of consonants in the Tongan language, the note names were localised into to-le-mi… Unfortunately the word 'tole' is a vulgar expression for the female genital area, and as such was not to be used.
Moulton then developed a system in which the main notes were indicated by the numbers 3 to 9, while a stroke through a digit was used to sharpen it; a struck 7, for example, denotes the note between 7 and 8 (7 sharp or 8 flat). In the end the full 12 notes of the octave became 3, struck 3, 4, struck 4, 5, 6, struck 6, 7, struck 7, 8, struck 8, 9, which are pronounced to-lu-fa-ma-ni-o-no-tu-fi-va-a-hi (variants of the Tongan numerals 3 to 9: tolu, fā, nima, ono, fitu, valu, hiva). To extend the single octave (MIDI octave number 4) into the next higher one, a dot can be put above the number; to reach the next lower one, a dot or a little tail can be put under it. If needed, two tails can be used to reach even lower pitches, but that is rare. Since the notation is made for human singing, it does not need the extended range of musical instruments.
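A small sketch of the digit-to-pitch mapping just described is given below. It assumes that the digit 3 corresponds to do, uses an apostrophe to stand in for Moulton's stroke (sharpening by a semitone) and "^"/"_" for the dots and tails marking octave shifts, and places the base octave at MIDI octave 4; these encoding choices are the example's own, since the article only fixes the digits and the stroke/dot idea.

```python
# Semitone offsets from "do" for the plain digits 3..9 (do re mi fa sol la ti),
# assuming digit 3 = do.
PLAIN = {"3": 0, "4": 2, "5": 4, "6": 5, "7": 7, "8": 9, "9": 11}

def moulton_to_midi(token: str, base_do: int = 60) -> int:
    """Convert a Moulton-style token to a MIDI note number.

    Encoding assumed here: a digit, an optional "'" for the stroke (one
    semitone up), and optional "^" (dot above, octave up) or "_" (dot or
    tail below, octave down). base_do=60 puts do at middle C (MIDI octave 4).
    """
    semitone = PLAIN[token[0]]
    if "'" in token:                      # struck digit: one semitone higher
        semitone += 1
    octave_shift = 12 * (token.count("^") - token.count("_"))
    return base_do + semitone + octave_shift

# The chromatic run 3, struck 3, 4, struck 4, 5, 6, ..., 9 fills one octave
# in consecutive semitones:
print([moulton_to_midi(t) for t in
       ["3", "3'", "4", "4'", "5", "6", "6'", "7", "7'", "8", "8'", "9"]])
# -> [60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71]
print(moulton_to_midi("7^"))  # sol one octave up -> 79
```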
The Moulton notation, or Tongan notation, was extremely popular and is still cherished by Tongans. It is common to see bandmasters writing out the music on blackboards in church halls during choir practice.
Pitch
Tongan singers recognise up to 4 voices, which results in the typical 4 lines of numbers in the notat
|