source (stringlengths 31–227) | text (stringlengths 9–2k)
---|---|
https://en.wikipedia.org/wiki/Focused%20ultrasound
|
High-intensity focused ultrasound (HIFU) is a non-invasive therapeutic technique that uses non-ionizing ultrasonic waves to heat or ablate tissue. HIFU can be used to increase the flow of blood or lymph or to destroy tissue, such as tumors, via thermal and mechanical mechanisms. Given the prevalence and relatively low cost of ultrasound generation mechanisms, the premise of HIFU is that it is expected to be a non-invasive and low-cost therapy that can at minimum outperform operating room care.
The technology is different from that used in ultrasonic imaging, though lower frequencies and continuous, rather than pulsed, waves are used to achieve the necessary thermal doses. However, pulsed waves may also be used if mechanical rather than thermal damage is desired. Acoustic lenses are often used to achieve the necessary intensity at the target tissue without damaging the surrounding tissue. A useful analogy is a magnifying glass focusing sunlight: only the focal point of the magnifying glass reaches a high temperature.
HIFU is combined with other imaging techniques such as medical ultrasound or MRI to enable guidance of the treatment and monitoring.
History
Studies on localized prostate cancer showed that, after treatment, progression-free survival rates were high for low- and intermediate-risk patients with recurrent prostate cancer. The InsighTec ExAblate 2000 was the first MRgFUS system to obtain FDA market approval.
Medical uses
There is no clear consensus on the boundaries between HIFU and other forms of therapeutic ultrasound. In some literature, HIFU refers to the high levels of energy required to destroy tissue through ablation or cavitation, although the term is also sometimes used to describe lower-intensity applications such as occupational therapy and physical therapy.
Either way, HIFU is used to non-invasively heat tissue deep in the body without the need for an incision. The main applications are the destruction of tissue caused by hyp
|
https://en.wikipedia.org/wiki/Kruskal%27s%20tree%20theorem
|
In mathematics, Kruskal's tree theorem states that the set of finite trees over a well-quasi-ordered set of labels is itself well-quasi-ordered under homeomorphic embedding.
History
The theorem was conjectured by Andrew Vázsonyi and proved by Joseph Kruskal (1960); a short proof was given by Crispin Nash-Williams (1963). It has since become a prominent example in reverse mathematics as a statement that cannot be proved in ATR0 (a second-order arithmetic theory with a form of arithmetical transfinite recursion).
In 2004, the result was generalized from trees to graphs as the Robertson–Seymour theorem, a result that has also proved important in reverse mathematics and leads to the even-faster-growing SSCG function which dwarfs TREE(3). A finitary application of the theorem gives the existence of the fast-growing TREE function.
Statement
The version given here is that proven by Nash-Williams; Kruskal's formulation is somewhat stronger. All trees we consider are finite.
Given a tree T with a root, and given vertices v, w, call w a successor of v if the unique path from the root to w contains v, and call w an immediate successor of v if additionally the path from v to w contains no other vertex.
Take X to be a partially ordered set. If T1, T2 are rooted trees with vertices labeled in X, we say that T1 is inf-embeddable in T2 and write T1 ≤ T2 if there is an injective map F from the vertices of T1 to the vertices of T2 such that:
For all vertices v of T1, the label of v precedes the label of F(v),
If w is any successor of v in T1, then F(w) is a successor of F(v), and
If w1, w2 are any two distinct immediate successors of v, then the path from F(w1) to F(w2) in T2 contains F(v).
Kruskal's tree theorem then states: If X is well-quasi-ordered, then the set of rooted trees with labels in X is well-quasi-ordered under the inf-embeddable order defined above. (That is to say, given any infinite sequence T1, T2, … of rooted trees labeled in X, there is some i < j so that Ti ≤ Tj.)
Friedman's work
For a countable label set , Kruskal's tree theorem can be expressed and proven using second-order arithmetic. However, l
|
https://en.wikipedia.org/wiki/Imaging%20spectroscopy
|
In imaging spectroscopy (also hyperspectral imaging or spectral imaging) each pixel of an image acquires many bands of light intensity data from the spectrum, instead of just the three bands of the RGB color model. More precisely, it is the simultaneous acquisition of spatially coregistered images in many spectrally contiguous bands.
Some spectral images contain only a few image planes of a spectral data cube, while others are better thought of as full spectra at every location in the image. For example, solar physicists use the spectroheliograph to make images of the Sun built up by scanning the slit of a spectrograph, to study the behavior of surface features on the Sun; such a spectroheliogram may have a spectral resolution of over 100,000 () and be used to measure local motion (via the Doppler shift) and even the magnetic field (via the Zeeman splitting or Hanle effect) at each location in the image plane. The multispectral images collected by the Opportunity rover, in contrast, have only four wavelength bands and hence are only a little more than 3-color images.
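A minimal sketch of the data-cube idea described above, assuming NumPy and purely synthetic data (the array sizes are arbitrary, illustrative values): a spectral image is simply a three-dimensional array with two spatial axes and one spectral axis.

```python
import numpy as np

# Synthetic spectral data cube: 128 x 128 pixels, 200 spectrally contiguous bands.
cube = np.random.rand(128, 128, 200)

pixel_spectrum = cube[64, 64, :]        # full spectrum at one image location
band_image = cube[:, :, 50]             # single-band image plane
mean_spectrum = cube.mean(axis=(0, 1))  # scene-averaged spectrum
```

A spectroheliogram or an Opportunity multispectral image differs mainly in how many planes sit along the third axis.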
One application is spectral geophysical imaging, which allows quantitative and qualitative characterization of the surface and of the atmosphere, using radiometric measurements. These measurements can then be used for unambiguous direct and indirect identification of surface materials and atmospheric trace gases, the measurement of their relative concentrations, subsequently the assignment of the proportional contribution of mixed pixel signals (e.g., the spectral unmixing problem), the derivation of their spatial distribution (mapping problem), and finally their study over time (multi-temporal analysis). The Moon Mineralogy Mapper on Chandrayaan-1 was a geophysical imaging spectrometer.
Background
In 1704, Sir Isaac Newton demonstrated that white light could be split up into component colours. The subsequent history of spectroscopy led to precise measurements and provided the empirical foundatio
|
https://en.wikipedia.org/wiki/Xerox%20NoteTaker
|
The Xerox NoteTaker is a portable computer developed at Xerox PARC in Palo Alto, California, in 1978. Although it did not enter production, and only around ten prototypes were built, it strongly influenced the design of the later Osborne 1 and Compaq Portable computers.
Development
The NoteTaker was developed by a team that included Adele Goldberg, Douglas Fairbairn, and Larry Tesler. It drew heavily on earlier research by Alan Kay, who had previously developed the Dynabook project. While the Dynabook was a concept for a transportable computer that was impossible to implement with available technology, the NoteTaker was intended to show what could be done.
Description
The computer employed what was then highly advanced technology, including a built-in monochrome display monitor, a floppy disk drive and a mouse. It had 256 KB of RAM, then a very large amount, and used a 5 MHz Intel 8086 CPU. It used a version of the Smalltalk operating system that was originally written for the Xerox Alto computer, which pioneered the graphical user interface.
The NoteTaker fitted into a case similar in form to that of a portable sewing machine; the keyboard folded out from the bottom to reveal the monitor and floppy drive. The form factor was later used on the highly successful "luggable" computers, including the Osborne 1 and Compaq Portable, although these later models were only about half the weight of the NoteTaker.
See also
IBM 5100
Osborne 1
Kaypro
Compaq Portable
Portable Computers
|
https://en.wikipedia.org/wiki/Form%20constant
|
A form constant is one of several geometric patterns which are recurringly observed during hypnagogia, hallucinations and altered states of consciousness.
History
In 1926, Heinrich Klüver systematically studied the effects of mescaline (peyote) on the subjective experiences of its users. In addition to producing hallucinations characterized by bright, "highly saturated" colors and vivid imagery, Klüver noticed that mescaline produced recurring geometric patterns in different users. He called these patterns "form constants" and categorized four types: lattices (including honeycombs, checkerboards, and triangles), cobwebs, tunnels, and spirals.
In 1988 David Lewis-Williams and T.A. Dowson incorporated the form constant into their Three Stages of Trance model, with the geometric shapes comprising the visuals observed in the model's first stage.
Precipitants
Klüver's form constants have appeared in other drug-induced and naturally occurring hallucinations, suggesting a similar physiological process underlying hallucinations with different triggers. Klüver's form constants also appear in near-death experiences and sensory experiences of those with synesthesia. Other triggers include psychological stress, threshold consciousness (hypnagogia), insulin hypoglycemia, the delirium of fever, epilepsy, psychotic episodes, advanced syphilis, sensory deprivation, photostimulation, electrical stimulation, crystal gazing, migraine headaches, dizziness and a variety of drug-induced intoxications. These shapes may appear on their own or with eyes shut in the form of phosphenes, especially when exerting pressure against the closed eyelid.
It is believed that the reason why these form constants appear has to do with the way the visual system is organized, and in particular in the mapping between patterns on the retina and the columnar organization of the primary visual cortex. Concentric circles in the retina are mapped into parallel lines in the visual cortex. Spirals, tunnels, lattice
|
https://en.wikipedia.org/wiki/Transepidermal%20water%20loss
|
Transepidermal water loss (TEWL or TWL) is the loss of water that passes from inside a body (animal or plant) through the epidermis (that is, either the epidermal layer of animal skin or the epidermal layer of plants) to the surrounding atmosphere via diffusion and evaporation processes. TEWL in mammals is also known as insensible water loss (IWL), as it is a process over which organisms have little physiologic control and of which they are usually mostly unaware. Insensible loss of body water can threaten fluid balance; in humans, substantial dehydration sometimes occurs before a person realizes what is happening.
Measurements of TEWL may be useful for identifying skin damage caused by certain chemicals, physical insult (such as "tape stripping") or pathological conditions such as eczema, as rates of TEWL increase in proportion to the level of damage. However, TEWL is also affected by environmental factors such as humidity, temperature, the time of year (season variation) and the moisture content of the skin (hydration level). Therefore, care must be taken when interpreting the meaning of TEWL rates.
Implications
Human health
From a clinical standpoint, TEWL measurements are of great importance in evaluating skin barrier functionality. Often normal rates of TEWL – from 2.3 g/(m²·h) to 44 g/(m²·h) – are compromised due to injury, infection and/or severe damage, as in the case of burns causing rates over 50 or even over 100 g/(m²·h). Damage to the stratum corneum and superficial skin layers not only results in physical vulnerability, but also results in an excess rate of water loss. Therefore, dehydration, metabolic acidosis, and conditions such as anhydremia or concentration of the blood are often critical issues for healthcare providers to consider in the treatment of burn patients.
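To put these rates in perspective, a back-of-the-envelope conversion is shown below (a sketch only; the 1.8 m² body surface area is an assumed illustrative value, not a figure from the text):

```python
def daily_water_loss_litres(tewl_g_per_m2_h: float, body_surface_m2: float = 1.8) -> float:
    """Convert a TEWL rate in g/(m2*h) into litres lost per day (1 g of water ~ 1 mL)."""
    grams_per_day = tewl_g_per_m2_h * body_surface_m2 * 24
    return grams_per_day / 1000.0

# A severely burned patient at 100 g/(m2*h) over an assumed 1.8 m2 loses roughly 4.3 L/day,
# versus roughly 1.9 L/day at the upper end of the quoted normal range (44 g/(m2*h)).
print(daily_water_loss_litres(100))   # ~4.32
```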
TEWL is of major concern in public health, considering the relatively high rate of burn incidence among communities in the developing world due to poor quality cooking stoves. Resourc
|
https://en.wikipedia.org/wiki/Fano%27s%20inequality
|
In information theory, Fano's inequality (also known as the Fano converse and the Fano lemma) relates the average information lost in a noisy channel to the probability of the categorization error. It was derived by Robert Fano in the early 1950s while teaching a Ph.D. seminar in information theory at MIT, and later recorded in his 1961 textbook.
It is used to find a lower bound on the error probability of any decoder as well as the lower bounds for minimax risks in density estimation.
Let the random variables X and Y represent input and output messages with a joint probability P(x, y). Let e represent an occurrence of error; i.e., that X ≠ X̃, with X̃ = f(Y) being an approximate version of X. Fano's inequality is

H(X | Y) ≤ H_b(e) + P(e) log(|𝒳| − 1),

where 𝒳 denotes the support of X,
H(X | Y) is the conditional entropy,
P(e) = P(X ≠ X̃) is the probability of the communication error, and
H_b(e) = −P(e) log P(e) − (1 − P(e)) log(1 − P(e)) is the corresponding binary entropy.
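As a quick numerical illustration of the bound (a sketch only, using base-2 logarithms; the specific numbers are assumed for the example and the helper fano_rhs is not a standard library function), one can evaluate the right-hand side and scan for the smallest error probability consistent with a given conditional entropy:

```python
import math

def binary_entropy(p: float, base: float = 2.0) -> float:
    """H_b(p) in the chosen base; 0 by convention at p = 0 or 1."""
    if p in (0.0, 1.0):
        return 0.0
    return -p * math.log(p, base) - (1 - p) * math.log(1 - p, base)

def fano_rhs(p_err: float, support_size: int, base: float = 2.0) -> float:
    """Right-hand side of Fano's inequality: H_b(e) + P(e) * log(|X| - 1)."""
    return binary_entropy(p_err, base) + p_err * math.log(support_size - 1, base)

# With |X| = 8 and H(X|Y) = 2 bits, any decoder's error probability P(e) must make
# the right-hand side at least 2 bits; scanning gives the smallest such P(e).
p_min = min(p / 1000 for p in range(1, 1001) if fano_rhs(p / 1000, 8) >= 2.0)
print(p_min)   # the Fano lower bound on the error probability (~0.37)
```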
Proof
Define an indicator random variable E that indicates the event that our estimate X̃ = f(Y) is in error:

E = 1 if X̃ ≠ X, and E = 0 if X̃ = X.

Consider H(E, X | X̃). We can use the chain rule for entropies to expand this in two different ways:

H(E, X | X̃) = H(X | X̃) + H(E | X, X̃) = H(E | X̃) + H(X | E, X̃).

Equating the two, and noting that H(E | X, X̃) = 0 because E is determined by X and X̃,

H(X | X̃) = H(E | X̃) + H(X | E, X̃).

Expanding the rightmost term,

H(X | E, X̃) = P(E = 0) H(X | X̃, E = 0) + P(E = 1) H(X | X̃, E = 1).

Since E = 0 means X = X̃, being given the value of X̃ allows us to know the value of X with certainty. This makes the term H(X | X̃, E = 0) = 0.
On the other hand, E = 1 means that X ≠ X̃, hence given the value of X̃ we can narrow X down to one of |𝒳| − 1 different values, allowing us to upper bound the conditional entropy: H(X | X̃, E = 1) ≤ log(|𝒳| − 1). Hence

H(X | E, X̃) ≤ P(E = 1) log(|𝒳| − 1) = P(e) log(|𝒳| − 1).

The other term satisfies H(E | X̃) ≤ H(E), because conditioning reduces entropy. Because of the way E is defined, H(E) = H_b(e), meaning that H(E | X̃) ≤ H_b(e). Putting it all together,

H(X | X̃) ≤ H_b(e) + P(e) log(|𝒳| − 1).

Because X → Y → X̃ is a Markov chain, we have I(X; X̃) ≤ I(X; Y) by the data processing inequality, and hence H(X | Y) ≤ H(X | X̃), giving us

H(X | Y) ≤ H_b(e) + P(e) log(|𝒳| − 1).
Intuition
Fano's inequality can be interpreted as a way of dividing the uncertainty of a conditional distribution into two questions given an arbitrary predictor. The first question, corresponding to the term H_b(e), relates to the uncertainty of the predictor. If the prediction is correct, there is no more uncertainty remaining. If the prediction is incorrect, the uncertainty of any discrete distrib
|
https://en.wikipedia.org/wiki/Atomic%20layer%20deposition
|
Atomic layer deposition (ALD) is a thin-film deposition technique based on the sequential use of a gas-phase chemical process; it is a subclass of chemical vapour deposition. The majority of ALD reactions use two chemicals called precursors (also called "reactants"). These precursors react with the surface of a material one at a time in a sequential, self-limiting, manner. A thin film is slowly deposited through repeated exposure to separate precursors. ALD is a key process in fabricating semiconductor devices, and part of the set of tools for synthesising nanomaterials.
Introduction
During atomic layer deposition, a film is grown on a substrate by exposing its surface to alternate gaseous species (typically referred to as precursors or reactants). In contrast to chemical vapor deposition, the precursors are never present simultaneously in the reactor, but they are inserted as a series of sequential, non-overlapping pulses. In each of these pulses the precursor molecules react with the surface in a self-limiting way, so that the reaction terminates once all the available sites on the surface are consumed. Consequently, the maximum amount of material deposited on the surface after a single exposure to all of the precursors (a so-called ALD cycle) is determined by the nature of the precursor-surface interaction. By varying the number of cycles it is possible to grow materials uniformly and with high precision on arbitrarily complex and large substrates.
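Because growth is self-limiting, the total film thickness scales essentially linearly with the number of cycles; the trivial sketch below shows the arithmetic (illustrative only — the 0.1 nm/cycle growth-per-cycle default is an assumed value, not a figure from the text):

```python
def ald_thickness_nm(cycles: int, growth_per_cycle_nm: float = 0.1) -> float:
    """Film thickness after a given number of ideal, self-limiting ALD cycles."""
    return cycles * growth_per_cycle_nm

# e.g. 100 cycles at an assumed 0.1 nm/cycle give a 10 nm film:
print(ald_thickness_nm(100))   # 10.0
```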
ALD is a deposition method with great potential for producing very thin, conformal films with control of the thickness and composition of the films possible at the atomic level. A major driving force for the recent interest is the prospective seen for ALD in scaling down microelectronic devices according to Moore's law. ALD is an active field of research, with hundreds of different processes published in the scientific literature, though some of them exhibit behaviors that depart from that of an ideal ALD process. Cu
|
https://en.wikipedia.org/wiki/Acoustic%20foam
|
Acoustic foam is an open celled foam used for acoustic treatment. It attenuates airborne sound waves, reducing their amplitude for the purposes of noise reduction or noise control. The energy is dissipated as heat. Acoustic foam can be made in several different colors, sizes and thicknesses.
Acoustic foam can be attached to walls, ceilings, doors, and other features of a room to control noise levels, vibration, and echoes.
Many acoustic foam products are treated with dyes and/or fire retardants.
Uses
The objective of acoustic foam is to improve or change a room's sound qualities by controlling residual sound through absorption. This purpose requires strategic placement of acoustic foam panels on walls, ceilings, floors and other surfaces. Proper placement can help effectively manage resonance within the room and help give the room the desired sonic qualities.
Acoustic enhancement
The objective of acoustic foam is to enhance the sonic properties of a room by effectively managing unwanted reverberations. For this reason, acoustic foam is often used in restaurants, performance spaces, and recording studios. Acoustic foam is also often installed in large rooms with large, reverberative surfaces like gymnasiums, churches, synagogues, theaters, and concert halls where excess reverberation is prone to arise. The purpose is to reduce, but not entirely eliminate, resonance within the room. In unmanaged spaces without acoustic foam or similar sound absorbing materials, sound waves reflect off of surfaces and continue to bounce around in the room. When a wave encounters a change in acoustic impedance, such as hitting a solid surface, acoustic reflections transpire. These reflections will occur many times before the wave becomes inaudible. Reflections can cause acoustic problems such as phase summation and phase cancellation. A new complex wave originates when the direct source wave coincides with the reflected waves. This complex wave will change the frequency response of
|
https://en.wikipedia.org/wiki/Crocin
|
Crocin is a carotenoid chemical compound that is found in the flowers of crocus and gardenia. Its oxygen content also chemically makes it a xanthene. Crocin is the chemical primarily responsible for the color of saffron.
Chemically, crocin is the diester formed from the disaccharide gentiobiose and the dicarboxylic acid crocetin. When isolated as a pure chemical compound, it has a deep red color and forms crystals with a melting point of 186 °C. When dissolved in water, it forms an orange solution.
The term crocins may also refer to members of a series of related hydrophilic carotenoids that are either monoglycosyl or diglycosyl polyene esters of crocetin. The crocin underlying saffron's aroma is α-crocin (a carotenoid pigment that may compose more than 10% of dry saffron's mass): trans-crocetin di-(β-D-gentiobiosyl) ester; it bears the systematic (IUPAC) name 8,8-diapo-8,8-carotenoic acid.
The major active component of saffron is the yellow pigment crocin 2 (three other derivatives with different glycosylations are known) containing a gentiobiose (disaccharide) group at each end of the molecule. The five major biologically active components of saffron, namely the four crocins and crocetin, can be measured with HPLC-UV.
Research
Absorption
Crocin ingested orally is hydrolysed to crocetin in the gut; crocetin is then absorbed across the intestinal barrier and can permeate the blood–brain barrier.
Antioxidant
Crocin has been shown to be an antioxidant, and neural protective agent. Crocin can reduce oxidative stress and ROS (Reactive Oxygen Species) through enhancement of gene expression of Nrf2, HO-1, and anti-oxidant enzymes, such as CAT, GSH, and SOD.
Neuroprotective
Crocin and its derivative crocetin may counteract oxidative stress, mitochondrial dysfunction and neuroinflammation, which are closely linked to initiation and progression of major brain pathologies such as Alzheimer's and Parkinson's Disease.
In an animal model of malathion-induced
|
https://en.wikipedia.org/wiki/Xavier%20Guichard
|
Xavier Guichard (1870–1947) was a French Director of Police, archaeologist and writer.
His 1936 book Eleusis Alesia: Enquête sur les origines de la civilisation européenne is an early example of speculative thinking concerning Earth mysteries, based on his observations of apparent alignments between Alesia-like place names on a map of France. His theories are analogous to those of his near-contemporary in the United Kingdom, Alfred Watkins, concerning Ley lines.
Xavier Guichard appears as a character in the novels of Georges Simenon, where he is the superior of the fictional detective Jules Maigret.
See also
366 geometry
|
https://en.wikipedia.org/wiki/Baillie%E2%80%93PSW%20primality%20test
|
The Baillie–PSW primality test is a probabilistic or possibly deterministic primality testing algorithm that determines whether a number is composite or is a probable prime. It is named after Robert Baillie, Carl Pomerance, John Selfridge, and Samuel Wagstaff.
The Baillie–PSW test is a combination of a strong Fermat probable prime test to base 2 and a standard or strong Lucas probable prime test. The Fermat and Lucas test each have their own list of pseudoprimes, that is, composite numbers that pass the test. For example, the first ten strong pseudoprimes to base 2 are
2047, 3277, 4033, 4681, 8321, 15841, 29341, 42799, 49141, and 52633 .
The first ten strong Lucas pseudoprimes (with Lucas parameters (P, Q) defined by Selfridge's Method A) are
5459, 5777, 10877, 16109, 18971, 22499, 24569, 25199, 40309, and 58519 .
There is no known overlap between these lists, and there is evidence that the numbers in them tend to be of different kinds; in fact, even with the standard rather than the strong Lucas test there is no known overlap. For example, Fermat pseudoprimes to base 2 tend to fall into the residue class 1 (mod m) for many small m, whereas Lucas pseudoprimes tend to fall into the residue class −1 (mod m). As a result, a number that passes both a strong Fermat base 2 and a strong Lucas test is very likely to be prime. If a different base is chosen, however, some composite n may pass both tests; for example, n = 5777 is a strong pseudoprime to base 76 and is also a strong Lucas pseudoprime.
No composite number below 2^64 (approximately 1.845·10^19) passes the strong or standard Baillie–PSW test; that result was also separately verified by Charles Greathouse in June 2011. Consequently, this test is a deterministic primality test on numbers below that bound. There are also no known composite numbers above that bound that pass the test; in other words, there are no known Baillie–PSW pseudoprimes.
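A minimal sketch of the first half of the test — the strong Fermat (Miller–Rabin) probable-prime check to base 2 — is shown below; the Lucas half is omitted here, and the function name is only illustrative:

```python
def is_strong_probable_prime_base2(n: int) -> bool:
    """Strong Fermat probable-prime test to base 2 (one half of Baillie-PSW)."""
    if n < 2:
        return False
    if n % 2 == 0:
        return n == 2
    # Write n - 1 = d * 2^s with d odd.
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    x = pow(2, d, n)
    if x in (1, n - 1):
        return True
    for _ in range(s - 1):
        x = x * x % n
        if x == n - 1:
            return True
    return False
```

A full Baillie–PSW implementation would additionally run a strong (or standard) Lucas probable-prime test with Selfridge's Method A parameters and declare n a probable prime only if both checks pass.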
In 1980, the authors Pomerance, Selfridge, and Wagstaff offered $30 for
|
https://en.wikipedia.org/wiki/Podokesaurus
|
Podokesaurus is a genus of coelophysoid dinosaur that lived in what is now the eastern United States during the Early Jurassic Period. The first fossil was discovered by the geologist Mignon Talbot near Mount Holyoke, Massachusetts, in 1910. The specimen was fragmentary, preserving much of the body, limbs, and tail. In 1911, Talbot described and named the new genus and species Podokesaurus holyokensis based on it. The full name can be translated as "swift-footed lizard of Holyoke". This discovery made Talbot the first woman to find and describe a non-bird dinosaur. The holotype fossil was recognized as significant and was studied by other researchers, but was lost when the building it was kept in burned down in 1917; no unequivocal Podokesaurus specimens have since been discovered. It was made state dinosaur of Massachusetts in 2022.
Estimated to have been about in length and in weight, Podokesaurus was lightly constructed with hollow bones, and would have been similar to Coelophysis, being slender, long-necked, and with sharp, recurved teeth. The vertebrae were very light and hollow, and some were slightly concave at each end. The cervical (neck) vertebrae were relatively large in length and diameter compared to the dorsal (back) vertebrae, and the caudal (tail) vertebrae were long and slender. The humerus (upper-arm bone) was small and delicate, less than half the length of the femur (thigh-bone). The pubis (pubic bone) was very long, expanding both at the front and hind ends. The femur was slender, nearly straight, had thin walls, and was expanded at the back side of its lower end. The three bones of the lower leg were closely appressed together, forming a compact structure.
Since it was one of the few small theropods known at the time it was described, the affinities of Podokesaurus were long unclear. It was placed in the family Podokesauridae along with other small theropods, and was speculated to have been similar to a proto-bird. It was suggested it was a synonym of Coelophysis and a natural cast spec
|
https://en.wikipedia.org/wiki/Cdc25
|
Cdc25 is a dual-specificity phosphatase first isolated from the yeast Schizosaccharomyces pombe as a cell cycle defective mutant. As with other cell cycle proteins or genes such as Cdc2 and Cdc4, the "cdc" in its name refers to "cell division control".
Dual-specificity phosphatases are considered a sub-class of protein tyrosine phosphatases. By removing inhibitory phosphate residues from target cyclin-dependent kinases (Cdks), Cdc25 proteins control entry into and progression through various phases of the cell cycle, including mitosis and S ("Synthesis") phase.
Function in activating Cdk1
Cdc25 activates cyclin dependent kinases by removing phosphate from residues in the Cdk active site. In turn, the phosphorylation by M-Cdk (a complex of Cdk1 and cyclin B) activates Cdc25. Together with Wee1, M-Cdk activation is switch-like. The switch-like behavior forces entry into mitosis to be quick and irreversible. Cdk activity can be reactivated after dephosphorylation by Cdc25. The Cdc25 enzymes Cdc25A-C are known to control the transitions from G1 to S phase and G2 to M phase.
Structure
The structure of Cdc25 proteins can be divided into two main regions: the N-terminal region, which is highly divergent and contains sites for its phosphorylation and ubiquitination, which regulate the phosphatase activity; and the C-terminal region, which is highly homologous and contains the catalytic site.
Evolution and species distribution
Cdc25 enzymes are well conserved through evolution, and have been isolated from fungi such as yeasts as well as all metazoans examined to date, including humans. The exception among eukaryotes may be plants, as the purported plant Cdc25s have characteristics, (such as the use of cations for catalysis), that are more akin to serine/threonine phosphatases than dual-specificity phosphatases, raising doubts as to their authenticity as Cdc25 phosphatases. The Cdc25 family appears to have expanded in relation to the complexity of the cell-cycle and li
|
https://en.wikipedia.org/wiki/Joe%20Harris%20%28mathematician%29
|
Joseph Daniel Harris (born August 17, 1951) is a mathematician at Harvard University working in the field of algebraic geometry. After earning an AB from Harvard College, where he took Math 55, he continued at Harvard to study for a PhD under Phillip Griffiths.
Work
During the 1980s, he was on the faculty of Brown University, moving to Harvard around 1988. He served as chair of the department at Harvard from 2002 to 2005. His work is characterized by its classical geometric flavor: he has claimed that nothing he thinks about could not have been imagined by the Italian geometers of the late 19th and early 20th centuries, and that if he has had greater success than them, it is because he has access to better tools.
Harris is well known for several of his books on algebraic geometry, notable for their informal presentations:
Principles of Algebraic Geometry , with Phillip Griffiths
Geometry of Algebraic Curves, Vol. 1 , with Enrico Arbarello, Maurizio Cornalba, and Phillip Griffiths
, with William Fulton
, with David Eisenbud
Moduli of Curves , with Ian Morrison.
Fat Chance: Probability from 0 to 1, with Benedict Gross and Emily Riehl, 2019
As of 2018, Harris has supervised 50 PhD students, including Brendan Hassett, James McKernan, Rahul Pandharipande, Zvezdelina Stankova, and Ravi Vakil.
|
https://en.wikipedia.org/wiki/Chip%20art
|
Chip art, also known as silicon art, chip graffiti or silicon doodling, refers to microscopic artwork built into integrated circuits, also called chips or ICs. Since ICs are printed by photolithography, not constructed a component at a time, there is no additional cost to include features in otherwise unused space on the chip. Designers have used this freedom to put all sorts of artwork on the chips themselves, from designers' simple initials to rather complex drawings. Given the small size of chips, these figures cannot be seen without a microscope. Chip graffiti is sometimes called the hardware version of software easter eggs.
Prior to 1984, these doodles also served a practical purpose. If a competitor produced a similar chip, and examination showed it contained the same doodles, then this was strong evidence that the design was copied (a copyright violation) and not independently derived. A 1984 revision of the US copyright law (the Semiconductor Chip Protection Act of 1984) made all chip masks automatically copyrighted, with exclusive rights to the creator, and similar rules apply in most other countries that manufacture ICs. Since an exact copy is now automatically a copyright violation, the doodles serve no useful purpose.
Creating chip art
Integrated circuits are constructed from multiple layers of material, typically silicon, silicon dioxide (glass), and aluminum. The composition and thickness of these layers give them their distinctive color and appearance. These elements create an irresistible palette for IC design and layout engineers.
The creative process involved in the design of these chips, a strong sense of pride in their work, and an artistic temperament combine to compel people to want to mark their work as their own. It is very common to find initials, or groups of initials, on chips. This is the design engineer's way of "signing" his or her work.
Often this creative artist's instinct extends to the inclusion of small pictures or icons
|
https://en.wikipedia.org/wiki/Rise%20over%20thermal
|
In wireless communication systems, the rise over thermal (ROT) indicates the ratio between the total interference received on a base station and the thermal noise.
The ROT is a measurement of congestion of a cellular telephone network. The acceptable level of ROT is often used to define the capacity of systems using CDMA (code-division multiple access).
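As a small illustrative sketch (the function and the example power levels are assumptions for illustration, not values from the text), the ratio is usually quoted in decibels:

```python
import math

def rise_over_thermal_db(total_received_power_w: float, thermal_noise_power_w: float) -> float:
    """Rise over thermal in dB: total received interference-plus-noise power
    at the base station relative to the thermal noise floor."""
    return 10 * math.log10(total_received_power_w / thermal_noise_power_w)

# Example: 3.16e-13 W received over a 1.0e-13 W thermal noise floor -> ~5 dB ROT.
print(rise_over_thermal_db(3.16e-13, 1.0e-13))   # ~5.0
```

An admission-control policy might then cap new traffic once the measured ROT exceeds an operator-chosen threshold, which is the sense in which an acceptable ROT level defines CDMA capacity.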
|
https://en.wikipedia.org/wiki/Hammar%20experiment
|
The Hammar experiment was an experiment designed and conducted by Gustaf Wilhelm Hammar (1935) to test the aether drag hypothesis. Its negative result refuted some specific aether drag models, and confirmed special relativity.
Overview
Experiments such as the Michelson–Morley experiment of 1887 (and later other experiments such as the Trouton–Noble experiment in 1903 or the Trouton–Rankine experiment in 1908), presented evidence against the theory of a medium for light propagation known as the luminiferous aether; a theory that had been an established part of science for nearly one hundred years at the time. These results cast doubts on what was then a very central assumption of modern science, and later led to the development of special relativity.
In an attempt to explain the results of the Michelson–Morley experiment in the context of the assumed medium, aether, many new hypotheses were examined. One of the proposals was that instead of passing through a static and unmoving aether, massive objects like the Earth may drag some of the aether along with them, making it impossible to detect a "wind". Oliver Lodge (1893–1897) was one of the first to perform a test of this theory by using rotating and massive lead blocks in an experiment that attempted to cause an asymmetrical aether wind. His tests yielded no appreciable results differing from previous tests for the aether wind.
In the 1920s, Dayton Miller conducted repetitions of the Michelson–Morley experiments. He ultimately constructed an apparatus in such a way as to minimize the mass along the path of the experiment, conducting it at the peak of a tall hill in a building that was made of lightweight materials. He produced measurements showing a diurnal variance, suggesting detection of the "wind", which he ascribed to the lack of mass around his apparatus, whereas previous experiments had been carried out with considerable mass around their apparatus.
The experiment
To test Miller's assertion, Hammar conducted the following exp
|
https://en.wikipedia.org/wiki/Hilbert%20curve
|
The Hilbert curve (also known as the Hilbert space-filling curve) is a continuous fractal space-filling curve first described by the German mathematician David Hilbert in 1891, as a variant of the space-filling Peano curves discovered by Giuseppe Peano in 1890.
Because it is space-filling, its Hausdorff dimension is 2 (precisely, its image is the unit square, whose dimension is 2 in any definition of dimension; its graph is a compact set homeomorphic to the closed unit interval, with Hausdorff dimension 2).
The Hilbert curve is constructed as a limit of piecewise linear curves. The length of the n-th approximating curve is 2^n − 2^(−n), i.e., the length grows exponentially with n, even though each curve is contained in a square with area 1.
Images
Applications and mapping algorithms
Both the true Hilbert curve and its discrete approximations are useful because they give a mapping between 1D and 2D space that preserves locality fairly well. This means that two data points which are close to each other in one-dimensional space are also close to each other after folding. The converse cannot always be true.
Because of this locality property, the Hilbert curve is widely used in computer science. For example, the range of IP addresses used by computers can be mapped into a picture using the Hilbert curve. Code to generate the image would map from 2D to 1D to find the color of each pixel, and the Hilbert curve is sometimes used because it keeps nearby IP addresses close to each other in the picture. The locality property of the Hilbert curve has also been used to design algorithms for exploring regions with mobile robots.
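A compact way to see the 1D-to-2D mapping used in pictures like the IP-address map is the standard iterative index-to-coordinate conversion; the sketch below is one common formulation for a 2^k × 2^k grid (the function name is only illustrative):

```python
def d2xy(n: int, d: int) -> tuple[int, int]:
    """Map a 1-D index d (0 <= d < n*n) to (x, y) on an n x n Hilbert curve,
    where n is a power of two."""
    x = y = 0
    t = d
    s = 1
    while s < n:
        rx = 1 & (t // 2)
        ry = 1 & (t ^ rx)
        if ry == 0:                   # rotate/flip the quadrant
            if rx == 1:
                x, y = s - 1 - x, s - 1 - y
            x, y = y, x               # swap x and y
        x += s * rx
        y += s * ry
        t //= 4
        s *= 2
    return x, y

# The first few points of the order-1 curve: (0,0), (0,1), (1,1), (1,0).
print([d2xy(2, d) for d in range(4)])
```

Mapping a pixel back to its 1D position (the 2D-to-1D direction mentioned above) is the inverse of the same loop.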
In an algorithm called Riemersma dithering, grayscale photograph can be converted to a dithered black-and-white image using thresholding, with the leftover amount from each pixel added to the next pixel along the Hilbert curve. Code to do this would map from 1D to 2D, and the Hilbert curve is sometimes used because it does not create the distracting patterns that would be v
|
https://en.wikipedia.org/wiki/Gas-phase%20ion%20chemistry
|
Gas phase ion chemistry is a field of science encompassed within both chemistry and physics. It is the science that studies ions and molecules in the gas phase, most often enabled by some form of mass spectrometry. By far the most important applications of this science are in studying the thermodynamics and kinetics of reactions. For example, one application is in studying the thermodynamics of the solvation of ions. Ions with small solvation spheres of 1, 2, 3... solvent molecules can be studied in the gas phase and then extrapolated to bulk solution.
Theory
Transition state theory
Transition state theory is the theory of the rates of elementary reactions which assumes a special type of chemical equilibrium (quasi-equilibrium) between reactants and activated complexes.
RRKM theory
RRKM theory is used to compute simple estimates of the unimolecular ion decomposition reaction rates from a few characteristics of the potential energy surface.
Gas phase ion formation
The process of converting an atom or molecule into an ion by adding or removing charged particles such as electrons or other ions can occur in the gas phase. These processes are an important component of gas phase ion chemistry.
Associative ionization
Associative ionization is a gas phase reaction in which two atoms or molecules interact to form a single product ion.
A* + B → AB+ + e−, where species A with excess internal energy (indicated by the asterisk) interacts with B to form the ion AB+.
One or both of the interacting species may have excess internal energy.
Charge-exchange ionization
Charge-exchange ionization (also called charge-transfer ionization) is a gas phase reaction between an ion and a neutral species,
A+ + B → A + B+,
in which the charge of the ion is transferred to the neutral.
Chemical ionization
In chemical ionization, ions are produced through the reaction of ions of a reagent gas with other species. Some common reagent gases include: methane, ammonia, and isobutane.
Chemi-ionization
Chemi-ionization can
|
https://en.wikipedia.org/wiki/Psidium%20cattleyanum
|
Psidium cattleyanum (World Plants : Psidium cattleianum), commonly known as Cattley guava, strawberry guava or cherry guava, is a small tree (2–6 m tall) in the Myrtaceae (myrtle) family. The species is named in honour of English horticulturist William Cattley. Its genus name Psidium comes from the Latin psidion, or "armlet." The red-fruited variety, P. cattleyanum var. cattleyanum, is commonly known as purple guava, red cattley guava, red strawberry guava and red cherry guava. The yellow-fruited variety, P. cattleyanum var. littorale is variously known as yellow cattley guava, yellow strawberry guava, yellow cherry guava, lemon guava and in Hawaii as waiawī. Although P. cattleyanum has select economic uses, it is considered the most invasive plant in Hawaii.
Description
Psidium cattleyanum is a small, highly-branched tree that reaches a maximum height of 13 meters, although most individuals are between 2 and 4 m. P. cattleyanum has smooth, grey to reddish-brown bark, with oval to elliptical leaves that grow to 4.5 cm in length. It bears fruit when the plants are between 3 and 6 years old. This fruit has thin skin that ranges from yellow to a dark red or purple, is ovular in shape, and grows to around 4 cm in length. Its flowers grow either individually or in clusters of three, and each flower has five petals.
P. cattleyanum reproduces through setting seed and through cloning. Clonally produced suckers tend to have a greater leaf area. Though native to Brazil, it is now distributed throughout many tropical regions. It was introduced in Hawaii as early as 1825 to create an agricultural market for its fruits, but it has yet to be a commercially viable product. It is now highly prevalent in tropical rain forest ecosystems due mainly to accidental transportation and its invasive plant properties.
P. cattleyanum has modest economic impacts in Hawaii due to its edible fruits. However, products made from P. cattleyanum are not commercially available because of a lack
|
https://en.wikipedia.org/wiki/Allomone
|
An allomone (from Ancient Greek "other" and pheromone) is a type of semiochemical produced and released by an individual of one species that affects the behaviour of a member of another species to the benefit of the originator but not the receiver. Production of allomones is a common form of defense against predators, particularly by plant species against insect herbivores. In addition to defense, allomones are also used by organisms to obtain their prey or to hinder any surrounding competitors.
Many insects have developed ways to defend against these plant defenses (in an evolutionary arms race). One method of adapting to allomones is to develop a positive reaction to them; the allomone then becomes a kairomone. Others alter the allomones to form pheromones or other hormones, and yet others adopt them into their own defensive strategies, for example by regurgitating them when attacked by an insectivorous insect.
A third class of allelochemical (chemical used in interspecific communication), synomones, benefit both the sender and receiver.
"Allomone was proposed by Brown and Eisner (Brown, 1968) to denote those substances which convey an advantage upon the emitter. Because Brown and Eisner did not specify whether or not the receiver would benefit, the original definition of allomone includes both substances that benefit the receiver and the emitter, and substances that only benefit the emitter. An example of the first relationship would be a mutualistic relationship, and the latter would be a repellent secretion."
Examples
Antibiotics
Disrupt growth and development and reduce longevity of adults e.g. toxins or digestibility reducing factors.
Antixenotics
Disrupt normal host selection behaviour e.g. Repellents, suppressants, locomotory excitants.
Plants producing allomones
Desmodium (tick-trefoils)
Insects producing allomones
The larvae of the berothid lacewing Lomamyia latipennis feed on termites which they subdue with an aggressive allomone. The first in
|
https://en.wikipedia.org/wiki/Genetic%20hitchhiking
|
Genetic hitchhiking, also called genetic draft or the hitchhiking effect, is when an allele changes frequency not because it itself is under natural selection, but because it is near another gene that is undergoing a selective sweep and that is on the same DNA chain. When one gene goes through a selective sweep, any other nearby polymorphisms that are in linkage disequilibrium will tend to change their allele frequencies too. Selective sweeps happen when newly appeared (and hence still rare) mutations are advantageous and increase in frequency. Neutral or even slightly deleterious alleles that happen to be close by on the chromosome 'hitchhike' along with the sweep. In contrast, effects on a neutral locus due to linkage disequilibrium with newly appeared deleterious mutations are called background selection. Both genetic hitchhiking and background selection are stochastic (random) evolutionary forces, like genetic drift.
History
The term hitchhiking was coined in 1974 by Maynard Smith and John Haigh. Subsequently the phenomenon was studied by John H. Gillespie and others.
Outcomes
Hitchhiking occurs when a polymorphism is in linkage disequilibrium with a second locus that is undergoing a selective sweep. The allele that is linked to the adaptation will increase in frequency, in some cases until it becomes fixed in the population. The other allele, which is linked to the non-advantageous version, will decrease in frequency, in some cases until extinction. Overall, hitchhiking reduces the amount of genetic variation. A hitchhiker mutation (or passenger mutation in cancer biology) may itself be neutral, advantageous, or deleterious.
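A toy simulation can make the effect concrete. The sketch below is a deliberately minimal two-locus Wright–Fisher model (the population size, selection coefficient s and recombination probability r are arbitrary illustrative values, not from the text); it tracks a neutral allele that starts out linked to a new beneficial allele:

```python
import random

def hitchhike_sim(pop_size=1000, s=0.1, r=0.01, generations=200, seed=1):
    """Toy two-locus Wright-Fisher sketch of genetic hitchhiking.
    Haplotypes are (selected, neutral) pairs in {0, 1}; selection acts only on
    the first locus, and r is the per-offspring recombination probability."""
    random.seed(seed)
    # The beneficial allele starts at ~5% frequency, initially found only on
    # chromosomes carrying neutral allele 1 (complete linkage disequilibrium).
    n_init = max(1, pop_size // 20)
    pop = [(1, 1)] * n_init + [(0, random.randint(0, 1)) for _ in range(pop_size - n_init)]
    for _ in range(generations):
        weights = [1.0 + s if h[0] == 1 else 1.0 for h in pop]
        parents = random.choices(pop, weights=weights, k=2 * pop_size)
        new_pop = []
        for i in range(pop_size):
            p1, p2 = parents[2 * i], parents[2 * i + 1]
            if random.random() < r:
                # Recombination reassorts the neutral locus onto the other background.
                new_pop.append((p1[0], p2[1]))
            else:
                new_pop.append(p1)
        pop = new_pop
    freq_selected = sum(h[0] for h in pop) / pop_size
    freq_neutral = sum(h[1] for h in pop) / pop_size
    return freq_selected, freq_neutral

# With tight linkage (r = 0) the neutral allele tends to rise along with the sweep;
# with looser linkage its final frequency stays closer to its starting value (~0.5).
print(hitchhike_sim(r=0.0), hitchhike_sim(r=0.2))
```

Running it with r = 0 typically drags the linked neutral allele to high frequency along with the sweep, while a larger r lets recombination decouple the two loci, which is the interruption of hitchhiking described next.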
Recombination can interrupt the process of genetic hitchhiking, ending it before the hitchhiking neutral or deleterious allele becomes fixed or goes extinct. The closer a hitchhiking polymorphism is to the gene under selection, the less opportunity there is for recombination to occur. This leads to a reduction in genetic variation near
|
https://en.wikipedia.org/wiki/Invader%20potential
|
Ecologically, invader potential is the qualitative and quantitative measure of a given invasive species' probability of invading a given ecosystem. It is often assessed through climate matching. There are many reasons why a species may invade a new area. The term invader potential may also be used interchangeably with invasiveness. Invader potential is a large threat to global biodiversity; it has been shown that ecosystem function is lost when species are introduced into areas where they are not native.
Invaders are species that, through biomass, abundance, and strong interactions with natives, have significantly altered the structure and composition of the established community. This differs greatly from the term "introduced", which merely refers to species that have been introduced to an environment, regardless of whether they have established successfully. Introduced species are simply organisms that have been accidentally, or deliberately, placed into an unfamiliar area. Many times, in fact, species do not have a strong impact on the introduced habitat. This can be for a variety of reasons: either the newcomers are not abundant, or they are small and unobtrusive.
Understanding the mechanisms of invader potential is important to understanding why species relocate and to predict future invasions. There are three predicted reasons as to why species invade an area. They are as follows: adaptation to physical environment, resource competition and/or utilization, and enemy release. Some of these reasons as to why species move seem relatively simple to understand. For example, species may adapt to the new physical environment through having great phenotypic plasticity and environmental tolerance. Species with high rates of these find it easier to adapt to new environments. In terms of resources, those with low resource requirements thrive in unknown areas more than those with complex resource needs. This is shown directly through Tilman's R* rule. Tho
|
https://en.wikipedia.org/wiki/Central%20canal
|
The central canal (also known as spinal foramen or ependymal canal) is the cerebrospinal fluid-filled space that runs through the spinal cord. The central canal lies below and is connected to the ventricular system of the brain, from which it receives cerebrospinal fluid, and shares the same ependymal lining. The central canal helps to transport nutrients to the spinal cord as well as protect it by cushioning the impact of a force when the spine is affected.
The central canal represents the adult remainder of the central cavity of the neural tube. It generally occludes (closes off) with age.
Structure
The central canal is continuous above with the ventricular system of the brain, beginning at a region called the obex, where the fourth ventricle, a cavity present in the brainstem, narrows.
The central canal is located in the anterior third of the spinal cord in the cervical and thoracic regions. In the lumbar spine it enlarges and is located more centrally. At the conus medullaris, where the spinal cord tapers, it is located more posteriorly.
Terminal ventricle
The terminal ventricle (ventriculus terminalis, fifth ventricle or ampulla caudalis) is the widest part of the central canal of the spinal cord, located at or near the conus medullaris. It was described by Stilling in 1859 and Krause in 1875. Krause introduced the term fifth ventricle after observation of normal ependymal cells. The central canal expands into a fusiform terminal ventricle approximately 8–10 mm in length in the conus medullaris (or conus terminalis). Although the terminal ventricle is visible in the fetus and children, it is usually absent in adults.
Sometimes, the terminal ventricle is observed by MRI or ultrasound in children less than 5 years old.
Microanatomy
The central canal shares the same ependymal lining as the ventricular system of the brain.
The canal is lined by ciliated, column-shaped cells, outside of which is a band of gelatinous substance, called the substantia gelatinosa centralis (or centra
|
https://en.wikipedia.org/wiki/Hierarchy%20%28mathematics%29
|
In mathematics, a hierarchy is a set-theoretical object, consisting of a preorder defined on a set. This is often referred to as an ordered set, though that is an ambiguous term that many authors reserve for partially ordered sets or totally ordered sets. The term pre-ordered set is unambiguous, and is always synonymous with a mathematical hierarchy. The term hierarchy is used to stress a hierarchical relation among the elements.
Sometimes, a set comes equipped with a natural hierarchical structure. For example, the set of natural numbers N is equipped with a natural pre-order structure, where n ≤ m whenever we can find some other number k so that n + k = m. That is, m is bigger than n only because we can get to m from n using k. This idea can be applied to any commutative monoid. On the other hand, the set of integers Z requires a more sophisticated argument for its hierarchical structure, since we can always solve the equation n + k = m by writing k = m − n.
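To make the natural-number example concrete, a tiny sketch (illustrative only; the helper name is made up) checks the pre-order relation "there exists k with n + k = m" in a commutative monoid given by its operation and a set of candidate elements:

```python
def preceq(a, b, elements, op):
    """a precedes b in the natural pre-order of a commutative monoid:
    true iff some element c exists with op(a, c) == b."""
    return any(op(a, c) == b for c in elements)

# In (N, +), 3 precedes 7 because 3 + 4 == 7, but 7 does not precede 3.
print(preceq(3, 7, range(21), lambda x, y: x + y))        # True
print(preceq(7, 3, range(21), lambda x, y: x + y))        # False
# In (Z, +) every element precedes every other, so all elements are equivalent.
print(preceq(7, 3, range(-20, 21), lambda x, y: x + y))   # True
```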
A mathematical hierarchy (a pre-ordered set) should not be confused with the more general concept of a hierarchy in the social realm, particularly when one is constructing computational models that are used to describe real-world social, economic or political systems. These hierarchies, or complex networks, are much too rich to be described in the category Set of sets. This is not just a pedantic claim; there are also mathematical hierarchies, in the general sense, that are not describable using set theory.
Other natural hierarchies arise in computer science, where the word refers to partially ordered sets whose elements are classes of objects of increasing complexity. In that case, the preorder defining the hierarchy is the class-containment relation. Containment hierarchies are thus special cases of hierarchies.
Related terminology
Individual elements of a hierarchy are often called levels and a hierarchy is said to be infinite if it has infinitely many distinct levels but said to collapse if it has only finitely many distinct levels.
Examp
|
https://en.wikipedia.org/wiki/IBM%203705%20Communications%20Controller
|
The IBM 3705 Communications Controller is a simple computer which attaches to an IBM System/360 or System/370. Its purpose is to connect communication lines to the mainframe channel. It was the first communications controller of the popular IBM 37xx series. It was announced in March 1972. Designed for semiconductor memory, which was not ready at the time of announcement, the 3705-I had to use 1.2-microsecond core storage; the later 3705-II uses 1.0-microsecond SRAM. Solid Logic Technology components, similar to those in S/370, were used.
The 3705 normally occupies a single frame two feet wide and three feet deep. Up to three expansion frames can be attached for a theoretical capacity of 352 half-duplex lines and two independent channel adapters.
The 3704 is an entry level version of the 3705 with limited features.
Purpose
IBM intended it to be used in three ways:
Emulation of the older IBM 2703 Communications Controller and its predecessors. The relevant software is the Emulation Program or EP.
Connection of Systems Network Architecture (SNA) devices to a mainframe. The relevant software is Network Control Program (NCP). When used in this fashion, the 3705 is considered an SNA PU4.
Combining the two methods above in a configuration is called a Partitioned Emulation Program or PEP.
Architecture
The storage word length is 16 bits. The registers have the same width as the address bus; their length varies between 16, 18 and 20 bits depending on the amount of storage installed. A particular interrupt level has eight registers. Register zero is the program counter, which gives the address of the next instruction to be executed; the other seven are accumulators. The four odd-numbered accumulators can be addressed as eight single-byte accumulators.
Instructions are fairly simple. Most are register-to-register or register-immediate instructions which execute in a single memory cycle. There are eight storage reference instructions which require two or three storage cycle
|
https://en.wikipedia.org/wiki/List%20of%20computer%20worms
|
See also
Timeline of notable computer viruses and worms
Comparison of computer viruses
List of trojan horses
|
https://en.wikipedia.org/wiki/MSN%20Games
|
MSN Games (also known as Zone.com - formerly known as The Village, Internet Gaming Zone, MSN Gaming Zone, and MSN Games by Zone.com) is a casual gaming web site, with single player, multiplayer, PC download, and social casino video games. Games are available in free online, trial, and full feature pay-to-play versions.
MSN Games is a part of Xbox Game Studios, associated with the MSN portal, and is owned by Microsoft, headquartered in Redmond, Washington.
History
The first version of the site, which was then called "The Village", was founded by Kevin Binkley, Ted Griggs, and Hoon Im. In 1996, Steve Murch, an employee of Microsoft, convinced Bill Gates and Steve Ballmer to acquire the small online game site, then owned by Electric Gravity. The site was rebranded to "Internet Gaming Zone" and launched in 1996.
It started with a handful of card and board games like Hearts, Spades, Checkers, Backgammon, and Bridge.
For the following 5 years, the Internet Gaming Zone would be renamed several times and would increase in popularity with the introduction of popular retail and MMORPG games, such as MechWarrior, Rainbow Six, UltraCorps, Age of Empires, Asheron's Call and Fighter Ace.
The website also featured a community forum, which was set up in 2006. This lasted until the closure of MSN Groups in 2009.
Microsoft announced in July 2019 that it would be shutting down the Internet series of games built into Windows operating systems. Windows XP and ME games were shut down on July 31, 2019, while the remaining games on Windows 7 were shut down on January 22, 2020 (a little over a week after Microsoft ceased support for Windows 7).
CD-ROM matchmaking
MSN Games announced in early 2006 the retiring of support for CD-ROM games, chat lobbies, the ZoneFriends client and the Member Plus program, scheduled for June 19, 2006:
"...as of June 19, 2006, we will be retiring our CD-ROM matchmaking service, along with the original versions of several classic card and board games: Cl
|
https://en.wikipedia.org/wiki/Advanced%20Continuous%20Simulation%20Language
|
The Advanced Continuous Simulation Language, or ACSL (pronounced "axle"), is a computer language designed for modeling and evaluating the performance of continuous systems described by time-dependent, nonlinear differential equations. Like SIMCOS and TUTSIM, ACSL is a dialect of the Continuous System Simulation Language (CSSL), originally designed by the Simulation Councils Inc (SCI) in 1967 in an attempt to unify the continuous simulations field.
Language highlights
ACSL is an equation-oriented language consisting of a set of arithmetic operators, standard functions, a set of special ACSL statements, and a MACRO capability which allows extension of the special ACSL statements.
ACSL is intended to provide a simple method of representing mathematical models on a digital computer. Working from an equation description of the problem or a block diagram, the user writes ACSL statements to describe the system under investigation.
An important feature of ACSL is its sorting of the continuous model equations, in contrast to general purpose programming languages such as Fortran where program execution depends critically on statement order.
Typical applications
Applications of ACSL in new areas are being developed constantly. Typical areas in which ACSL is currently applied include control system design, aerospace simulation, chemical process dynamics, power plant dynamics, plant and animal growth, toxicology models, vehicle handling, microprocessor controllers, and robotics.
|
https://en.wikipedia.org/wiki/AMD%20FireMV
|
AMD FireMV, formerly ATI FireMV, is the brand name for graphics cards marketed as multi-display 2D video cards, with 3D capabilities the same as the low-end Radeon graphics products. It competes directly with Matrox professional video cards. FireMV cards are aimed at corporate environments that require several displays attached to a single computer. FireMV cards come with either dual GPUs, giving a total of four display outputs via a VHDCI connector, or a single GPU, giving a total of two display outputs via a DMS-59 connector.
FireMV cards are available for PCI and PCI Express interfaces.
Although these are marketed by ATI as mainly 2D cards, the FireMV 2250 cards support OpenGL 2.0 since it is based on the RV516 GPU found in the Radeon X1000 Series released 2005.
The FireMV 2260 is the first video card to carry dual DisplayPort output in the workstation 2D graphics market, sporting DirectX 10.1 support.
Chipset table
See also
AMD Eyefinity – introduced with Radeon HD 5000 Series in September 2009
List of AMD graphics processing units
|
https://en.wikipedia.org/wiki/Tube%20lemma
|
In mathematics, particularly topology, the tube lemma, also called Wallace's theorem, is a useful tool in order to prove that the finite product of compact spaces is compact.
Statement
The lemma uses the following terminology:
If X and Y are topological spaces and X × Y is the product space, endowed with the product topology, a slice in X × Y is a set of the form {x} × Y for x ∈ X.
A tube in X × Y is a subset of the form U × Y, where U is an open subset of X. It contains all the slices {x} × Y for x ∈ U.
Tube lemma: Let X and Y be topological spaces with Y compact, and consider the product space X × Y. If N is an open set containing a slice in X × Y, then there exists a tube in X × Y containing this slice and contained in N.
Using the concept of closed maps, this can be rephrased concisely as follows: if X is any topological space and Y a compact space, then the projection map π1 : X × Y → X is closed.
Examples and properties
1. Consider ℝ × ℝ in the product topology, that is, the Euclidean plane, and the open set N = {(x, y) ∈ ℝ × ℝ : |xy| < 1}. The open set N contains {0} × ℝ but contains no tube, so in this case the tube lemma fails. Indeed, if W × ℝ is a tube containing {0} × ℝ and contained in N, then W must be a subset of (−1/|y|, 1/|y|) for all y ≠ 0, which means W = {0}, contradicting the fact that W is open in ℝ (because W × ℝ is a tube). This shows that the compactness assumption is essential.
2. The tube lemma can be used to prove that if $X$ and $Y$ are compact spaces, then $X \times Y$ is compact, as follows:
Let $\mathcal{A}$ be an open cover of $X \times Y$. For each $x \in X$, cover the slice $\{x\} \times Y$ by finitely many elements of $\mathcal{A}$ (this is possible since $\{x\} \times Y$ is compact, being homeomorphic to $Y$).
Call the union of these finitely many elements $N_x$.
By the tube lemma, there is an open set of the form $W_x \times Y$ containing $\{x\} \times Y$ and contained in $N_x$.
The collection of all $W_x$ for $x \in X$ is an open cover of $X$ and hence has a finite subcover $W_{x_1}, \dots, W_{x_n}$. Thus the finite collection $W_{x_1} \times Y, \dots, W_{x_n} \times Y$ covers $X \times Y$.
Using the fact that each $W_{x_i} \times Y$ is contained in $N_{x_i}$ and each $N_{x_i}$ is the finite union of elements of $\mathcal{A}$, one gets a finite subcollection of $\mathcal{A}$ that covers $X \times Y$.
3. By part 2 and induction, one can show that the finite product of compact spaces is compact.
4. The tube lemma cannot be used to prove the Tychonoff theorem, which generalizes the above to infinite products.
Proof
The tube lemma follows from the generalized tube lemma (which asserts that if $A \subseteq X$ and $B \subseteq Y$ are compact and $N$ is an open set containing $A \times B$, then there exist open sets $U \subseteq X$ and $V \subseteq Y$ with $A \times B \subseteq U \times V \subseteq N$) by taking $A = \{x\}$ and $B = Y$.
It therefore
|
https://en.wikipedia.org/wiki/Norpak
|
Norpak Corporation was a company headquartered in Kanata, Ontario, Canada, that specialized in the development of systems for television-based data transmission. In 2010, it was acquired by Ross Video Ltd. of Iroquois and Ottawa, Ontario.
Norpak developed the NABTS (North American Broadcast Teletext Standard) protocol for teletext in the 1980s as an improvement on the then-incumbent World System Teletext (WST) protocol. NABTS was designed to improve graphics capability over WST, but required a much more complex and expensive decoder, making NABTS something of a market failure for teletext. However, NABTS still thrives as a data protocol for embedding almost any form of digital data within the VBI of an analog video signal.
Norpak's products, now part of and complementary to the Ross Video line, include equipment for embedding data in a television or video signal such as for closed captioning, XDS, V-chip data, non-teletext NABTS data for closed-circuit data transmission, and other data protocols for VBI transmission.
|
https://en.wikipedia.org/wiki/Ambelopoulia
|
Ambelopoulia is a controversial traditional Cypriot dish of grilled, fried, pickled or boiled songbirds, enjoyed by native Cypriots and served in some Cypriot restaurants. It is illegal in Cyprus because it involves trapping wild birds such as blackcaps and European robins. Trapping kills birds indiscriminately, so internationally protected species of migratory birds are killed as well. Enforcement of the ban has been lax, and many restaurants serve the dish without consequence. As a result, about 2.4 million birds across Cyprus are estimated to have been killed during 2010. According to a BirdLife Cyprus report released in 2014, over 1.5 million migrating songbirds are killed annually, and the number is increasing each year. In 2015 it was estimated that over 2 million birds were killed, including over 800,000 in the British territories of Akrotiri and Dhekelia, with a further 800,000 killed there in autumn 2016.
The birds are trapped in either of two ways: Black, fine-mesh nylon fishing nets, which are difficult to see, are strung between planted acacia trees. Electronic bird calls lure the birds into entangling their wings and legs, or alternatively gravel is brought in by truck and thrown at the base of the trees to scare the birds into the nets. Others are trapped using glue sticks made from the berries of a local tree or birdlime. The glue sticks are placed on the branches of trees, and any birds that perch on them are stuck until the trapper returns to kill them (usually with a toothpick to the throat). Often the legs of the birds are so stuck to the glue sticks that they need to be pulled off. Protests against the removal of acacia scrub have resulted in remaining in 2016, compared with in 2014.
The trappers defend their activity by citing the practice as traditional Cypriot food gathering and claiming that this has been an important source of protein for the natives for many thousands of years, even though the practice has been illegal since 1974. BirdLife Cypr
|
https://en.wikipedia.org/wiki/HLH%20Orion
|
The Orion was a series of 32-bit super-minicomputers designed and produced in the 1980s by High Level Hardware Limited (HLH), a company based in Oxford, UK. The company produced four versions of the machine:
The original Orion, sometimes referred to as the "Microcodeable Orion".
The Orion 1/05, in which the microcodeable CPU was replaced with the much faster Fairchild Clipper RISC C-100 processor providing approximately 5.5 MIPS of integer performance and 1 Mflop of double precision floating point performance.
The Orion 1/07 which offered approximately 33% greater performance over the 1/05 (7.3 MIPS and 1.33 Mflops).
The Orion 1/10 based on a later generation C-300 Clipper from the Advanced Processor Division at Intergraph Corporation that required extensive cooling. The Orion 1/10 offered a further 30% improvement for integer and single precision floating point operations and over 150% improvement for double precision floating point (10 MIPS and 3 Mflops).
All four machines employed the same I/O sub-system.
Background
High Level Hardware was an independent British company formed in early 1982 by David G. Small and Timothy B. Robinson. David Small was previously a founder shareholder and director of Oxford-based Research Machines Limited. Both partners were previously senior members of Research Machines' Special Projects Group. In 1984, as a result of that research, High Level Hardware launched the Orion, a high performance, microcodeable, UNIX superminicomputer targeted particularly at scientific applications such as mathematical modeling, artificial intelligence and symbolic algebra.
In April 1987 High Level Hardware introduced a series of Orions based upon the Fairchild Clipper processor but abandoned the hardware market in late 1989 to concentrate on high-end Apple Macintosh sales.
Microcodeable Orion
The original Orion employed a processor architecture based on Am2900-series devices. This CPU was novel in that its microcode was writable; in other w
|
https://en.wikipedia.org/wiki/Quantum%20inequalities
|
Quantum inequalities are local constraints on the magnitude and extent of distributions of negative energy density in space-time. Initially conceived to clear up a long-standing problem in quantum field theory (namely, the potential for unconstrained negative energy density at a point), quantum inequalities have proven to have a diverse range of applications.
The form of the quantum inequalities is reminiscent of the uncertainty principle.
Energy conditions in classical field theory
Einstein's theory of General Relativity amounts to a description of the relationship between the curvature of space-time, on the one hand, and the distribution of matter throughout space-time on the other. The precise details of this relationship are determined by the Einstein equations
$G_{ab} = \kappa T_{ab}.$
Here, the Einstein tensor $G_{ab}$ describes the curvature of space-time, whilst the energy–momentum tensor $T_{ab}$ describes the local distribution of matter. ($\kappa$ is a constant.) The Einstein equations express local relationships between the quantities involved—specifically, this is a system of coupled non-linear second order partial differential equations.
A very simple observation can be made at this point: the zero-point of energy-momentum is not arbitrary. Adding a "constant" to the right-hand side of the Einstein equations will effect a change in the Einstein tensor, and thus also in the curvature properties of space-time.
All known classical matter fields obey certain "energy conditions". The most famous classical energy condition is the "weak energy condition"; this asserts that the local energy density, as measured by an observer moving along a time-like world line, is non-negative. The weak energy condition is essential for many of the most important and powerful results of classical relativity theory—in particular, the singularity theorems of Hawking et al.
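In standard index notation (the symbols here are the conventional ones and are not quoted from this excerpt), the weak energy condition states that for every timelike vector field $u^a$,
$T_{ab}\, u^{a} u^{b} \geq 0,$
that is, every observer with four-velocity $u^a$ measures a non-negative local energy density.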
Energy conditions in quantum field theory
The situation in quantum field theory is rather different: the expectation value of the energy density
|
https://en.wikipedia.org/wiki/Algorithmic%20state%20machine
|
The algorithmic state machine (ASM) is a method for designing finite state machines (FSMs), originally developed by Thomas E. Osborne at the University of California, Berkeley (UCB) beginning in 1960, introduced to and implemented at Hewlett-Packard in 1968, formalized and expanded from 1967, and written about by Christopher R. Clare from 1970. It is used to represent diagrams of digital integrated circuits. The ASM diagram is like a state diagram but more structured and, thus, easier to understand. An ASM chart is a method of describing the sequential operations of a digital system.
ASM method
The ASM method is composed of the following steps:
1. Create an algorithm, using pseudocode, to describe the desired operation of the device.
2. Convert the pseudocode into an ASM chart.
3. Design the datapath based on the ASM chart.
4. Create a detailed ASM chart based on the datapath.
5. Design the control logic based on the detailed ASM chart.
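As an informal illustration of steps 1 and 2 above (this is software rather than hardware, and the detector, state names, and signals are invented for the example), the following Python sketch encodes a small Moore machine the way an ASM chart organizes it: each state box lists its Moore output, and the decision logic tests an input to select the exit path.

# Illustrative sketch only: a tiny Moore machine expressed like an ASM chart.
# The dict plays the role of state boxes (with their listed outputs); next_state plays the decision boxes.

MOORE_OUTPUT = {"IDLE": 0, "GOT1": 0, "DETECT": 1}   # output listed inside each state box

def next_state(state, x):
    """Decision boxes: test input x and choose the exit path."""
    if state == "IDLE":
        return "GOT1" if x else "IDLE"
    if state == "GOT1":
        return "DETECT" if x else "IDLE"
    if state == "DETECT":
        return "DETECT" if x else "IDLE"
    raise ValueError("unknown state: %r" % state)

def run(bits, state="IDLE"):
    """Step the machine over a bit sequence and return the Moore outputs."""
    outputs = []
    for x in bits:
        state = next_state(state, x)
        outputs.append(MOORE_OUTPUT[state])
    return outputs

print(run([0, 1, 1, 1, 0, 1]))   # detects two consecutive 1s -> [0, 0, 1, 1, 0, 0]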
ASM chart
An ASM chart consists of an interconnection of four types of basic elements: state name, state box, decision box, and conditional outputs box. An ASM state, represented as a rectangle, corresponds to one state of a regular state diagram or finite state machine. The Moore type outputs are listed inside the box.
State Name: The name of the state is indicated inside a circle, which is placed in the top left corner of the state box; alternatively, the name may be written without the circle.
State Box: The output of the state is indicated inside the rectangular box.
Decision Box: A diamond indicates that the stated condition/expression is to be tested and the exit path is to be chosen accordingly. The condition expression contains one or more inputs to the FSM (Finite State Machine). An ASM condition check, indicated by a diamond with one input and two outputs (for true and false), is used to conditionally transfer between two State Boxes, to another Decision Box, or to a Conditional Output Box. The decision box contains the stated condition expressio
|
https://en.wikipedia.org/wiki/Radio%20access%20network
|
A radio access network (RAN) is part of a mobile telecommunication system implementing a radio access technology (RAT). Conceptually, it resides between a device such as a mobile phone, a computer, or any remotely controlled machine and its core network (CN), providing the connection between them. Depending on the standard, mobile phones and other wireless connected devices are variously known as user equipment (UE), terminal equipment, mobile station (MS), etc. RAN functionality is typically provided by a silicon chip residing in both the core network and the user equipment.
See the following diagram:
CN
/ ⧵
/ ⧵
RAN RAN
/ ⧵ / ⧵
UE UE UE UE
Examples of RAN types are:
GRAN: GSM
GERAN: essentially the same as GRAN but specifying the inclusion of EDGE packet radio services
UTRAN: UMTS
E-UTRAN: the evolved UTRAN for Long Term Evolution (LTE), providing high speed and low latency
It is also possible for a single handset/phone to be simultaneously connected to multiple RANs. Handsets capable of this are sometimes called dual-mode handsets. For instance it is common for handsets to support both GSM and UMTS (a.k.a. "3G") RATs. Such devices seamlessly transfer an ongoing call between different radio access networks without the user noticing any disruption in service.
See also
AirHop Communications
IP connectivity access network
C-RAN
Access network
|
https://en.wikipedia.org/wiki/Work%20%28thermodynamics%29
|
Thermodynamic work is one of the principal processes by which a thermodynamic system can interact with its surroundings and exchange energy. This exchange results in externally measurable macroscopic forces on the system's surroundings, which can cause mechanical work (to lift a weight, for example) or changes in electromagnetic or gravitational variables. The surroundings also can perform work on a thermodynamic system, which is measured by an opposite sign convention.
For thermodynamic work, appropriately chosen externally measured quantities are exactly matched by values of or contributions to changes in macroscopic internal state variables of the system, which always occur in conjugate pairs, for example pressure and volume or magnetic flux density and magnetization.
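For the pressure and volume pair just mentioned, the standard textbook expression (supplied here for orientation, not quoted from the article) for the work done by a closed system in a quasi-static change of volume from $V_1$ to $V_2$ is
$W = \int_{V_1}^{V_2} P \, \mathrm{d}V,$
with positive $W$ denoting work done by the system on its surroundings, following the sign convention described above.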
In the International System of Units (SI), work is measured in joules (symbol J). The rate at which work is performed is power, measured in joules per second, and denoted with the unit watt (W).
History
1824
Work, i.e. "weight lifted through a height", was originally defined in 1824 by Sadi Carnot in his famous paper Reflections on the Motive Power of Fire, where he used the term motive power for work. Specifically, according to Carnot:
We use here motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.
1845
In 1845, the English physicist James Joule wrote a paper On the mechanical equivalent of heat for the British Association meeting in Cambridge. In this paper, he reported his best-known experiment, in which the mechanical power released through the action of a "weight falling through a height" was used to turn a paddle-wheel in an insulated barrel of water.
In this experiment, the motion of the paddle wheel, through agitation and friction, heated the body of water, s
|
https://en.wikipedia.org/wiki/Elastic%20energy
|
Elastic energy is the mechanical potential energy stored in the configuration of a material or physical system as it is subjected to elastic deformation by work performed upon it. Elastic energy occurs when objects are impermanently compressed, stretched or generally deformed in any manner. Elasticity theory primarily develops formalisms for the mechanics of solid bodies and materials. (Note however, the work done by a stretched rubber band is not an example of elastic energy. It is an example of entropic elasticity.) The elastic potential energy equation is used in calculations of positions of mechanical equilibrium. The energy is potential as it will be converted into other forms of energy, such as kinetic energy and sound energy, when the object is allowed to return to its original shape (reformation) by its elasticity.
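As a concrete instance of such an equation (the standard Hooke's-law result, given for illustration rather than quoted from this excerpt), the elastic potential energy stored in an ideal spring of stiffness $k$ displaced by $x$ from its natural length is
$U = \tfrac{1}{2} k x^{2}.$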
The essence of elasticity is reversibility. Forces applied to an elastic material transfer energy into the material which, upon yielding that energy to its surroundings, can recover its original shape. However, all materials have limits to the degree of distortion they can endure without breaking or irreversibly altering their internal structure. Hence, the characterizations of solid materials include specification, usually in terms of strains, of its elastic limits. Beyond the elastic limit, a material is no longer storing all of the energy from mechanical work performed on it in the form of elastic energy.
Elastic energy of or within a substance is static energy of configuration. It corresponds to energy stored principally by changing the interatomic distances between nuclei. Thermal energy is the randomized distribution of kinetic energy within the material, resulting in statistical fluctuations of the material about the equilibrium configuration. There is some interaction, however. For example, for some solid objects, twisting, bending, and other distortions may generate thermal energy, causing the material's temperature to rise. Ther
|
https://en.wikipedia.org/wiki/X86%20memory%20models
|
In computing, the x86 memory models are a set of six different memory models of the x86 CPU operating in real mode which control how the segment registers are used and the default size of pointers.
Memory segmentation
Four registers are used to refer to four segments on the 16-bit x86 segmented memory architecture: DS (data segment), CS (code segment), SS (stack segment), and ES (extra segment). Another 16-bit register can act as an offset into a given segment, and so a logical address on this platform is written segment:offset, typically in hexadecimal notation. In real mode, in order to calculate the physical address of a byte of memory, the hardware shifts the contents of the appropriate segment register 4 bits left (effectively multiplying by 16), and then adds the offset.
For example, the logical address 7522:F139 yields the 20-bit physical address 7522h × 10h + F139h = 75220h + F139h = 84359h.
Note that this process leads to aliasing of memory, such that any given physical address has up to 4096 corresponding logical addresses. This complicates the comparison of pointers to different segments.
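A short Python sketch (illustrative only; the function name is invented) makes the real-mode translation and the resulting aliasing concrete:

# Real-mode x86 address translation: physical = (segment << 4) + offset (illustrative sketch).

def physical_address(segment: int, offset: int) -> int:
    """Translate a real-mode segment:offset pair into a 20-bit physical address."""
    return ((segment << 4) + offset) & 0xFFFFF   # wrap at 1 MiB, as on a 20-bit address bus

# The example from the text: 7522:F139
print(hex(physical_address(0x7522, 0xF139)))     # 0x84359

# Aliasing: many different segment:offset pairs name the same physical byte.
print(physical_address(0x1234, 0x0005) == physical_address(0x1000, 0x2345))   # True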
Pointer sizes
Pointer formats are known as near, far, or huge.
Near pointers are 16-bit offsets within the reference segment, i.e. DS for data and CS for code. They are the fastest pointers, but are limited to point to 64 KB of memory (to the associated segment of the data type). Near pointers can be held in registers (typically SI and DI).
mov bx, word [reg]    ; load the near pointer (a 16-bit offset) into BX; [reg] stands for wherever the pointer is stored
mov ax, word [bx]     ; fetch the low word it points to, addressed relative to DS
mov dx, word [bx+2]   ; fetch the following word
Far pointers are 32-bit pointers containing a segment and an offset. To use them, the segment register ES is loaded together with an offset register using the les instruction (for example, les bx, [mem] loads ES:BX from a 32-bit far pointer stored in memory). They may reference up to 1024 KiB of memory. Note that pointer arithmetic (addition and subtraction) does not modify the segment portion of the pointer, only its offset. Operations which exceed the bounds of zero or 65535 (0xFFFF) will undergo modulo 64K operation just as any normal 16-bit operation. For example, if the segment regi
|
https://en.wikipedia.org/wiki/Motorola%206847
|
The MC6847 is a Video Display Generator (VDG) first introduced by Motorola in 1978 and used in the TRS-80 Color Computer, Dragon 32/64, Laser 200, TRS-80 MC-10/Matra Alice, NEC PC-6000 series, Acorn Atom, and the APF Imagination Machine, among others. It is a relatively simple display generator compared to other display chips of the time. It is capable of displaying alphanumeric text, semigraphics and raster graphics contained within a roughly square display matrix 256 pixels wide by 192 lines high.
The ROM includes a 5 x 7 pixel font, compatible with 6-bit ASCII. Effects such as inverse video or colored text (green on dark green; orange on dark orange) are possible.
The hardware palette is composed of twelve colors: black, green, yellow, blue, red, buff (almost-but-not-quite white), cyan, magenta, and orange (two extra colors, dark green and dark orange, are the ink colours for all alphanumeric text mode characters, and a light orange color is available as an alternative to green as the background color). According to the MC6847 datasheet, the colors are formed by the combination of three signals (one with six possible levels and two with three possible levels each), based on the YPbPr colorspace, and then converted for output into an NTSC analog signal.
The low display resolution is a necessity of using television sets as display monitors. Making the display wider risked cutting off characters due to overscan. Compressing more dots into the display window would easily exceed the resolution of the television and be useless.
Variants
According to the datasheets, there are non-interlaced (6847) and interlaced (6847Y) variants, plus the 6847T1 (non-interlaced only). The chips can be found with ceramic (L suffix), plastic (P suffix) or CERDIP (S suffix) packages.
Die pictures
Signal levels and color palette
The chip outputs a NTSC-compatible progressive scan signal composed of one field of 262 lines 60 times per second.
According to the MC6847
|
https://en.wikipedia.org/wiki/Paul%20Matthieu%20Hermann%20Laurent
|
Paul Matthieu Hermann Laurent (2 September 1841, in Luxembourg City – 19 February 1908, in Paris, France) was a French mathematician. Despite his large body of works, Laurent series expansions for complex functions were not named after him, but after Pierre Alphonse Laurent.
Publications
|
https://en.wikipedia.org/wiki/Phase-shift%20oscillator
|
A phase-shift oscillator is a linear electronic oscillator circuit that produces a sine wave output. It consists of an inverting amplifier element such as a transistor or op amp with its output fed back to its input through a phase-shift network consisting of resistors and capacitors in a ladder network. The feedback network 'shifts' the phase of the amplifier output by 180 degrees at the oscillation frequency to give positive feedback. Phase-shift oscillators are often used at audio frequency as audio oscillators.
The filter produces a phase shift that increases with frequency. It must have a maximum phase shift of more than 180 degrees at high frequencies so the phase shift at the desired oscillation frequency can be 180 degrees. The most common phase-shift network cascades three identical resistor-capacitor stages that produce a phase shift of zero at low frequencies and 270° at high frequencies.
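For the textbook case of three identical RC sections, the standard results (quoted here for orientation, not taken from this excerpt) are an oscillation frequency of
$f = \frac{1}{2\pi R C \sqrt{6}}$ for sections with series capacitors (high-pass), or $f = \frac{\sqrt{6}}{2\pi R C}$ for sections with series resistors (low-pass),
at which the network attenuates the signal by a factor of 29, so the amplifier must supply a voltage gain of at least 29 for oscillation to start.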
The first integrated circuit was a phase shift oscillator invented by Jack Kilby in 1958.
Implementations
Bipolar implementation
This schematic drawing shows the oscillator using a common-emitter connected bipolar transistor as an amplifier. The two resistors R and three capacitors C form the RC phase-shift network which provides feedback from collector to base of the transistor. Resistor Rb provides base bias current. Resistor Rc is the collector load resistor for the collector current. Resistor Rs isolates the circuit from the external load.
FET implementation
This circuit implements the oscillator with a FET. R1, R2, Rs, and Cs provide bias for the transistor. Note that the topology used for positive feedback is voltage series feedback.
Op-amp implementation
The implementation of the phase-shift oscillator shown in the diagram uses an operational amplifier (op-amp), three capacitors and four resistors.
The circuit's modeling equations for the oscillation frequency and oscillation criterion are complicated because each RC stage loads the preceding ones. Assum
|
https://en.wikipedia.org/wiki/Sensory%20gating
|
Sensory gating describes neural processes of filtering out redundant or irrelevant stimuli from all possible environmental stimuli reaching the brain. Also referred to as gating or filtering, sensory gating prevents an overload of information in the higher cortical centers of the brain. Sensory gating can also occur in different forms through changes in both perception and sensation, affected by various factors such as "arousal, recent stimulus exposure, and selective attention."
Although sensory gating is largely automatic, it also occurs within the context of attention processing as the brain selectively seeks goal-relevant information. Previous studies have shown a correlation between sensory gating and different cognitive functions, but there is not yet solid evidence that the relationship between sensory gating and cognitive functions is modality-independent.
Cocktail party effect
The cocktail party effect illustrates how the brain inhibits input from environmental stimuli, while still processing sensory input from the attended stimulus. The cocktail party effect demonstrates sensory gating in hearing, but the other senses also go through the same process protecting primary cortical areas from being overwhelmed.
Neural regions involved
Information from sensory receptors makes its way to the brain through neurons that synapse at the thalamus. The pulvinar nuclei of the thalamus play a major role in attention and in filtering out unnecessary information with regard to sensory gating. In clinical studies, two auditory stimuli (S1 and S2) are typically presented about 500 ms apart, with roughly 8 seconds between pairs; S1 is thought to generate a memory trace, presumably lingering in the hippocampal region, against which the later S2 is compared, and the response to S2 is inhibited if it provides no new information. (Both S1 and S2 are commonly referred to auditory stimuli
|
https://en.wikipedia.org/wiki/Sympathetic%20ophthalmia
|
Sympathetic ophthalmia (SO), also called spared eye injury, is a diffuse granulomatous inflammation of the uveal layer of both eyes following trauma to one eye. It can leave the affected person completely blind. Symptoms may develop from days to several years after a penetrating eye injury. It typically results from a delayed hypersensitivity reaction.
Signs and symptoms
Eye floaters and loss of accommodation are among the earliest symptoms. The disease may progress to severe inflammation of the uveal layer of the eye (uveitis) with pain and sensitivity of the eyes to light. The affected eye often remains relatively painless while the inflammatory disease spreads through the uvea, where characteristic focal infiltrates in the choroid named Dalén–Fuchs nodules can be seen. The retina, however, usually remains uninvolved, although perivascular cuffing of the retinal vessels with inflammatory cells may occur. Swelling of the optic disc (papilledema), secondary glaucoma, vitiligo, and poliosis of the eyelashes may accompany SO.
Pathophysiology
Sympathetic ophthalmia is currently thought to be an autoimmune inflammatory response toward ocular antigens, specifically a delayed hypersensitivity to melanin-containing structures from the outer segments of the photoreceptor layer of the retina. The immune system, which normally is not exposed to ocular proteins, is introduced to the contents of the eye following traumatic injury. Once exposed, it senses these antigens as foreign, and begins attacking them. The onset of this process can be from days to years after the inciting traumatic event.
Diagnosis
Diagnosis is clinical, seeking a history of eye injury. An important differential diagnosis is Vogt–Koyanagi–Harada syndrome (VKH), which is thought to have the same pathogenesis, without a history of surgery or penetrating eye injury.
Still experimental, skin tests with soluble extracts of human or bovine uveal tissue are said to elicit delayed hypersensitivity responses in
|
https://en.wikipedia.org/wiki/Minimal%20instruction%20set%20computer
|
Minimal instruction set computer (MISC) is a central processing unit (CPU) architecture, usually in the form of a microprocessor, with a very small number of basic operations and corresponding opcodes, together forming an instruction set. Such sets are commonly stack-based rather than register-based to reduce the size of operand specifiers.
Such a stack machine architecture is inherently simpler since all instructions operate on the top-most stack entries.
One result of the stack architecture is an overall smaller instruction set, allowing a smaller and faster instruction decode unit with overall faster operation of individual instructions.
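A toy interpreter (the opcodes below are invented and do not correspond to any real MISC instruction set) illustrates why a stack architecture keeps the instruction set and its operand specifiers small: every operation implicitly addresses the top of the stack, and only PUSH carries an operand.

# Toy stack-machine interpreter with a deliberately tiny, invented instruction set.

def run(program):
    stack = []
    for op, *arg in program:
        if op == "PUSH":          # the only instruction that carries an operand
            stack.append(arg[0])
        elif op == "ADD":         # all others act on the top of the stack
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        elif op == "DUP":
            stack.append(stack[-1])
        elif op == "PRINT":
            print(stack[-1])
        else:
            raise ValueError("unknown opcode: %r" % op)
    return stack

# (2 + 3) * (2 + 3) computed with five opcodes and no explicit operand addressing
run([("PUSH", 2), ("PUSH", 3), ("ADD",), ("DUP",), ("MUL",), ("PRINT",)])   # prints 25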
Characteristics and design philosophy
Apart from being stack-based, a MISC architecture is also defined by the number of instructions it supports.
Typically a minimal instruction set computer is viewed as having 32 or fewer instructions, where NOP, RESET, and CPUID type instructions are usually not counted by consensus due to their fundamental nature.
32 instructions is viewed as the highest allowable number of instructions for a MISC, though 16 or 8 instructions are closer to what is meant by "Minimal Instructions".
A MISC CPU cannot have zero instructions as that is a zero instruction set computer.
A MISC CPU cannot have one instruction as that is a one instruction set computer.
The implemented CPU instructions should by default not support a wide set of inputs, so this typically means an 8-bit or 16-bit CPU.
If a CPU has an NX bit, it is more likely to be viewed as being a complex instruction set computer (CISC) or reduced instruction set computer (RISC).
MISC chips typically lack hardware memory protection of any kind, unless there is an application specific reason to have the feature.
If a CPU has a microcode subsystem, that excludes it from being a MISC.
The only addressing mode considered acceptable for a MISC CPU to have is load/store, the same as for reduced instruction s
|
https://en.wikipedia.org/wiki/Compaq%20Deskpro
|
The Compaq Deskpro is a line of business-oriented desktop computers manufactured by Compaq, then discontinued after the merger with Hewlett-Packard. Models were produced containing microprocessors from the 8086 up to the x86-based Intel Pentium 4.
History
Deskpro (8086) and Deskpro 286
The original Compaq Deskpro (released in 1984), available in several disk configurations, is an XT-class PC equipped with an 8 MHz 8086 CPU and Compaq's unique display hardware that combined Color Graphics Adapter graphics with high resolution Monochrome Display Adapter text. As a result, it was considerably faster than the original IBM PC, the XT and the AT, and had a much better quality text display compared to IBM PCs which were equipped with either the IBM Monochrome Display Adapter or Color Graphics Adapter cards.
Its hardware and BIOS were claimed to be 100% compatible with the IBM PC, like the earlier Compaq Portable. This compatibility had given Compaq the lead over companies like Columbia Data Products, Dynalogic, Eagle Computer and Corona Data Systems. The latter two companies were threatened by IBM for BIOS copyright infringement, and settled out of court, agreeing to re-implement their BIOS. Compaq used a clean room design reverse-engineered BIOS, avoiding legal jeopardy.
In 1985, Compaq released the Deskpro 286, which looks quite similar to the IBM PC/AT.
Deskpro 386
In September 1986, the Deskpro 386 was announced after Intel released its 80386 microprocessor, beating IBM's comparable 386 computer to market by seven months and thus making a name for Compaq. The IBM-made 386DX machine, the IBM PS/2 Model 80, reached the market almost a year later.
PC Tech Journal honored the Deskpro 386 with its 1986 Product of the Year award.
The Deskpro 386/25 was released in August 1988 and cost $10,299.
Other
The form factor for the Compaq Deskpro is mostly the desktop model which lies upon a desk, with a monitor placed on top of it. Compaq has produced many tower upright models t
|
https://en.wikipedia.org/wiki/Neon%20Museum
|
The Neon Museum in Las Vegas, Nevada, United States, features signs from old casinos and other businesses displayed outdoors on . The museum features a restored lobby shell from the defunct La Concha Motel as its visitors' center, which officially opened on October 27, 2012.
For many years, the Young Electric Sign Company (YESCO) stored many of these old signs in their "boneyard." The signs were slowly being destroyed by exposure to the elements.
The signs are considered by Las Vegas locals, business owners and government organizations to be not only artistically, but also historically, significant to the culture of the city. Each of the restored signs in the collection holds a story about who created it and why it is important.
History
The Neon Museum was founded in 1996 as a partnership between the Allied Arts Council of Southern Nevada and the City of Las Vegas. Today, it is an independent 501(c)3 non-profit. Located on Las Vegas Boulevard North, the Neon Museum includes the Neon Boneyard and the North Gallery.
The impetus behind the collecting of signs was the loss of the iconic sign from The Sands after it was replaced with a new sign in the 1980s. There was no place to store the massive sign, and it was scrapped. After nearly 10 years of collecting signs, the Allied Arts Council of Southern Nevada and the city of Las Vegas worked together to create an institution to house and care for the saved signs. To mark its official opening in November 1996, the Neon Museum restored and installed the Hacienda Horse & Rider sign at the intersection of Las Vegas Boulevard and Fremont Street. However, access to the collection was provided by appointment only. Annual attendance was approximately 12,000–20,000 during this time.
In 2005, the historic La Concha lobby was donated to the museum by owners of the La Concha Motel, the Doumani family. Although it cost nearly $3 million to move and restore the La Concha, the plans to open a museum became concrete after the donation
|
https://en.wikipedia.org/wiki/Unit-weighted%20regression
|
In statistics, unit-weighted regression is a simplified and robust version (Wainer & Thissen, 1976) of multiple regression analysis where only the intercept term is estimated. That is, it fits a model
$\hat{y} = \hat{\beta}_0 + \sum_{i} x_i$
where each of the $x_i$ are binary variables, perhaps multiplied with an arbitrary weight.
Contrast this with the more common multiple regression model, where each predictor has its own estimated coefficient:
$\hat{y} = \hat{\beta}_0 + \sum_{i} \hat{\beta}_i x_i$
In the social sciences, unit-weighted regression is sometimes used for binary classification, i.e. to predict a yes-no answer, where $y = 0$ indicates "no" and $y = 1$ "yes". It is easier to interpret than multiple linear regression (known as linear discriminant analysis in the classification case).
Unit weights
Unit-weighted regression is a method of robust regression that proceeds in three steps. First, predictors for the outcome of interest are selected; ideally, there should be good empirical or theoretical reasons for the selection. Second, the predictors are converted to a standard form. Finally, the predictors are added together, and this sum is called the variate, which is used as the predictor of the outcome.
Burgess method
The Burgess method was first presented by the sociologist Ernest W. Burgess in a 1928 study to determine success or failure of inmates placed on parole. First, he selected 21 variables believed to be associated with parole success. Next, he converted each predictor to the standard form of zero or one (Burgess, 1928). When predictors had two values, the value associated with the target outcome was coded as one. Burgess selected success on parole as the target outcome, so a predictor such as a history of theft was coded as "yes" = 0 and "no" = 1. These coded values were then added to create a predictor score, so that higher scores predicted a better chance of success. The scores could possibly range from zero (no predictors of success) to 21 (all 21 predictors scored as predicting success).
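A minimal sketch of this style of unit-weighted scoring (the predictor names and case data below are invented; only the 0/1 coding and the summation follow the description above):

# Unit-weighted (Burgess-style) scoring: code each predictor 0/1, then sum (illustrative sketch).

# Hypothetical predictors: True means the value associated with the target outcome (success).
cases = [
    {"no_theft_history": True,  "stable_employment": True,  "first_offense": False},
    {"no_theft_history": False, "stable_employment": False, "first_offense": False},
]

def burgess_score(case):
    """Sum of 0/1-coded predictors; higher scores predict a better chance of success."""
    return sum(int(v) for v in case.values())

for c in cases:
    print(burgess_score(c))   # 2, then 0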
For predictors with more than two values, the Burgess
|
https://en.wikipedia.org/wiki/Pre-ferment
|
A pre-ferment (also known as bread starter) is a fermentation starter used in indirect methods of bread making. It may also be called mother dough.
A ferment and a longer fermentation in the bread-making process have several benefits: there is more time for yeast, enzyme and, if sourdough, bacterial actions on the starch and proteins in the dough; this in turn improves the keeping time of the baked bread, and it creates greater complexities of flavor. Though ferments have declined in popularity as direct additions of yeast in bread recipes have streamlined the process on a commercial level, ferments of various forms are widely used in artisanal bread recipes and formulas.
Classifications
In general, there are two ferment varieties: sponges, based on baker's yeast, and the starters of sourdough, based on wild yeasts and lactic acid bacteria. There are several kinds of pre-ferment commonly named and used in bread baking. They all fall on a varying process and time spectrum, from a mature mother dough of many generations of age to a first-generation sponge based on a fresh batch of baker's yeast:
Biga and poolish (or pouliche) are terms used in Italian and French baking, respectively, for sponges made with domestic baker's yeast. Poolish is a fairly wet sponge (typically made with a one-part-flour-to-one-part-water ratio by weight), and in Italian it is called biga liquida, whereas the "normal" biga is usually drier. Bigas can be held longer at their peak than wetter sponges, while using a poolish is one known technique for increasing a dough's extensibility.
Sourdough starter is likely the oldest, being reliant on organisms present in the grain and local environment. In general, these starters have fairly complex microbiological makeups, the most notable including wild yeasts, lactobacillus, and acetobacteria in symbiotic relationship referred to as a SCOBY. They are often maintained over long periods of time. For example, the Boudin Bakery in San Francisco has used t
|
https://en.wikipedia.org/wiki/Altera%20Hardware%20Description%20Language
|
Altera Hardware Description Language (AHDL) is a proprietary hardware description language (HDL) developed by Altera Corporation. AHDL is used for digital logic design entry for Altera's complex programmable logic devices (CPLDs) and field-programmable gate arrays (FPGAs). It is supported by Altera's MAX-PLUS and Quartus series of design software. AHDL has an Ada-like syntax and its feature set is comparable to the synthesizable portions of the Verilog and VHDL hardware description languages. In contrast to HDLs such as Verilog and VHDL, AHDL is a design-entry language only; all of its language constructs are synthesizable. By default, Altera software expects AHDL source files to have a .tdf extension (Text Design Files).
Example
% a simple AHDL up counter, released to public domain 13 November 2006 %
% [block comments are delimited with percent signs] %
% like c, ahdl functions must be prototyped %
% PROTOTYPE:
FUNCTION COUNTER (CLK)
RETURNS (CNTOUT[7..0]); %
% function declaration, where inputs, outputs, and
bidirectional pins are declared %
% also like c, square brackets indicate an array %
SUBDESIGN COUNTER
(
CLK :INPUT;
CNTOUT[7..0] :OUTPUT;
)
% variables can be anything from flip-flops (as in this case),
tri-state buffers, state machines, to user defined functions %
VARIABLE
TIMER[7..0]: DFF;
% as with all hardware description languages, think of this
less as an algorithm and more as wiring nodes together %
BEGIN
DEFAULTS
TIMER[].prn = VCC; % this takes care of d-ff resets %
TIMER[].clrn = VCC;
END DEFAULTS;
TIMER[].clk = CLK;            % clock every flip-flop from the input pin %
TIMER[].d = TIMER[].q + H"1"; % increment the stored count each clock %
CNTOUT[] = TIMER[].q;         % drive the counter value onto the outputs %
END;
|
https://en.wikipedia.org/wiki/California%20Native%20Plant%20Society
|
The California Native Plant Society (CNPS) is a California environmental non-profit organization (501(c)3) that seeks to increase understanding of California's native flora and to preserve it for future generations. The mission of CNPS is to conserve California native plants and their natural habitats, and increase understanding, appreciation, and horticultural use of native plants throughout the entire state and California Floristic Province.
History
California Native Plant Society was founded in 1965 by professional botanists and grassroots activists who, after saving an important native plant garden in Berkeley's Tilden Regional Park, were inspired to create an ongoing organization with the mission to save and promote the native plants of California.
Structure
For 50 years, professional CNPS staff and volunteers have worked alongside scientists, government officials, and regional planners to protect habitats and species, and to advocate for well-informed environmental practices, regulations, and policies. The organization works at the local level through the various regional chapters, and at the state level through its five major programs, board of directors, Chapter Council, and state office.
CNPS continues to be a grassroots organization, with nearly 10,000 members and volunteers in 35 chapters covering the state of California and northwest Baja California. Chapter volunteers promote CNPS’s mission to conserve California’s native plants and their natural habitats, and to increase the horticultural uses of native plants at the local level. Membership is open to everyone, and chapter activities such as field trips, restoration projects, meetings, symposia, public garden maintenance, and plant sales are open to the public.
At the state organizational level, CNPS has five core programs in Conservation, Rare Plant Science, Vegetation Science, Education, and Horticulture. Each program has dedicated CNPS staff supported by volunteer committees consist
|
https://en.wikipedia.org/wiki/Variation%20and%20Evolution%20in%20Plants
|
Variation and Evolution in Plants is a book written by G. Ledyard Stebbins, published in 1950. It is one of the key publications embodying the modern synthesis of evolution and genetics, as the first comprehensive publication to discuss the relationship between genetics and natural selection in plants. The book has been described by plant systematist Peter H. Raven as "the most important book on plant evolution of the 20th century" and it remains one of the most cited texts on plant evolution.
Origin
The book is based on the Jesup Lectures that Stebbins delivered at Columbia University in October and November 1946 and is a synthesis of his ideas and the then current research on the evolution of seed plants in terms of genetics.
Contents
The book is written in fourteen parts:
Description and analysis of variation patterns
Examples of variation patterns within species and genera
The basis of individual variation
Natural selection and variation in populations
Genetic systems as factors in evolution
Isolation and the origin of species
Hybridization and its effects
Polyploidy I: occurrence and nature of polyploid types
Polyploidy II: geographic distribution and significance of polyploidy
Apomixis in relation to variation and evolution
Structural hybridity and the genetic system
Evolutionary trends I: the karyotype
Evolutionary trends II: External morphology
Fossils, modern distribution patterns and rates of evolution
Significance
The 643-page book cites more than 1,250 references and was the longest of the four books associated with the modern evolutionary synthesis. The other key works of the modern synthesis, whose publication also followed their authors' Jesup lectures, are Theodosius Dobzhansky's Genetics and the Origin of Species, Ernst Mayr's Systematics and the Origin of Species and George Gaylord Simpson's Tempo and Mode in Evolution. The great significance of Variation and Evolution in Plants is that it effectively killed any serious belief in altern
|
https://en.wikipedia.org/wiki/UNIVAC%201050
|
The UNIVAC 1050 was a variable word-length (one to 16 characters) decimal and binary computer. It was initially announced in May 1962 as an off-line input-output processor for larger UNIVAC systems.
Instructions were fixed length (30 bits – five characters), consisting of a five-bit "op code", a three-bit index register specifier, one reserved bit, a 15-bit address, and a six-bit "detail field" whose function varies with each instruction. The memory was up to 32K of six-bit characters.
Like the IBM 1401, the 1050 was commonly used as an off-line peripheral controller in many installations of both large "scientific" computers and large "business" computers. In these installations the big computer (e.g., a UNIVAC III) did all of its input-output on magnetic tapes, and the 1050 was used to format input data from other peripherals (e.g., punched card readers) on the tapes and transfer output data from the tapes to other peripherals (e.g., punched card punches or the line printer).
A version used by the U.S. Air Force, the U1050-II real-time system, had some extra peripherals. The most significant of these was the FASTRAND 1 Drum Storage Unit. This physically large device had two contra-rotating drums mounted horizontally, one above the other in a pressurized cabinet. Read-write heads were mounted on a horizontally moving beam between the drums, driven by a voice coil servo external to the pressurized cabinet. This high-speed access subsystem allowed the real-time operation. Another feature was the communications subsystem with modem links to remote sites. A Uniservo VI-C tape drive provided an audit trail for the transactions. Other peripherals were the card reader and punch, and printer. The operator's console had the 'stop and go' buttons and a Teletype Model 33 teleprinter for communication and control. The initial Air Force order in November 1963 was for 152 systems.
Subsequently, UNIVAC released the 1050 Model III (1050-III) and 1050 Model IV (1050-IV) f
|
https://en.wikipedia.org/wiki/Actuarial%20present%20value
|
The actuarial present value (APV) is the expected value of the present value of a contingent cash flow stream (i.e. a series of payments which may or may not be made). Actuarial present values are typically calculated for the benefit-payment or series of payments associated with life insurance and life annuities. The probability of a future payment is based on assumptions about the person's future mortality which is typically estimated using a life table.
Life insurance
Whole life insurance pays a pre-determined benefit either at or soon after the insured's death. The symbol (x) is used to denote "a life aged x" where x is a non-random parameter that is assumed to be greater than zero. The actuarial present value of one unit of whole life insurance issued to (x) is denoted by the symbol $A_x$ or $\bar{A}_x$ in actuarial notation. Let G>0 (the "age at death") be the random variable that models the age at which an individual, such as (x), will die. And let T (the future lifetime random variable) be the time elapsed between age-x and whatever age (x) is at the time the benefit is paid (even though (x) is most likely dead at that time). Since T is a function of G and x we will write T=T(G,x). Finally, let Z be the present value random variable of a whole life insurance benefit of 1 payable at time T. Then:
$Z = v^{T} = (1+i)^{-T} = e^{-\delta T}$
where i is the effective annual interest rate and δ is the equivalent force of interest.
To determine the actuarial present value of the benefit we need to calculate the expected value of this random variable Z. Suppose the death benefit is payable at the end of year of death. Then T(G, x) := ceiling(G - x) is the number of "whole years" (rounded upwards) lived by (x) beyond age x, so that the actuarial present value of one unit of insurance is given by:
$A_x = E[Z] = \sum_{t=0}^{\infty} v^{t+1} \, {}_{t}p_x \, q_{x+t}$
where ${}_{t}p_x$ is the probability that (x) survives to age x+t, and $q_{x+t}$ is the probability that (x+t) dies within one year.
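A small numerical sketch of the end-of-year-of-death formula above (the interest rate and the four-year mortality table are invented for illustration):

# Actuarial present value of 1 unit of whole life insurance, benefit paid at the end of
# the year of death (illustrative sketch with an invented 4-year mortality table).

i = 0.05                      # effective annual interest rate
v = 1.0 / (1.0 + i)           # annual discount factor
q = [0.10, 0.20, 0.40, 1.00]  # q_{x+t}: probability of dying in year t+1, given alive at age x+t

def whole_life_apv(q, v):
    apv, survival = 0.0, 1.0                   # survival accumulates tp_x
    for t, q_t in enumerate(q):
        apv += v ** (t + 1) * survival * q_t   # discount * tp_x * q_{x+t}
        survival *= (1.0 - q_t)
    return apv

print(round(whole_life_apv(q, v), 4))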
If the benefit is payable at the moment of death, then T(G,x): = G - x and the actuarial present value of one un
|
https://en.wikipedia.org/wiki/Two-dimensional%20electron%20gas
|
A two-dimensional electron gas (2DEG) is a scientific model in solid-state physics. It is an electron gas that is free to move in two dimensions, but tightly confined in the third. This tight confinement leads to quantized energy levels for motion in the third direction, which can then be ignored for most problems. Thus the electrons appear to be a 2D sheet embedded in a 3D world. The analogous construct of holes is called a two-dimensional hole gas (2DHG), and such systems have many useful and interesting properties.
Realizations
Most 2DEGs are found in transistor-like structures made from semiconductors. The most commonly encountered 2DEG is the layer of electrons found in MOSFETs (metal–oxide–semiconductor field-effect transistors). When the transistor is in inversion mode, the electrons underneath the gate oxide are confined to the semiconductor-oxide interface, and thus occupy well defined energy levels. For thin-enough potential wells and temperatures not too high, only the lowest level is occupied (see the figure caption), and so the motion of the electrons perpendicular to the interface can be ignored. However, the electron is free to move parallel to the interface, and so is quasi-two-dimensional.
Other methods for engineering 2DEGs are high-electron-mobility-transistors (HEMTs) and rectangular quantum wells. HEMTs are field-effect transistors that utilize the heterojunction between two semiconducting materials to confine electrons to a triangular quantum well. Electrons confined to the heterojunction of HEMTs exhibit higher mobilities than those in MOSFETs, since the former device utilizes an intentionally undoped channel thereby mitigating the deleterious effect of ionized impurity scattering. Two closely spaced heterojunction interfaces may be used to confine electrons to a rectangular quantum well. Careful choice of the materials and alloy compositions allow control of the carrier densities within the 2DEG.
Electrons may also be confined to the surf
|
https://en.wikipedia.org/wiki/Exception%20handling%20syntax
|
Exception handling syntax is the set of keywords and/or structures provided by a computer programming language to allow exception handling, which separates the handling of errors that arise during a program's operation from its ordinary processes. Syntax for exception handling varies between programming languages, partly to cover semantic differences but largely to fit into each language's overall syntactic structure. Some languages do not call the relevant concept "exception handling"; others may not have direct facilities for it, but can still provide means to implement it.
Most commonly, error handling uses a try...[catch...][finally...] block, and errors are created via a throw statement, but there is significant variation in naming and syntax.
Catalogue of exception handling syntaxes
Ada
Exception declarations
Some_Error : exception;
Raising exceptions
raise Some_Error;
raise Some_Error with "Out of memory"; -- specific diagnostic message
Exception handling and propagation
with Ada.Exceptions, Ada.Text_IO;
procedure Foo is
Some_Error : exception;
begin
Do_Something_Interesting;
exception -- Start of exception handlers
when Constraint_Error =>
... -- Handle constraint error
when Storage_Error =>
-- Propagate Storage_Error as a different exception with a useful message
raise Some_Error with "Out of memory";
when Error : others =>
-- Handle all others
Ada.Text_IO.Put("Exception: ");
Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Name(Error));
Ada.Text_IO.Put_Line(Ada.Exceptions.Exception_Message(Error));
end Foo;
Assembly language
Most assembly languages will have a macro instruction or an interrupt address available for the particular system to intercept events such as illegal op codes, program check, data errors, overflow, divide by zero, and other such conditions. IBM and Univac mainframes had the STXIT macro. Digital Equipment Corporation RT11 systems had trap vectors for program errors, i/o interrupts, and such. DOS h
|
https://en.wikipedia.org/wiki/Micro%20ISV
|
A micro ISV (abbr. mISV or μISV), a term coined by Eric Sink, is an independent software vendor with fewer than 10 or even just one software developer. In such an environment the company owner develops software, manages sales and does public relations.
The term has come to include more than just a "one-man shop," but any ISV with more than 10 employees is generally not considered a micro ISV. Small venture capital-funded software shops are also generally not considered micro ISVs.
Micro ISVs sell their software through a number of marketing models. The shareware marketing model (where potential customers can try the software before they buy it), along with the freeware marketing model, have become the dominant methods of marketing packaged software with even the largest ISVs offering their enterprise solutions as trials via free download, e.g. Oracle's Oracle database.
Microsoft and other micro ISV outreach efforts
Microsoft has a dedicated MicroISV/Shareware Evangelist, Michael Lehman. Part of the Microsoft micro ISV technical evangelism program includes Project Glidepath, which is a kind of framework to assist micro ISVs in bringing a product from concept through development and on to market.
Although not specifically targeted at micro ISVs, the Microsoft Empower Program for ISVs is used by many micro ISVs. Microsoft Empower Program members are required to release at least one software title for the Windows family of operating systems within 18 months of joining the program. The Microsoft Action Pack Subscription is similar to the Empower Program in some ways.
Alternatively, rapid application development PaaS platforms like Wolf Frameworks have partner programs specifically targeted towards Micro ISVs enabling them with software, hardware and even free development services.
Industry shows
The Software Industry Conference is an annual event in the United States attended by many micro ISVs. The European Software Conference is also attended by many micro ISV
|
https://en.wikipedia.org/wiki/Position%20weight%20matrix
|
A position weight matrix (PWM), also known as a position-specific weight matrix (PSWM) or position-specific scoring matrix (PSSM), is a commonly used representation of motifs (patterns) in biological sequences.
PWMs are often derived from a set of aligned sequences that are thought to be functionally related and have become an important part of many software tools for computational motif discovery.
Background
Creation
Conversion of sequence to position probability matrix
A PWM has one row for each symbol of the alphabet (4 rows for nucleotides in DNA sequences or 20 rows for amino acids in protein sequences) and one column for each position in the pattern. In the first step in constructing a PWM, a basic position frequency matrix (PFM) is created by counting the occurrences of each nucleotide at each position. From the PFM, a position probability matrix (PPM) can now be created by dividing that former nucleotide count at each position by the number of sequences, thereby normalising the values. Formally, given a set X of N aligned sequences of length l, the elements of the PPM M are calculated:
$M_{k,j} = \frac{1}{N} \sum_{i=1}^{N} I(X_{i,j} = k)$
where $i \in (1,\dots,N)$, $j \in (1,\dots,l)$, $k$ ranges over the set of symbols in the alphabet, and $I(a=k)$ is an indicator function that is 1 if $a=k$ and 0 otherwise.
For example, given the following DNA sequences:
{| class="wikitable"
|-
|
GAGGTAAAC
TCCGTAAGT
CAGGTTGGA
ACAGTCAGT
TAGGTCATT
TAGGTACTG
ATGGTAACT
CAGGTATAC
TGTGTGAGT
AAGGTAAGT
|}
The corresponding PFM is:
A: 3 6 1  0  0 6 7 2 1
C: 2 2 1  0  0 2 1 1 2
G: 1 1 7 10  0 1 1 5 1
T: 4 1 1  0 10 1 1 2 6
Therefore, the resulting PPM is:
A: 0.3 0.6 0.1 0.0 0.0 0.6 0.7 0.2 0.1
C: 0.2 0.2 0.1 0.0 0.0 0.2 0.1 0.1 0.2
G: 0.1 0.1 0.7 1.0 0.0 0.1 0.1 0.5 0.1
T: 0.4 0.1 0.1 0.0 1.0 0.1 0.1 0.2 0.6
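The following Python sketch (illustrative code, not part of the article) reproduces the PFM and PPM above directly from the ten example sequences:

# Build a position frequency matrix (PFM) and position probability matrix (PPM)
# from the ten example sequences given in the text.

sequences = [
    "GAGGTAAAC", "TCCGTAAGT", "CAGGTTGGA", "ACAGTCAGT", "TAGGTCATT",
    "TAGGTACTG", "ATGGTAACT", "CAGGTATAC", "TGTGTGAGT", "AAGGTAAGT",
]

alphabet = "ACGT"
length = len(sequences[0])

# PFM: count of each nucleotide at each position
pfm = {k: [sum(s[j] == k for s in sequences) for j in range(length)] for k in alphabet}

# PPM: divide each count by the number of sequences
ppm = {k: [c / len(sequences) for c in counts] for k, counts in pfm.items()}

for k in alphabet:
    print(k, pfm[k], ppm[k])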
Both PPMs and PWMs assume statistical independence between positions in the pattern, as the probabilities for each position are calculated independently of other positions. From the definition above, it follows that the sum of values for a particular position (that is, summing over all symbols) is 1. Each column can therefore be regarded as an independent multinomial distribution. This makes it easy to calculate the probability of a sequence given a PPM, by multiplying the relevant pro
|
https://en.wikipedia.org/wiki/Jack%20Edmonds
|
Jack R. Edmonds (born April 5, 1934) is an American-born and educated computer scientist and mathematician who lived and worked in Canada for much of his life. He has made fundamental contributions to the fields of combinatorial optimization, polyhedral combinatorics, discrete mathematics and the theory of computing. He was the recipient of the 1985 John von Neumann Theory Prize.
Early career
Edmonds attended Duke University before completing his undergraduate degree at George Washington University in 1957. He thereafter received a master's degree in 1960 at the University of Maryland under Bruce L. Reinhart with a thesis on the problem of embedding graphs into surfaces. From 1959 to 1969 he worked at the National Institute of Standards and Technology (then the National Bureau of Standards), and was a founding member of Alan Goldman’s newly created Operations Research Section in 1961. Goldman proved to be a crucial influence by enabling Edmonds to work in a RAND Corporation-sponsored workshop in Santa Monica, California. It is here that Edmonds first presented his findings on defining a class of algorithms that could run more efficiently. Most combinatorics scholars, during this time, were not focused on algorithms. However Edmonds was drawn to them and these initial investigations were key developments for his later work between matroids and optimization. He spent the years from 1961 to 1965 on the subject of NP versus P and in 1966 originated the conjectures NP ≠ P and NP ∩ coNP = P.
Research
Edmonds's 1965 paper “Paths, Trees and Flowers” was a preeminent paper in initially suggesting the possibility of establishing a mathematical theory of efficient combinatorial algorithms.
One of his earliest and notable contributions is the blossom algorithm for constructing maximum matchings on graphs, discovered in 1961 and published in 1965. This was the first polynomial-time algorithm for maximum matching in graphs. Its generalization to weighted graphs was a conceptua
|
https://en.wikipedia.org/wiki/Electrical%20resistivity%20tomography
|
Electrical resistivity tomography (ERT) or electrical resistivity imaging (ERI) is a geophysical technique for imaging sub-surface structures from electrical resistivity measurements made at the surface, or by electrodes in one or more boreholes. If the electrodes are suspended in the boreholes, deeper sections can be investigated. It is closely related to the medical imaging technique electrical impedance tomography (EIT), and mathematically is the same inverse problem. In contrast to medical EIT, however, ERT is essentially a direct current method. A related geophysical method, induced polarization (or spectral induced polarization), measures the transient response and aims to determine the subsurface chargeability properties.
Electrical resistivity measurements can be used for identification and quantification of depth of groundwater, detection of clays, and measurement of groundwater conductivity.
History
The technique evolved from techniques of electrical prospecting that predate digital computers, where layers or anomalies were sought rather than images.
Early work on the mathematical problem in the 1930s assumed a layered medium (see for example Langer, Slichter). Andrey Nikolayevich Tikhonov, who is best known for his work on regularization of inverse problems, also worked on this problem. He explained in detail how to solve the ERT problem in the simple case of a two-layered medium. During the 1940s, he collaborated with geophysicists and, without the aid of computers, they discovered large deposits of copper. As a result, they were awarded a State Prize of the Soviet Union.
When adequate computers became widely available, the inverse problem of ERT could be solved numerically. The work of Loke and Barker at Birmingham University was among the first such solutions, and their approach is still widely used.
With the advancement of electrical resistivity tomography from 1D to 2D and, more recently, 3D surveys, ERT has been applied in many fields. The applications of ERT incl
|
https://en.wikipedia.org/wiki/Bashing%20%28pejorative%29
|
Bashing is a harsh, gratuitous, prejudicial attack on a person, group, or subject. Literally, bashing is a term meaning to hit or assault, but when it is used as a suffix, or in conjunction with a noun indicating the subject being attacked, it is normally used to imply that the act is motivated by bigotry.
The term is also used metaphorically, to describe verbal or critical assaults. Topics which attract bashing tend to be highly partisan and personally sensitive topics for the bashers, the victims, or both. Common areas include religion, nationality, sexuality, and politics.
Physical bashing is differentiated from regular assault because it is a motivated assault, which may be considered a hate crime. In relation to non-physical bashing, the term is used to imply that a verbal or critical attack is similarly unacceptable and similarly prejudicial. Use of the term in this manner is an abusive ad hominem action, used to denounce the attack and admonish the attackers by comparing them to perpetrators of physical bashing.
Karen Franklin, in her paper "Psychosocial motivations of hate crimes perpetrators" identifies the following motivations for bashing: socially instilled prejudice or partisan conflict; the perception that the bashing subject is in some way contrary to, or in offense to, an underlying ideology; group or peer influence.
Physical bashing
One of the more common uses of the term bashing is to describe assault or vilification of people perceived to be homosexual. Gay-bashing is related to homophobia and religious objections to homosexuality.
Non-physical bashing
In cases of non-physical bashing, the term is normally used with the intention of having a pejorative effect on the identified bashers by comparing them to perpetrators of criminal assaults. Sometimes this label is applied to criticisms that are not particularly vehement nor even inappropriate. In these cases, the term can be seen to be applied purely for partisan benefit. The use of the te
|
https://en.wikipedia.org/wiki/Claw-free%20permutation
|
In the mathematical and computer science field of cryptography, a group of three numbers (x,y,z) is said to be a claw of two permutations f0 and f1 if
f0(x) = f1(y) = z.
A pair of permutations f0 and f1 are said to be claw-free if there is no efficient algorithm for computing a claw.
The terminology claw free was introduced by Goldwasser, Micali, and Rivest in their 1984 paper, "A Paradoxical Solution to the Signature Problem" (and later in a more complete journal paper), where they showed that the existence of claw-free pairs of trapdoor permutations implies the existence of digital signature schemes secure against adaptive chosen-message attack. This construction was later superseded by the construction of digital signatures from any one-way trapdoor permutation. The existence of trapdoor permutations does not by itself imply claw-free permutations exist; however, it has been shown that claw-free permutations do exist if factoring is hard.
The general notion of claw-free permutation (not necessarily trapdoor) was further studied by Ivan Damgård in his PhD thesis The Application of Claw Free Functions in Cryptography (Aarhus University, 1988), where he showed how to construct
Collision Resistant Hash Functions from claw-free permutations. The notion of claw-freeness is closely related to that of collision resistance in hash functions. The distinction is that claw-free permutations are pairs of functions in which it is hard to create a collision between them, while a collision-resistant hash function is a single function in which it's hard to find a collision, i.e. a function H is collision resistant if it's hard to find a pair of distinct values x,y such that
H(x) = H(y).
In the hash function literature, this is commonly termed a hash collision. A hash function where collisions are difficult to find is said to have collision resistance.
Bit commitment
Given a pair of claw-free permutations f0 and f1 it is straightforward to create a commitment scheme.
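A minimal sketch of that construction, assuming a claw-free pair is available: to commit to a bit b, pick a random x and publish c = f_b(x); to open, reveal (b, x). Opening the same c both ways would exhibit a claw, so binding follows from claw-freeness, while hiding follows because both functions are permutations over the same domain. The toy permutations below are placeholders only and are not actually claw-free.

```python
import secrets

# Toy stand-ins for a claw-free pair over Z_N. These particular permutations are
# NOT claw-free (a claw is easy to compute); they only illustrate the interface.
N = 2**61 - 1  # prime modulus, so x -> 3*x + c mod N is a permutation

def f0(x: int) -> int:
    return (x + 12345) % N          # hypothetical permutation f0

def f1(x: int) -> int:
    return (3 * x + 67890) % N      # hypothetical permutation f1

def commit(bit: int):
    """Commit to `bit` by sampling a random x and publishing c = f_bit(x)."""
    x = secrets.randbelow(N)
    c = f1(x) if bit else f0(x)
    return c, x                      # send c now; reveal (bit, x) later

def verify(c: int, bit: int, x: int) -> bool:
    """Check an opening: the commitment must equal f_bit(x)."""
    return c == (f1(x) if bit else f0(x))

c, x = commit(1)
print(verify(c, 1, x))   # True
print(verify(c, 0, x))   # False in general; opening both ways would yield a claw
```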
|
https://en.wikipedia.org/wiki/Plant%20Physiology%20%28journal%29
|
Plant Physiology is a monthly peer-reviewed scientific journal that covers research on physiology, biochemistry, cellular and molecular biology, genetics, biophysics, and environmental biology of plants. The journal has been published since 1926 by the American Society of Plant Biologists. The current editor-in-chief is Yunde Zhao (University of California, San Diego). According to the Journal Citation Reports, the journal has a 2021 impact factor of 8.005.
|
https://en.wikipedia.org/wiki/Centre%20for%20Environment%2C%20Fisheries%20and%20Aquaculture%20Science
|
The Centre for Environment, Fisheries and Aquaculture Science (Cefas) is an executive agency of the United Kingdom government Department for Environment, Food and Rural Affairs (Defra). It carries out a wide range of research, advisory, consultancy, monitoring and training activities for a large number of customers around the world.
Cefas employs over 550 staff based primarily at two specialist laboratories within the UK, with additional staff based at small, port-based offices in Scarborough, Hayle, and Plymouth. In 2014 Cefas established a permanent base in the Middle East by opening an office in Kuwait, and since opened an office in Oman. They also operate an ocean-going research vessel Cefas Endeavour.
Customers
The primary customer for Cefas is their parent organisation Defra. They also undertake work for international and UK government departments (central and local), the World Bank, the European Commission, the United Nations Food and Agriculture Organization (FAO), commercial organisations, non-governmental and environmental organisations, regulators and enforcement agencies, local authorities and other public bodies.
There is an increasing focus on commercial research and consultancy as the level of funding available from Defra gradually reduces.
History
Known previously as the Directorate of Fisheries Research, the organisation's name and status were changed in 1997 to 'Centre for Environment, Fisheries and Aquaculture Science' (Cefas). At this time it became an executive agency of what was then the Ministry of Agriculture, Fisheries and Food (United Kingdom) and is now the Department for Environment, Food and Rural Affairs (Defra).
Lowestoft laboratory
In 1902, the Marine Biological Association opened a sub-station in Pakefield, a suburb of Lowestoft, Suffolk, to research the fishing industry. This was part of the UK contribution to the newly created International Council for the Exploration of the Sea (ICES). By 1921 the station had been expanded to include a labo
|
https://en.wikipedia.org/wiki/Kurt%20Otto%20Friedrichs
|
Kurt Otto Friedrichs (September 28, 1901 – December 31, 1982) was a German-American mathematician. He was the co-founder of the Courant Institute at New York University, and a recipient of the National Medal of Science.
Biography
Friedrichs was born in Kiel, Schleswig-Holstein on September 28, 1901. His family soon moved to Düsseldorf, where he grew up. He attended several different universities in Germany studying the philosophical works of Heidegger and Husserl, but finally decided that mathematics was his real calling. During the 1920s, Friedrichs pursued this field in Göttingen, which had a renowned Mathematical Institute under the direction of Richard Courant. Courant became a close colleague and lifelong friend of Friedrichs.
In 1931, Friedrichs became a full professor of mathematics at the Technische Hochschule in Braunschweig. In early February 1933, a few days after Hitler became the Chancellor of Germany, Friedrichs met and immediately fell in love with a young Jewish student, Nellie Bruell. Their relationship became increasingly challenging and difficult because of the anti-Semitic Nuremberg Laws of Hitler's government. In 1937, both Friedrichs and Nellie Bruell managed to emigrate separately to New York City where they finally married. Their long and very happy marriage produced five children.
Courant had left Germany in 1933 and had founded an institute for graduate studies in mathematics at New York University. Friedrichs joined him when he arrived in 1937 and remained there for forty years. He was instrumental in the development of the Courant Institute of Mathematical Sciences, which eventually became one of the most distinguished research institutes for applied mathematics in the world. Friedrichs died in New Rochelle, New York on December 31, 1982.
Friedrichs's greatest contribution to applied mathematics was his work on partial differential equations. He also did major research and wrote many books and papers on existence theory, numerical met
|
https://en.wikipedia.org/wiki/Optical%20tomography
|
Optical tomography is a form of computed tomography that creates a digital volumetric model of an object by reconstructing images made from light transmitted and scattered through an object. Optical tomography is used mostly in medical imaging research. Optical tomography in industry is used as a sensor of thickness and internal structure of semiconductors.
Principle
Optical tomography relies on the object under study being at least partially light-transmitting or translucent, so it works best on soft tissue, such as breast and brain tissue.
The high attenuation caused by scattering is generally dealt with by using intense (often pulsed or intensity-modulated) light sources, highly sensitive light detectors, and infrared light at frequencies where body tissues are most transmissive. Soft tissues are highly scattering but weakly absorbing in the near-infrared and red parts of the spectrum, so this is the wavelength range usually used.
Types
Diffuse optical tomography
In near-infrared diffuse optical tomography (DOT), transmitted diffuse photons are collected and a diffusion equation is used to reconstruct an image from them.
Time-of-flight diffuse optical tomography
A variant of optical tomography uses optical time-of-flight sampling as an attempt to distinguish transmitted light from scattered light. This concept has been used in several academic and commercial systems for breast cancer imaging and cerebral measurement. The key to separation of absorption from scatter is the use of either time-resolved or frequency domain data which is then matched with a diffusion theory based estimate of how the light propagated through the tissue. The measurement of time of flight or frequency domain phase shift is essential to allow separation of absorption from scatter with reasonable accuracy.
Fluorescence molecular tomography
In fluorescence molecular tomography, the fluorescence signal transmitted through the tissue is normalized by the excitation
|
https://en.wikipedia.org/wiki/Knuth%27s%20Algorithm%20X
|
Algorithm X is an algorithm for solving the exact cover problem. It is a straightforward recursive, nondeterministic, depth-first, backtracking algorithm used by Donald Knuth to demonstrate an efficient implementation called DLX, which uses the dancing links technique.
The exact cover problem is represented in Algorithm X by a matrix A consisting of 0s and 1s. The goal is to select a subset of the rows such that the digit 1 appears in each column exactly once.
Algorithm X works as follows:
1. If the matrix A has no columns, the current partial solution is a valid solution; terminate successfully.
2. Otherwise choose a column c (deterministically).
3. Choose a row r such that A[r, c] = 1 (nondeterministically).
4. Include row r in the partial solution.
5. For each column j such that A[r, j] = 1, and for each row i such that A[i, j] = 1, delete row i from matrix A; then delete column j from matrix A.
6. Repeat this algorithm recursively on the reduced matrix A.
The nondeterministic choice of r means that the algorithm recurses over independent subalgorithms; each subalgorithm inherits the current matrix A, but reduces it with respect to a different row r.
If column c is entirely zero, there are no subalgorithms and the process terminates unsuccessfully.
The subalgorithms form a search tree in a natural way, with the original problem at the root and with level k containing each subalgorithm that corresponds to k chosen rows.
Backtracking is the process of traversing the tree in preorder, depth first.
Any systematic rule for choosing column c in this procedure will find all solutions, but some rules work much better than others.
To reduce the number of iterations, Knuth suggests that the column-choosing algorithm select a column with the smallest number of 1s in it.
Example
For example, consider the exact cover problem specified by the universe U = {1, 2, 3, 4, 5, 6, 7} and the collection of sets = {A, B, C, D, E, F}, where:
A = {1, 4, 7};
B = {1, 4};
C = {4, 5, 7};
D = {3, 5, 6};
E = {2, 3, 6, 7}; and
F = {2, 7}.
This problem is represented by the matrix:
{| class="wikitable"
! !! 1 !! 2 !! 3 !! 4 !! 5 !! 6 !! 7
|-
! A
| 1 || 0 || 0 || 1 || 0 || 0 || 1
|-
! B
| 1 || 0 || 0 || 1 || 0 || 0 || 0
|-
! C
| 0 || 0 || 0 || 1 || 1 || 0 || 1
|-
! D
| 0 || 0 || 1 || 0 || 1 || 1 || 0
|-
! E
| 0 || 1 || 1 || 0 || 0 || 1 || 1
|-
! F
| 0 || 1 || 0 || 0 || 0 || 0 || 1
|}
Algorithm X with Knuth's suggested heuristic for selecting columns
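A compact Python sketch of Algorithm X in a dictionary-of-sets formulation (a well-known way of writing it; the names are illustrative), applying Knuth's fewest-1s column heuristic and solving the example above:

```python
def solve(X, Y, solution=None):
    """X maps each universe element to the set of rows covering it;
    Y maps each row name to its list of elements."""
    if solution is None:
        solution = []
    if not X:
        yield list(solution)                      # no columns left: a valid exact cover
        return
    c = min(X, key=lambda col: len(X[col]))       # Knuth's heuristic: column with fewest 1s
    for r in list(X[c]):
        solution.append(r)
        cols = select(X, Y, r)
        yield from solve(X, Y, solution)
        deselect(X, Y, r, cols)
        solution.pop()

def select(X, Y, r):
    """Reduce the matrix after choosing row r: remove covered columns and clashing rows."""
    cols = []
    for j in Y[r]:
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].remove(i)
        cols.append(X.pop(j))
    return cols

def deselect(X, Y, r, cols):
    """Undo select() when backtracking."""
    for j in reversed(Y[r]):
        X[j] = cols.pop()
        for i in X[j]:
            for k in Y[i]:
                if k != j:
                    X[k].add(i)

Y = {'A': [1, 4, 7], 'B': [1, 4], 'C': [4, 5, 7],
     'D': [3, 5, 6], 'E': [2, 3, 6, 7], 'F': [2, 7]}
X = {j: set() for j in range(1, 8)}
for name, elements in Y.items():
    for j in elements:
        X[j].add(name)

print(list(solve(X, Y)))   # [['B', 'D', 'F']] -- the unique exact cover
```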
|
https://en.wikipedia.org/wiki/Machine%20perfusion
|
Machine perfusion (MP) is a technique used in organ transplantation as a means of preserving the organs which are to be transplanted.
Machine perfusion has various forms and can be categorised according to the temperature of the perfusate: cold (4 °C) and warm (37 °C). Machine perfusion has been applied to renal transplantation, liver transplantation and lung transplantation. It is an alternative to static cold storage (SCS).
Research and development
A record-long human transplant organ preservation was reported in 2022: machine perfusion kept a liver viable for 3 days, rather than the usual less than 12 hours. The approach could possibly be extended to 10 days and avoids the substantial cell damage caused by low-temperature preservation methods. Alternative approaches include novel cryoprotectant solvents.
A novel organ perfusion system under development can restore, at the cellular level, multiple vital (pig) organs one hour after death (during which the body had prolonged warm ischaemia), and a similar method/system can revive (pig) brains hours after death. The system for cellular recovery could be used to preserve donor organs or for revival treatments in medical emergencies.
History of kidney preservation techniques
An essential preliminary to the development of kidney storage and transplantation was the work of Alexis Carrel in developing methods for vascular anastomosis. Carrel went on to describe the first kidney transplants, which were performed in dogs in 1902; Ullman independently described similar experiments in the same year. In these experiments kidneys were transplanted without there being any attempt at storage.
The crucial step in making in vitro storage of kidneys possible, was the demonstration by Fuhrman in 1943, of a reversible effect of hypothermia on the metabolic processes of isolated tissues. Prior to this, kidneys had been stored at normal body temperatures using blood or diluted blood perfusates, but no successful reimplantations had been made
|
https://en.wikipedia.org/wiki/List%20of%20heaviest%20people
|
This is a list of the heaviest people who have been weighed and verified, living and dead. The list is organised by the peak weight reached by an individual and is limited to those who are over .
Heaviest people ever recorded
See also
Big Pun (1971–2000), American rapper whose weight at death was .
Edward Bright (1721–1750) and Daniel Lambert (1770–1809), men from England who were famous in their time for their obesity.
Happy Humphrey, the heaviest professional wrestler, weighing in at at his peak.
Israel Kamakawiwoʻole (1959–1997), Hawaiian singer whose weight peaked at .
Paul Kimelman (born 1947), holder of Guinness World Record for the greatest weight-loss in the shortest amount of time, 1982
Billy and Benny McCrary, holders of Guinness World Records's World's Heaviest Twins.
Alayna Morgan (1948–2009), heavy woman from Santa Rosa, California.
Ricky Naputi (1973–2012), heaviest man from Guam.
Carl Thompson (1982–2015), heaviest man in the United Kingdom whose weight at death was .
Renee Williams (1977–2007), woman from Austin, Texas.
Yokozuna, the heaviest WWE wrestler, weighing between and at his peak.
Barry Austin and Jack Taylor, two obese British men documented in the comedy-drama The Fattest Man in Britain.
Yamamotoyama Ryūta, heaviest Japanese-born sumo wrestler; is also thought to be the heaviest Japanese person ever at .
|
https://en.wikipedia.org/wiki/Hopf%20bifurcation
|
In the mathematical theory of bifurcations, a Hopf bifurcation is a critical point where, as a parameter changes, a system's stability switches and a periodic solution arises. More accurately, it is a local bifurcation in which a fixed point of a dynamical system loses stability, as a pair of complex conjugate eigenvalues—of the linearization around the fixed point—crosses the complex plane imaginary axis as a parameter crosses a threshold value. Under reasonably generic assumptions about the dynamical system, the fixed point becomes a small-amplitude limit cycle as the parameter changes.
A Hopf bifurcation is also known as a Poincaré–Andronov–Hopf bifurcation, named after Henri Poincaré, Aleksandr Andronov and Eberhard Hopf.
Overview
Supercritical and subcritical Hopf bifurcations
The limit cycle is orbitally stable if a specific quantity called the first Lyapunov coefficient is negative, and the bifurcation is supercritical. Otherwise it is unstable and the bifurcation is subcritical.
The normal form of a Hopf bifurcation is the following time-dependent differential equation:
dz/dt = z((λ + i) + b|z|²),
where z, b are both complex and λ is a real parameter.
Write b = α + iβ. The number α is called the first Lyapunov coefficient.
If α is negative then there is a stable limit cycle for λ > 0:
z(t) = r·e^(iωt),
where r = √(−λ/α) and ω = 1 + βr².
The bifurcation is then called supercritical.
If α is positive then there is an unstable limit cycle for λ < 0. The bifurcation is called subcritical.
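As a rough numerical check of the supercritical case (a sketch only, not part of the standard treatment): integrating the radial part dr/dt = r(λ + αr²) of the normal form with forward Euler should drive the amplitude toward √(−λ/α) when α < 0 and λ > 0, and toward 0 when λ < 0.

```python
import math

def hopf_amplitude(lam, alpha=-1.0, r0=0.1, dt=1e-3, steps=200_000):
    """Forward-Euler integration of dr/dt = r*(lam + alpha*r**2)."""
    r = r0
    for _ in range(steps):
        r += dt * r * (lam + alpha * r * r)
    return r

lam = 0.25
print(hopf_amplitude(lam))        # ~0.5, the limit-cycle radius
print(math.sqrt(-lam / -1.0))     # predicted radius sqrt(-lam/alpha) = 0.5
print(hopf_amplitude(-0.25))      # ~0: for lam < 0 the origin is stable
```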
Intuition
The normal form of the supercritical Hopf bifurcation can be expressed intuitively in polar coordinates,
dr/dt = r(λ − r²) and dθ/dt = ω,
where r(t) is the instantaneous amplitude of the oscillation and θ(t) is its instantaneous angular position. The angular velocity ω is fixed. When λ > 0, the differential equation for r has an unstable fixed point at r = 0 and a stable fixed point at r = √λ. The system thus describes a stable circular limit cycle with radius √λ and angular velocity ω. When λ < 0, then r = 0 is the only fixed point and it is stable. In that case, the system describes a spiral that con
|
https://en.wikipedia.org/wiki/Brain%20of%20Albert%20Einstein
|
The brain of Albert Einstein has been a subject of much research and speculation. Albert Einstein's brain was removed within seven and a half hours of his death. Apparent regularities or irregularities in the brain have been used to support various ideas about correlations between neuroanatomy and general or mathematical intelligence. Studies have suggested an increased number of glial cells in Einstein's brain.
Fate of the brain
Einstein's autopsy was conducted in the lab of Thomas Stoltz Harvey. Shortly after Einstein's death in 1955, Harvey removed and weighed the brain at 1230g. Harvey then took the brain to a lab at the University of Pennsylvania where he dissected it into several pieces. Some of the pieces he kept to himself while others were given to leading pathologists. He hoped that cytoarchitectonics, the study of brain cells under a microscope, would reveal useful information. Harvey injected 50% formalin through the internal carotid arteries and afterward suspended the intact brain in 10% formalin. He also photographed the brain from many angles.
Harvey dissected the brain into about 240 blocks (each about 1 cm3) and encased the segments in a plastic-like material called collodion. Harvey also removed Einstein's eyes. He gave them to Henry Abrams, Einstein's ophthalmologist.
Whether or not Einstein's brain was preserved with his prior consent is a matter of dispute. Ronald Clark's 1979 biography of Einstein states "he had insisted that his brain should be used for research and that he be cremated." More recent research has suggested that the brain was removed and preserved without the permission of either Einstein or his close relatives. Hans Albert Einstein, the physicist's elder son, endorsed the removal after the event. However, he insisted that his father's brain should be used only for research to be published in scientific journals of high standing.
In 1978, Einstein's brain was rediscovered in Harvey's possession by journalist Steven Levy. It
|
https://en.wikipedia.org/wiki/Acoustic%20music
|
Acoustic music is music that solely or primarily uses instruments that produce sound through acoustic means, as opposed to electric or electronic means. While all music was once acoustic, the retronym "acoustic music" appeared after the advent of electric instruments, such as the electric guitar, electric violin, electric organ and synthesizer. Acoustic string instrumentations had long been a subset of popular music, particularly in folk. It stood in contrast to various other types of music in various eras, including big band music in the pre-rock era, and electric music in the rock era.
Music reviewer Craig Conley suggests, "When music is labeled acoustic, unplugged, or unwired, the assumption seems to be that other types of music are cluttered by technology and overproduction and therefore aren't as pure."
Types of acoustic instruments
Acoustic instruments can be split into six groups: string instruments, wind instruments, percussion, other instruments, ensemble instruments, and unclassified instruments.
String instruments have a tightly stretched string that, when set in motion, creates energy at (almost) harmonically related frequencies.
Wind instruments are in the shape of a pipe and energy is supplied as an air stream into the pipe.
Percussion instruments make sound when they are struck, as with a hand or a stick.
History
The original acoustic instrument was the human voice, which produces sound by funneling air across the vocal cords. The first constructed acoustic instrument is believed to be the flute. The oldest surviving flute is as much as 43,000 years old. The flute is believed to have originated in Central Europe.
By 1800, the most popular acoustic plucked-string instruments closely resembled the modern-day guitar, but with a smaller body. As the century continued, Spanish luthier Antonio de Torres Jurado took these smaller instruments and expanded the bodies to create guitars. Guitar use and popularity grew in Europe throughout the late 18th
|
https://en.wikipedia.org/wiki/Scalar%E2%80%93tensor%20theory
|
In theoretical physics, a scalar–tensor theory is a field theory that includes both a scalar field and a tensor field to represent a certain interaction. For example, the Brans–Dicke theory of gravitation uses both a scalar field and a tensor field to mediate the gravitational interaction.
Tensor fields and field theory
Modern physics tries to derive all physical theories from as few principles as possible. In this way, Newtonian mechanics as well as quantum mechanics are derived from Hamilton's principle of least action. In this approach, the behavior of a system is not described via forces, but by functions which describe the energy of the system. Most important are the energetic quantities known as the Hamiltonian function and the Lagrangian function. The corresponding quantities per unit volume are known as the Hamiltonian density and the Lagrangian density. Formulating physics in terms of these densities leads to field theories.
Modern physics uses field theories to explain reality. These fields can be scalar, vectorial or tensorial. An example of a scalar field is the temperature field. An example of a vector field is the wind velocity field. An example of a tensor field is the stress tensor field in a stressed body, used in continuum mechanics.
Gravity as field theory
In physics, forces (as vectorial quantities) are given as the derivative (gradient) of scalar quantities named potentials. In classical physics before Einstein, gravitation was given in the same way, as a consequence of a gravitational force (a vector), derived from a scalar potential field that depends on the mass of the particles. Thus, Newtonian gravity is called a scalar theory. The gravitational force depends on the distance r between the massive objects (more exactly, between their centres of mass). Mass is a parameter, and space and time are unchangeable.
Einstein's theory of gravity, general relativity (GR), is of a different nature. It unifies space and time in a 4-dimensional manifold called space-time. In GR there i
|
https://en.wikipedia.org/wiki/Wet-folding
|
Wet-folding is an origami technique developed by Akira Yoshizawa that employs water to dampen the paper so that it can be manipulated more easily. This process adds an element of sculpture to origami, which is otherwise purely geometric. Wet-folding is used very often by professional folders for non-geometric origami, such as animals. Wet-folders usually employ thicker paper than what would usually be used for normal origami, to ensure that the paper does not tear.
One of the most prominent users of the wet-folding technique is Éric Joisel, who specialized in origami animals, humans, and legendary creatures. He also created origami masks. Other folders who practice this technique are Robert J. Lang and John Montroll.
The process of wet-folding allows a folder to preserve a curved shape more easily. It also reduces the number of wrinkles substantially. Wet-folding allows for increased rigidity and structure due to a process called sizing. Sizing is a water-soluble adhesive, usually methylcellulose or methyl acetate, that may be added during the manufacture of the paper. As the paper dries, the chemical bonds of the fibers of the paper tighten together which results in a crisper and stronger sheet. In order to moisten the paper, an artist typically wipes the sheet with a dampened cloth. The amount of moisture added to the paper is crucial because too little will cause the paper to dry quickly and spring back into its original position before the folding is complete, while too much will either fray the edges of the paper or will cause the paper to split at high-stress points.
Notes and references
See also
Papier-mâché
External links
Mini-documentary about Joisel at YouTube
An illustrated introduction to wet-folding
Origami
Mathematics and art
|
https://en.wikipedia.org/wiki/Single-crossing%20condition
|
In monotone comparative statics, the single-crossing condition or single-crossing property refers to a condition where the relationship between two or more functions is such that they will only cross once. For example, a mean-preserving spread will result in an altered probability distribution whose cumulative distribution function will intersect with the original's only once.
The single-crossing condition was posited in Samuel Karlin's 1968 monograph 'Total Positivity'. It was later used by Peter Diamond, Joseph Stiglitz, and Susan Athey, in studying the economics of uncertainty.
The single-crossing condition is also used in applications where there are a few agents or types of agents that have preferences over an ordered set. Such situations appear often in information economics, contract theory, social choice and political economics, among other fields.
Example using cumulative distribution functions
Cumulative distribution functions F and G satisfy the single-crossing condition if there exists an x0 such that
for all x, x ≥ x0 implies F(x) ≥ G(x),
and
for all x, x ≤ x0 implies F(x) ≤ G(x);
that is, the function F − G crosses the x-axis at most once, in which case it does so from below.
This property can be extended to two or more variables. Given x and t, for all x' > x, t' > t,
F(x', t) ≥ F(x, t) implies F(x', t') ≥ F(x, t'),
and
F(x', t) > F(x, t) implies F(x', t') > F(x, t').
This condition could be interpreted as saying that for x'>x, the function g(t)=F(x',t)-F(x,t) crosses the horizontal axis at most once, and from below. The condition is not symmetric in the variables (i.e., we cannot switch x and t in the definition; the necessary inequality in the first argument is weak, while the inequality in the second argument is strict).
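As a small illustration of the one-dimensional case, the following sketch checks numerically whether F − G crosses zero at most once and only from below, using a point mass at 0.5 and the uniform distribution on [0, 1] (a mean-preserving spread of it) as the example pair; the function name and grid are illustrative only.

```python
import numpy as np

def single_crossing_from_below(f_vals, g_vals):
    """Check on a common increasing grid that F - G never dips below zero
    after having been above it, i.e. it crosses zero at most once, from below."""
    diff = np.asarray(f_vals) - np.asarray(g_vals)
    seen_positive = False
    for d in diff:
        if d > 0:
            seen_positive = True
        elif d < 0 and seen_positive:
            return False      # went back below zero after being above it
    return True

xs = np.linspace(0.0, 1.0, 1001)
F = (xs >= 0.5).astype(float)   # CDF of a point mass at 0.5
G = xs                          # CDF of the uniform distribution on [0, 1]
print(single_crossing_from_below(F, G))   # True: F - G crosses zero once, from below
```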
Use in Social Choice
In the study of social choice, the single-crossing condition is a condition on preferences. It is especially useful because utility functions are generally increasing (i.e. the assumption that an agent will prefer or at least consider equivalent two dollars to one dollar is unobjectionable).
Specifically, a set of agents with some unidimensional characteristic and preferences
|
https://en.wikipedia.org/wiki/Stormbringer%20%28video%20game%29
|
Stormbringer is a computer game written by David Jones and released in 1987 by Mastertronic on the Mastertronic Added Dimension label. It was originally released on the ZX Spectrum, Commodore 64, Amstrad CPC and MSX. A version for the Atari ST was published in 1988. It is the fourth and final game in the Magic Knight series. The in-game music is by David Whittaker.
Plot
Magic Knight returns home, having obtained a second-hand time machine from the Tyme Guardians at the end of Knight Tyme. However, there has been an accident whilst travelling back and there are now two Magic Knights - the other being "Off-White Knight", the dreaded Stormbringer (so called because of his storm cloud which he plans to use to destroy Magic Knight). Magic Knight cannot kill Off-White Knight without destroying himself in the process. His only option is to find Off-White Knight and merge with him.
Gameplay
Gameplay takes the form of a graphic adventure, with commands being inputted via the "Windimation" menu-driven interface, in the style of the previous two games, Spellbound and Knight Tyme (1986).
Magic Knight again has a limited amount of strength which is consumed by performing actions and moving from screen to screen, as well as being sapped by various enemies such as the Stormbringer's storm cloud and the spinning axes and balls that bounce around some rooms and should be avoided. The need for the player to monitor Magic Knight's strength and avoid enemies means that Stormbringer's gameplay is closer to the arcade adventure feel of Spellbound than to the much more pure graphic adventure feel of Knight Tyme.
As with the previous two Magic Knight games, there are characters with whom Magic Knight can interact and have help him. Magic Knight's spellcasting abilities are also important for solving the game's puzzles including the "merge" spell to be used when he finds Off-White Knight.
External links
Information about the Atari ST version
1987 video games
Amstrad CPC games
Atari Ja
|
https://en.wikipedia.org/wiki/Spectre%20GCR
|
The Spectre GCR is a hardware and software package for the Atari ST computers. The hardware consists of a cartridge that plugs into the Atari ST's cartridge port and a cable that connects between the cartridge and one of the floppy ports on the ST. Designed by David Small and sold through his company Gadgets by Small, it allows the Atari ST to run most Macintosh software. It is Small's third Macintosh emulator for the ST, replacing his previous Magic Sac and Spectre 128.
The Spectre GCR requires the owner to provide official Apple Macintosh 128K ROMs and Macintosh Operating System 6.0.8 disks. This avoids any legal issues of copying Apple's software. The emulator runs best with a high-resolution monochrome monitor, such as Atari's own SM124, but will run on color displays by either displaying a user-selectable half of the Macintosh screen, or missing out alternate lines to fit the lower resolution color display. The Spectre GCR plugs into the cartridge slot and floppy port, and modifies the frequency of the data to/from the single-speed floppy drive of the Atari ST, allowing it to read Macintosh GCR format discs which require a multi-speed floppy drive.
The manual claims the speed to be 20% faster than an actual Mac Plus with a 30% larger screen area and resolution. Although Spectre GCR runs in 1MB of memory, 2MB or more is recommended.
|
https://en.wikipedia.org/wiki/Dinostratus
|
Dinostratus (; c. 390 – c. 320 BCE) was a Greek mathematician and geometer, and the brother of Menaechmus. He is known for using the quadratrix to solve the problem of squaring the circle.
Life and work
Dinostratus' chief contribution to mathematics was his solution to the problem of squaring the circle. To solve this problem, Dinostratus made use of the trisectrix of Hippias, for which he proved a special property (Dinostratus' theorem) that allowed him to square the circle. Due to his work, the trisectrix later became known as the quadratrix of Dinostratus as well. Although Dinostratus solved the problem of squaring the circle, he did not do so using ruler and compass alone, and so it was clear to the Greeks that his solution violated the foundational principles of their mathematics. Over 2,200 years later Ferdinand von Lindemann would prove that it is impossible to square a circle using straight edge and compass alone.
Citations and footnotes
|
https://en.wikipedia.org/wiki/Natural%20competence
|
In microbiology, genetics, cell biology, and molecular biology, competence is the ability of a cell to alter its genetics by taking up extracellular ("naked") DNA from its environment in the process called transformation. Competence may be differentiated between natural competence, a genetically specified ability of bacteria which is thought to occur under natural conditions as well as in the laboratory, and induced or artificial competence, which arises when cells in laboratory cultures are treated to make them transiently permeable to DNA. Competence allows for rapid adaptation and DNA repair of the cell. This article primarily deals with natural competence in bacteria, although information about artificial competence is also provided.
History
Natural competence was discovered by Frederick Griffith in 1928, when he showed that a preparation of killed cells of a pathogenic bacterium contained something that could transform related non-pathogenic cells into the pathogenic type. In 1944 Oswald Avery, Colin MacLeod, and Maclyn McCarty demonstrated that this 'transforming factor' was pure DNA. This was the first compelling evidence that DNA carries the genetic information of the cell.
Since then, natural competence has been studied in a number of different bacteria, particularly Bacillus subtilis, Streptococcus pneumoniae (Griffith's "pneumococcus"), Neisseria gonorrhoeae, Haemophilus influenzae and members of the Acinetobacter genus. Areas of active research include the mechanisms of DNA transport, the regulation of competence in different bacteria, and the evolutionary function of competence.
Mechanisms of DNA uptake
In the laboratory, DNA is provided by the researcher, often as a genetically engineered fragment or plasmid. During uptake, DNA is transported across the cell membrane(s), and the cell wall if one is present. Once the DNA is inside the cell it may be degraded to nucleotides, which are reused for DNA replication and other metabolic functions.
|
https://en.wikipedia.org/wiki/TigerSHARC
|
TigerSHARC refers to a family of microprocessors currently manufactured by Analog Devices Inc (ADI).
See also
SHARC
Blackfin
External links
TigerSHARC processor website
Digital signal processors
VLIW microprocessors
|
https://en.wikipedia.org/wiki/Extract
|
An extract (essence) is a substance made by extracting a part of a raw material, often by using a solvent such as ethanol, oil or water. Extracts may be sold as tinctures, absolutes or in powder form.
The aromatic principles of many spices, nuts, herbs, fruits, etc., and some flowers, are marketed as extracts, among the best known of true extracts being almond, cinnamon, cloves, ginger, lemon, nutmeg, orange, peppermint, pistachio, rose, spearmint, vanilla, violet, rum, and wintergreen.
Extraction techniques
Most natural essences are obtained by extracting the essential oil from the feedstock, such as blossoms, fruit, and roots, or from intact plants through multiple techniques and methods:
Expression (juicing, pressing) involves physically extracting material from the feedstock; it is used when the oil is plentiful and easily obtained, as from citrus peels, olives, and grapes.
Absorption (steeping, decoction). Extraction is done by soaking material in a solvent, as used for vanilla beans or tea leaves.
Maceration, as used to soften and degrade material without heat, normally using oils, such as for peppermint extract and wine making.
Distillation or separation process, creating a higher concentration of the extract by heating material to a specific boiling point, then collecting this and condensing the extract, leaving the unwanted material behind, as used for lavender extract.
The distinctive flavors of nearly all fruits are desirable adjuncts to many food preparations, but only a few are practical sources of sufficiently concentrated flavor extract, such as from lemons, oranges, and vanilla beans.
Artificial extracts
The majority of concentrated fruit flavors, such as banana, cherry, peach, pineapple, raspberry, and strawberry, are produced by combining a variety of esters with special oils. Suitable coloring is generally obtained by the use of dyes. Among the esters most generally employed are ethyl acetate and ethyl butyrate. The chief factors
|
https://en.wikipedia.org/wiki/Inducer
|
In molecular biology, an inducer is a molecule that regulates gene expression. An inducer functions in two ways; namely:
By disabling repressors. The gene is expressed because an inducer binds to the repressor. The binding of the inducer to the repressor prevents the repressor from binding to the operator. RNA polymerase can then begin to transcribe operon genes.
By binding to activators. Activators generally bind poorly to activator DNA sequences unless an inducer is present. Activator binds to an inducer and the complex binds to the activation sequence and activates target gene. Removing the inducer stops transcription.
Because a small inducer molecule is required, the increased expression of the target gene is called induction. The lactose operon is one example of an inducible system.
Function
Repressor proteins bind to the DNA strand and prevent RNA polymerase from being able to attach to the DNA and synthesize mRNA. Inducers bind to repressors, causing them to change shape and preventing them from binding to DNA. Therefore, they allow transcription, and thus gene expression, to take place.
For a gene to be expressed, its DNA sequence must be copied (in a process known as transcription) to make a smaller, mobile molecule called messenger RNA (mRNA), which carries the instructions for making a protein to the site where the protein is manufactured (in a process known as translation). Many different types of proteins can affect the level of gene expression by promoting or preventing transcription. In prokaryotes (such as bacteria), these proteins often act on a portion of DNA known as the operator at the beginning of the gene. The promoter is where RNA polymerase, the enzyme that copies the genetic sequence and synthesizes the mRNA, attaches to the DNA strand.
Some genes are modulated by activators, which have the opposite effect on gene expression as repressors. Inducers can also bind to activator proteins, allowing them to bind to the operator DNA where they
|
https://en.wikipedia.org/wiki/Hippocrates%20of%20Chios
|
Hippocrates of Chios (; c. 470 – c. 410 BC) was an ancient Greek mathematician, geometer, and astronomer.
He was born on the isle of Chios, where he was originally a merchant. After some misadventures (he was robbed by either pirates or fraudulent customs officials) he went to Athens, possibly for litigation, where he became a leading mathematician.
On Chios, Hippocrates may have been a pupil of the mathematician and astronomer Oenopides of Chios. In his mathematical work there probably was some Pythagorean influence too, perhaps via contacts between Chios and the neighboring island of Samos, a center of Pythagorean thinking: Hippocrates has been described as a 'para-Pythagorean', a philosophical 'fellow traveler'. "Reduction" arguments such as reductio ad absurdum argument (or proof by contradiction) have been traced to him, as has the use of power to denote the square of a line.
Mathematics
The major accomplishment of Hippocrates is that he was the first to write a systematically organized geometry textbook, called Elements (Στοιχεῖα, Stoicheia), that is, basic theorems, or building blocks of mathematical theory. From then on, mathematicians from all over the ancient world could, at least in principle, build on a common framework of basic concepts, methods, and theorems, which stimulated the scientific progress of mathematics.
Only a single, famous fragment of Hippocrates' Elements is extant, embedded in the work of Simplicius. In this fragment, the area of some so-called Hippocratic lunes is calculated. This was part of a research program to square the circle, that is, to construct a square with the same area as a circle. The strategy, apparently, was to divide a circle into a number of crescent-shaped parts. If it were possible to calculate the area of each of those parts, then the area of the circle as a whole would be known too. Only much later was it proven (by Ferdinand von Lindemann, in 1882) that this approach had no chance of success, because the fa
|
https://en.wikipedia.org/wiki/Monoisotopic%20mass
|
Monoisotopic mass (Mmi) is one of several types of molecular masses used in mass spectrometry. The theoretical monoisotopic mass of a molecule is computed by taking the sum of the accurate masses (including mass defect) of the most abundant naturally occurring stable isotope of each atom in the molecule. For small molecules made up of low atomic number elements the monoisotopic mass is observable as an isotopically pure peak in a mass spectrum. This differs from the nominal molecular mass, which is the sum of the mass number of the primary isotope of each atom in the molecule and is an integer. It also is different from the molar mass, which is a type of average mass. For some elements, such as carbon, oxygen, hydrogen, nitrogen, and sulfur, the monoisotopic mass equals the mass of the lightest natural isotope, because that isotope is also the most abundant. However, this does not hold true for all elements. Iron's most common isotope has a mass number of 56, while the stable isotopes of iron vary in mass number from 54 to 58. Monoisotopic mass is typically expressed in daltons (Da), also called unified atomic mass units (u).
Nominal mass vs monoisotopic mass
Nominal mass
Nominal mass is a term used in high-level mass spectrometric discussions; it can be calculated using the mass number of the most abundant isotope of each atom, without regard for the mass defect. For example, the nominal masses of a molecule of nitrogen (N2) and of ethylene (C2H4) come out as follows.
N2
(2*14)= 28 Da
C2H4
(2*12)+(4*1)= 28 Da
This means that when using a low-resolution mass spectrometer, such as a quadrupole mass analyser or a quadrupole ion trap, these two molecules cannot be distinguished after ionization, which appears as overlapping m/z peaks. If a high-resolution instrument such as an Orbitrap or an ion cyclotron resonance analyser is used, these two molecules can be distinguished.
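A small sketch of the distinction, using exact-mass values for the lightest isotopes taken from standard isotope tables (the dictionary and function names are illustrative):

```python
# Masses in daltons; nominal values are integer mass numbers, monoisotopic
# values are the accurate masses of the most abundant (here also lightest) isotopes.
NOMINAL      = {'H': 1, 'C': 12, 'N': 14}
MONOISOTOPIC = {'H': 1.0078250319, 'C': 12.0, 'N': 14.0030740052}

def mass(composition, table):
    """Sum per-atom masses for a composition given as {element: count}."""
    return sum(table[el] * n for el, n in composition.items())

n2   = {'N': 2}            # molecular nitrogen
c2h4 = {'C': 2, 'H': 4}    # ethylene

print(mass(n2, NOMINAL), mass(c2h4, NOMINAL))            # 28 28  -> identical nominal masses
print(mass(n2, MONOISOTOPIC), mass(c2h4, MONOISOTOPIC))  # ~28.0061 vs ~28.0313 Da
```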
Monoisotopic mass
When calculating
|
https://en.wikipedia.org/wiki/Source%20transformation
|
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively.
Process
Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit.
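A minimal sketch of the two directions of the transformation for the purely resistive case (the class and function names are illustrative):

```python
from dataclasses import dataclass

@dataclass
class VoltageSource:      # ideal source V in series with resistance R (Thevenin form)
    V: float
    R: float

@dataclass
class CurrentSource:      # ideal source I in parallel with resistance R (Norton form)
    I: float
    R: float

def to_current_source(vs: VoltageSource) -> CurrentSource:
    """Norton equivalent: I = V / R, same resistance placed in parallel."""
    return CurrentSource(I=vs.V / vs.R, R=vs.R)

def to_voltage_source(cs: CurrentSource) -> VoltageSource:
    """Thevenin equivalent: V = I * R, same resistance placed in series."""
    return VoltageSource(V=cs.I * cs.R, R=cs.R)

vs = VoltageSource(V=10.0, R=5.0)
print(to_current_source(vs))                      # CurrentSource(I=2.0, R=5.0)
print(to_voltage_source(to_current_source(vs)))   # round-trips to the original source
```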
Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thevenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources.
Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source in parallel with an impedance , applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has value equal to the ideal current source's value times the impedance, according to Ohm's Law . In the same way, an ideal voltage source in series with an impedance can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has
|
https://en.wikipedia.org/wiki/Meiotic%20drive
|
Meiotic drive is a type of intragenomic conflict, whereby one or more loci within a genome will affect a manipulation of the meiotic process in such a way as to favor the transmission of one or more alleles over another, regardless of its phenotypic expression. More simply, meiotic drive is when one copy of a gene is passed on to offspring more than the expected 50% of the time. According to Buckler et al., "Meiotic drive is the subversion of meiosis so that particular genes are preferentially transmitted to the progeny. Meiotic drive generally causes the preferential segregation of small regions of the genome".
Meiotic drive in plants
The first report of meiotic drive came from Marcus Rhoades who in 1942 observed a violation of Mendelian segregation ratios for the R locus - a gene controlling the production of the purple pigment anthocyanin in maize kernels - in a maize line carrying abnormal chromosome 10 (Ab10). Ab10 differs from the normal chromosome 10 by the presence of a 150-base pair heterochromatic region called 'knob', which functions as a centromere during division (hence called 'neocentromere') and moves to the spindle poles faster than the centromeres during meiosis I and II. The mechanism for this was later found to involve the activity of a kinesin-14 gene called Kinesin driver (Kindr). Kindr protein is a functional minus-end directed motor, displaying quicker minus-end directed motility than an endogenous kinesin-14, such as Kin11. As a result Kindr outperforms the endogenous kinesins, pulling the 150 bp knobs to the poles faster than the centromeres and causing Ab10 to be preferentially inherited during meiosis
Meiotic drive in animals
The unequal inheritance of gametes has been observed since the 1950s, in contrast to Gregor Mendel's First and Second Laws (the law of segregation and the law of independent assortment), which dictate that there is a random chance of each allele being passed on to offspring. Examples of selfish drive genes in ani
|
https://en.wikipedia.org/wiki/Structural%20load
|
A structural load or structural action is a force, deformation, or acceleration applied to structural elements. A load causes stress, deformation, and displacement in a structure. Structural analysis, a discipline in engineering, analyzes the effects of loads on structures and structural elements. Excess load may cause structural failure, so this should be considered and controlled during the design of a structure. Particular mechanical structures—such as aircraft, satellites, rockets, space stations, ships, and submarines—are subject to their own particular structural loads and actions. Engineers often evaluate structural loads based upon published regulations, contracts, or specifications. Accepted technical standards are used for acceptance testing and inspection.
Types
Dead loads are static forces that are relatively constant for an extended time. They can be in tension or compression. The term can refer to a laboratory test method or to the normal usage of a material or structure.
Live loads are usually variable or moving loads. These can have a significant dynamic element and may involve considerations such as impact, momentum, vibration, slosh dynamics of fluids, etc.
An impact load is one whose time of application on a material is less than one-third of the natural period of vibration of that material.
Cyclic loads on a structure can lead to fatigue damage, cumulative damage, or failure. These loads can be repeated loadings on a structure or can be due to vibration.
Loads on architectural and civil engineering structures
Structural loads are an important consideration in the design of buildings. Building codes require that structures be designed and built to safely resist all actions that they are likely to face during their service life, while remaining fit for use. Minimum loads or actions are specified in these building codes for types of structures, geographic locations, usage and building materials. Structural loads are split into categories by
|
https://en.wikipedia.org/wiki/Curve%20orientation
|
In mathematics, an orientation of a curve is the choice of one of the two possible directions for travelling on the curve. For example, for Cartesian coordinates, the x-axis is traditionally oriented toward the right, and the y-axis is oriented upward.
In the case of a planar simple closed curve (that is, a curve in the plane whose starting point is also the end point and which has no other self-intersections), the curve is said to be positively oriented or counterclockwise oriented, if one always has the curve interior to the left (and consequently, the curve exterior to the right), when traveling on it. Otherwise, that is if left and right are exchanged, the curve is negatively oriented or clockwise oriented. This definition relies on the fact that every simple closed curve admits a well-defined interior, which follows from the Jordan curve theorem.
The inner loop of a beltway road in a country where people drive on the right side of the road is an example of a negatively oriented (clockwise) curve. In trigonometry, the unit circle is traditionally oriented counterclockwise.
The concept of orientation of a curve is just a particular case of the notion of orientation of a manifold (that is, besides orientation of a curve one may also speak of orientation of a surface, hypersurface, etc.).
Orientation of a curve is associated with parametrization of its points by a real variable. A curve may have equivalent parametrizations when there is a continuous increasing monotonic function relating the parameter of one curve to the parameter of the other. When there is a decreasing continuous function relating the parameters, then the parametric representations are opposite and the orientation of the curve is reversed.
Orientation of a simple polygon
In two dimensions, given an ordered set of three or more connected vertices (points) (such as in connect-the-dots) which forms a simple polygon, the orientation of the resulting polygon is directly related to the sign of
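In practice, the orientation of a simple polygon can be read off from the sign of its signed area (the shoelace formula): positive for counterclockwise, negative for clockwise. A minimal sketch (function names illustrative):

```python
def signed_area(vertices):
    """Shoelace formula: half the sum of cross products of consecutive edges."""
    area2 = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area2 += x1 * y2 - x2 * y1
    return area2 / 2.0

def orientation(vertices):
    """'counterclockwise' (positive signed area) or 'clockwise' for a simple polygon."""
    return "counterclockwise" if signed_area(vertices) > 0 else "clockwise"

square = [(0, 0), (1, 0), (1, 1), (0, 1)]       # vertices listed counterclockwise
print(orientation(square))                       # counterclockwise
print(orientation(list(reversed(square))))       # clockwise
```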
|
https://en.wikipedia.org/wiki/Rib%20fracture
|
A rib fracture is a break in a rib bone. This typically results in chest pain that is worse with inspiration. Bruising may occur at the site of the break. When several ribs are broken in several places a flail chest results. Potential complications include a pneumothorax, pulmonary contusion, and pneumonia.
Rib fractures usually occur from a direct blow to the chest such as during a motor vehicle collision or from a crush injury. Coughing or metastatic cancer may also result in a broken rib. The middle ribs are most commonly fractured. Fractures of the first or second ribs are more likely to be associated with complications. Diagnosis can be made based on symptoms and supported by medical imaging.
Pain control is an important part of treatment. This may include the use of paracetamol (acetaminophen), NSAIDs, or opioids. A nerve block may be another option. While fractured ribs can be wrapped, this may increase complications. In those with a flail chest, surgery may improve outcomes. They are a common injury following trauma.
Signs and symptoms
This typically results in chest pain that is worse with inspiration. Bruising may occur at the site of the break.
Complications
When several ribs are broken in several places a flail chest results. Potential complications include a pneumothorax, pulmonary contusion, and pneumonia.
Causes
Rib fractures can occur with or without direct trauma during recreational activity. Cardiopulmonary resuscitation (CPR) has also been known to cause thoracic injury, including but not limited to rib and sternum fractures. They can also occur as a consequence of diseases such as cancer or rheumatoid arthritis. While in elderly individuals a fall can cause a rib fracture, in adults automobile accidents are a common cause of such an injury.
Diagnosis
Signs of a broken rib may include:
Pain on inhalation
Swelling in chest area
Bruise in chest area
Increasing shortness of breath
Coughing up blood (rib may have damaged lung)
Plain X-ra
|
https://en.wikipedia.org/wiki/Walking%20wounded
|
In first aid and triage, the walking wounded are injured persons who are of a relatively low priority. These patients are conscious and breathing and will often have only relatively minor injuries; thus they are capable of walking. Depending on the resources available, and the abilities of the injured persons, walking wounded may sometimes be called upon to assist treatment of more seriously injured patients or assist with other tasks.
In most mass casualty situations, the walking wounded are the largest category of casualty.
Classification
According to Simple Triage and Rapid Treatment (START) documentation, walking wounded are determined by requesting those on the scene who may self-evacuate, to do so immediately to a designated refuge. Any casualties able to respond to this command and move themselves to the designated position are considered walking wounded.
According to the Revised Trauma Score (RTS) system of triaging, walking wounded can be considered to be those scoring a 12.
|
https://en.wikipedia.org/wiki/Counting%20sheep
|
Counting sheep is a mental exercise used in some Western cultures as a means of putting oneself to sleep.
In most depictions of the activity, the practitioner envisions an endless series of identical white sheep jumping over a fence, while counting them as they do so. The idea, presumably, is to induce boredom while occupying the mind with something simple, repetitive, and rhythmic, all of which are known to help humans sleep.
Although the practice is largely a stereotype, and rarely used as a solution for insomnia, it has been so commonly referenced by cartoons, comic strips, and other mass media, that it has become deeply engrained into popular culture's notion of sleep. The term "counting sheep" has entered the English language as an idiomatic term for insomnia. Sheep themselves have become associated with sleep, or lack thereof.
Effectiveness
The effectiveness of the method may depend upon the mental power required. An experiment conducted by researchers at Oxford University, though not involving livestock as the object of visualization, found that subjects who imagined "a beach or a waterfall" were forced to expend more mental energy, and fell asleep faster, than those asked to "simply distract from thoughts, worries and concerns." Sleep could be achieved by any number of complex activities that expend mental energy.
Origin
An early reference to counting sheep as a means of attaining sleep can be found in Illustrations of Political Economy by Harriet Martineau, from 1832:
"It was a sight of monotony to behold one sheep after another follow the adventurous one, each in turn placing its fore-feet on the breach in the fence, bringing up its hind legs after it, looking around for an instant from the summit, and then making the plunge into the dry ditch, tufted with locks of wool. The process might have been more composing if the field might have been another man's property, or if the flock had been making its way out instead of in; but the recollection of the
|
https://en.wikipedia.org/wiki/Hryvnia%20sign
|
The hryvnia sign (₴) is a currency symbol, used for the Ukrainian hryvnia currency since 2004.
Description
The hryvnia sign is a cursive minuscule Ukrainian Cyrillic letter He (г), or a mirrored letter S, with a double horizontal stroke, symbolising stability, similar to that used in other currency symbols such as ¥ or €. Hryvnia is abbreviated "грн" (hrn) in Ukrainian. The hryvnia sign ₴ was released in March 2004.
The specific design of the hryvnia sign was the result of a public contest held by the National Bank of Ukraine in 2003. The bank announced that it would not take any special steps to promote the sign, but expressed the expectation that recognition, and the technical means of rendering the sign, would follow. As soon as the sign was announced, a proposal to encode it was written. The sign has been encoded in Unicode as U+20B4 since version 4.1 (2005).
The symbol appears in the filigree of the 1 hryvnia banknote and the 1,000 hryvnia banknote introduced in 2019.
See also
Ukrainian hryvnia
Currency symbol
|
https://en.wikipedia.org/wiki/Jon%20Bentley%20%28computer%20scientist%29
|
Jon Louis Bentley (born February 20, 1953) is an American computer scientist credited with inventing the k-d tree, a heuristic space-partitioning data structure for organizing points in k-dimensional space.
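Since the article credits Bentley with the k-d tree, a brief sketch may help illustrate the idea: points are partitioned recursively, cycling through the coordinate axes and splitting at the median. This is only a generic illustration in Python; the names and details are invented for the example and are not Bentley's original formulation.

```python
# Minimal illustrative k-d tree construction (not Bentley's original code).
# Points are partitioned on alternating coordinate axes at the median.

from dataclasses import dataclass
from typing import Optional, Sequence, Tuple

Point = Tuple[float, ...]

@dataclass
class KDNode:
    point: Point                      # the splitting point stored at this node
    axis: int                         # coordinate used to split at this depth
    left: Optional["KDNode"] = None   # points with coordinate < point[axis]
    right: Optional["KDNode"] = None  # points with coordinate >= point[axis]

def build_kdtree(points: Sequence[Point], depth: int = 0) -> Optional[KDNode]:
    if not points:
        return None
    k = len(points[0])
    axis = depth % k                  # cycle through the k axes
    pts = sorted(points, key=lambda p: p[axis])
    mid = len(pts) // 2               # median split keeps the tree balanced
    return KDNode(
        point=pts[mid],
        axis=axis,
        left=build_kdtree(pts[:mid], depth + 1),
        right=build_kdtree(pts[mid + 1:], depth + 1),
    )

# Example usage:
tree = build_kdtree([(2, 3), (5, 4), (9, 6), (4, 7), (8, 1), (7, 2)])
assert tree is not None and tree.point == (7, 2)  # median on x at the root
```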
Education and career
Bentley received a B.S. in mathematical sciences from Stanford University in 1974, and M.S. and PhD in 1976 from the University of North Carolina at Chapel Hill; while a student, he also held internships at the Xerox Palo Alto Research Center and Stanford Linear Accelerator Center. After receiving his Ph.D., he joined the faculty at Carnegie Mellon University as an assistant professor of computer science and mathematics. At CMU, his students included Brian Reid, John Ousterhout, Jeff Eppinger, Joshua Bloch, and James Gosling, and he was one of Charles Leiserson's advisors. Later, Bentley moved to Bell Laboratories, where he co-authored an optimized Quicksort algorithm with Doug McIlroy.
He found an optimal solution for the two-dimensional case of Klee's measure problem: given a set of n rectangles, find the area of their union. He and Thomas Ottmann invented the Bentley–Ottmann algorithm, an efficient algorithm for finding all intersecting pairs among a collection of line segments. He wrote the Programming Pearls column for the Communications of the ACM magazine, and later collected the articles into two books of the same name.
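For the two-dimensional Klee's measure problem stated above, the following sketch computes the area of a union of axis-aligned rectangles with a simple sweep line. It is a deliberately unoptimized illustration: Bentley's optimal O(n log n) solution maintains the covered length with a segment tree, which is omitted here. All names are invented for the example.

```python
# Unoptimized sweep-line sketch for the 2-D Klee's measure problem:
# area of the union of axis-aligned rectangles. Bentley's optimal method
# replaces the per-slab scan below with a segment tree; this version favors
# clarity over the O(n log n) bound.

from typing import List, Tuple

Rect = Tuple[float, float, float, float]  # (x1, y1, x2, y2) with x1 < x2, y1 < y2

def union_area(rects: List[Rect]) -> float:
    if not rects:
        return 0.0
    # Coordinate-compress the y values into elementary horizontal slabs.
    ys = sorted({y for (_, y1, _, y2) in rects for y in (y1, y2)})
    # Sweep events: +1 when a rectangle starts (left edge), -1 when it ends.
    events = []
    for x1, y1, x2, y2 in rects:
        events.append((x1, +1, y1, y2))
        events.append((x2, -1, y1, y2))
    events.sort()

    count = [0] * (len(ys) - 1)   # how many rectangles currently cover each slab
    area = 0.0
    prev_x = events[0][0]
    for x, delta, y1, y2 in events:
        covered = sum(ys[i + 1] - ys[i] for i, c in enumerate(count) if c > 0)
        area += covered * (x - prev_x)
        prev_x = x
        for i in range(len(count)):
            if y1 <= ys[i] and ys[i + 1] <= y2:
                count[i] += delta
    return area

# Two unit squares overlapping in a 0.5-wide strip -> union area 1.5.
assert abs(union_area([(0, 0, 1, 1), (0.5, 0, 1.5, 1)]) - 1.5) < 1e-9
```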
Bentley received the Dr. Dobb's Excellence in Programming award in 2004.
Bibliography
Programming Pearls (2nd edition).
More Programming Pearls: Confessions of a Coder.
Writing Efficient Programs.
Divide and Conquer Algorithms for Closest Point Problems in Multidimensional Space, Ph.D. thesis.
|
https://en.wikipedia.org/wiki/Concepts%2C%20Techniques%2C%20and%20Models%20of%20Computer%20Programming
|
Concepts, Techniques, and Models of Computer Programming is a textbook published in 2004 about general computer programming concepts from MIT Press written by Université catholique de Louvain professor Peter Van Roy and Royal Institute of Technology, Sweden professor Seif Haridi.
Using a carefully selected progression of subsets of the Oz programming language, the book explains the most important programming concepts, techniques, and models (paradigms).
Translations of this book have been published in French (by Dunod Éditeur, 2007), Japanese (by Shoeisha, 2007) and Polish (by Helion, 2005).
External links
Official CTM site, with supplementary material
Yves Deville et al. review
CTM wiki
2004 non-fiction books
Computer programming books
Computer science books
MIT Press books
|
https://en.wikipedia.org/wiki/MojoWorld%20Generator
|
MojoWorld was a commercial, fractal-based modelling program for the creation of digital landscapes, and attracted a following among artists who create space art and science fiction scenes. Originally created by Ken Musgrave, it was marketed commercially by his Pandromeda Inc. company.
Functionality
MojoWorld could generate entire planets through mathematics and procedural generation, using a simple graphical interface and a planet-generation Wizard. The resulting terrain could then be navigated in 3D space much like a videogame, allowing users to easily find exactly the right place for a scenic landscape picture. MojoWorld also allowed the user to edit the landscape and scene, and then have it rendered to an image by the computer.
As well as making still renders of any size, 360-degree views of the planet could also be shared by having the software render a set of 6 × 90-degree tiles covering the entire view. These tiles could be assembled in QuickTime QTVR and shown on the Web. After QuickTime became defunct, tile assembly was handled by Pano2VR. In 2004 a wholly free MojoWorld 3 Viewer was also released, which enabled anyone to experience, view, and render from a saved MojoWorld planet file. Render size for the free Viewer was capped at "1024 x 465 pixels with MojoWorld watermark", and animations could be rendered at "320 x 240 pixels".
The software was supported by a detailed 500-page manual. Users expanded the software's functionality with free plugins, such as a volumetrics plugin.
Versions
MojoWorld 3.0 was released in 2004 in Standard and Professional versions, with Pro adding official plugins such as MojoTree (forest generation) and a library of plants and planets. Version 3.1 added native support for .pz3 files created with Poser 6. The final release was MojoWorld 3.1.1 in October 2005, featuring procedural generation of forests, boulders, and rocks, as well as enhanced atmospheres. Many users later migrated to the somewhat similar but more complex Vue landscape softwar
|
https://en.wikipedia.org/wiki/Introgression
|
Introgression, also known as introgressive hybridization, in genetics is the transfer of genetic material from one species into the gene pool of another by the repeated backcrossing of an interspecific hybrid with one of its parent species. Introgression is a long-term process, even when artificial; it may take many hybrid generations before significant backcrossing occurs. This process is distinct from most forms of gene flow in that it occurs between two populations of different species, rather than two populations of the same species.
Introgression also differs from simple hybridization. Simple hybridization results in a relatively even mixture; gene and allele frequencies in the first generation will be a uniform mix of the two parental species, such as that observed in mules. Introgression, on the other hand, results in a complex, highly variable mixture of genes, and may involve only a minimal percentage of the donor genome.
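As a rough arithmetic illustration of why repeated backcrossing leaves only a small share of the donor genome, the sketch below assumes the simplest expectation, with no selection and no linkage; the halving rule and the function name are assumptions for this example, not details from the article.

```python
# Rough illustration only: expected donor-genome fraction under repeated
# backcrossing to the recipient species, assuming no selection and no linkage.
# The F1 hybrid carries an expected 50% donor genome; each backcross to the
# recipient parent halves that expectation.

def expected_donor_fraction(n_backcrosses: int) -> float:
    return 0.5 ** (n_backcrosses + 1)

for n in range(6):
    print(f"after {n} backcross generation(s): "
          f"{expected_donor_fraction(n):.2%} donor genome expected")
# After 5 backcross generations only about 1.6% of the donor genome is
# expected to remain, consistent with introgression involving only a minimal
# percentage of the donor genome.
```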
Definition
Introgression or introgressive hybridization is the incorporation (usually via hybridization and backcrossing) of novel genes and/or alleles from one taxon into the gene pool of a second, distinct taxon. This introgression is considered 'adaptive' if the genetic transfer results in an overall increase in the recipient taxon's fitness.
Ancient introgression events can leave traces of extinct species in present-day genomes, a phenomenon known as ghost introgression.
Source of variation
Introgression is an important source of genetic variation in natural populations and may contribute to adaptation and even adaptive radiation. It can occur across hybrid zones due to chance, selection or hybrid zone movement. There is evidence that introgression is a ubiquitous phenomenon in plants and animals, including humans, in which it may have introduced the microcephalin D allele.
It has been proposed that, historically, introgression with wild animals is a large contributor to the wide range of diversity found in domestic animals, rather th
|
https://en.wikipedia.org/wiki/Atiyah%E2%80%93Bott%20fixed-point%20theorem
|
In mathematics, the Atiyah–Bott fixed-point theorem, proven by Michael Atiyah and Raoul Bott in the 1960s, is a general form of the Lefschetz fixed-point theorem for smooth manifolds M, which uses an elliptic complex on M. This is a system of elliptic differential operators on vector bundles, generalizing the de Rham complex constructed from smooth differential forms which appears in the original Lefschetz fixed-point theorem.
Formulation
The idea is to find the correct replacement for the Lefschetz number, which in the classical result is an integer counting the correct contribution of a fixed point of a smooth mapping f : M → M.
Intuitively, the fixed points are the points of intersection of the graph of f with the diagonal (the graph of the identity mapping) in M × M, and the Lefschetz number thereby becomes an intersection number. The Atiyah–Bott theorem is an equation in which the LHS must be the outcome of a global topological (homological) calculation, and the RHS a sum of the local contributions at fixed points of f.
Counting codimensions in M × M, a transversality assumption for the graph of f and the diagonal should ensure that the fixed point set is zero-dimensional. Assuming M a closed manifold should ensure then that the set of intersections is finite, yielding a finite summation as the RHS of the expected formula. Further data needed relates to the elliptic complex of vector bundles E_j, namely a bundle map φ_j : f*E_j → E_j
for each j, such that the resulting maps on sections give rise to an endomorphism T of the elliptic complex. Such an endomorphism T has Lefschetz number L(T),
which by definition is the alternating sum of its traces on each graded part of the homology of the elliptic complex.
The form of the theorem is then
L(T) = Σ_{f(x) = x} [ Σ_j (−1)^j tr(φ_{j,x}) ] / |det(1 − df_x)|.
Here tr(φ_{j,x}) means the trace of φ_j at a fixed point x of f, and det(1 − df_x) is the determinant of the endomorphism 1 − df_x at x, with df_x the derivative of f (the non-vanishing of this is a consequence of transversality). The outer summation is over the fixed points x, and the inner summation ove
|
https://en.wikipedia.org/wiki/Sony%20Reader
|
The Sony Reader was a line of e-book readers manufactured by Sony. The first model, the PRS-500, was released in September 2006 and was related to the earlier Sony Librie, the first commercial E Ink e-reader, released in 2004 with an electronic paper display developed by E Ink Corporation. The last model was the PRS-T3, after which Sony announced it would no longer release a new consumer e-reader.
Sony sold e-books for the Reader from the Sony eBook Library in the US, UK, Japan, Germany, Austria, Canada, France, Italy, and Spain. The Reader also could display Adobe PDFs, ePub format, RSS newsfeeds, JPEGs, and Sony's proprietary BBeB ("BroadBand eBook") format. Some Readers could play MP3 and unencrypted AAC audio files. Compatibility with Adobe digital rights management (DRM) protected PDF and ePub files allowed Sony Reader owners to borrow ebooks from lending libraries in many countries. The DRM rules of the Reader allowed any purchased e-book to be read on up to six devices, at least one of which must be a personal computer running Windows or Mac OS X. Although the owner could not share purchased eBooks on others' devices and accounts, the ability to register five Readers to a single account and share books accordingly was a possible workaround.
Models and availability
Ten models were produced. The PRS-500 (PRS standing for Portable Reader System) was made available in the United States in September 2006. On 1 November 2006, Readers went on display and for sale at Borders bookstores throughout the US. Borders had an exclusive contract for the Reader until the end of 2006. From April 2007, the Reader was sold in the US by multiple merchants, including Fry's Electronics, Costco, Borders and Best Buy. The eBook Store from Sony was available only to US or Canadian residents or to customers who purchased a US-model reader with bundled eBook Store credit.
On July 24, 2007, Sony announced that the PRS-505 Reader would be available in the UK with a launch date of September 3, 2008. Wa
|