source | text
---|---
https://en.wikipedia.org/wiki/Ecological%20pyramid
|
An ecological pyramid (also trophic pyramid, Eltonian pyramid, energy pyramid, or sometimes food pyramid) is a graphical representation designed to show the biomass or bioproductivity at each trophic level in an ecosystem.
A pyramid of energy shows how much energy is retained in the form of new biomass from each trophic level, while a pyramid of biomass shows how much biomass (the amount of living or organic matter present in an organism) is present in the organisms. There is also a pyramid of numbers representing the number of individual organisms at each trophic level. Pyramids of energy are normally upright, but other pyramids can be inverted (such as a pyramid of biomass for a marine region) or take other shapes (a spindle-shaped pyramid).
Ecological pyramids begin with producers on the bottom (such as plants) and proceed through the various trophic levels (such as herbivores that eat plants, then carnivores that eat flesh, then omnivores that eat both plants and flesh, and so on). The highest level is the top of the food chain.
Biomass can be measured by a bomb calorimeter.
Pyramid of Energy
A pyramid of energy or pyramid of productivity shows the production or turnover (the rate at which energy or mass is transferred from one trophic level to the next) of biomass at each trophic level. Instead of showing a single snapshot in time, productivity pyramids show the flow of energy through the food chain. Typical units are grams per square meter per year or calories per square meter per year. As with the others, this graph shows producers at the bottom and higher trophic levels on top.
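As a rough illustration of how a pyramid of productivity can be tabulated and checked for an "upright" shape, here is a minimal sketch; the trophic levels and productivity figures (in grams per square metre per year) are invented purely for the example and are not taken from any real ecosystem.

# Illustrative productivity pyramid: invented values, g/m^2/yr at each level.
levels = [
    ("producers", 1000.0),
    ("primary consumers", 100.0),
    ("secondary consumers", 10.0),
    ("tertiary consumers", 1.0),
]

for (lower_name, lower_p), (upper_name, upper_p) in zip(levels, levels[1:]):
    # Fraction of productivity passed up to the next trophic level.
    print(f"{lower_name} -> {upper_name}: transfer efficiency {upper_p / lower_p:.0%}")

# The pyramid is "upright" when productivity decreases at every step up.
print(all(upper[1] < lower[1] for lower, upper in zip(levels, levels[1:])))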
When an ecosystem is healthy, this graph produces a standard ecological pyramid. This is because, in order for the ecosystem to sustain itself, there must be more energy at lower trophic levels than there is at higher trophic levels. This allows organisms on the lower levels to not only maintain a stable population, but also to transfer energy up the pyramid. The exception to this generalizati
|
https://en.wikipedia.org/wiki/Bracewell%20probe
|
A Bracewell probe is a hypothetical concept for an autonomous interstellar space probe dispatched for the express purpose of communication with one or more alien civilizations. It was proposed by Ronald N. Bracewell in a 1960 paper, as an alternative to interstellar radio communication between widely separated civilizations.
Description
A Bracewell probe would be constructed as an autonomous robotic interstellar space probe with a high level of artificial intelligence, and all relevant information that its home civilization might wish to communicate to another culture. It would seek out technological civilizations—or alternatively monitor worlds where there is a likelihood of technological civilizations arising—and communicate over "short" distances (compared to the interstellar distances between inhabited worlds) once it discovered a civilization that meets its contact criteria. It would make its presence known, carry out a dialogue with the contacted culture, and presumably communicate the results of its encounter to its place of origin. In essence, such probes would act as an autonomous local representative of their home civilization and would act as the point of contact between the cultures.
Since a Bracewell probe can communicate much faster, over shorter distances, and over large spans of time, it can communicate with alien cultures more efficiently than radio message exchange might. The disadvantage to this approach is that such probes cannot communicate anything not in their data storage, nor can their contact criteria or policies for communication be quickly updated by their "base of operations".
While a Bracewell probe need not be a von Neumann probe as well, the two concepts are compatible, and a self-replicating device as proposed by von Neumann would greatly speed up a Bracewell probe's search for alien civilizations.
It is also possible that such a probe (or system of probes if launched as a von Neumann–Bracewell probe) may outlive the civilizatio
|
https://en.wikipedia.org/wiki/Data%20scrubbing
|
Data scrubbing is an error correction technique that uses a background task to periodically inspect main memory or storage for errors, then corrects detected errors using redundant data in the form of different checksums or copies of data. Data scrubbing reduces the likelihood that single correctable errors will accumulate, leading to reduced risks of uncorrectable errors.
Data integrity is a high-priority concern in writing, reading, storage, transmission, or processing of the computer data in computer operating systems and in computer storage and data transmission systems. However, only a few of the currently existing and used file systems provide sufficient protection against data corruption.
To address this issue, data scrubbing provides routine checks of all inconsistencies in data and, in general, prevention of hardware or software failure. This "scrubbing" feature occurs commonly in memory, disk arrays, file systems, or FPGAs as a mechanism of error detection and correction.
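As an illustration of the general idea, the sketch below re-reads every block, compares it against a stored checksum, and repairs mismatches from a redundant copy. The data structures are made up for the example; this is not the interface of any real RAID controller or file system.

import hashlib

def scrub(blocks, checksums, mirror):
    # Minimal scrubbing pass: verify each block against its stored checksum
    # and restore corrupted blocks from the redundant copy ("mirror").
    repaired = []
    for i, block in enumerate(blocks):
        if hashlib.sha256(block).hexdigest() != checksums[i]:
            blocks[i] = mirror[i]      # repair from redundancy
            repaired.append(i)
    return repaired

# Example: block 1 has silently flipped characters; the scrub pass fixes it.
mirror = [b"alpha", b"bravo", b"charlie"]
data   = [b"alpha", b"brXvo", b"charlie"]
sums   = [hashlib.sha256(b).hexdigest() for b in mirror]
print(scrub(data, sums, mirror))   # -> [1]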
RAID
With data scrubbing, a RAID controller may periodically read all hard disk drives in a RAID array and check for defective blocks before applications might actually access them. This reduces the probability of silent data corruption and data loss due to bit-level errors.
In Dell PowerEdge RAID environments, a feature called "patrol read" can perform data scrubbing and preventive maintenance.
In OpenBSD, the bioctl(8) utility allows the system administrator to control these patrol reads through the BIOCPATROL ioctl on the /dev/bio pseudo-device; as of 2019, this functionality is supported in some device drivers for LSI Logic and Dell controllers — this includes mfi(4) since OpenBSD 5.8 (2015) and mfii(4) since OpenBSD 6.4 (2018).
In FreeBSD and DragonFly BSD, patrol can be controlled through a RAID controller-specific utility mfiutil(8) since FreeBSD 8.0 (2009) and 7.3 (2010). The implementation from FreeBSD was used by the OpenBSD developers for adding patrol support to their
|
https://en.wikipedia.org/wiki/Baire%20function
|
In mathematics, Baire functions are functions obtained from continuous functions by transfinite iteration of the operation of forming pointwise limits of sequences of functions. They were introduced by René-Louis Baire in 1899. A Baire set is a set whose characteristic function is a Baire function.
Classification of Baire functions
Baire functions of class α, for any countable ordinal number α, form a vector space of real-valued functions defined on a topological space, as follows.
The Baire class 0 functions are the continuous functions.
The Baire class 1 functions are those functions which are the pointwise limit of a sequence of Baire class 0 functions.
In general, the Baire class α functions are all functions which are the pointwise limit of a sequence of functions of Baire class less than α.
Some authors define the classes slightly differently, by removing all functions of class less than α from the functions of class α. This means that each Baire function has a well defined class, but the functions of given class no longer form a vector space.
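Stated compactly, the cumulative definition above can be written, for a topological space X and countable ordinals alpha, as:

B_0(X) = \{ f : X \to \mathbb{R} \mid f \text{ is continuous} \}
B_\alpha(X) = \Big\{ f \;\Big|\; f = \lim_{n \to \infty} f_n \text{ pointwise, with each } f_n \in \bigcup_{\beta < \alpha} B_\beta(X) \Big\} \quad (0 < \alpha < \omega_1)

Under the alternative convention just mentioned, one would instead remove \bigcup_{\beta < \alpha} B_\beta(X) from B_\alpha(X), so that each function receives a unique class.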
Henri Lebesgue proved that (for functions on the unit interval) each Baire class of a countable ordinal number contains functions not in any smaller class, and that there exist functions which are not in any Baire class.
Baire class 1
Examples:
The derivative of any differentiable function is of class 1. An example of a differentiable function whose derivative is not continuous (at x = 0) is the function equal to x²sin(1/x) when x ≠ 0, and 0 when x = 0. An infinite sum of similar functions (scaled and displaced by rational numbers) can even give a differentiable function whose derivative is discontinuous on a dense set. However, it necessarily has points of continuity, which follows easily from the Baire characterisation theorem (below; take K = X = R).
The characteristic function of the set of integers, which equals 1 if x is an integer and 0 otherwise. (An infinite number of large discontinuities.)
Thomae's function, which
|
https://en.wikipedia.org/wiki/Liners
|
"Liners" is a horticultural term referring to very young plants, usually grown for sale to retailers or wholesalers, who then grow them to a larger size before selling them to consumers. Liners are usually grown from seed, but may also be grown from cuttings or tissue culture. They are grown in plastic trays with many "cells," each of which contains a single liner plant. Liners will typically range in size from a 36 cell tray up to a 288 cell tray. The most common size used in commercial nurseries is between 50 and 72 cells. The term "liner", is typically used for perennial, ornamental, and woody seedlings. Annuals in this form are usually referred to as plugs.
According to the U.S. Department of Agriculture, plants can only be defined as "liners" if they are at least an inch in diameter but not more than 3 inches in diameter at the widest point. Plant liners must also have established root systems that touch the outer edges of the container and stay intact when lifted from the container.
The U.S. Department of Agriculture's National Plant Materials Manual defines "liner" as plant material which is grown in one location and then “lined-out” in another location for finishing off. Plants may be started in seedbeds and lifted bare-root or grown in containers. Either type of these liners may finish their production cycle in the ground or in containers.
A liner traditionally refers to lining out nursery stock in a field row. The term has evolved to mean a small plant produced from a rooted cutting, seedling, plug, or tissue culture plantlet. Direct sticking or direct rooting into smaller liner pots is commonly done in United States propagation nurseries. Seedlings and rooted cuttings can also be transplanted into small liner pots and allowed to become established during liner production, before being transplanted to larger containers (upcanned) or outplanted into the field.
|
https://en.wikipedia.org/wiki/Buddy%20box
|
Buddy box or buddy boxing is a colloquialism referring to two R/C aircraft radio systems joined together for pilot training purposes.
This training system is universal among the six major R/C radio manufacturers (Spektrum, Futaba, JR, Hitec, Sanwa/Airtronics and KO Propo) which means that transmitters do not have to be the same brand in order to be joined via an umbilical cable. There are, however, two different types of DIN cable connectors used for the purpose and the two are incompatible. Therefore, both transmitters must have the same type of receptacle in order to operate together.
Buddy boxing is accomplished by joining the student and master transmitters via the aforementioned cable and making sure that the servo reversing switches and trims are set identically on both. The student is given control of the aircraft via a long-handled, spring-loaded switch on the master transmitter (on the top left corner of most transmitters), which is normally held by the instructor. When the switch is pulled forward and held on by the instructor's left index finger, control of the aircraft is at the student's transmitter. Should the instructor judge that the student is encountering difficulty in flight, control is transferred to the master transmitter merely by releasing the switch. (On some older Futaba radios, such as the popular 6XA, the trainer switch is actually a push-button located in the corner; the aforementioned corner toggle switch is reserved for channel 5, landing gear.)
The two transmitters need not be on the same frequency. The master transmitter is the one that actually flies the plane; buddy boxing turns the student transmitter into a "dummy" remote control of the master. The student transmitter is operated with power switched off as power for both is provided by the master. The student transmitter will power up via the umbilical despite being switched off.
Radio control
Radio-controlled aircraft
|
https://en.wikipedia.org/wiki/Pyramid%20Head
|
Pyramid Head, also known as "Red Pyramid", is a character from the Silent Hill series, a survival horror video game series created by the Japanese company Konami.
Introduced in the 2001 installment Silent Hill 2, he is a type of monster that serves as the secondary antagonist, stalking James Sunderland, the primary player character, who comes to the town of Silent Hill after receiving a letter from his deceased wife, Mary. The Silent Hill series, particularly the second installment, frequently utilizes psychology and symbolism: Pyramid Head represents James' wish to be punished for Mary's death. Masahiro Ito, the designer of Silent Hill 2's monsters, created the character because he wanted "a monster with a hidden face". Known for the large triangular helmet that conceals his head, Pyramid Head lacks a voice and a visible face, and his appearance stems from the town's past as a place of execution.
He has since appeared in the 2006 film Silent Hill as "Red Pyramid", in the 2007 first-person shooter Silent Hill: The Arcade as a boss, and in the sixth installment of the series, Silent Hill: Homecoming, as the "Bogeyman". He has also appeared outside the Silent Hill series as a selectable character in the 2008 Nintendo DS title New International Track & Field, in Super Bomberman R, and as a playable killer in Dead by Daylight in 2020. Positively received in Silent Hill 2 for his role as an element of James' psyche, he has been cited by reviewers as an iconic villain of the series and part of Silent Hill 2's appeal.
Concept and design
Ito wanted to create "a monster with a hidden face", but became unhappy with his designs, which resembled humans wearing masks. He then drew a monster with a pyramid-shaped helmet. According to Ito, the triangle's sharp right and acute angles suggest the possibility of pain. Of the creatures that appear in Silent Hill 2, only Pyramid Head features an "overtly masculine" appearance. He resembles a pale, muscular man covered with a whi
|
https://en.wikipedia.org/wiki/Osteitis
|
Osteitis is inflammation of bone. More specifically, it can refer to one of the following conditions:
Osteomyelitis, or infectious osteitis, mainly bacterial osteitis
Alveolar osteitis or "dry socket"
Condensing osteitis (or Osteitis condensans)
Osteitis deformans (or Paget's disease of bone)
Osteitis fibrosa cystica (or Osteitis fibrosa, or Von Recklinghausen's disease of bone)
Osteitis pubis
Radiation osteitis
Osteitis condensans ilii
Panosteitis, a long bone condition in large breed dogs
In horses, pedal osteitis is frequently confused with laminitis.
See also
Osteochondritis
SAPHO syndrome
|
https://en.wikipedia.org/wiki/Montina
|
Montina is a brand name of a type of flour created from milled Indian ricegrass (Achnatherum hymenoides), a type of grass native to the western United States. Indian rice grass was grown and used by Native Americans as much as 7,000 years ago. The grass is not related to rice, and the flour is gluten-free.
Indian ricegrass is grown by local farmers and processed at individually owned and dedicated gluten-free plants. The majority of farms producing Montina in the United States do not use GMOs in the growing of the ricegrass or in the processing of Montina.
|
https://en.wikipedia.org/wiki/Laurance%20Doyle
|
Laurance R. Doyle (born 1953) is an American scientist who received his Ph.D. from the Ruprecht Karl University of Heidelberg.
Doyle has worked at the SETI Institute since 1987 where he is a principal investigator and astrophysicist. His main area of study has been the formation and detection of extrasolar planets, but he has also worked on communications theory. In particular he has written on how patterns in animal communication relate to humans with an emphasis on cetaceans.
Early life
Doyle grew up on a dairy farm in Cambria, California, and therefore did not have much access to information about stars. But by reading books at the local library, Doyle was able to develop his knowledge of astronomy, and he eventually obtained his Bachelor of Science and Master of Science degrees in astronomy from San Diego State University.
Career
His first job was at the Jet Propulsion Laboratory as an imaging engineer, where he was in charge of analyzing pictures of Jupiter and Saturn sent from the spacecraft Voyager. He moved to Heidelberg, Germany, to help analyze images of Halley's Comet. He got his doctorate in Astrophysics at the University of Heidelberg.
Doyle is currently seeking to compare dolphin whistles and baby babble in an attempt to make predictions about extraterrestrial communications. He believes that by measuring the complexity of communications for different species on Earth, we could get a good indication of how advanced an extraterrestrial signal is, using an application of Zipf's law. His study determined that babies babble over 800 different sounds with about the same frequency as dolphins. As they grow older, those sounds decrease to around 50 and become more repetitious. The study found that baby dolphins develop similarly with regard to their whistling.
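One common way to operationalise such a Zipf's-law comparison is to fit the slope of log frequency against log rank over the inventory of signal units; a slope near -1 is the classic signature of human language. The sketch below does this for an invented toy signal; it is not dolphin or infant data, and the helper function is hypothetical.

import math
from collections import Counter

def zipf_slope(symbols):
    # Least-squares slope of log(frequency) vs. log(rank) for the symbol inventory.
    freqs = sorted(Counter(symbols).values(), reverse=True)
    xs = [math.log(rank) for rank in range(1, len(freqs) + 1)]
    ys = [math.log(f) for f in freqs]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
           sum((x - mx) ** 2 for x in xs)

# Invented toy "signal": unit 'a' is much more common than 'b', and so on.
print(zipf_slope("a" * 50 + "b" * 25 + "c" * 12 + "d" * 6 + "e" * 3))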
Doyle is faculty at Principia College and the founding Director of Principia College's Institute for the Metaphysics of Physics, founded in 2014.
In popular culture
In May 2005, he appeared on a National Geo
|
https://en.wikipedia.org/wiki/Nectar%20guide
|
Nectar guides are markings or patterns seen in flowers of some angiosperm species, that guide pollinators to their rewards. Rewards commonly take the form of nectar, pollen, or both, but various plants produce oil, resins, scents, or waxes. Such patterns also are known as "pollen guides" and "honey guides", though some authorities argue for the abandonment of such terms in favour of floral guides (see for example Dinkel & Lunau). Pollinator visitation can select for various floral traits, including nectar guides through a process called pollinator-mediated selection.
These patterns are sometimes visible to humans; for instance, the Dalmatian toadflax (Linaria genistifolia) has yellow flowers with orange nectar guides. However, in some plants, such as sunflowers, they are visible only when viewed in ultraviolet light. Under ultraviolet, the flowers have a darker center, where the nectaries are located, and often specific patterns upon the petals as well. This is believed to make the flowers more attractive to pollinators such as honey bees and other insects that can see ultraviolet. Animated comparisons of black-eyed Susan (Rudbeckia hirta) flowers in visible and ultraviolet light illustrate the difference.
The ultraviolet color, invisible to humans, has been referred to as bee violet, and mixtures of greenish (yellow) wavelengths (roughly 540 nm) with ultraviolet are called bee purple by analogy with purple in human vision.
|
https://en.wikipedia.org/wiki/Arthroconidium
|
Arthroconidia are a type of fungal spore typically produced by segmentation of pre-existing fungal hyphae.
Background
These spores are asexual and are generally not as durable and environmentally persistent as, for instance, bacterial endospores or chlamydospores. Some medically significant pathogens, such as Coccidioides immitis and Coccidioides posadasii, both causative agents of coccidioidomycosis (also known as San Joaquin Valley fever), are transmitted through airborne arthroconidia. The small size of the arthroconidia, 3 to 5 µm, allows them to lodge in the terminal bronchioles of the lung. There, they develop into a thick-walled spherule filled with endospores that cause a pyogenic (pus-causing) inflammation.
See also
Conidium
|
https://en.wikipedia.org/wiki/Single-unit%20recording
|
In neuroscience, single-unit recordings (also, single-neuron recordings) provide a method of measuring the electro-physiological responses of a single neuron using a microelectrode system. When a neuron generates an action potential, the signal propagates down the neuron as a current which flows in and out of the cell through excitable membrane regions in the soma and axon. A microelectrode is inserted into the brain, where it can record the rate of change in voltage with respect to time. These microelectrodes must be fine-tipped and impedance-matched; they are primarily glass micro-pipettes or metal microelectrodes made of platinum, tungsten, iridium or even iridium oxide. Microelectrodes can be carefully placed close to the cell membrane, allowing recording extracellularly.
Single-unit recordings are widely used in cognitive science, where they permit the analysis of human cognition and cortical mapping. This information can then be applied to brain–machine interface (BMI) technologies for brain control of external devices.
Overview
There are many techniques available to record brain activity—including electroencephalography (EEG), magnetoencephalography (MEG), and functional magnetic resonance imaging (fMRI)—but these do not allow for single-neuron resolution. Neurons are the basic functional units in the brain; they transmit information through the body using electrical signals called action potentials. Currently, single-unit recordings provide the most precise recordings from a single neuron. A single unit is defined as a single, firing neuron whose spike potentials are distinctly isolated by a recording microelectrode.
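In practice, isolating a single unit from an extracellular voltage trace usually begins with detecting threshold crossings, followed by waveform-based spike sorting. The sketch below shows only the crude thresholding step on a synthetic trace; the numbers and function are invented for illustration and omit the filtering and sorting a real pipeline would use.

def detect_spikes(trace, threshold):
    # Return sample indices where the voltage crosses the threshold from below:
    # a simplified stand-in for single-unit spike detection.
    return [i for i in range(1, len(trace))
            if trace[i - 1] < threshold <= trace[i]]

# Synthetic trace in arbitrary units: two "spikes" riding on a noisy baseline.
trace = [0.1, 0.0, 0.2, 1.5, 0.3, 0.1, -0.1, 0.0, 1.8, 0.2, 0.1]
print(detect_spikes(trace, threshold=1.0))   # -> [3, 8]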
The ability to record signals from neurons is centered around the electric current flow through the neuron. As an action potential propagates through the cell, the electric current flows in and out of the soma and axons at excitable membrane regions. This current creates a measurable, changing voltage potential within (and outside) t
|
https://en.wikipedia.org/wiki/Microsoft%20Write
|
Microsoft Write is a basic word processor included with Windows 1.0 and later, until Windows NT 3.51. Throughout its lifespan it was minimally updated, and is comparable to early versions of MacWrite. Early versions of Write only work with Write Document (.wri) files, which are a subset of the Rich Text Format (RTF). After Windows 3.0, Write became capable of reading and composing early Word Document (.doc) files. With Windows 3.1, Write became OLE capable. In Windows 95, Write was replaced with WordPad; attempting to open Write from the Windows folder will open WordPad instead.
Being a word processor, Write offers document formatting capabilities that are not found in Notepad (a simple text editor), such as a choice of font, text decorations and paragraph indentation for different parts of the document. Unlike versions of WordPad before Windows 7, Write could justify a paragraph.
Platforms
Atari ST
In 1986, Atari announced an agreement with Microsoft to bring Microsoft Write to the Atari ST.
Unlike the Windows version, Microsoft Write for the Atari ST is an Atari port of Microsoft Word 1.05 as released for the Apple Macintosh, sharing only its name with the program included with Microsoft Windows during the 1980s and early 1990s. While the program was announced in 1986, various delays caused it to arrive in 1988. The Atari version was a one-time release and was never updated.
Microsoft Write for the Atari ST retailed at $129.95 and is one of two high-profile PC word processors that were released on the Atari platform. The other application is WordPerfect.
Macintosh
In October 1987, Microsoft released Microsoft Write for Macintosh. Write is a version of Microsoft Word with limited features that Microsoft hoped would replace aging MacWrite in the Macintosh word processor market. Write was priced well below Word, though at the time MacWrite was included with new Macintoshes. Write is best described as Word locked in "Short Menus" mode, and a
|
https://en.wikipedia.org/wiki/Defense%20Data%20Network
|
The Defense Data Network (DDN) was a computer networking effort of the United States Department of Defense from 1983 through 1995. It was based on ARPANET technology.
History
As an experiment, from 1971 to 1977, the Worldwide Military Command and Control System (WWMCCS) purchased and operated an ARPANET-type system from BBN Technologies for the Prototype WWMCCS Intercomputer Network (PWIN). The experiments proved successful enough that it became the basis of the much larger WIN system. Six initial WIN sites in 1977 increased to 20 sites by 1981.
In 1975, the Defense Communication Agency (DCA) took over operation of the ARPANET as it became an operational tool in addition to an ongoing research project. At that time, the Automatic Digital Network (AUTODIN) carried most of the Defense Department's message traffic. Starting in 1972, attempts had been made to introduce some packet switching into its planned replacement, AUTODIN II. AUTODIN II development proved unsatisfactory, however, and in 1982, AUTODIN II was canceled, to be replaced by a combination of several packet-based networks that would connect military installations.
The DCA used "Defense Data Network" (DDN) as the program name for this new network. Under its initial architecture, as developed by the Institute for Defense Analysis, the DDN would consist of two separate instances: the unclassified MILNET, which would be split off the ARPANET; and a classified network, also based on ARPANET technology, which would provide services for WIN, DODIIS, and SACDIN. C/30 packet switches, developed by BBN Technologies as upgraded Interface Message Processors, would provide the network technology. End-to-end encryption would be provided by ARPANET encryption devices, namely the Internet Private Line Interface (IPLI) or Blacker.
After MILNET was split away, the ARPANET would continue to be used as an Internet backbone for researchers, but be slowly phased out. Both networks carried unclassified information, and we
|
https://en.wikipedia.org/wiki/Kelvin%20water%20dropper
|
The Kelvin water dropper, invented by Scottish scientist William Thomson (Lord Kelvin) in 1867, is a type of electrostatic generator. Kelvin referred to the device as his water-dropping condenser. The apparatus is variously called the Kelvin hydroelectric generator, the Kelvin electrostatic generator, or Lord Kelvin's thunderstorm. The device uses falling water to generate voltage differences by electrostatic induction occurring between interconnected, oppositely charged systems. This eventually leads to an electric arc discharging in the form of a spark. It is used in physics education to demonstrate the principles of electrostatics.
Description
A typical setup is shown in Fig. 1. A reservoir of water or other conducting liquid (top, grey) is connected to two hoses that release two falling streams of drops, which land in two buckets or containers (bottom, blue and red). Each stream passes (without touching) through a metal ring or open cylinder which is electrically connected to the opposite receiving container; the left ring (blue) is connected to the right bucket, while the right ring (red) is connected to the left bucket. The containers must be electrically insulated from each other and from electrical ground. Similarly, the rings must be electrically isolated from each other and their environment. It is necessary for the streams to break into separate droplets before reaching the containers. Typically, the containers are made of metal and the rings are connected to them by wires.
The simple construction makes this device popular in physics education as a laboratory experiment for students.
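The charging process relies on positive feedback between the cross-connected buckets, detailed under Principles of operation below. A toy numerical caricature of that feedback is sketched here; the coupling constant, per-drop model, and initial imbalance are all invented for illustration and are not a physical model of any particular apparatus.

# Toy feedback model: each drop picks up induced charge proportional to the
# charge of the opposite bucket (via the ring it falls through), opposite in sign.
k = 0.05                        # induction coupling per drop (arbitrary units)
q_left, q_right = 0.0, 1e-12    # tiny initial imbalance, in coulombs

for drop in range(100):
    q_left  += -k * q_right     # left ring is wired to the right bucket
    q_right += -k * q_left      # right ring is wired to the left bucket

print(q_left, q_right)          # charges grow roughly exponentially, with opposite signs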
Principles of operation
A small initial difference in electric charge between the two buckets, which always exists because the buckets are insulated from each other, is necessary to begin the charging process. Suppose, therefore, that the right bucket has a small positive charge. Now the left ring also has some positive charge because it is connected to the bucke
|
https://en.wikipedia.org/wiki/Radio-controlled%20glider
|
A radio-controlled glider is a type of radio-controlled aircraft that normally does not have any form of propulsion. They are able to sustain continuous flight by exploiting the lift produced by slopes and thermals, controlled remotely from the ground with a transmitter. They can be constructed from a variety of materials, including wood, plastic, polymer foams, and composites, and can vary in wing loading from very light to relatively heavy, depending on their intended use.
International radio-controlled glider competitions are regulated by the Fédération Aéronautique Internationale (FAI) although many countries have their own national classes.
Launching methods
Hand launch
Hand launching is the simplest way to get a model glider into the air. Depending on craft design and the conditions at launch, the pilot or an assistant need only gently 'throw' it into the wind at whatever angle seems best suited, usually between horizontal and 45 degrees above the horizon. In this manner a successful launch is possible with very little effort. This method is usually used when slope soaring, where, with a little experience, it is possible to simply hold the craft above the head at the correct angle and let go.
Towline launch
In this method another person runs along the ground pulling a line with the glider attached to the end, while the pilot steers it. It can be performed on any flat piece of terrain, as the glider is given sufficient altitude during the launch.
A variation of this method uses a pulley with the line staked to the ground and the line passing around it before going to the glider. The tow man runs with the pulley (still running away from the pilot) which doubles his effective speed. A variation of this is used in F3J competition when two tow men run with the pulley to generate much faster launches (although the models have to be sufficiently strong to handle the loads placed upon them by this method) which allows the model to use the energy to "zoom" (the model i
|
https://en.wikipedia.org/wiki/Lacandon%20Jungle
|
The Lacandon Jungle (Spanish: Selva Lacandona) is an area of rainforest which stretches from Chiapas, Mexico, into Guatemala. The heart of this rainforest is located in the Montes Azules Biosphere Reserve in Chiapas near the border with Guatemala in the Montañas del Oriente region of the state. Although much of the jungle outside the reserve has been cleared, the Lacandon is still one of the largest montane rainforests in Mexico. It contains 1,500 tree species, 33% of all Mexican bird species, 25% of all Mexican animal species, 56% of all Mexican diurnal butterflies and 16% of all Mexico's fish species.
The Lacandon in Chiapas is also home to a number of important Mayan archaeological sites including Palenque, Yaxchilan and Bonampak, with numerous smaller sites which remain partially or fully unexcavated. This rainforest, especially the area inside the Biosphere Reserve, is a source of political tension, pitting the EZLN or Zapatistas and their indigenous allies, who want to farm the land, against international environmental groups and the Lacandon Maya, the original indigenous group of the area and the one that holds the title to most of the lands in Montes Azules.
Environment
The jungle covers approximately 1.9 million hectares stretching from southeast Chiapas into northern Guatemala and into the southern Yucatán Peninsula. The Chiapas portion is located on the Montañas del Oriente (Eastern Mountains), centered on a series of canyon-like valleys called the Cañadas, between smaller mountain ridges oriented from northwest to southeast. It is bordered by the Guatemalan border on two sides, with Comitán de Domínguez to the southwest and the city of Palenque to the north. Dividing the Chiapas part of the forest from the Guatemalan side is the Usumacinta River, which is the largest river in Mexico and the seventh largest in the world based on volume of water.
The core of the Chiapas forest is the Montes Azules Biosphere Reserve, but it also includes some other protected areas
|
https://en.wikipedia.org/wiki/Generalized%20expected%20utility
|
Generalized expected utility is a decision-making metric based on any of a variety of theories that attempt to resolve some discrepancies between expected utility theory and empirical observations, concerning choice under risky (probabilistic) circumstances. Given its motivations and approach, generalized expected utility theory may properly be regarded as a subfield of behavioral economics, but it is more frequently located within mainstream economic theory.
The expected utility model developed by John von Neumann and Oskar Morgenstern dominated decision theory from its formulation in 1944 until the late 1970s, not only as a prescriptive, but also as a descriptive model, despite powerful criticism from Maurice Allais and Daniel Ellsberg who showed that, in certain choice problems, decisions were usually inconsistent with the axioms of expected utility theory. These problems are usually referred to as the Allais paradox and Ellsberg paradox.
Beginning in 1979 with the publication of the prospect theory of Daniel Kahneman and Amos Tversky, a range of generalized expected utility models were developed with the aim of resolving the Allais and Ellsberg paradoxes, while maintaining many of the attractive properties of expected utility theory. Important examples were anticipated utility theory, later referred to as rank-dependent utility theory, weighted utility (Chew 1982), and expected uncertain utility theory. A general representation, using the concept of the local utility function, was presented by Mark J. Machina. Since then, generalizations of expected utility theory have proliferated, but probably the most frequently used model nowadays is cumulative prospect theory, a rank-dependent development of prospect theory, introduced in 1992 by Daniel Kahneman and Amos Tversky.
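To make one member of this family concrete, the sketch below compares plain expected utility with a rank-dependent calculation that distorts decumulative (tail) probabilities through a weighting function. The utility function, weighting function, and lottery are all assumed for illustration; this is a generic sketch of the rank-dependent idea, not the formulation of any particular author mentioned above.

def expected_utility(outcomes, probs, u):
    return sum(p * u(x) for x, p in zip(outcomes, probs))

def rank_dependent_utility(outcomes, probs, u, w):
    # Order outcomes from worst to best and weight each utility by the
    # difference of transformed tail probabilities: w(P(X >= x)) - w(P(X > x)).
    pairs = sorted(zip(outcomes, probs))     # worst outcome first
    total, tail = 0.0, 1.0                   # tail = probability of doing at least this well
    for x, p in pairs:
        next_tail = max(tail - p, 0.0)       # guard against floating-point rounding
        total += (w(tail) - w(next_tail)) * u(x)
        tail = next_tail
    return total

u = lambda x: x ** 0.5                       # assumed concave utility
w = lambda q: q ** 0.7                       # assumed probability weighting
outcomes, probs = [0, 100, 2500], [0.1, 0.6, 0.3]
print(expected_utility(outcomes, probs, u))
print(rank_dependent_utility(outcomes, probs, u, w))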
|
https://en.wikipedia.org/wiki/Orthotropic%20material
|
In materials science and solid mechanics, orthotropic materials have material properties at a particular point which differ along three orthogonal axes, where each axis has twofold rotational symmetry. These directional differences in strength can be quantified with Hankinson's equation.
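Hankinson's equation is commonly written as follows, where P is the strength parallel to the grain (or principal axis), Q the strength perpendicular to it, θ the angle between the load and the grain, and n an empirical exponent, typically between about 1.5 and 2:

\sigma_\theta = \frac{P\,Q}{P \sin^{n}\theta + Q \cos^{n}\theta}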
They are a subset of anisotropic materials, because their properties change when measured from different directions.
A familiar example of an orthotropic material is wood. In wood, one can define three mutually perpendicular directions at each point in which the properties are different. It is most stiff (and strong) along the grain (axial direction), because most cellulose fibrils are aligned that way. It is usually least stiff in the radial direction (between the growth rings), and is intermediate in the circumferential direction. This anisotropy was provided by evolution, as it best enables the tree to remain upright.
Because the preferred coordinate system is cylindrical-polar, this type of orthotropy is also called polar orthotropy.
Another example of an orthotropic material is sheet metal formed by squeezing thick sections of metal between heavy rollers. This flattens and stretches its grain structure. As a result, the material becomes anisotropic — its properties differ between the direction it was rolled in and each of the two transverse directions. This method is used to advantage in structural steel beams, and in aluminium aircraft skins.
If orthotropic properties vary between points inside an object, it possesses both orthotropy and inhomogeneity. This suggests that orthotropy is the property of a point within an object rather than for the object as a whole (unless the object is homogeneous). The associated planes of symmetry are also defined for a small region around a point and do not necessarily have to be identical to the planes of symmetry of the whole object.
Orthotropic materials are a subset of anisotropic materials; their properties depend on the directio
|
https://en.wikipedia.org/wiki/Tree%20of%20life%20%28Kabbalah%29
|
The Tree of Life (Hebrew: עֵץ חַיִּים ʿĒṣ Ḥayyīm) is a diagram used in Kabbalah and various other mystical traditions. It is usually referred to as the Kabbalistic tree of life in order to distinguish it from the biblical tree of life and the archetypal tree of life found in many cultures.
Scholars have asserted that the concept of a tree of life with different spheres encompassing aspects of reality traces its origins back to Assyria in the 9th century BC. The Assyrians assigned values and specific numbers to their deities similar to those used by the later Jewish Kabbalah.
The beginnings of the Jewish Kabbalah are traced back by scholars to the Middle Ages, originating in the Bahir and the Zohar. Although the earliest extant Hebrew kabbalistic manuscripts, dating to the late 13th century, contain diagrams, including one labelled "Tree of Wisdom", the now iconic "Tree of Life" emerged over the course of the fourteenth century. Scholars now believe that it should be regarded as primarily indebted to the Porphyrian tree rather than to any speculative ancient sources, Assyrian or otherwise.
The iconic representation first appeared in print on the cover of the Latin translation of Gates of Light in the year 1516. Scholars have traced the origin of the art in the Porta Lucis cover to Johann Reuchlin.
Description
The tree of life usually consists of 10 or 11 nodes symbolizing different archetypes and 22 paths connecting the nodes. The nodes are often arranged into three columns to represent that they belong to a common category.
In the Jewish Kabbalah, the nodes are called sephiroth. They are usually represented as spheres and the paths are usually represented as lines. The nodes usually represent encompassing aspects of existence, God, or the human psyche. The paths usually represent the relationship between the concepts ascribed to the spheres or a symbolic description of the requirements to go from one sphere to another.
The columns are usually symbolized as pil
|
https://en.wikipedia.org/wiki/Bandwidth-delay%20product
|
In data communications, the bandwidth-delay product is the product of a data link's capacity (in bits per second) and its round-trip delay time (in seconds). The result, an amount of data measured in bits (or bytes), is equivalent to the maximum amount of data on the network circuit at any given time, i.e., data that has been transmitted but not yet acknowledged. The bandwidth-delay product was originally proposed as a rule of thumb for sizing router buffers in conjunction with congestion avoidance algorithm random early detection (RED).
A network with a large bandwidth-delay product is commonly known as a long fat network (LFN). As defined in , a network is considered an LFN if its bandwidth-delay product is significantly larger than 10^5 bits (12,500 bytes).
Details
Ultra-high speed local area networks (LANs) may fall into this category, where protocol tuning is critical for achieving peak throughput, on account of their extremely high bandwidth, even though their delay is not great. While a connection at 1 Gbit/s with a round-trip time below 100 μs is not an LFN, a connection at 100 Gbit/s would need an RTT below 1 μs to avoid being considered an LFN.
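The arithmetic is simple enough to show directly: the sketch below multiplies capacity by round-trip time and compares the result with the 10^5-bit threshold mentioned above. The link parameters are illustrative only.

def bandwidth_delay_product(bits_per_second, rtt_seconds):
    # Amount of data "in flight" on the path: capacity times round-trip time.
    return bits_per_second * rtt_seconds

LFN_THRESHOLD_BITS = 1e5   # threshold discussed above

for capacity, rtt in [(1e9, 100e-6), (100e9, 1e-6), (100e6, 0.5)]:
    bdp = bandwidth_delay_product(capacity, rtt)
    print(f"{capacity:.0e} bit/s, RTT {rtt:.0e} s -> BDP {bdp:.0f} bits, "
          f"LFN: {bdp > LFN_THRESHOLD_BITS}")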
An important example of a system where the bandwidth-delay product is large is that of geostationary satellite connections, where end-to-end delivery time is very high and link throughput may also be high. The high end-to-end delivery time makes life difficult for stop-and-wait protocols and applications that assume rapid end-to-end response.
A high bandwidth-delay product is an important problem case in the design of protocols such as Transmission Control Protocol (TCP) in respect of TCP tuning, because the protocol can only achieve optimum throughput if a sender sends a sufficiently large quantity of data before being required to stop and wait until a confirming message is received from the receiver, acknowledging successful receipt of that data. If the quantity of data sent is insufficient compared with the ban
|
https://en.wikipedia.org/wiki/Agreement%20on%20the%20Application%20of%20Sanitary%20and%20Phytosanitary%20Measures
|
The Agreement on the Application of Sanitary and Phytosanitary Measures, also known as the SPS Agreement or just SPS, is an international treaty of the World Trade Organization (WTO). It was negotiated during the Uruguay Round of the General Agreement on Tariffs and Trade (GATT), and entered into force with the establishment of the WTO at the beginning of 1995. Broadly, the sanitary and phytosanitary ("SPS") measures covered by the agreement are those aimed at the protection of human, animal or plant life or health from certain risks.
Under the SPS agreement, the WTO sets constraints on member-states' policies relating to food safety (bacterial contaminants, pesticides, inspection and labelling) as well as animal and plant health (phytosanitation) with respect to imported pests and diseases. There are three standards organizations that set the standards on which WTO members should base their SPS methodologies. As provided for in Article 3, they are the Codex Alimentarius Commission (Codex), the World Organization for Animal Health (OIE) and the Secretariat of the International Plant Protection Convention (IPPC).
The SPS agreement is closely linked to the Agreement on Technical Barriers to Trade, which was signed in the same year and has similar goals. The TBT Agreement emerged from the Tokyo Round of GATT negotiations and was negotiated with the aim of ensuring non-discrimination in the adoption and implementation of technical regulations and standards.
History and framework
As GATT's preliminary focus had been lowering tariffs, the framework that preceded the SPS Agreement was not adequately equipped to deal with the problems of non-tariff barriers (NTBs) to trade, and the need for an independent agreement addressing this became critical. The SPS Agreement is an ambitious attempt to deal with NTBs arising from cross-national differences in technical standards without diminishing governments' prerogative to implement measures to guard against diseases and pests.
Main provisions
Article
|
https://en.wikipedia.org/wiki/Kistler%20Prize
|
The Kistler Prize (1999-2011) was awarded annually to recognize original contributions "to the understanding of the connection between human heredity and human society," and was named after its benefactor, physicist and inventor Walter Kistler. The prize was awarded by the Foundation For the Future and it included a cash award of US$100,000 and a 200-gram gold medallion.
Recipients
The recipients have been:
2000 – Edward O. Wilson
2001 – Richard Dawkins
2002 – Luigi Luca Cavalli-Sforza
2003 – Arthur Jensen
2004 – Vincent Sarich
2005 – Thomas J. Bouchard
2006 – Doreen Kimura
2007 – Spencer Wells
2008 – Craig Venter
2009 – Svante Pääbo
2010 – Leroy Hood
2011 – Charles Murray
Walter P. Kistler Book Award
The Walter P. Kistler Book Award was established in 2003 to recognize authors of science books that "significantly increase the knowledge and understanding of the public regarding subjects that will shape the future of our species." The award includes a cash prize of US$10,000 and is formally presented in ceremonies that are open to the public.
The recipients have been:
2003 – Gregory Stock for Redesigning Humans: Our Inevitable Genetic Future
2004 – Spencer Wells for The Journey of Man: A Genetic Odyssey
2005 – Steven Pinker for The Blank Slate
2006 – William H. Calvin for A Brain for All Seasons: Human Evolution and Abrupt Climate Change
2007 – Eric Chaisson for Epic of Evolution: Seven Ages of the Cosmos
2008 – Christopher Stringer for Homo britannicus: The Incredible Story of Human Life in Britain
2009 – David Archer (scientist) for The Long Thaw: How Humans are Changing the Next 100,000 Years of Earth's Climate
2011 – Laurence C. Smith for The World in 2050: Four Forces Shaping Civilization's Northern Future
Foundation For the Future
The mission of the Foundation For the Future is to increase and diffuse knowledge concerning the long-term future of humanity. It conducts a broad range of programs and activities to promote an understanding of the factors that
|
https://en.wikipedia.org/wiki/Metalaxyl
|
Metalaxyl is an acylalanine fungicide with systemic function. Its chemical name is methyl N-(methoxyacetyl)-N-(2,6-xylyl)-DL-alaninate. It can be used to control Pythium in a number of vegetable crops, and Phytophthora in peas. Metalaxyl-M is the ISO common name and Ridomil Gold is the trade name for the optically pure (-) / D / R active stereoisomer, which is also known as mefenoxam.
It is the active ingredient in the seed treatment agent Apron XL LS.
The fungicide has suffered severe resistance problems. It was marketed for use against Phytophthora infestans; however, in the summer of 1980, in the Republic of Ireland, the potato crop was devastated by a potato blight epidemic after a resistant race of the oomycete appeared. Irish farmers later successfully sued the company for their losses.
Maximum pesticide residue limits for the EU/UK are set at 0.5 mg/kg for oranges and 1.0 mg/kg for apples. As early as 1998 Pythium was known to be widely developing resistance to metalaxyl which was the most effective control at the time. Various Pythium populations have been known to have resistance to mefenoxam since the 1980s and metalaxyl since 1984. There is wide variability in resistance/sensitivity between Pythium species, with some populations showing complete ineffectiveness.
|
https://en.wikipedia.org/wiki/Internship%20%28medicine%29
|
A medical (or surgical) intern is a physician in training who has completed medical school and has a medical degree, but does not yet have a license to practice medicine unsupervised. Medical education generally ends with a period of practical training similar to internship, but the way the overall program of academic and practical medical training is structured differs depending upon the country, as does the terminology used (see medical education and medical school for further details).
Australia
In Australia, medical graduates must complete one year in an accredited hospital post before they receive full registration. This year of conditional registration is called the intern year. An internship is not necessarily completed in a hospital at the same state as the graduate's medical school.
Brazil
In Brazil, medical school consists of six years or twelve semesters. The final two years (or one and a half years, depending on the University in question) are the internship. During this time, students work extensive hospital hours and do basic hospital work while supervised by residents and staff. This period is usually divided among internal medicine, surgery, gynecology and obstetrics, pediatrics, emergency medicine, family medicine, and a final elective period in which the student chooses an area for further experience. On conclusion of the internship, the student becomes a doctor and may work unsupervised or enter a residency program to gain a specialty.
Chile
After high school, a medical education in Chile takes seven years—five years as a medical student and two years as an intern, earning the degree of Médico Cirujano (equivalent to general practitioner in the US). Internships minimally include the four basic specialties (internal medicine, general surgery, gynecology and obstetrics, and pediatrics). After completing the internship, the new physician may work in primary care, hospitals, or apply to residencies for a specialty.
DR Congo
DR Congo has a two-yea
|
https://en.wikipedia.org/wiki/Chlorothalonil
|
Chlorothalonil (2,4,5,6-tetrachloroisophthalonitrile) is an organic compound mainly used as a broad spectrum, nonsystemic fungicide, with other uses as a wood protectant, pesticide, acaricide, and to control mold, mildew, bacteria, algae. Chlorothalonil-containing products are sold under the names Bravo, Echo, and Daconil. It was first registered for use in the US in 1966. In 1997, the most recent year for which data are available, it was the third most used fungicide in the US, behind only sulfur and copper, with used in agriculture that year. Including nonagricultural uses, the United States Environmental Protection Agency (EPA) estimates, on average, almost were used annually from 1990 to 1996.
Uses
In the US, chlorothalonil is used predominantly on peanuts (about 34% of usage), potatoes (about 12%), and tomatoes (about 7%), although the EPA recognizes its use on many other crops. It is also used on golf courses and lawns (about 10%) and as a preservative additive in some paints (about 13%), resins, emulsions, and coatings.
Chlorothalonil is commercially available in many different formulations and delivery methods. It is applied as a dust, dry or water-soluble grains, a wettable powder, a liquid spray, a fog, and a dip. It may be applied by hand, by ground sprayer, or by aircraft.
Mechanism of action
Chlorothalonil reacts with glutathione, giving a glutathione adduct with elimination of HCl. Its mechanism of action is similar to that of trichloromethyl sulfenyl fungicides such as captan and folpet.
Toxicity
Acute
According to the EPA, chlorothalonil is a toxicity category I eye irritant, producing severe eye irritation. It is in toxicity category II, "moderately toxic", if inhaled (inhalation LC50 of 0.094 mg/L in rats). For skin contact and ingestion, chlorothalonil is rated toxicity category IV, "practically nontoxic", meaning the oral and dermal LD50 values are greater than 10,000 mg/kg.
Chronic
Long-term exposure to chlorothalonil resulted in kidney damage and tumors in
|
https://en.wikipedia.org/wiki/Iprodione
|
Iprodione is a hydantoin fungicide and nematicide.
Application
Iprodione is used on crops affected by Botrytis bunch rot, Brown rot, Sclerotinia and other fungal diseases in plants. It is currently applied in a variety of crops: fruit, vegetables, ornamental trees and shrubs and on lawns. It is a contact fungicide that inhibits the germination of fungal spores and it blocks the growth of the fungal mycelium.
It has been marketed under the brand name "Rovral" and "Chipco green" (both brands of Bayer CropScience). This chemical was developed originally by Rhône-Poulenc Agrochimie (later Aventis CropScience and in 2002 acquired by Bayer). As of 2004 there were no composition patents on iprodione.
DevGen NV (Now part of Syngenta) discovered that iprodione kills nematodes and filed for patent protection for those uses.
Iprodione was approved in the Turkish market under the brand name Devguard for use on tomatoes and cucumbers in 2009, and was approved in the US as Enclosure for use in commercial peanut production in May 2010.
Iprodione was approved in Europe in 2010, but approval was not renewed in 2017.
|
https://en.wikipedia.org/wiki/Kundt%27s%20tube
|
Kundt's tube is an experimental acoustical apparatus invented in 1866 by German physicist August Kundt for the measurement of the speed of sound in a gas or a solid rod. The experiment is still taught today due to its ability to demonstrate longitudinal waves in a gas (which can often be difficult to visualise). It is used today only for demonstrating standing waves and acoustical forces.
How it works
The tube is a transparent horizontal pipe which contains a small amount of a fine powder such as cork dust, talc or lycopodium. At one end of the tube is a source of sound at a single frequency (a pure tone). Kundt used a metal rod resonator that he caused to vibrate or 'ring' by rubbing it, but modern demonstrations usually use a loudspeaker attached to a signal generator producing a sine wave. The other end of the tube is blocked by a movable piston which can be used to adjust the length of the tube.
The sound generator is turned on and the piston is adjusted until the sound from the tube suddenly gets much louder. This indicates that the tube is at resonance. This means the length of the round-trip path of the sound waves, from one end of the tube to the other and back again, is a multiple of the wavelength λ of the sound waves. Therefore, the length of the tube is a multiple of half a wavelength. At this point, the sound waves in the tube are in the form of standing waves, and the amplitude of vibrations of air is zero at equally spaced intervals along the tube, called the nodes. The powder is caught up in the moving air and settles in little piles or lines at these nodes, because the air is still and quiet there. The distance between the piles is one half wavelength λ/2 of the sound. By measuring the distance between the piles, the wavelength λ of the sound in air can be found. If the frequency f of the sound is known, multiplying it by the wavelength gives the speed of sound c in the air: c = fλ.
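A small worked example of that relation, with illustrative numbers rather than historical measurements:

def speed_of_sound(frequency_hz, pile_spacing_m):
    # Piles of powder sit half a wavelength apart, so lambda = 2 * spacing and c = f * lambda.
    wavelength = 2.0 * pile_spacing_m
    return frequency_hz * wavelength

# Illustrative reading: a 1 kHz tone with piles measured 17.2 cm apart.
print(speed_of_sound(1000.0, 0.172))   # -> 344.0 m/s, close to room-temperature air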
The detailed motion of the powder is actually due to an
|
https://en.wikipedia.org/wiki/Pore%20water%20pressure
|
Pore water pressure (sometimes abbreviated to pwp) refers to the pressure of groundwater held within a soil or rock, in gaps between particles (pores). Pore water pressures below the phreatic level of the groundwater are measured with piezometers. The vertical pore water pressure distribution in aquifers can generally be assumed to be close to hydrostatic.
In the unsaturated ("vadose") zone, the pore pressure is determined by capillarity and is also referred to as tension, suction, or matric pressure. Pore water pressures under unsaturated conditions are measured with tensiometers, which operate by allowing the pore water to come into equilibrium with a reference pressure indicator through a permeable ceramic cup placed in contact with the soil.
Pore water pressure is vital in calculating the stress state in ground soil mechanics, via Terzaghi's expression for the effective stress of the soil.
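A minimal sketch of those two expressions together, assuming hydrostatic conditions below the water table; the depth, total stress, and unit weight below are illustrative values, not data from the article.

GAMMA_W = 9.81   # unit weight of water, kN/m^3

def hydrostatic_pore_pressure(depth_below_water_table_m):
    # u = gamma_w * z, the hydrostatic assumption for aquifers noted above (kPa).
    return GAMMA_W * depth_below_water_table_m

def effective_vertical_stress(total_stress_kpa, pore_pressure_kpa):
    # Terzaghi's expression: sigma' = sigma - u.
    return total_stress_kpa - pore_pressure_kpa

# Illustrative point 5 m below the water table under 90 kPa of total vertical stress.
u = hydrostatic_pore_pressure(5.0)             # about 49 kPa
print(u, effective_vertical_stress(90.0, u))   # about 49.05 and 40.95 kPa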
General principles
Pressure develops due to:
Water elevation difference: water flowing from a higher elevation to a lower elevation causes a velocity head, as exemplified in Bernoulli's energy equations.
Hydrostatic water pressure: resulting from the weight of material above the point measured.
Osmotic pressure: inhomogeneous aggregation of ion concentrations, which causes a force in water particles as they attract by the molecular laws of attraction.
Absorption pressure: attraction of surrounding soil particles to one another by adsorbed water films.
Matric suction: the defining trait of unsaturated soil, this term corresponds to the pressure dry soil exerts on the surrounding material to equalise the moisture content in the overall block of soil and is defined as the difference between pore air pressure, u_a, and pore water pressure, u_w.
Below the water table
The buoyancy effects of water have a large impact on certain soil properties, such as the effective stress present at any point in a soil medium. Consider an arbitrary point five meters be
|
https://en.wikipedia.org/wiki/Effective%20stress
|
The effective stress can be defined as the stress, depending on the applied tension σ and pore pressure u, which controls the strain or strength behaviour of soil and rock (or a generic porous body) for whatever pore pressure value or, in other terms, the stress which, applied to a dry porous body (i.e. at u = 0), provides the same strain or strength behaviour which is observed at u ≠ 0. In the case of granular media it can be viewed as a force that keeps a collection of particles rigid. Usually this applies to sand, soil, or gravel, as well as every kind of rock and several other porous materials such as concrete, metal powders, biological tissues etc. The usefulness of an appropriate effective stress principle (ESP) formulation consists in allowing one to assess the behaviour of a porous body for whatever pore pressure value on the basis of experiments involving dry samples (i.e. carried out at zero pore pressure).
History
Karl von Terzaghi first proposed the relationship for effective stress in 1925. For him, the term "effective" meant the calculated stress that was effective in moving soil, or causing displacements. It has often been interpreted as the average stress carried by the soil skeleton. Afterwards, different formulations were proposed for the effective stress. Maurice Biot fully developed the three-dimensional soil consolidation theory, extending the one-dimensional model previously developed by Terzaghi to more general hypotheses and introducing the set of basic equations of poroelasticity. Alec Skempton, in his work in 1960, carried out an extensive review of available formulations and experimental data in the literature about effective stress valid in soil, concrete and rock, in order to reject some of these expressions, as well as to clarify which expression was appropriate according to several working hypotheses, such as stress–strain or strength behaviour, saturated or nonsaturated media, rock/concrete or soil behaviour, etc.
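Terzaghi's relationship, in its simplest form, reads

\sigma' = \sigma - u

where σ′ is the effective stress, σ the total stress, and u the pore pressure.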
Description
Effective stress (σ') acting on a soil is calcula
|
https://en.wikipedia.org/wiki/4Pi%20microscope
|
A 4Pi microscope is a laser scanning fluorescence microscope with an improved axial resolution. With it the typical range of the axial resolution of 500–700 nm can be improved to 100–150 nm, which corresponds to an almost spherical focal spot with 5–7 times less volume than that of standard confocal microscopy.
Working principle
The improvement in resolution is achieved by using two opposing objective lenses, which both are focused to the same geometrical location. Also the difference in optical path length through each of the two objective lenses is carefully aligned to be minimal. By this method, molecules residing in the common focal area of both objectives can be illuminated coherently from both sides and the reflected or emitted light can also be collected coherently, i.e. coherent superposition of emitted light on the detector is possible. The solid angle that is used for illumination and detection is increased and approaches its maximum. In this case the sample is illuminated and detected from all sides simultaneously.
The operation mode of a 4Pi microscope is shown in the figure. The laser light is divided by a beam splitter and directed by mirrors towards the two opposing objective lenses. At the common focal point superposition of both focused light beams occurs. Excited molecules at this position emit fluorescence light, which is collected by both objective lenses, combined by the same beam splitter and deflected by a dichroic mirror onto a detector. There superposition of both emitted light pathways can take place again.
In the ideal case each objective lens can collect light from a solid angle of 2π. With two objective lenses one can collect from every direction (solid angle 4π). The name of this type of microscopy is derived from the maximal possible solid angle for excitation and detection. Practically, one can achieve only aperture angles of about 140° for an objective lens, which corresponds to a solid angle of about 1.3π.
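The relation between aperture angle and collected solid angle can be checked with the standard spherical-cap formula Ω = 2π(1 − cos(θ/2)) for a cone with full opening angle θ; the short sketch below is an illustration added here, not part of the original text.

```python
import math

def collection_solid_angle(full_aperture_angle_deg: float) -> float:
    """Solid angle (steradians) of a cone with the given full opening angle."""
    half_angle = math.radians(full_aperture_angle_deg) / 2
    return 2 * math.pi * (1 - math.cos(half_angle))

ideal_single = collection_solid_angle(180.0)   # full hemisphere: 2*pi
practical = collection_solid_angle(140.0)      # ~1.3*pi for a 140-degree aperture
print(f"ideal single objective: {ideal_single / math.pi:.2f} pi sr")
print(f"140 deg objective:      {practical / math.pi:.2f} pi sr")
```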
The microscope can be operated in three different w
|
https://en.wikipedia.org/wiki/Expense%20ratio
|
The expense ratio of a stock or asset fund is the total percentage of fund assets used for administrative, management, advertising (12b-1), and all other expenses. An expense ratio of 1% per annum means that each year 1% of the fund's total assets will be used to cover expenses. The expense ratio does not include sales loads or brokerage commissions.
Expense ratios are important to consider when choosing a fund, as they can significantly affect returns. Factors influencing the expense ratio include the size of the fund (small funds often have higher ratios as they spread expenses among a smaller number of investors), sales charges, and the management style of the fund. A typical annual expense ratio for a U.S. domestic stock fund is about 1%, although some passively managed funds (such as index funds) have significantly lower ratios.
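To see why the ratio matters, it helps to compound a hypothetical gross return net of expenses over a long holding period. The sketch below uses made-up figures (a 7% gross return, 0.1% versus 1% expense ratios, 30 years) and the common approximation that the net annual return is the gross return minus the expense ratio.

```python
# Compounding a hypothetical gross return net of an annual expense ratio.
# Approximation: net annual return ~= gross return - expense ratio.
def final_value(initial: float, gross_return: float, expense_ratio: float, years: int) -> float:
    value = initial
    for _ in range(years):
        value *= 1 + gross_return - expense_ratio
    return value

start, gross, years = 10_000.0, 0.07, 30
for er in (0.001, 0.01):   # 0.1% vs 1.0% expense ratio (illustrative)
    print(f"expense ratio {er:.1%}: final value {final_value(start, gross, er, years):,.0f}")
# roughly 74,000 for the 0.1% fund versus 57,000 for the 1% fund
```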
One notable component of the expense ratio of U.S. funds is the "12b-1 fee", which represents expenses used for advertising and promotion of the fund. 12b-1 fees are generally limited to a maximum of 1.00% per year (0.75% distribution and 0.25% shareholder servicing) under Financial Industry Regulatory Authority rules.
The term "expense ratio" is also a key measure of performance for a nonprofit organization. The term is sometimes used in other contexts as well.
Waivers, reimbursements and recoupments
Some funds will execute "waiver or reimbursement agreements" with the fund's adviser or other service providers, especially when a fund is new and expenses tend to be higher (due to a small asset base). These agreements generally reduce expenses to some pre-determined level or by some pre-determined amount. Sometimes, these waiver/reimbursement amounts must be repaid by the fund during a period that generally cannot exceed 3 years from the year in which the original expense was incurred. If a recoupment plan is in effect, the effect may be to require future shareholders to absorb expenses of the fund incurred during prior years.
|
https://en.wikipedia.org/wiki/Mercury%20%28cipher%20machine%29
|
Mercury was a British cipher machine used by the Air Ministry from 1950 until at least the early 1960s. Mercury was an online rotor machine descended from Typex, but modified to achieve a longer cycle length using a so-called double-drum basket system.
History
Mercury was designed by Wing Commander E. W. Smith and F. Rudd, who were awarded £2,250 and £750 respectively in 1960 for their work in the design of the machine. E. W. Smith, one of the developers of TypeX, had designed the double-drum basket system in 1943, on his own initiative, to fulfil the need for an on-line system.
Mercury prototypes were operational by 1948, and the machine was in use by 1950. Over 200 Mercury machines had been made by 1959, with over £250,000 spent on their production. Mercury links were installed between the UK and various overseas stations, including Canada, Australia, Singapore, Cyprus, Germany, France, the Middle East, Washington, Nairobi and Colombo. The machine was used for UK diplomatic messaging for more or less a decade, but saw almost no military use.
In 1960, it was anticipated that the machine would remain in use until 1963, when it would be made obsolete by the arrival of BID 610 (Alvis) equipment.
A miniaturised version of Mercury was designed, named Ariel, but this machine appears not to have been adopted for operational use.
Design
In the Mercury system, two series of rotors were used. The first series, dubbed the control maze, had four rotors, and stepped cyclometrically as in Typex. Five outputs from the control maze were used to determine the stepping of five rotors in the second series of rotors, the message maze, the latter used to encrypt and decrypt the plaintext and ciphertext. A sixth rotor in the message maze was controlled independently and stepped in the opposite direction to the others. All ten rotors were interchangeable in any part of either maze. Using rotors to control the stepping of other rotors was a feature of an earlier cipher machine, the US
|
https://en.wikipedia.org/wiki/Polysorbate%2080
|
Polysorbate 80 is a nonionic surfactant and emulsifier often used in pharmaceuticals, foods, and cosmetics. This synthetic compound is a viscous, water-soluble yellow liquid.
Chemistry
Polysorbate 80 is derived from polyethoxylated sorbitan and oleic acid. The hydrophilic groups in this compound are polyethers also known as polyoxyethylene groups, which are polymers of ethylene oxide. In the nomenclature of polysorbates, the numeric designation following polysorbate refers to the lipophilic group, in this case, the oleic acid (see polysorbate for more detail).
The full chemical names for polysorbate 80 are:
Polyoxyethylene (80) sorbitan monooleate
(x)-sorbitan mono-9-octadecenoate poly(oxy-1,2-ethanediyl)
The critical micelle concentration of polysorbate 80 in pure water is reported as 0.012 mM.
Other names
E number: E433
Brand names:
Kolliphor PS 80 – Kolliphor is a registered trademark of BASF
Alkest TW 80
Scattics
Canarcel
Poegasorb 80
Montanox 80 – Montanox is a registered trademark of Seppic
Tween 80 – Tween is a registered trademark of Croda Americas, Inc.
Kotilen-80 – Kotilen is a registered trademark of Kolb AG
Uses
Food
Polysorbate 80 is used as an emulsifier in foods, though research suggests it may "profoundly impact intestinal microbiota in a manner that promotes gut inflammation and associated disease states."
For example, in ice cream, polysorbate is added up to 0.5% (v/v) concentration to make the ice cream smoother and easier to handle, as well as increasing its resistance to melting. Adding this substance prevents milk proteins from completely coating the fat droplets. This allows them to join in chains and nets, which hold air in the mixture, and provide a firmer texture that holds its shape as the ice cream melts.
Health and beauty
Polysorbate 80 is also used as a surfactant in soaps and cosmetics (including eyedrops), or a solubilizer, such as in a mouthwash. The cosmetic grade of polysorbate 80 may have more impurities than the food gra
|
https://en.wikipedia.org/wiki/Scroll%20%28art%29
|
The scroll in art is an element of ornament and graphic design featuring spirals and rolling incomplete circle motifs, some of which resemble the edge-on view of a book or document in scroll form, though many types are plant-scrolls, which loosely represent plant forms such as vines, with leaves or flowers attached. Scrollwork is a term for some forms of decoration dominated by spiralling scrolls, today used in popular language for two-dimensional decorative flourishes and arabesques of all kinds, especially those with circular or spiralling shapes.
Scroll decoration has been used for the decoration of a vast range of objects, in all Eurasian cultures, and most beyond. A lengthy evolution over the last two millennia has taken forms of plant-based scroll decoration from Greco-Roman architecture to Chinese pottery, and then back across Eurasia to Europe. They are very widespread in architectural decoration, woodcarving, painted ceramics, mosaic, and illuminated manuscripts (mostly for borders).
In the usual artistic convention, scrolls "apparently do not succumb to gravitational forces, as garlands and festoons do, or oppose them, in the manner of vertically growing trees. This gives scrolls a relentless power. Even if attached to walls, they are more deeply embedded in the architectural order than the festoon, which are fictitiously hanging on them."
Terminology
Typically in true scrolls the main "stem" lines do not cross over each other, or not significantly. When crossing stems become a dominant feature in the design, terms such as interlace or arabesque are used instead. Many scrolls run along a relatively narrow band, such as a frieze panel or the border of a carpet or piece of textile or ceramics, and so are often called "running scrolls", while others spread to cover wide areas, and are often infinitely expandible. Similar motifs made up of straight lines and right angles, such as the "Greek key", are more often called meanders.
In art history, a "flo
|
https://en.wikipedia.org/wiki/Scaffold%20protein
|
In biology, scaffold proteins are crucial regulators of many key signalling pathways. Although scaffolds are not strictly defined in function, they are known to interact and/or bind with multiple members of a signalling pathway, tethering them into complexes. In such pathways, they regulate signal transduction and help localize pathway components (organized in complexes) to specific areas of the cell such as the plasma membrane, the cytoplasm, the nucleus, the Golgi, endosomes, and the mitochondria.
History
The first signaling scaffold protein discovered was the Ste5 protein from the yeast Saccharomyces cerevisiae. Three distinct domains of Ste5 were shown to associate with the protein kinases Ste11, Ste7, and Fus3 to form a multikinase complex.
Function
Scaffold proteins act in at least four ways: tethering signaling components, localizing these components to specific areas of the cell, regulating signal transduction by coordinating positive and negative feedback signals, and insulating correct signaling proteins from competing proteins.
Tethering signaling components
This particular function is considered a scaffold's most basic function. Scaffolds assemble signaling components of a cascade into complexes. This assembly may be able to enhance signaling specificity by preventing unnecessary interactions between signaling proteins, and enhance signaling efficiency by increasing the proximity and effective concentration of components in the scaffold complex. A common example of how scaffolds enhance specificity is a scaffold that binds a protein kinase and its substrate, thereby ensuring specific kinase phosphorylation. Additionally, some signaling proteins require multiple interactions for activation and scaffold tethering may be able to convert these interactions into one interaction that results in multiple modifications. Scaffolds may also be catalytic as interaction with signaling proteins may result in allosteric changes of these signaling component
|
https://en.wikipedia.org/wiki/Scalar%20%28mathematics%29
|
A scalar is an element of a field which is used to define a vector space.
In linear algebra, real numbers or generally elements of a field are called scalars and relate to vectors in an associated vector space through the operation of scalar multiplication (defined in the vector space), in which a vector can be multiplied by a scalar in the defined way to produce another vector. Generally speaking, a vector space may be defined by using any field instead of real numbers (such as complex numbers). Then scalars of that vector space will be elements of the associated field (such as complex numbers).
A scalar product operation – not to be confused with scalar multiplication – may be defined on a vector space, allowing two vectors to be multiplied in the defined way to produce a scalar. A vector space equipped with a scalar product is called an inner product space.
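A small sketch of the two operations just described, over the complex numbers: multiplying a vector by a scalar returns another vector, while an inner (scalar) product of two vectors returns a scalar. The helper names are illustrative, not standard library functions.

```python
# Scalar multiplication returns a vector; an inner (scalar) product returns a scalar.
def scalar_multiply(c, v):
    return [c * x for x in v]

def inner_product(u, v):
    # Standard complex inner product: sum of u_i * conjugate(v_i).
    return sum(a * b.conjugate() for a, b in zip(u, v))

v = [1 + 0j, 2 + 1j]
print(scalar_multiply(2j, v))   # still a vector: [2j, (-2+4j)]
print(inner_product(v, v))      # a scalar: the squared length, 6 (+0j)
```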
A quantity described by multiple scalars, such as having both direction and magnitude, is called a vector.
The term scalar is also sometimes used informally to mean a vector, matrix, tensor, or other (usually "compound") value that is actually reduced to a single component. Thus, for example, the product of a 1 × n matrix and an n × 1 matrix, which is formally a 1 × 1 matrix, is often said to be a scalar.
The real component of a quaternion is also called its scalar part.
The term scalar matrix is used to denote a matrix of the form kI where k is a scalar and I is the identity matrix.
Etymology
The word scalar derives from the Latin word scalaris, an adjectival form of scala (Latin for "ladder"), from which the English word scale also comes. The first recorded usage of the word "scalar" in mathematics occurs in François Viète's Analytic Art (In artem analyticem isagoge) (1591):
Magnitudes that ascend or descend proportionally in keeping with their nature from one kind to another may be called scalar terms.
(Latin: Magnitudines quae ex genere ad genus sua vi proportionaliter adscendunt vel descendunt, voce
|
https://en.wikipedia.org/wiki/NUbuntu
|
nUbuntu or Network Ubuntu was a project to take the existing Ubuntu operating system LiveCD and Full Installer and remaster it with tools needed for penetration testing servers and networks. The main idea was to keep Ubuntu's ease of use and combine it with popular penetration testing tools. Besides its use for network and server testing, nUbuntu was also intended to serve as a desktop distribution for advanced Linux users.
Contents
nUbuntu uses the light window manager Fluxbox.
It includes some of the most used security programs for Linux, such as Wireshark, nmap, dSniff, and Ettercap.
History
2005-12-18 - nUbuntu Project is born, developers release Testing 1
2006-01-16 - nUbuntu Live developers release Stable 1
2006-06-26 - nUbuntu Live developers release version 6.06
2006-10-16 - nUbuntu featured in Hacker Japan, a Japanese hacker magazine
2006-11-21 - nUbuntu Live developers release version 6.10
As of April 4, 2010, the official website is closed with no explanation.
Releases
Below is a list of previous and current releases.
Further reading
Russ McRee (Nov 2007) Security testing with nUbuntu, Linux Magazine, issue 84
|
https://en.wikipedia.org/wiki/Prix%20Wilder-Penfield
|
The Prix Wilder-Penfield is an award by the government of Quebec that is part of the Prix du Québec, which "goes to scientists whose research aims fall within the field of biomedicine. These fields include the medical sciences, the natural sciences, and engineering". It is named in honour of Wilder Penfield.
Winners
See also
List of biochemistry awards
List of medicine awards
List of prizes named after people
|
https://en.wikipedia.org/wiki/Scalar%20%28physics%29
|
In physics, scalars (or scalar quantities) are physical quantities that are unaffected by changes to a vector space basis (i.e., a coordinate system transformation). Scalars are often accompanied by units of measurement, as in "10 cm".
Examples of scalar quantities are mass, distance, charge, volume, time, speed, and the magnitude of physical vectors in general (such as velocity).
A change of a vector space basis changes the description of a vector in terms of the basis used but does not change the vector itself, while a scalar has nothing to do with this change. In classical physics, like Newtonian mechanics, rotations and reflections preserve scalars, while in relativity, Lorentz transformations or space-time translations preserve scalars. The term "scalar" has its origin in the multiplication of vectors by a unitless scalar, which is a uniform scaling transformation.
Relationship with the mathematical concept
A scalar in physics is also a scalar in mathematics, as an element of a mathematical field used to define a vector space. For example, the magnitude (or length) of an electric field vector is calculated as the square root of its absolute square (the inner product of the electric field with itself); so, the inner product's result is an element of the mathematical field for the vector space in which the electric field is described. As the vector space in this example and usual cases in physics is defined over the mathematical field of real numbers or complex numbers, the magnitude is also an element of the field, so it is mathematically a scalar. Since the inner product is independent of any vector space basis, the electric field magnitude is also physically a scalar.
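A quick numerical check of this point: rotating the coordinate basis changes a vector's components but leaves its magnitude, a scalar, unchanged. The example below is illustrative and is not taken from the article.

```python
import math

def rotate(vec, angle_rad):
    """Express a 2-D vector in a basis rotated by angle_rad (components change)."""
    x, y = vec
    c, s = math.cos(angle_rad), math.sin(angle_rad)
    return (c * x + s * y, -s * x + c * y)

def magnitude(vec):
    return math.sqrt(sum(c * c for c in vec))

e_field = (3.0, 4.0)                          # components in some chosen basis
rotated = rotate(e_field, math.radians(30))   # different components, same vector
print(magnitude(e_field), magnitude(rotated)) # both 5.0, up to floating-point rounding
```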
The mass of an object is unaffected by a change of vector space basis so it is also a physical scalar, described by a real number as an element of the real number field. Since a field is a vector space with addition defined based on vector addition and multiplication defined as scalar multiplicat
|
https://en.wikipedia.org/wiki/Cellular%20component
|
Cellular components are the complex biomolecules and structures of which cells, and thus living organisms, are composed. Cells are the structural and functional units of life. The smallest organisms are single cells, while the largest organisms are assemblages of trillions of cells. DNA, a double-stranded macromolecule that carries the hereditary information of the cell, is found in all living cells; each cell carries one or more chromosomes with a distinctive DNA sequence.
Examples include macromolecules such as proteins and nucleic acids, biomolecular complexes such as a ribosome, and structures such as membranes, and organelles. While the majority of cellular components are located within the cell itself, some may exist in extracellular areas of an organism.
Cellular components may also be called biological matter or biological material. Most biological matter has the characteristics of soft matter, being governed by relatively small energies. All known life is made of biological matter. To be differentiated from other theoretical or fictional life forms, such life may be called carbon-based, cellular, organic, biological, or even simply living – as some definitions of life exclude hypothetical types of biochemistry.
See also
Cell (biology)
Cell biology
Biomolecule
Organelle
Tissue (biology)
External links
https://web.archive.org/web/20130918033010/http://bioserv.fiu.edu/~walterm/FallSpring/review1_fall05_chap_cell3.htm
|
https://en.wikipedia.org/wiki/Hardness
|
In materials science, hardness (antonym: softness) is a measure of the resistance to plastic deformation, such as an indentation (over an area) or a scratch (linear), induced mechanically either by pressing or abrasion. In general, different materials differ in their hardness; for example, hard metals such as titanium and beryllium are harder than soft metals such as sodium and metallic tin, or than wood and common plastics. Macroscopic hardness is generally characterized by strong intermolecular bonds, but the behavior of solid materials under force is complex; therefore, hardness can be measured in different ways, such as scratch hardness, indentation hardness, and rebound hardness. Hardness is dependent on ductility, elastic stiffness, plasticity, strain, strength, toughness, viscoelasticity, and viscosity. Common examples of hard matter are ceramics, concrete, certain metals, and superhard materials, which can be contrasted with soft matter.
Measures
There are three main types of hardness measurements: scratch, indentation, and rebound. Within each of these classes of measurement there are individual measurement scales. For practical reasons conversion tables are used to convert between one scale and another.
Scratch hardness
Scratch hardness is the measure of how resistant a sample is to fracture or permanent plastic deformation due to friction from a sharp object. The principle is that an object made of a harder material will scratch an object made of a softer material. When testing coatings, scratch hardness refers to the force necessary to cut through the film to the substrate. The most common test is the Mohs scale, which is used in mineralogy. One tool to make this measurement is the sclerometer.
Another tool used to make these tests is the pocket hardness tester. This tool consists of a scale arm with graduated markings attached to a four-wheeled carriage. A scratch tool with a sharp rim is mounted at a predetermined angle to the testing surface. In order to
|
https://en.wikipedia.org/wiki/Christofides%20algorithm
|
The Christofides algorithm or Christofides–Serdyukov algorithm is an algorithm for finding approximate solutions to the travelling salesman problem, on instances where the distances form a metric space (they are symmetric and obey the triangle inequality).
It is an approximation algorithm that guarantees that its solutions will be within a factor of 3/2 of the optimal solution length, and is named after Nicos Christofides and Anatoliy I. Serdyukov, who discovered it independently in 1976.
This algorithm still stands as the best polynomial time approximation algorithm that has been thoroughly peer-reviewed by the relevant scientific community for the traveling salesman problem on general metric spaces. In July 2020, however, Karlin, Klein, and Gharan released a preprint in which they introduced a novel approximation algorithm and claimed that its approximation ratio is 1.5 − 10⁻³⁶. Their method follows similar principles to Christofides' algorithm, but uses a randomly chosen tree from a carefully chosen random distribution in place of the minimum spanning tree. The paper was published at STOC'21 where it received a best paper award.
Algorithm
Let G = (V, w) be an instance of the travelling salesman problem. That is, G is a complete graph on the set V of vertices, and the function w assigns a nonnegative real weight to every edge of G.
According to the triangle inequality, for every three vertices u, v, and x, it should be the case that w(uv) + w(vx) ≥ w(ux).
Then the algorithm can be described in pseudocode as follows.
Create a minimum spanning tree T of G.
Let O be the set of vertices with odd degree in T. By the handshaking lemma, O has an even number of vertices.
Find a minimum-weight perfect matching M in the induced subgraph given by the vertices from O.
Combine the edges of M and T to form a connected multigraph H in which each vertex has even degree.
Form an Eulerian circuit in H.
Make the circuit found in the previous step into a Hamiltonian circuit by skipping repeated vertices (shortcutting).
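These six steps map directly onto standard graph-library primitives. The following is a minimal sketch using the networkx package (assuming a reasonably recent version); the helper name and the toy distance instance are illustrative only and are not part of the article.

```python
import networkx as nx

def christofides_tour(G, weight="weight"):
    # 1. Minimum spanning tree T of G.
    T = nx.minimum_spanning_tree(G, weight=weight)
    # 2. O: vertices with odd degree in T (an even count, by the handshaking lemma).
    odd = [v for v, deg in T.degree() if deg % 2 == 1]
    # 3. Minimum-weight perfect matching M on the subgraph induced by the odd vertices.
    M = nx.min_weight_matching(G.subgraph(odd), weight=weight)
    # 4. Combine T and M into a multigraph H; every vertex now has even degree.
    H = nx.MultiGraph(T)
    H.add_edges_from(M)
    # 5. Eulerian circuit in H.
    circuit = nx.eulerian_circuit(H)
    # 6. Shortcut repeated vertices to obtain a Hamiltonian circuit.
    tour, seen = [], set()
    for u, _ in circuit:
        if u not in seen:
            seen.add(u)
            tour.append(u)
    tour.append(tour[0])
    return tour

# Toy metric instance: a complete graph whose weights obey the triangle inequality.
G = nx.Graph()
G.add_weighted_edges_from(
    [(0, 1, 2), (0, 2, 3), (0, 3, 4), (1, 2, 3), (1, 3, 4), (2, 3, 2)]
)
print(christofides_tour(G))   # a closed tour over all vertices, e.g. [0, 1, 3, 2, 0]
```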
The st
|
https://en.wikipedia.org/wiki/Sweden%20Solar%20System
|
The Sweden Solar System is the world's largest permanent scale model of the Solar System. The Sun is represented by the Avicii Arena in Stockholm, the second-largest hemispherical building in the world. The inner planets can also be found in Stockholm but the outer planets are situated northward in other cities along the Baltic Sea. The system was started by Nils Brenning, professor at the Royal Institute of Technology in Stockholm, and Gösta Gahm, professor at the Stockholm University. The model represents the Solar System on the scale of 1:20 million.
The system
The bodies represented in this model include the Sun, the planets (and some of their moons), dwarf planets and many types of small bodies (comets, asteroids, trans-Neptunians, etc.), as well as some abstract concepts (like the Termination Shock zone). Because of the existence of many small bodies in the real Solar System, the model can always be further increased.
The Sun is represented by the Avicii Arena (Globen), Stockholm, which is the second-largest hemispherical building in the world, in diameter. To respect the scale, the globe represents the Sun including its corona.
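Given the 1:20 million scale, model sizes and distances follow from simple division. The sketch below uses rounded real diameters and orbital distances (assumed here, not quoted from the article) to show roughly how the model values come out.

```python
# Model sizes in a 1:20,000,000 scale model (real values are rounded assumptions).
SCALE = 20_000_000

bodies = {
    # name: (approximate real diameter in km, approximate distance from the Sun in km)
    "Sun":     (1_391_000, 0),
    "Mercury": (4_879, 57_900_000),
    "Earth":   (12_742, 149_600_000),
    "Neptune": (49_244, 4_495_000_000),
}

for name, (diameter_km, distance_km) in bodies.items():
    model_diameter_m = diameter_km * 1_000 / SCALE   # km -> m, then scale down
    model_distance_km = distance_km / SCALE          # stays in km after scaling
    print(f"{name:8s} diameter {model_diameter_m:7.2f} m, "
          f"distance from the Globe {model_distance_km:8.2f} km")
```

The bare Sun comes out to roughly 70 m at this scale; the arena is larger because, as noted above, the model represents the Sun including its corona.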
Inner planets
Mercury ( in diameter) is placed at Stockholm City Museum, from the Globe. The small metallic sphere was built by the artist Peter Varhelyi.
Venus ( in diameter) is placed at Vetenskapens Hus at KTH (Royal Institute of Technology), from the Globe. The previous model, made by the United States artist Daniel Oberti, was inaugurated on 8 June 2004, during a Venus transit and placed at KTH. It fell and shattered around 11 June 2011. Due to construction work at the location of the previous model of Venus it was removed and as of October 2012 cannot be seen. The current model now at Vetenskapens Hus was previously located at the Observatory Museum in Stockholm (now closed).
Earth ( in diameter) is located at the Swedish Museum of Natural History (Cosmonova), from the Globe. Satellite images of the Earth are exhibited
|
https://en.wikipedia.org/wiki/Heisenbug
|
In computer programming jargon, a heisenbug is a software bug that seems to disappear or alter its behavior when one attempts to study it. The term is a pun on the name of Werner Heisenberg, the physicist who first asserted the observer effect of quantum mechanics, which states that the act of observing a system inevitably alters its state. In electronics, the traditional term is probe effect, where attaching a test probe to a device changes its behavior.
Similar terms, such as bohrbug, mandelbug, hindenbug, and schrödinbug (see the section on related terms) have been occasionally proposed for other kinds of unusual software bugs, sometimes in jest.
Examples
Heisenbugs occur because common attempts to debug a program, such as inserting output statements or running it with a debugger, usually have the side-effect of altering the behavior of the program in subtle ways, such as changing the memory addresses of variables and the timing of its execution.
One common example of a heisenbug is a bug that appears when the program is compiled with an optimizing compiler, but not when the same program is compiled without optimization (as is often done for the purpose of examining it with a debugger). While debugging, values that an optimized program would normally keep in registers are often pushed to main memory. This may affect, for instance, the result of floating-point comparisons, since the value in memory may have smaller range and accuracy than the value in the register. Similarly, heisenbugs may be caused by side-effects in test expressions used in runtime assertions in languages such as C and C++, where the test expression is not evaluated when assertions are turned off in production code using the NDEBUG macro.
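The same class of bug can be reproduced in Python, where assert statements (and any side effects inside them) are stripped when the interpreter runs with the -O flag; the toy script below was written for illustration and is not taken from the article.

```python
# heisenbug_demo.py
# Run once as "python heisenbug_demo.py" and once as "python -O heisenbug_demo.py".
# Under -O the assert, and the side effect hidden inside it, is removed entirely,
# so the program's behaviour changes depending on how it is run.
jobs = ["job-1", "job-2", "job-3"]

def take_next_job():
    # Bug: the side effect (pop) lives inside the assert expression.
    assert jobs.pop(0).startswith("job"), "queue corrupted"
    return len(jobs)

remaining = take_next_job()
print("jobs remaining:", remaining, "queue:", jobs)
# normal run:  jobs remaining: 2  queue: ['job-2', 'job-3']
# with -O:     jobs remaining: 3  queue: ['job-1', 'job-2', 'job-3']
```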
Other common causes of heisenbugs are using the value of a non-initialized variable (which may change its address or initial value during debugging), or following an invalid pointer (which may point to a different place when debugging). Debuggers also co
|
https://en.wikipedia.org/wiki/STEbus
|
The STEbus (also called the IEEE-1000 bus) is a non-proprietary, processor-independent, computer bus with 8 data lines and 20 address lines. It was popular for industrial control systems in the late 1980s and early 1990s before the ubiquitous IBM PC dominated this market. STE stands for STandard Eurocard.
Although no longer competitive in its original market, it is a valid choice for hobbyists wishing to make 'home brew' computer systems. The Z80 and probably the CMOS 65C02 are possible processors to use. The standardized bus allows hobbyists to interface to each other's designs.
Origins
In the early 1980s, there were many proprietary bus systems, each with its own strengths and weaknesses. Most had grown in an ad-hoc manner, typically around a particular microprocessor. The S-100 bus is based on Intel 8080 signals, the STD Bus around Z80 signals, the SS-50 bus around the Motorola 6800, and the G64 bus around 6809 signals. This made it harder to interface other processors. Upgrading to a more powerful processor would subtly change the timings, and timing restraints were not always tightly specified. Nor were electrical parameters and physical dimensions. They usually used edge-connectors for the bus, which were vulnerable to dirt and vibration.
The VMEbus had provided a high-quality solution for high-performance 16-bit processors, using reliable DIN 41612 connectors and well-specified Eurocard board sizes and rack systems. However, these were too costly where an application only needed a modest 8-bit processor.
In the mid-1980s, the STEbus standard addressed these issues by specifying what is rather like a VMEbus simplified for 8-bit processors. The bus signals are sufficiently generic so that they are easy for 8-bit processors to interface with. The board size was usually a single-height Eurocard (100 mm × 160 mm) but allowed for double-height boards (233 mm × 160 mm) as well.
The latter positioned the bus connector so that it could neatly merge into VME-bus syste
|
https://en.wikipedia.org/wiki/Interface%20%28matter%29
|
In the physical sciences, an interface is the boundary between two spatial regions occupied by different matter, or by matter in different physical states. The interface between matter and air, or matter and vacuum, is called a surface, and studied in surface science. In thermal equilibrium, the regions in contact are called phases, and the interface is called a phase boundary. An example for an interface out of equilibrium is the grain boundary in polycrystalline matter.
The importance of the interface depends on the type of system: the bigger the quotient area/volume, the greater the effect the interface will have. Consequently, interfaces are very important in systems with large interface area-to-volume ratios, such as colloids.
Interfaces can be flat or curved. For example, oil droplets in a salad dressing are spherical but the interface between water and air in a glass of water is mostly flat.
Surface tension is the physical property which rules interface processes involving liquids. For a liquid film on flat surfaces, the liquid-vapor interface keeps flat to minimize interfacial area and system free energy. For a liquid film on rough surfaces, the surface tension tends to keep the meniscus flat, while the disjoining pressure makes the film conformal to the substrate. The equilibrium meniscus shape is a result of the competition between the capillary pressure and disjoining pressure.
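One standard way to quantify the capillary pressure across a curved liquid-vapor interface is the Young-Laplace relation, ΔP = 2γ/r for a spherical interface. The relation and the numbers below are a supplementary illustration (with an assumed surface tension for water of roughly 0.072 N/m), not something stated in the text; they also show why interfacial effects dominate at the small length scales typical of colloids.

```python
# Capillary (Laplace) pressure across a spherical liquid-vapor interface,
# using the standard Young-Laplace relation dP = 2*gamma/r.
GAMMA_WATER = 0.072  # N/m, approximate surface tension of water at room temperature

def laplace_pressure(radius_m: float, gamma: float = GAMMA_WATER) -> float:
    return 2 * gamma / radius_m   # Pa

for r in (1e-3, 1e-6, 1e-9):      # droplet radii: 1 mm, 1 um, 1 nm
    print(f"r = {r:.0e} m -> capillary pressure ~ {laplace_pressure(r):.3g} Pa")
```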
Interfaces may cause various optical phenomena, such as refraction. Optical lenses serve as an example of a practical application of the interface between glass and air.
One topical interface system is the gas-liquid interface between aerosols and other atmospheric molecules.
See also
Capillary surface, a surface that represents the boundary between two fluids
Disjoining pressure
Free surface
Interface and colloid science
Membrane (disambiguation)
Surface phenomenon
|
https://en.wikipedia.org/wiki/Concrete%20Mathematics
|
Concrete Mathematics: A Foundation for Computer Science, by Ronald Graham, Donald Knuth, and Oren Patashnik, first published in 1989, is a textbook that is widely used in computer-science departments as a substantive but light-hearted treatment of the analysis of algorithms.
Contents and history
The book provides mathematical knowledge and skills for computer science, especially for the analysis of algorithms. According to the preface, the topics in Concrete Mathematics are "a blend of CONtinuous and disCRETE mathematics". Calculus is frequently used in the explanations and exercises. The term "concrete mathematics" also denotes a complement to "abstract mathematics".
The book is based on a course begun in 1970 by Knuth at Stanford University. The book expands on the material (approximately 100 pages) in the "Mathematical Preliminaries" section of Knuth's The Art of Computer Programming. Consequently, some readers use it as an introduction to that series of books.
Concrete Mathematics has an informal and often humorous style. The authors reject what they see as the dry style of most mathematics textbooks. The margins contain "mathematical graffiti", comments submitted by the text's first editors: Knuth and Patashnik's students at Stanford.
As with many of Knuth's books, readers are invited to claim a reward for any error found in the book—in this case, whether an error is "technically, historically, typographically, or politically incorrect".
The book popularized some mathematical notation: the Iverson bracket, floor and ceiling functions, and notation for rising and falling factorials.
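As a small supplementary illustration (not taken from the book), these notations are easy to express in code: the Iverson bracket maps a predicate to 0 or 1, floor and ceiling are the usual rounding functions, and the falling and rising factorials are finite products.

```python
import math

def iverson(condition: bool) -> int:
    """Iverson bracket [P]: 1 if P is true, 0 otherwise."""
    return 1 if condition else 0

def falling_factorial(x, n: int):
    """x * (x - 1) * ... * (x - n + 1)."""
    result = 1
    for k in range(n):
        result *= x - k
    return result

def rising_factorial(x, n: int):
    """x * (x + 1) * ... * (x + n - 1)."""
    result = 1
    for k in range(n):
        result *= x + k
    return result

print(iverson(3 > 2), math.floor(2.7), math.ceil(2.1))   # 1 2 3
print(falling_factorial(5, 3), rising_factorial(5, 3))   # 60 210
```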
Typography
Donald Knuth used the first edition of Concrete Mathematics as a test case for the AMS Euler typeface and Concrete Roman font.
Chapter outline
Recurrent Problems
Summation
Integer Functions
Number Theory
Binomial Coefficients
Special Numbers
Generating Functions
Discrete Probability
Asymptotics
Editions
Errata: (1994), (January 1998), (27th print
|
https://en.wikipedia.org/wiki/Round%20dance%20%28honey%20bee%29
|
A round dance is the communicative behaviour of a foraging honey bee (Apis mellifera), in which it moves on the comb in close circles, alternating right and then left. It was previously believed that the round dance indicates that the forager has located a profitable food source close to the hive and the round dance transitions into the waggle dance when food sources are more than away. Recent research shows that bees have only one dance that always encodes distance and direction to the food source, but that precision and expression of this information depends on the distance to the target; therefore, the use of "round dance" is outdated. Elements of the round dance also provide information regarding the forager's subjective evaluation of the food source's profitability.
Nobel laureate Karl von Frisch was one of the first ethologists to investigate both the waggle dance and round dance through his studies examining honey bee foraging behaviours, and is credited with translating many of their underlying mechanisms.
Description
If a foraging honey bee (Apis mellifera) locates a profitable food source, it returns to the hive and performs a round dance to communicate its location. The forager bee moves in close circles over the comb, alternating directions. The round dance is performed by the forager bee when the food source is located in the immediate vicinity of the hive. Karl von Frisch determined that the critical distance for switching between the round dance and the waggle dance exists at away from the hive. The scent attached to the forager bee's body communicates the type of food source in question to the follower bees. However, the scent of the food source alone is not sufficient information to guide the follower bees to said food source.
Mechanism
It has been shown that many of the mechanisms used to communicate distance and direction in the waggle dance are also employed in the round dance. The following section will focus on the role of each mechanism
|
https://en.wikipedia.org/wiki/Francis%20Sowerby%20Macaulay
|
Francis Sowerby Macaulay FRS (11 February 1862, Witney – 9 February 1937, Cambridge) was an English mathematician who made significant contributions to algebraic geometry. He is known for his 1916 book The Algebraic Theory of Modular Systems (an old term for ideals), which greatly influenced the later course of commutative algebra. Cohen–Macaulay rings, Macaulay duality, the Macaulay resultant and the Macaulay and Macaulay2 computer algebra systems are named for Macaulay.
Macaulay was educated at Kingswood School and graduated with distinction from St John's College, Cambridge. He taught the top mathematics class in St Paul's School in London from 1885 to 1911. His students included J. E. Littlewood and G. N. Watson.
In 1928 Macaulay was elected Fellow of the Royal Society.
Publications
See also
Gorenstein ring
|
https://en.wikipedia.org/wiki/Segue
|
A segue is a transition from one topic or section to the next. The term is derived from Italian segue, which literally means "follows".
In music
In music, segue is a direction to the performer. It means continue (the next section) without a pause. The term attacca is used synonymously. For written music, it implies a transition from one section to the next without any break. In improvisation, it is often used for transitions created as a part of the performance, leading from one section to another.
In live performance, a segue can occur during a jam session, where the improvisation of the end of one song progresses into a new song. Segues can even occur between groups of musicians during live performance. For example, as one band finishes its set, members of the following act replace members of the first band one by one, until a complete band swap occurs.
In recorded music, a segue sometimes means a seamless change between one song and another, sometimes achieved through beatmatching, especially on dance and disco recordings. However, as noted by composer John Williams in the liner notes for his Star Wars soundtrack album, a series of musical ideas can be juxtaposed with no transitions whatsoever. Arrangements that involve or create the effect of a classical musical suite may be used in many pieces or progressive rock recordings, but by definition, a segue does not involve a bridging transition: it is an abrupt change of musical idea. Frank Zappa made extensive use of the segue technique, joining the elements of his albums without breaks. This was first used in 1966 on Zappa's Freak Out!, and a year later on the Beatles' Sgt. Pepper's Lonely Hearts Club Band.
In some Brazilian musical styles, where it is called "emendar" ("to splice"), in particular in Samba and Forró Pé de Serra, it is very commonly used in live performances, creating sets that usually last around 20 minutes but can sometimes take more than an hour, switching seamlessly between differ
|
https://en.wikipedia.org/wiki/Siege%20of%20Fort%20Pitt
|
The siege of Fort Pitt took place during June and July 1763 in what is now the city of Pittsburgh, Pennsylvania, United States. The siege was a part of Pontiac's War, an effort by Native Americans to remove the Anglo-Americans from the Ohio Country and Allegheny Plateau after they refused to honor their promises and treaties to leave voluntarily after the defeat of the French. The Native American efforts of diplomacy, and by siege, to remove the Anglo-Americans from Fort Pitt ultimately failed.
This event is best known as an early instance of biological warfare, in which William Trent, from an American settler family, and Simeon Ecuyer, a Swiss mercenary in British service, gave items from a smallpox infirmary as gifts to Native American emissaries in the hope of spreading the deadly disease to nearby tribes. The effectiveness of the attempt is unknown; the method used is inefficient compared to respiratory transmission, and such attempts to spread the disease are difficult to differentiate from epidemics arising from previous contacts with colonists.
Background
Fort Pitt was built in 1758 during the French and Indian War, on the site of what was previously Fort Duquesne in what is now the city of Pittsburgh, Pennsylvania, United States. The French abandoned and destroyed Fort Duquesne in November 1758 with the approach of General John Forbes's expedition. The Forbes expedition was successful in part because of the Treaty of Easton, in which area American Indians agreed to end their alliance with the French. American Indians—primarily the Six Nations, Delawares and Shawnees—made this agreement with the understanding that the British would leave the area after their war with the French. Instead of leaving the territory west of the Appalachian Mountains as they had agreed, the Anglo-Americans remained on Native lands and reinforced their forts while settlers continued to push westward, despite the Royal Proclamation of 1763 placing a limit upon the west
|
https://en.wikipedia.org/wiki/Relay%20channel
|
In information theory, a relay channel is a probability model of the communication between a sender and a receiver aided by one or more intermediate relay nodes.
General discrete-time memoryless relay channel
A discrete memoryless single-relay channel can be modelled as four finite sets, X1, X2, Y, and Y1, and a conditional probability distribution p(y, y1 | x1, x2) on these sets. The probability distribution of the choice of symbols selected by the encoder and the relay encoder is represented by p(x1, x2).
o------------------o
| Relay Encoder |
o------------------o
Λ |
| y1 x2 |
| V
o---------o x1 o------------------o y o---------o
| Encoder |--->| p(y,y1|x1,x2) |--->| Decoder |
o---------o o------------------o o---------o
There exist three main relaying schemes: Decode-and-Forward, Compress-and-Forward and Amplify-and-Forward. The first two schemes were first proposed in the pioneer article by Cover and El-Gamal.
Decode-and-Forward (DF): In this relaying scheme, the relay decodes the source message in one block and transmits the re-encoded message in the following block. The achievable rate of DF is known as max over p(x1, x2) of min{ I(X1; Y1 | X2), I(X1, X2; Y) }.
Compress-and-Forward (CF): In this relaying scheme, the relay quantizes the received signal in one block and transmits the encoded version of the quantized received signal in the following block. The achievable rate of CF is known as I(X1; Y, Ŷ1 | X2) subject to I(X2; Y) ≥ I(Y1; Ŷ1 | X2, Y).
Amplify-and-Forward (AF): In this relaying scheme, the relay sends an amplified version of the received signal in the last time-slot. Comparing with DF and CF, AF requires much less delay as the relay node operates time-slot by time-slot. Also, AF requires much less computing power as no decoding or quantizing operation is performed at the relay side.
Cut-set upper bound
The first upper bound on the capacity of the relay channel is derived in the pioneer article by Cover and El-Gama
|
https://en.wikipedia.org/wiki/Power%20optimization%20%28EDA%29
|
Power optimization is the use of electronic design automation tools to optimize (reduce) the power consumption of a digital design, such as that of an integrated circuit, while preserving the functionality.
Introduction and history
The increasing speed and complexity of today’s designs implies a significant increase in the power consumption of very-large-scale integration (VLSI) chips. To meet this challenge, researchers have developed many different design techniques to reduce power. The complexity of today’s ICs, with over 100 million transistors, clocked at over 1 GHz, means manual power optimization would be hopelessly slow and all too likely to contain errors. Computer-aided design (CAD) tools and methodologies are mandatory.
One of the key features that led to the success of complementary metal-oxide semiconductor, or CMOS, technology was its intrinsic low-power consumption. This meant that circuit designers and electronic design automation (EDA) tools could afford to concentrate on maximizing circuit performance and minimizing circuit area. Another interesting feature of CMOS technology is its nice scaling properties, which has permitted a steady decrease in the feature size (see Moore's law), allowing for more and more complex systems on a single chip, working at higher clock frequencies.
Power consumption concerns came into play with the appearance of the first portable electronic systems in the late 1980s. In this market, battery lifetime is a decisive factor for the commercial success of the product. Another fact that became apparent at about the same time was that the increasing integration of more active elements per die area would lead to prohibitively large energy consumption of an integrated circuit. A high absolute level of power is not only undesirable for economic and environmental reasons, but it also creates the problem of heat dissipation. In order to keep the device working at acceptable temperature levels, excessive heat may require expe
|
https://en.wikipedia.org/wiki/Atomic%20layer%20epitaxy
|
Atomic layer epitaxy (ALE), more generally known as atomic layer deposition (ALD), is a specialized form of thin film growth (epitaxy) that typically deposits alternating monolayers of two elements onto a substrate. The crystal lattice structure achieved is thin, uniform, and aligned with the structure of the substrate. The reactants are brought to the substrate as alternating pulses with "dead" times in between. ALE makes use of the fact that the incoming material is bound strongly until all sites available for chemisorption are occupied. The dead times are used to flush the excess material.
It is mostly used in semiconductor fabrication to grow thin films of thickness in the nanometer scale.
Technique
This technique was invented in 1974 and patented the same year (patent published in 1976) by Dr. Tuomo Suntola at the Instrumentarium company, Finland. Dr. Suntola's purpose was to grow thin films of zinc sulfide to fabricate electroluminescent flat panel displays. The key feature of this technique is the use of a self-limiting chemical reaction to control the thickness of the deposited film accurately. Since the early days, ALE (ALD) has grown into a global thin film technology which has enabled the continuation of Moore's law. In 2018, Suntola received the Millennium Technology Prize for ALE (ALD) technology.
Compared to basic chemical vapour deposition, in ALE (ALD), chemical reactants are pulsed alternatively in a reaction chamber and then chemisorb in a saturating manner on the surface of the substrate, forming a chemisorbed monolayer.
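Because each saturating pulse adds at most one (sub)monolayer, the deposited thickness is set by the number of cycles multiplied by the growth per cycle. The sketch below assumes a hypothetical growth-per-cycle value of 0.1 nm purely for illustration; actual values depend on the chemistry and process conditions.

```python
import math

# Film thickness in ALD scales with the number of self-limiting cycles.
GROWTH_PER_CYCLE_NM = 0.1   # nm deposited per complete precursor cycle (assumed)

def thickness_nm(cycles: int, gpc_nm: float = GROWTH_PER_CYCLE_NM) -> float:
    return cycles * gpc_nm

def cycles_for(target_nm: float, gpc_nm: float = GROWTH_PER_CYCLE_NM) -> int:
    return math.ceil(target_nm / gpc_nm)

print(thickness_nm(300))   # 300 cycles -> ~30 nm
print(cycles_for(10.0))    # ~100 cycles for a 10 nm film
```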
ALD introduces two complementary precursors (e.g. Al(CH3)3 and H2O ) alternatively into the reaction chamber. Typically, one of the precursors will adsorb onto the substrate surface until it saturates the surface and further growth cannot occur until the second precursor is introduced. Thus the film thickness is controlled by the number of precursor cycles rather than the deposition time as is the case for conven
|
https://en.wikipedia.org/wiki/Chemical%20beam%20epitaxy
|
Chemical beam epitaxy (CBE) forms an important class of deposition techniques for semiconductor layer systems, especially III-V semiconductor systems. This form of epitaxial growth is performed in an ultrahigh vacuum system. The reactants are in the form of molecular beams of reactive gases, typically as the hydride or a metalorganic. The term CBE is often used interchangeably with metal-organic molecular beam epitaxy (MOMBE). The nomenclature does differentiate between the two (slightly different) processes, however. When used in the strictest sense, CBE refers to the technique in which both components are obtained from gaseous sources, while MOMBE refers to the technique in which the group III component is obtained from a gaseous source and the group V component from a solid source.
Basic principles
Chemical beam epitaxy was first demonstrated by W.T. Tsang in 1984. This technique was then described as a hybrid of metal-organic chemical vapor deposition (MOCVD) and molecular beam epitaxy (MBE) that exploited the advantages of both techniques. In this initial work, InP and GaAs were grown using gaseous group III and V alkyls. While the group III elements were derived from the pyrolysis of the alkyls on the surface, the group V elements were obtained from the decomposition of the alkyls by bringing them into contact with heated tantalum (Ta) or molybdenum (Mo) at 950–1200 °C.
Typical pressure in the gas reactor is between 10² Torr and 1 atm for MOCVD. Here, the transport of gas occurs by viscous flow and chemicals reach the surface by diffusion. In contrast, gas pressures of less than 10⁻⁴ Torr are used in CBE. The gas transport now occurs as a molecular beam due to the much longer mean free paths, and the process evolves into a chemical beam deposition. It is also worth noting here that MBE employs atomic beams (such as aluminium (Al) and gallium (Ga)) and molecular beams (such as As4 and P4) that are evaporated at high temperatures from solid elemental sources, while the
|
https://en.wikipedia.org/wiki/Business%20mathematics
|
Business mathematics are mathematics used by commercial enterprises to record and manage business operations. Commercial organizations use mathematics in accounting, inventory management, marketing, sales forecasting, and financial analysis.
Mathematics typically used in commerce includes elementary arithmetic, elementary algebra, statistics and probability. For some management problems, more advanced mathematics - calculus, matrix algebra, and linear programming - may be applied.
High school
Business mathematics, sometimes called commercial math or consumer math, is a group of practical subjects used in commerce and everyday life. In schools, these subjects are often taught to students who are not planning a university education. In the United States, they are typically offered in high schools and in schools that grant associate's degrees; elsewhere they may be included under business studies. The emphasis in these courses is on computational skills and their practical application, with practice being predominant. These courses often fulfill the general math credit for high school students.
A (U.S.) business math course typically includes a review of elementary arithmetic, including fractions, decimals, and percentages. Elementary algebra is often included as well, in the context of solving practical business problems. The practical applications typically include checking accounts, price discounts, markups, payroll calculations, simple and compound interest, consumer and business credit, and mortgages and revenues.
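As a small supplementary illustration of the interest calculations mentioned above (with made-up figures, not taken from the text), simple interest grows linearly with time while compound interest grows geometrically:

```python
# Simple vs. compound interest on the same principal (illustrative numbers).
def simple_interest_total(principal: float, annual_rate: float, years: float) -> float:
    return principal * (1 + annual_rate * years)

def compound_interest_total(principal: float, annual_rate: float, years: float,
                            compounds_per_year: int = 1) -> float:
    periods = compounds_per_year * years
    return principal * (1 + annual_rate / compounds_per_year) ** periods

p, r, t = 1_000.0, 0.05, 10
print(f"simple:             {simple_interest_total(p, r, t):.2f}")        # 1500.00
print(f"compounded yearly:  {compound_interest_total(p, r, t):.2f}")      # ~1628.89
print(f"compounded monthly: {compound_interest_total(p, r, t, 12):.2f}")  # ~1647.01
```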
University level
Undergraduate
Business mathematics comprises mathematics credits taken at an undergraduate level by business students.
The course is often organized around the various business sub-disciplines, including the above applications, and usually includes a separate module on interest calculations; the mathematics comprises mainly algebraic techniques.
Many programs, as mentioned, extend to more sophisticated mathemat
|
https://en.wikipedia.org/wiki/Winnipeg%20%28bear%29
|
Winnipeg (1914 – 12 May 1934), or Winnie, was the name given to a female black bear that lived at London Zoo from 1915 until her death in 1934. Rescued by cavalry veterinarian Harry Colebourn, Winnie is best-remembered for inspiring the name of A. A. Milne and E. H. Shepard's character, Winnie-the-Pooh.
History
Upon the outbreak of World War I in August 1914, Lt. Harry Colebourn of The Fort Garry Horse, a Canadian cavalry regiment, volunteered his service. On 24 August, while en route to Valcartier in Quebec to report to the Canadian Army Veterinary Corps (CAVC) as part of the Canadian Expeditionary Force, he purchased a young bear cub for at a train stop in White River, Ontario. The bear's mother was probably killed in the spring of 1914 when the cub was very young and could most easily have become socialized to humans. The name of the hunter who sold the bear and who presumably provided the bear's early socialization is undocumented. Colebourn named the bear "Winnipeg Bear", "Winnie" for short, after his adopted home city of Winnipeg, Manitoba.
Winnie accompanied him to Valcartier and all the way to England, becoming the mascot of the CAVC and a pet to the Second Canadian Infantry Brigade Headquarters. According to Colebourn’s six diaries that he kept during the war, on 3 October 1914, he and Winnie departed Gaspé Bay enroute for England aboard the S.S. Manitou along with numerous other liners filled with troops heading for England. On October 17, they disembarked and left Davenport, Greater Manchester, for Salisbury Plain at 7:00 that morning.
Before leaving for France, Colebourn left Winnie at London Zoo on 9 December 1914.
Winnie's eventual destination was expected to be Assiniboine Park Zoo in Winnipeg, but at the end of the war, Colebourn allowed her to remain at the London Zoo, where she was much loved for her playfulness and gentleness.
In 1919, the London Zoo held a dedication ceremony and erected a plaque that states Colebourn donated Winnie.
Amon
|
https://en.wikipedia.org/wiki/Hisense
|
Hisense Group is a Chinese multinational major appliance and electronics manufacturer headquartered in Qingdao, Shandong Province, China. Televisions are the main products of Hisense, and it is the largest TV manufacturer in China by market share since 2004. Hisense is also an OEM, so some of its products are sold to other companies and carry brand names not related to Hisense.
Two major subsidiaries of Hisense Group are listed companies, Hisense Visual Technology and Hisense H.A. Both had state ownership of over 30% via the Hisense holding company before the end of 2020.
Hisense Group has over 80,000 employees worldwide, as well as 14 industrial parks, some of which are located in Qingdao, Shunde, Huzhou, Czech Republic, South Africa and Mexico. There are also 18 R&D centers located in Qingdao, Shenzhen, the United States, Germany, Slovenia, Israel, and other countries.
History
Qingdao No.2 Radio Factory, the predecessor of Hisense Group, was established in September 1969; this is the year its existence was first officially recognized. The small factory's first product was a radio sold under the brand name Red Lantern, but the company later gained the know-how to make TVs through a trial-production of black and white televisions ordered by the Shandong National Defense Office. This involved the technical training of three employees at another Chinese factory, Tianjin 712, and resulted in the production of 82 televisions by 1971 and the development of transistor TVs by 1975. Their first TV model, CJD18, was produced in 1978.
Television production in China was limited until 1979, when a meeting of the Ministry of Electronics in Beijing concluded with calls for greater development of the civil-use electronics industry. Qingdao No.2 Radio Factory was then quickly merged with other local electronics makers and manufactured televisions under the name Qingdao General Television Factory in Shandong province.
Color televisions were manufactured through the purch
|
https://en.wikipedia.org/wiki/Alfred%20Y.%20Cho
|
Alfred Yi Cho (born July 10, 1937) is a Chinese-American electrical engineer, inventor, and optical engineer. He is the Adjunct Vice President of Semiconductor Research at Alcatel-Lucent's Bell Labs. He is known as the "father of molecular beam epitaxy", a technique he developed at that facility in the late 1960s. He is also the co-inventor, with Federico Capasso, of quantum cascade lasers at Bell Labs in 1994.
Cho was elected a member of the National Academy of Engineering in 1985 for his pioneering development of a molecular beam epitaxy technique, leading to unique semiconductor layer device structures.
Biography
Cho was born in Beiping. He went to Hong Kong in 1949 and had his secondary education in Pui Ching Middle School there. Cho holds B.S., M.S. and Ph.D. degrees in electrical engineering from the University of Illinois. He joined Bell Labs in 1968. He is a member of the National Academy of Sciences and the National Academy of Engineering, as well as a Fellow of the American Physical Society, the Institute of Electrical and Electronics Engineers, the American Philosophical Society, and the American Academy of Arts and Sciences.
In June 2007 he was honoured with the U.S. National Medal of Technology, the highest honor awarded by the President of the United States for technological innovation.
Cho received the award for his contributions to the invention of molecular beam epitaxy (MBE) and his work to commercialize the process.
He already has many awards to his name, including: the American Physical Society's International Prize for New Materials in 1982, the Solid State Science and Technology Medal of the Electrochemical Society in 1987, the World Materials Congress Award of ASM International in 1988, the Gaede-Langmuir Award of the American Vacuum Society in 1988, the IRI Achievement Award of the Industrial Research Institute in 1988, the New Jersey Governor's Thomas Alva Edison Science Award in 1990, the International Crystal Growth Award of the
|
https://en.wikipedia.org/wiki/Cyclic%20stress
|
Cyclic stress is the distribution of forces (also known as stresses) that changes over time in a repetitive fashion. As an example, consider one of the large wheels used to drive an aerial lift such as a ski lift. The wire cable wrapped around the wheel exerts a downward force on the wheel and the drive shaft supporting the wheel. Although the shaft, wheel, and cable move, the force remains nearly vertical relative to the ground. Thus a point on the surface of the drive shaft will undergo tension when it is pointing towards the ground and compression when it is pointing to the sky.
Types of cyclic stress
Cyclic stress is frequently encountered in rotating machinery where a bending moment is applied to a rotating part. This is called a cyclic bending stress, and the aerial lift above is a good example. However, cyclic axial stresses and cyclic torsional stresses also exist. An example of cyclic axial stress would be a bungee cord (see bungee jumping), which must support the mass of people as they jump off structures such as bridges. When a person reaches the end of a cord, the cord deflects elastically and stops the person's descent. This creates a large axial stress in the cord. A fraction of the elastic potential energy stored in the cord is typically transferred back to the person, throwing the person upwards some fraction of the distance he or she fell. The person then falls on the cord again, inducing stress in the cord. This happens multiple times per jump. The same cord is used for several jumps, creating cyclic stresses in the cord that could eventually cause failure if the cord is not replaced.
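As a rough illustration (not part of the original article), the sketch below generates a sinusoidal cyclic stress history and recovers its mean and alternating components, the decomposition used in the fatigue analysis discussed in the next section; the numeric values are arbitrary.

```python
import math

# Hypothetical sinusoidal stress history: sigma(t) = sigma_m + sigma_a * sin(w * t)
sigma_m, sigma_a, w = 50.0, 20.0, 2.0 * math.pi   # MPa, MPa, rad/s (assumed values)
history = [sigma_m + sigma_a * math.sin(w * t / 100.0) for t in range(200)]

sigma_max, sigma_min = max(history), min(history)
mean_stress = (sigma_max + sigma_min) / 2.0          # ≈ sigma_m
alternating_stress = (sigma_max - sigma_min) / 2.0   # ≈ sigma_a
print(mean_stress, alternating_stress)
```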
Cyclic stress and material failure
When cyclic stresses are applied to a material, even though the stresses do not cause plastic deformation, the material may fail due to fatigue. Fatigue failure is typically modeled by decomposing cyclic stresses into mean and alternating components. Mean stress is the time average of the principal stress. The definition of alternating stress va
|
https://en.wikipedia.org/wiki/Dynamic%20Multipoint%20Virtual%20Private%20Network
|
Dynamic Multipoint Virtual Private Network (DMVPN) is a dynamic tunneling form of a virtual private network (VPN) supported on Cisco IOS-based routers, and Huawei AR G3 routers, and on Unix-like operating systems.
Benefits
DMVPN provides the capability for creating a dynamic-mesh VPN network without having to pre-configure (static) all possible tunnel end-point peers, including IPsec (Internet Protocol Security) and ISAKMP (Internet Security Association and Key Management Protocol) peers. DMVPN is initially configured to build out a hub-and-spoke network by statically configuring the hubs (VPN headends) on the spokes; no change in the configuration on the hub is required to accept new spokes. Using this initial hub-and-spoke network, tunnels between spokes can be dynamically built on demand (dynamic-mesh) without additional configuration on the hubs or spokes. This dynamic-mesh capability alleviates the need for any load on the hub to route data between the spoke networks.
Technologies
Next Hop Resolution Protocol,
Generic Routing Encapsulation (GRE), , or multipoint GRE if spoke-to-spoke tunnels are desired
An IP-based routing protocol such as EIGRP, OSPF, RIPv2, BGP, or ODR (DMVPN hub-and-spoke only).
IPsec (Internet Protocol Security) using an IPsec profile, which is associated with a virtual tunnel interface in IOS software. All traffic sent via the tunnel is encrypted per the policy configured (IPsec transform set)
Internal routing
Routing protocols such as OSPF, EIGRP v1 or v2 or BGP are generally run between the hub and spoke to allow for growth and scalability. Both EIGRP and BGP allow a higher number of supported spokes per hub.
Encryption
As with GRE tunnels, DMVPN allows for several encryption schemes (including none) for the encryption of data traversing the tunnels. For security reasons, Cisco recommends that customers use AES.
Phases
DMVPN has three phases that route data differently.
Phase 1: All traffic flows from spokes to and through the hub.
|
https://en.wikipedia.org/wiki/List%20of%20ERP%20software%20packages
|
This is a list of notable enterprise resource planning (ERP) software. The first section is devoted to free and open-source software, and the second is for proprietary software.
Free and open-source ERP software
Proprietary ERP vendors and software
1C Company – 1C:Enterprise
24SevenOffice – 24SevenOffice Start, Premium, Professional and Custom
3i Infotech – Orion ERP, Orion 11j, Orion 11s
abas Software AG – abas ERP
Acumatica – Acumatica Cloud ERP
BatchMaster Software – BatchMaster ERP
Consona Corporation – AXIS ERP, Intuitive ERP, Made2Manage ERP
CGI Group – CGI Advantage
CGram Software – CGram Enterprise
Consona Corporation – Cimnet Systems, Compiere professional edition, Encompix ERP
Ciright Systems – Ciright ERP
Comarch - from the smallest to the biggest system: Comarch ERP XT, Comarch Optima, Comarch ERP Standard (Altum), Comarch ERP Enterprise (Semiramis)
Deacom – DEACOM ERP
Epicor - Epicor iScala, Epicor Eagle, Prophet 21
Erply – Retail ERP
Exact Software – Globe Next, Exact Online
FinancialForce – FinancialForce ERP
Fishbowl – Fishbowl Inventory
Fujitsu Glovia Inc. – GLOVIA G2
Greentree International – Greentree Business Software
IFS
Inductive Automation – Ignition MES, OEE Module
Industrial and Financial Systems – IFS Applications
Infor Global Solutions – Infor CloudSuite Financials, Infor LN, Infor M3, Infor CloudSuite Industrial (SyteLine), Infor VISUAL, Infor Distribution SX.e
IQMS – EnterpriseIQ
Jeeves Information Systems AB – Jeeves
Microsoft – Microsoft Dynamics (a product line of ERP and CRM applications), NAV-X
Open Systems Accounting Software – OSAS, TRAVERSE
Oracle – Oracle Fusion Cloud, Oracle ERP Cloud, Oracle NetSuite, Oracle E-Business Suite, JD Edwards EnterpriseOne, JD Edwards World, PeopleSoft, Oracle Retail
Panaya – Panaya CloudQuality Suite
Pegasus Software – Opera (I, II and 3)
Planet Soho – SohoOS
Plex Systems – Plex Online
Pronto Software – Pronto Software
QAD Inc – QAD Enterprise Applications (fo
|
https://en.wikipedia.org/wiki/Maximum%20power%20principle
|
The maximum power principle or Lotka's principle has been proposed as the fourth principle of energetics in open system thermodynamics, where an example of an open system is a biological cell. According to Howard T. Odum, "The maximum power principle can be stated: During self-organization, system designs develop and prevail that maximize power intake, energy transformation, and those uses that reinforce production and efficiency."
History
Chen (2006) has located the origin of the statement of maximum power as a formal principle in a tentative proposal by Alfred J. Lotka (1922a, b). Lotka's statement sought to explain the Darwinian notion of evolution with reference to a physical principle. Lotka's work was subsequently developed by the systems ecologist Howard T. Odum in collaboration with the chemical engineer Richard C. Pinkerton, and later advanced by the engineer Myron Tribus.
While Lotka's work may have been a first attempt to formalise evolutionary thought in mathematical terms, it followed similar observations made by Leibniz and Volterra and Ludwig Boltzmann, for example, throughout the sometimes controversial history of natural philosophy. In contemporary literature it is most commonly associated with the work of Howard T. Odum.
The significance of Odum's approach was given greater support during the 1970s oil crises, when, as Gilliland (1978, p. 100) observed, there was an emerging need for a new method of analysing the importance and value of energy resources to economic and environmental production. A field known as energy analysis, itself associated with net energy and EROEI, arose to fulfill this analytic need. However, in energy analysis intractable theoretical and practical difficulties arose when using the energy unit to understand (a) the conversion among concentrated fuel types (or energy types), (b) the contribution of labour, and (c) the contribution of the environment.
Philosophy and theory
Lotka said (1922b: 151):
Gilli
|
https://en.wikipedia.org/wiki/Diphenylamine
|
Diphenylamine is an organic compound with the formula (C6H5)2NH. The compound is a derivative of aniline, consisting of an amine bound to two phenyl groups. The compound is a colorless solid, but commercial samples are often yellow due to oxidized impurities. Diphenylamine dissolves well in many common organic solvents, and is moderately soluble in water. It is used mainly for its antioxidant properties. Diphenylamine is widely used as an industrial antioxidant, dye mordant, and reagent, and is also employed in agriculture as a fungicide and anthelmintic.
Preparation and reactivity
Diphenylamine is manufactured by the thermal deamination of aniline over oxide catalysts:
2 C6H5NH2 → (C6H5)2NH + NH3
It is a weak base, with a Kb of 10^−14. With strong acids, it forms salts. For example, treatment with sulfuric acid gives the bisulfate [(C6H5)2NH2]+[HSO4]− as a white or yellowish powder with m.p. 123–125 °C.
Diphenylamine undergoes various cyclisation reactions. With sulfur, it gives phenothiazine, a precursor to pharmaceuticals.
(C6H5)2NH + 2 S → S(C6H4)2NH + H2S
With iodine, it undergoes dehydrogenation to give carbazole, with release of hydrogen iodide:
(C6H5)2NH + I2 → (C6H4)2NH + 2 HI
Arylation with iodobenzene gives triphenylamine. Diphenylamine is also used as a reagent in the Dische test.
Applications
Testing for DNA
The Dische test uses diphenylamine to test for DNA, and can be used to distinguish DNA from RNA.
Apple scald inhibitor
Diphenylamine is used as a pre- or postharvest scald inhibitor for apples applied as an indoor drench treatment. Its anti-scald activity is the result of its antioxidant properties, which protect the apple skin from the oxidation products of α-farnesene during storage. Apple scald is physical injury that manifests in brown spots after fruit is removed from cold storage.
Stabilizer for smokeless powder
In the manufacture of smokeless powder, diphenylamine is commonly used as a stabilizer, s
|
https://en.wikipedia.org/wiki/Narendra%20Nayak
|
Narendra Nayak (born 5 February 1951) is a rationalist, sceptic, and godman debunker from Mangalore, Karnataka, India. Nayak is the current president of the Federation of Indian Rationalist Associations (FIRA). He founded the Dakshina Kannada Rationalist Association in 1976 and has been its secretary since then. He also founded an NGO called Aid Without Religion in July 2011. He tours the country conducting workshops to promote scientific temper and showing people how to debunk godmen and frauds. He has conducted over 2,000 such demonstrations, mostly in India but also in Australia, Greece, England, Norway, Denmark, Sri Lanka and Nepal. He is also a polyglot who speaks nine languages fluently, which helps him when he is giving talks in various parts of the country.
Life and work
Nayak was named after Swami Vivekananda (born Narendra Nath Datta). He has stated that seeing his father's business premises being repossessed by the bank and his father buying a lottery ticket on the advice of an astrologer to pay off the loan with the total confidence that it would get the first prize made him turn to rationalism. He married Asha Nayak, a lawyer in Mangaluru in a non-religious ceremony. Nayak started out working as a lecturer in the Department of biochemistry in the Kasturba Medical College in Mangalore in 1978. In 1982, he met Basava Premanand, a notable rationalist from Kerala, and was influenced by him.
When the Karnataka State Police withdrew his security cover, Nayak was quoted as saying that it was an open invitation to hostile forces to finish him off.
Activism
Nayak decided to take on full-time anti-superstition activism in 2004 when he heard that a girl had been sacrificed in Gulbarga in Karnataka. He was an assistant professor of biochemistry when he took voluntary retirement on 25 November 2006, after working there for 28 years.
Before the general election in 2009, Nayak laid an open challenge to any soothsayer to answer 25 questions correctly about the forthcoming elections. The priz
|
https://en.wikipedia.org/wiki/Schoof%27s%20algorithm
|
Schoof's algorithm is an efficient algorithm to count points on elliptic curves over finite fields. The algorithm has applications in elliptic curve cryptography where it is important to know the number of points to judge the difficulty of solving the discrete logarithm problem in the group of points on an elliptic curve.
The algorithm was published by René Schoof in 1985 and it was a theoretical breakthrough, as it was the first deterministic polynomial time algorithm for counting points on elliptic curves. Before Schoof's algorithm, approaches to counting points on elliptic curves such as the naive and baby-step giant-step algorithms were, for the most part, tedious and had an exponential running time.
This article explains Schoof's approach, laying emphasis on the mathematical ideas underlying the structure of the algorithm.
Introduction
Let E be an elliptic curve defined over the finite field F_q, where q = p^n for a prime p and an integer n ≥ 1. Over a field of characteristic not equal to 2 or 3, an elliptic curve can be given by a (short) Weierstrass equation y^2 = x^3 + Ax + B
with A, B in F_q satisfying 4A^3 + 27B^2 ≠ 0. The set of points defined over F_q, denoted E(F_q), consists of the solutions (x, y) in F_q × F_q satisfying the curve equation, together with a point at infinity O. Using the group law on elliptic curves restricted to this set, one can see that this set forms an abelian group, with O acting as the zero element.
In order to count points on an elliptic curve, we compute the cardinality of E(F_q), denoted #E(F_q).
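For contrast with Schoof's method, the following is a minimal sketch (my own illustration, not from the article) of the naive brute-force count mentioned above; it assumes q = p is prime, uses a hypothetical helper name, and runs in roughly O(p^2) field operations.

```python
def count_points_naive(a, b, p):
    """Count points on y^2 = x^3 + a*x + b over F_p (p prime), including infinity."""
    count = 1  # the point at infinity O
    for x in range(p):
        rhs = (x * x * x + a * x + b) % p
        for y in range(p):
            if (y * y) % p == rhs:
                count += 1
    return count

# Example: y^2 = x^3 + 2x + 3 over F_97; the result lies in the Hasse interval
# [97 + 1 - 2*sqrt(97), 97 + 1 + 2*sqrt(97)].
print(count_points_naive(2, 3, 97))
```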
Schoof's approach to computing the cardinality makes use of Hasse's theorem on elliptic curves along with the Chinese remainder theorem and division polynomials.
Hasse's theorem
Hasse's theorem states that if E is an elliptic curve over the finite field F_q, then #E(F_q) satisfies
|#E(F_q) − (q + 1)| ≤ 2√q.
This powerful result, given by Hasse in 1934, simplifies our problem by narrowing down #E(F_q) to a finite (albeit large) set of possibilities. Defining t to be q + 1 − #E(F_q), and making use of this result, we now have that computing the value of t modulo N, where N > 4√q, is sufficient for determining t, and thus #E(F_q). While there is no efficient way to c
|
https://en.wikipedia.org/wiki/Minimum%20total%20potential%20energy%20principle
|
The minimum total potential energy principle is a fundamental concept used in physics and engineering. It dictates that at low temperatures a structure or body shall deform or displace to a position that (locally) minimizes the total potential energy, with the lost potential energy being converted into kinetic energy (specifically heat).
Some examples
A free proton and free electron will tend to combine to form the lowest energy state (the ground state) of a hydrogen atom, the most stable configuration. This is because that state's energy is 13.6 electron volts (eV) lower than when the two particles are separated by an infinite distance. The dissipation in this system takes the form of spontaneous emission of electromagnetic radiation, which increases the entropy of the surroundings.
A rolling ball will end up stationary at the bottom of a hill, the point of minimum potential energy. The reason is that as it rolls downward under the influence of gravity, friction produced by its motion transfers energy in the form of heat to the surroundings, with an attendant increase in entropy.
A protein folds into the state of lowest potential energy. In this case, the dissipation takes the form of vibration of atoms within or adjacent to the protein.
Structural mechanics
The total potential energy, Π, is the sum of the elastic strain energy, U, stored in the deformed body and the potential energy, V, associated with the applied forces:
Π = U + V
This energy is at a stationary position when an infinitesimal variation from such position involves no change in energy:
δΠ = δ(U + V) = 0
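As a toy numerical illustration (not from the article), the sketch below takes a single linear spring of stiffness k loaded by a force F, for which Π(x) = ½kx² − Fx, and verifies that the displacement minimizing Π coincides with the familiar equilibrium x = F/k; the constants are made up.

```python
k, F = 1000.0, 50.0   # spring stiffness (N/m) and applied force (N), assumed values

def total_potential_energy(x):
    strain_energy = 0.5 * k * x * x   # U, elastic strain energy of the spring
    load_potential = -F * x           # V, potential of the applied force
    return strain_energy + load_potential

# Crude minimization by scanning candidate displacements on a fine grid.
candidates = [i * 1e-5 for i in range(20000)]
x_eq = min(candidates, key=total_potential_energy)
print(x_eq, F / k)   # both are approximately 0.05 m
```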
The principle of minimum total potential energy may be derived as a special case of the virtual work principle for elastic systems subject to conservative forces.
The equality between external and internal virtual work (due to virtual displacements) is:
∫_St δu^T T dS + ∫_V δu^T f dV = ∫_V δε^T σ dV
where
u = vector of displacements
T = vector of distributed forces acting on the part St of the surface
f = vector of body forces
In the special case of elastic bodies, the right
|
https://en.wikipedia.org/wiki/Einzel%20lens
|
An einzel lens (from – single lens), or unipotential lens, is a charged particle electrostatic lens that focuses without changing the energy of the beam. It consists of three or more sets of cylindrical or rectangular apertures or tubes in series along an axis. It is used in ion optics to focus ions in flight, which is accomplished through manipulation of the electric field in the path of the ions.
The electrostatic potential in the lens is symmetric, so the ions will regain their initial energy on exiting the lens, although the velocity of the outer particles will be altered such that they converge on to the axis. This causes the outer particles to arrive at the focus intersection slightly later than the ones that travel along a straight path, as they have to travel an extra distance.
Theory
The equation for the change in radial velocity for a particle as it passes between any pair of cylinders in the lens is:
with z axis passing through the middle of the lens, and r being the direction normal to z. If the lens is constructed with cylindrical electrodes, the field is symmetrical around z. is the magnitude of the electric field in the radial direction for a particle at a particular radial distance and distance across the gap, is the mass of the particle passing through the field, is the velocity of the particle and q is the charge of the particle. The integral occurs over the gap between the plates. This is also the interval where the lensing occurs.
The pair of plates is also called an electrostatic immersion lens, thus an einzel lens can be described as two or more electrostatic immersion lenses. Solving the equation above twice to find the change in radial velocity for each pair of plates can be used to calculate the focal length of the lens.
Application to television tubes
The einzel lens principle in a simplified form was also used as a focusing mechanism in display and television cathode ray tubes, and has the advantage of providing a good sharp
|
https://en.wikipedia.org/wiki/Ball-and-socket%20joint
|
The ball-and-socket joint (or spheroid joint) is a type of synovial joint in which the ball-shaped surface of one rounded bone fits into the cup-like depression of another bone. The distal bone is capable of motion around an indefinite number of axes, which have one common center. This enables the joint to move in many directions.
An enarthrosis is a special kind of spheroidal joint in which the socket covers the sphere beyond its equator.
Examples
Examples of this form of articulation are found in the hip, where the round head of the femur (ball) rests in the cup-like acetabulum (socket) of the pelvis; and in the shoulder joint, where the rounded upper extremity of the humerus (ball) rests in the cup-like glenoid fossa (socket) of the shoulder blade. (The shoulder also includes a sternoclavicular joint.)
|
https://en.wikipedia.org/wiki/Common%20gamma%20chain
|
The common gamma chain (γc) (or CD132), also known as interleukin-2 receptor subunit gamma or IL-2RG, is a cytokine receptor sub-unit that is common to the receptor complexes for at least six different interleukin receptors: those for IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21. The γc glycoprotein is a member of the type I cytokine receptor family expressed on most lymphocyte (white blood cell) populations, and its gene is found on the X-chromosome of mammals.
This protein is located on the surface of immature blood-forming cells in bone marrow. One end of the protein resides outside the cell where it binds to cytokines and the other end of the protein resides in the interior of the cell where it transmits signals to the cell's nucleus. The common gamma chain partners with other proteins to direct blood-forming cells to form lymphocytes (a type of white blood cell). The receptor also directs the growth and maturation of lymphocyte subtypes: T cells, B cells, and natural killer cells. These cells kill viruses, make antibodies, and help regulate the entire immune system.
Gene
Cytokine receptor common subunit gamma also known as interleukin-2 receptor subunit gamma or IL-2RG is a protein that in humans is encoded by the IL2RG gene. The human IL2RG gene is located on the long (q) arm of the X chromosome at position 13.1, from base pair 70,110,279 to base pair 70,114,423.
Structure
The γc chain is an integral membrane protein that contains extracellular, transmembrane, and intracellular domains.
Function
Lymphocytes expressing the common gamma chain can form functional receptors for these cytokine proteins, which transmit signals from one cell to another and direct programs of cellular differentiation.
Ligands
The γc chain partners with other ligand-specific receptors to direct lymphocytes to respond to cytokines including IL-2, IL-4, IL-7, IL-9, IL-15 and IL-21.
Signalling
IL2RG has been shown to interact with Janus kinase 3.
Clinical significance
X-lin
|
https://en.wikipedia.org/wiki/Paul%20Horn%20%28computer%20scientist%29
|
Paul M. Horn (born August 16, 1946) is an American computer scientist and solid state physicist who has made contributions to pervasive computing, pioneered the use of copper and self-assembly in chip manufacturing, and helped manage the development of deep computing, an important tool that provides business decision makers with the ability to analyze and develop solutions to very complex and difficult problems.
Horn was elected a member of the National Academy of Engineering in 2007 for leadership in the development of information technology products, ranging from microelectronics to supercomputing.
Early life and education
Horn was born on August 16, 1946, and graduated from Clarkson University in 1968 with a Bachelor of Science degree. He obtained his PhD from the University of Rochester in physics in 1973.
Career
Horn has, at various times, been Senior Vice President of the IBM Corporation and executive director of Research. While at IBM, he initiated the project to develop Watson, the computer that competed successfully in the quiz show Jeopardy!.
He is currently a New York University (NYU) Distinguished Scientist in Residence and NYU Stern Executive in Residence. He is also a professor at NYU Tandon School of Engineering. In 2009, he was appointed as the Senior Vice Provost for Research at NYU.
Awards
Industrial Research Institute (IRI) Medal in honor of his contributions to technology leadership, 2005
American Physical Society, George E. Pake Prize, 2002
Hutchison Medal from the University of Rochester, 2002
Distinguished Leadership award from the New York Hall of Science, 2000
Bertram Eugene Warren Award from the American Crystallographic Association, 1988
|
https://en.wikipedia.org/wiki/Resel
|
In image analysis, a resel (from resolution element) represents the actual spatial resolution in an image or a volumetric dataset.
The number of resels in the image may be lower or equal to the number of pixel/voxels in the image.
In an actual image the resels can vary across the image and indeed the local resolution can be expressed as "resels per pixel" (or "resels per voxel").
In functional neuroimaging analysis, an estimate of the number of resels together with random field theory is used in statistical inference.
Keith Worsley has proposed an estimate for the number of resels/roughness.
The word "resel" is related to the words "pixel", "texel", and "voxel", and Waldo R. Tobler is probably among the first to use the word.
See also
Kell factor
|
https://en.wikipedia.org/wiki/Magnetosphere%20particle%20motion
|
The ions and electrons of a plasma interacting with the Earth's magnetic field generally follow its magnetic field lines. These represent the force that a north magnetic pole would experience at any given point. (Denser lines indicate a stronger force.) Plasmas exhibit more complex second-order behaviors, studied as part of magnetohydrodynamics.
Thus in the "closed" model of the magnetosphere, the magnetopause boundary between the magnetosphere and the solar wind is outlined by field lines. Not much plasma can cross such a stiff boundary. Its only "weak points" are the two polar cusps, the points where field lines closing at noon (-z axis GSM) get separated from those closing at midnight (+z axis GSM); at such points the field intensity on the boundary is zero, posing no barrier to the entry of plasma. (This simple definition assumes a noon-midnight plane of symmetry, but closed fields lacking such symmetry also must have cusps, by the fixed point theorem.)
The amount of solar wind energy and plasma entering the actual magnetosphere depends on how far it departs from such a "closed" configuration, i.e. the extent to which Interplanetary Magnetic Field field lines manage to cross the boundary. As discussed further below, that extent depends very much on the direction of the Interplanetary Magnetic Field, in particular on its southward or northward slant.
Trapping of plasma, e.g. of the ring current, also follows the structure of field lines. A particle interacting with this B field experiences a Lorentz force, which is responsible for much of the particle motion in the magnetosphere. Furthermore, Birkeland currents and heat flow are also channeled by such lines — easy along them, blocked in perpendicular directions. Indeed, field lines in the magnetosphere have been likened to the grain in a log of wood, which defines an "easy" direction along which it easily gives way.
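As a rough, self-contained sketch (not from the article), the following integrates the Lorentz force q v × B for a proton in a uniform field along z and checks that the field does no work, only bending the trajectory into a gyration; all numeric values are arbitrary.

```python
import math

q, m, B0 = 1.602e-19, 1.673e-27, 5e-8   # proton charge (C), mass (kg), assumed field (T)
dt, steps = 1e-4, 1000
vx, vy = 1e5, 0.0                       # initial velocity components (m/s)

for _ in range(steps):
    # a = (q/m) * (v x B) with B = (0, 0, B0): a = (q/m) * (vy*B0, -vx*B0, 0)
    ax = (q / m) * vy * B0
    ay = -(q / m) * vx * B0
    vx, vy = vx + ax * dt, vy + ay * dt

print(math.hypot(vx, vy))   # stays close to 1e5 m/s: the magnetic force only turns the velocity
```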
Motion of charged particles
The simplest magnetic field B is a constant one– straight pa
|
https://en.wikipedia.org/wiki/Magnetosphere%20chronology
|
The following is a chronology of discoveries concerning the magnetosphere.
1600 - William Gilbert in London suggests the Earth is a giant magnet.
1741 - Hiorter and Anders Celsius note that the polar aurora is accompanied by a disturbance of the magnetic needle.
1820 - Hans Christian Ørsted discovers electric currents create magnetic effects. André-Marie Ampère deduces that magnetism is basically the force between electric currents.
1833 - Carl Friedrich Gauss and Wilhelm Weber worked out the mathematical theory for separating the inner and outer sources of Earth's magnetic field.
1843 - Samuel Schwabe, a German amateur astronomer, shows the existence of an 11-year sunspot cycle.
1859 - Richard Carrington in England observes a solar flare; 17 hours later a large magnetic storm begins.
1892 - George Ellery Hale introduces the spectroheliograph, observing the Sun in hydrogen light from the chromosphere, a sensitive way of detecting flares. He confirms the connection between flares and magnetic storms.
1900-3 - Kristian Birkeland experiments with beams of electrons aimed at a magnetized sphere ("terrella") in a vacuum chamber. The electrons hit near the magnetic poles, leading him to propose that the polar aurora is created by electron beams from the Sun. Birkeland also observes magnetic disturbances associated with the aurora, suggesting to him that localized "polar magnetic storms" exist in the auroral zone.
1902 - Marconi successfully sends radio signals across the Atlantic Ocean. Oliver Heaviside suggests that the radio waves found their way around the curving Earth because they were reflected from an electrically conducting layer at the top of the atmosphere.
1926 - Gregory Breit and Merle Tuve measure the distance to the conducting layer—which R. Watson-Watt proposes naming "ionosphere"—by measuring the time needed for a radio signal to bounce back.
1930-1 - After Birkeland's "electron beam" theory is disproved, Sydney Chapman and Vincent Ferrar
|
https://en.wikipedia.org/wiki/God%20helmet
|
The God helmet is an experimental apparatus originally called the Koren helmet after its inventor Stanley Koren. It was developed by Koren and neuroscientist Michael Persinger to study creativity, religious experience and the effects of subtle stimulation of the temporal lobes. Reports by participants of a "sensed presence" while wearing the God helmet brought public attention and resulted in several TV documentaries. The device has been used in Persinger's research in the field of neurotheology, the study of the purported neural correlates of religion and spirituality. The apparatus, placed on the head of an experimental subject, generates very weak magnetic fields that Persinger refers to as "complex". Like other neural stimulation with low-intensity magnetic fields, these fields are approximately as strong as those generated by a land line telephone handset or an ordinary hair dryer, but far weaker than those of an ordinary refrigerator magnet and approximately a million times weaker than transcranial magnetic stimulation.
Persinger reports that many subjects have reported "mystical experiences and altered states" while wearing the God Helmet. The foundations of his theory have been criticized in the scientific press. Anecdotal reports by journalists, academics and documentarists have been mixed and several effects reported by Persinger have not yet been independently replicated. One attempt at replication published in the scientific literature reported a failure to reproduce Persinger's effects and the authors speculated that the suggestibility of participants, improper blinding of participants or idiosyncratic methodology could explain Persinger's results. Persinger argues that the replication was technically flawed, but the researchers have stood by their replication. However, one group has published a direct replication of one God Helmet experiment. Other groups have reported no effects at all or have generated similar experiences by using sham helmets, o
|
https://en.wikipedia.org/wiki/Energy%20principles%20in%20structural%20mechanics
|
Energy principles in structural mechanics express the relationships between stresses, strains or deformations, displacements, material properties, and external effects in the form of energy or work done by internal and external forces. Since energy is a scalar quantity, these relationships provide convenient and alternative means for formulating the governing equations of deformable bodies in solid mechanics. They can also be used for obtaining approximate solutions of fairly complex systems, bypassing the difficult task of solving the set of governing partial differential equations.
General principles
Virtual work principle
Principle of virtual displacements
Principle of virtual forces
Unit dummy force method
Modified variational principles
Elastic systems
Minimum total potential energy principle
Principle of stationary total complementary potential energy
Castigliano's first theorem (for forces)
Linear elastic systems
Castigliano's second theorem (for displacements)
Betti's reciprocal theorem
Müller-Breslau's principle
Applications
Governing equations by variational principles
Approximate solution methods
Finite element method in structural mechanics
Bibliography
Charlton, T.M.; Energy Principles in Theory of Structures, Oxford University Press, 1973.
Dym, C. L. and I. H. Shames; Solid Mechanics: A Variational Approach, McGraw-Hill, 1973.
Hu, H. Variational Principles of Theory of Elasticity With Applications; Taylor & Francis, 1984.
Langhaar, H. L.; Energy Methods in Applied Mechanics, Krieger, 1989.
Moiseiwitsch, B. L.; Variational Principles, John Wiley and Sons, 1966.
Mura, T.; Variational Methods in Mechanics, Oxford University Press, 1992.
Reddy, J.N.; Energy Principles and Variational Methods in Applied Mechanics, John Wiley, 2002.
Shames, I. H. and Dym, C. L.; Energy and Finite Element Methods in Structural Mechanics, Taylor & Francis, 1995,
Tauchert, T.R.; Energy Principles in Structural Mechanics, McGraw-Hill, 1974.
Washizu, K
|
https://en.wikipedia.org/wiki/Deus%20Ex%20Machina%20%28video%20game%29
|
Deus Ex Machina is a video game designed and created by Mel Croucher and published by Automata UK for the ZX Spectrum in October 1984 and later converted to MSX and Commodore 64.
The game was the first to be accompanied by a fully synchronised soundtrack which featured narration, celebrity artists and music. The cast included Ian Dury, Jon Pertwee, Donna Bailey, Frankie Howerd, E.P. Thompson, and Croucher (who also composed the music). Andrew Stagg coded the original Spectrum version, and Colin Jones (later known as author/publisher Colin Bradshaw-Jones) was the programmer of the Commodore 64 version.
The game charts the life of a "defect" which has formed in "the machine", from conception, through growth, evolution and eventually death. The progression is loosely based on "The Seven Ages of Man" from the Shakespeare play, As You Like It and includes many quotations and parodies of this. The original game would later be rereleased alongside a sequel/remake in 2015.
Gameplay
Players of the game take control of a defective machine which has taken the form of a human body, experiencing the different stages of life all the way from a single cell to a senile old being. The game is considered a work of audiovisual entertainment, although the game program itself has no sound: the soundtrack is supplied on a separate audio cassette that is played alongside the game. The audio cassette is 46 minutes long, which is also the length of the game itself. Although the game can be played without the audio cassette, the soundtrack makes it easier to understand. The soundtrack includes songs, musical compositions, and the voices of famous actors. As the game comes with a full transcript of the speech, it can at times be played without audio.
Reception
Despite critical acclaim at the time, the game did not conform to conventions of packaging and pricing required by distributors and retailers and the game was sold mail-
|
https://en.wikipedia.org/wiki/Extreme%20programming%20practices
|
Extreme programming (XP) is an agile software development methodology used to implement software projects. This article details the practices used in this methodology. Extreme programming has 12 practices, grouped into four areas, derived from the best practices of software engineering.
Fine scale feedback
Pair programming
Pair programming means that all code is produced by two people programming on one task at one workstation. One programmer has control over the workstation and thinks mostly about the coding in detail. The other programmer is more focused on the big picture and continually reviews the code that is being produced by the first programmer. Programmers trade roles after periods ranging from minutes to an hour.
The pairs are not fixed; programmers switch partners frequently, so that everyone knows what everyone is doing, and everybody remains familiar with the whole system, even the parts outside their skill set. This way, pair programming also can enhance team-wide communication. (This also goes hand-in-hand with the concept of Collective Ownership).
Planning game
The main planning process within extreme programming is called the Planning Game. The game is a meeting that occurs once per iteration, typically once a week. The planning process is divided into two parts:
Release Planning: This is focused on determining what requirements are included in which near-term releases, and when they should be delivered. The customers and developers are both part of this. Release Planning consists of three phases:
Exploration Phase: In this phase the customer will provide a shortlist of high-value requirements for the system. These will be written down on user story cards.
Commitment Phase: Within the commitment phase business and developers will commit themselves to the functionality that will be included and the date of the next release.
Steering Phase: In the steering phase the plan can be adjusted, new requirements can be added and/or existing req
|
https://en.wikipedia.org/wiki/Fermi%E2%80%93Walker%20transport
|
Fermi–Walker transport is a process in general relativity used to define a coordinate system or reference frame such that all curvature in the frame is due to the presence of mass/energy density and not to arbitrary spin or rotation of the frame.
Fermi–Walker differentiation
In the theory of Lorentzian manifolds, Fermi–Walker differentiation is a generalization of covariant differentiation. In general relativity, Fermi–Walker derivatives of the spacelike vector fields in a frame field, taken with respect to the timelike unit vector field in the frame field, are used to define non-inertial and non-rotating frames, by stipulating that the Fermi–Walker derivatives should vanish. In the special case of inertial frames, the Fermi–Walker derivatives reduce to covariant derivatives.
With a sign convention, this is defined for a vector field X along a curve :
where is four-velocity, is the covariant derivative, and is the scalar product. If
then the vector field is Fermi–Walker transported along the curve. Vectors perpendicular to the space of four-velocities in Minkowski spacetime, e.g., polarization vectors, under Fermi–Walker transport experience Thomas precession.
Using the Fermi derivative, the Bargmann–Michel–Telegdi equation for spin precession of electron in an external electromagnetic field can be written as follows:
where and are polarization four-vector and magnetic moment, is four-velocity of electron, , , and is the electromagnetic field strength tensor. The right side describes Larmor precession.
Co-moving coordinate systems
A coordinate system co-moving with a particle can be defined. If we take the unit vector as defining an axis in the co-moving coordinate system, then any system transforming with proper time is said to be undergoing Fermi–Walker transport.
Generalised Fermi–Walker differentiation
Fermi–Walker differentiation can be extended for any , this is defined for a vector field along a curve :
where .
If , then
and
|
https://en.wikipedia.org/wiki/Secretion%20assay
|
Secretion assay is a process used in cell biology to identify cells that are secreting a particular protein (usually a cytokine). It was first developed by Manz et al. in 1995.
Usually, a cell that is secreting the protein of interest is isolated using an antibody-antibody complex that coats the cell and is able to "catch" the secreted molecules. The cell is then detected by another fluorochrome-labelled antibody, and is subsequently extracted using a process called fluorescence-activated cell sorting (FACS). The FACS method is broadly similar to the enzyme-linked immunosorbent assay (ELISA) antibody format, except that the encapsulated cells remain intact. This is advantageous as the cells are still living after the extraction has taken place.
Further advances now mean that it is possible to extract the secreting cells using a magnetic-based separation system or using a flow cytometer.
A number of commercial applications exist for secretion assay. One such example is the Gel Microdrop (GMD) technology, developed by One Cell Systems. One Cell asserts that GMD typically recovers a higher number of viable secreting cells than other methods, whilst ignoring any cells which are not secreting the desired protein.
|
https://en.wikipedia.org/wiki/Tsallis%20statistics
|
The term Tsallis statistics usually refers to the collection of mathematical functions and associated probability distributions that were originated by Constantino Tsallis. Using that collection, it is possible to derive Tsallis distributions from the optimization of the Tsallis entropic form. A continuous real parameter q can be used to adjust the distributions, so that distributions which have properties intermediate to that of Gaussian and Lévy distributions can be created. The parameter q represents the degree of non-extensivity of the distribution. Tsallis statistics are useful for characterising complex, anomalous diffusion.
Tsallis functions
The q-deformed exponential and logarithmic functions were first introduced in Tsallis statistics in 1994. However, the q-deformation is the Box–Cox transformation with parameter λ = 1 − q, proposed by George Box and David Cox in 1964.
q-exponential
The q-exponential is a deformation of the exponential function using the real parameter q:
e_q(x) = [1 + (1 − q)x]^(1/(1 − q)) when 1 + (1 − q)x > 0, with e_q(x) → exp(x) as q → 1.
Note that the q-exponential in Tsallis statistics is different from a version used elsewhere.
q-logarithm
The q-logarithm is the inverse of the q-exponential and a deformation of the logarithm using the real parameter q:
ln_q(x) = (x^(1 − q) − 1) / (1 − q) for x > 0, with ln_q(x) → ln(x) as q → 1.
Inverses
These functions have the property that
ln_q(e_q(x)) = x and e_q(ln_q(x)) = x
(for arguments in the appropriate domains).
Analysis
The limits of the above expressions can be understood by considering e_q(x) → exp(x) as q → 1 for the exponential function and ln_q(x) → ln(x) as q → 1 for the logarithm.
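A small sketch (my own, not from the article) implementing the two deformed functions above and checking the inverse property and the q → 1 limit numerically; the function names are arbitrary.

```python
import math

def q_exp(x, q):
    if q == 1.0:
        return math.exp(x)
    base = 1.0 + (1.0 - q) * x
    return base ** (1.0 / (1.0 - q)) if base > 0 else 0.0

def q_log(x, q):
    if q == 1.0:
        return math.log(x)
    return (x ** (1.0 - q) - 1.0) / (1.0 - q)

print(q_log(q_exp(2.0, 0.5), 0.5))          # inverse property: ≈ 2.0
print(q_exp(2.0, 1.000001), math.exp(2.0))  # q -> 1 limit: nearly equal
```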
See also
Tsallis entropy
Tsallis distribution
q-Gaussian
q-exponential distribution
q-Weibull distribution
|
https://en.wikipedia.org/wiki/Shot%20transition%20detection
|
Shot transition detection (or simply shot detection) also called cut detection is a field of research of video processing. Its subject is the automated detection of transitions between shots in digital video with the purpose of temporal segmentation of videos.
Use
Shot transition detection is used to split up a film into basic temporal units called shots; a shot is a series of interrelated consecutive pictures taken contiguously by a single camera and representing a continuous action in time and space.
This operation is of great use in software for post-production of videos. It is also a fundamental step of automated indexing and content-based video retrieval or summarization applications which provide an efficient access to huge video archives, e.g. an application may choose a representative picture from each scene to create a visual overview of the whole film and, by processing such indexes, a search engine can process search items like "show me all films where there's a scene with a lion in it."
Cut detection can do nothing that a human editor could not do manually; however, it is advantageous because it saves time. Furthermore, due to the increase in the use of digital video and, consequently, in the importance of the aforementioned indexing applications, automatic cut detection is very important nowadays.
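One common family of methods scores the visual difference between consecutive frames and declares a hard cut when the score exceeds a threshold. The sketch below is a minimal, dependency-free illustration of that idea (mine, not from the article); frames are assumed to be equally sized 2D lists of grayscale values, and the threshold is arbitrary.

```python
def detect_cuts(frames, threshold=30.0):
    """Return the frame indices at which a new shot is assumed to start."""
    cuts = []
    for i in range(1, len(frames)):
        prev, curr = frames[i - 1], frames[i]
        diff = sum(abs(a - b)
                   for row_p, row_c in zip(prev, curr)
                   for a, b in zip(row_p, row_c))
        n_pixels = len(curr) * len(curr[0])
        if diff / n_pixels > threshold:   # mean absolute pixel difference
            cuts.append(i)
    return cuts

# Two dark frames followed by two bright frames -> one cut at frame index 2.
dark, bright = [[10] * 4] * 3, [[200] * 4] * 3
print(detect_cuts([dark, dark, bright, bright]))   # [2]
```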
Basic technical terms
In simple terms, cut detection is about finding the positions in a video at which one scene is replaced by another one with different visual content. Technically speaking, the following terms are used:
A digital video consists of frames that are presented to the viewer's eye in rapid succession to create the impression of movement. "Digital" in this context means both that a single frame consists of pixels and the data is present as binary data, such that it can be processed with a computer. Each frame within a digital video can be uniquely identified by its frame index, a serial number.
A shot is a sequence of frames shot uninterrupt
|
https://en.wikipedia.org/wiki/Dysferlin
|
Dysferlin, also known as dystrophy-associated fer-1-like protein, is a protein that in humans is encoded by the DYSF gene. Dysferlin is linked with plasma membrane repair, stabilization of calcium signaling, and the development of the T-tubule system of the muscle. A defect in the DYSF gene, located on chromosome 2p12-14, results in several types of muscular dystrophy, including Miyoshi myopathy (MM), limb-girdle muscular dystrophy type 2B (LGMD2B) and distal myopathy (DM). A reduction or absence of dysferlin, termed dysferlinopathy, usually becomes apparent in the third or fourth decade of life and is characterised by weakness and wasting of various voluntary skeletal muscles. Pathogenic mutations leading to dysferlinopathy can occur throughout the DYSF gene.
Structure
The human dysferlin protein is a 237 kilodalton type-II transmembrane protein. It contains a large intracellular cytoplasmic N-terminal domain, an extreme C-terminal transmembrane domain, and a short C-terminal extracellular domain. The cytosolic domain of dysferlin is composed of seven highly conserved C2 domains (C2A-G) which are conserved across several proteins within the ferlin family, including the dysferlin homolog myoferlin. In fact, the C2 domain at any given position is more similar to the C2 domain at the corresponding position within other ferlin family members than to the adjacent C2 domain within the same protein. This suggests that each individual C2 domain may in fact play a specific role in dysferlin function, and each has in fact been shown to be required for two of dysferlin’s roles: stabilization of calcium signaling and membrane repair. Mutations in each of these domains can cause dysferlinopathy. A crystal structure of the C2A domain of human dysferlin has been solved, and reveals that the C2A domain changes conformation when interacting with calcium ions, which is consistent with a growing body of evidence suggesting that the C2A domain plays a role in calcium-dependent lipid binding. I
|
https://en.wikipedia.org/wiki/Dynamic%20linker
|
In computing, a dynamic linker is the part of an operating system that loads and links the shared libraries needed by an executable when it is executed (at "run time"), by copying the content of libraries from persistent storage to RAM, filling jump tables and relocating pointers. The specific operating system and executable format determine how the dynamic linker functions and how it is implemented.
Linking is often referred to as a process that is performed when the executable is compiled, while a dynamic linker is a special part of an operating system that loads external shared libraries into a running process and then binds those shared libraries dynamically to the running process. This approach is also called dynamic linking or late linking.
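To make the run-time side concrete, here is a small illustration (not from the article) that asks the operating system's dynamic loader to map a shared library into a running process and bind one of its symbols on the fly, via Python's ctypes; the library name shown assumes a typical Linux system, and the exact path is platform-dependent.

```python
import ctypes
import ctypes.util

libm_path = ctypes.util.find_library("m")   # e.g. "libm.so.6" on Linux; may differ elsewhere
libm = ctypes.CDLL(libm_path)               # the dynamic linker loads and relocates the library

libm.cos.restype = ctypes.c_double          # describe the symbol's signature for the call
libm.cos.argtypes = [ctypes.c_double]
print(libm.cos(0.0))                        # 1.0, resolved through the shared library at run time
```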
Implementations
Microsoft Windows
Dynamic-link library, or DLL, is Microsoft's implementation of the shared library concept in the Microsoft Windows and OS/2 operating systems. These libraries usually have the file extension DLL, OCX (for libraries containing ActiveX controls), or DRV (for legacy system drivers). The file formats for DLLs are the same as for Windows EXE files: Portable Executable (PE) for 32-bit and 64-bit Windows, and New Executable (NE) for 16-bit Windows. As with EXEs, DLLs can contain code, data, and resources, in any combination.
Data files with the same file format as a DLL, but with different file extensions and possibly containing only resource sections, can be called resource DLLs. Examples of such DLLs include multi-language user interface libraries with extension MUI, icon libraries, sometimes having the extension ICL, and font files, having the extensions FON and FOT.
Unix-like systems using ELF, and Darwin-based systems
In most Unix-like systems, most of the machine code that makes up the dynamic linker is actually an external executable that the operating system kernel loads and executes first in a process address space newly constructed as a result of calling exec or posix_spa
|
https://en.wikipedia.org/wiki/Volume%20fraction
|
In chemistry and fluid mechanics, the volume fraction φi is defined as the volume of a constituent Vi divided by the volume of all constituents of the mixture V prior to mixing:
φi = Vi / V
Being dimensionless, its unit is 1; it is expressed as a number, e.g., 0.18. It is the same concept as volume percent (vol%) except that the latter is expressed with a denominator of 100, e.g., 18%.
The volume fraction coincides with the volume concentration in ideal solutions where the volumes of the constituents are additive (the volume of the solution is equal to the sum of the volumes of its ingredients).
The sum of all volume fractions of a mixture is equal to 1:
Σi φi = 1
The volume fraction (percentage by volume, vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others.
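A toy sketch (mine, not from the article) computing volume fractions for an assumed ideal mixture whose constituent volumes are additive, and checking that the fractions sum to 1; the numbers are arbitrary.

```python
# Constituent volumes measured before mixing (arbitrary units), ideal mixture assumed.
volumes = {"ethanol": 40.0, "water": 60.0}

total_volume = sum(volumes.values())
fractions = {name: v / total_volume for name, v in volumes.items()}

print(fractions)                  # {'ethanol': 0.4, 'water': 0.6}
print(sum(fractions.values()))    # 1.0
```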
Volume concentration and volume percent
Volume percent is the concentration of a certain solute, measured by volume, in a solution. It has as a denominator the volume of the mixture itself, as usual for expressions of concentration, rather than the total of all the individual components’ volumes prior to mixing:
volume percent = (volume of solute / volume of solution) × 100%
Volume percent is usually used when the solution is made by mixing two fluids, such as liquids or gases. However, percentages are only additive for ideal gases.
The percentage by volume (vol%) is one way of expressing the composition of a mixture with a dimensionless quantity; mass fraction (percentage by weight, wt%) and mole fraction (percentage by moles, mol%) are others.
In the case of a mixture of ethanol and water, which are miscible in all proportions, the designation of solvent and solute is arbitrary. The volume of such a mixture is slightly less than the sum of the volumes of the components. Thus, by the above definition, the term "40% alcohol by volume" refers to a mixture of 40 volume units of ethanol with enough water to make a final volume of 100 units, rather than a
|
https://en.wikipedia.org/wiki/Edgeworth%20price%20cycle
|
An Edgeworth price cycle is a cyclical pattern in prices characterized by an initial jump, which is then followed by a slower decline back towards the initial level. The term was introduced by Maskin and Tirole (1988) in a theoretical setting featuring two firms bidding sequentially and where the winner captures the full market.
Phases of a price cycle
A price cycle has the following phases:
War of attrition: When the price is at marginal cost, the firms are engaged in a war of attrition where each firm hopes that the competitor will raise her price first ("relent").
Jump: When one firm relents, the other firm will then in the next period undercut, which is when the market price jumps. This first period is the most valuable to be the low-price firm, which is what causes firms to want to stay in the war of attrition to force the competitor to jump first.
Undercutting: There then follows a sequence in which the firms take turns undercutting each other until the market arrives back at the war of attrition at the low price.
Discussion
It can be debated whether Edgeworth Cycles should be thought of as tacit collusion because it is a Markov Perfect equilibrium, but Maskin and Tirole write: "Thus our model can be viewed as a theory of tacit collusion." (p. 592).
Edgeworth cycles have been reported in gasoline markets in many countries. Because the cycles tend to occur frequently, weekly average prices found in government reports will generally mask the cycling. Wang (2012) emphasizes the role of price commitment in facilitating price cycles: without price commitment, the dynamic game becomes one of simultaneous move and here, the cycles are no longer a Markov Perfect equilibrium but rely on, e.g., supergame arguments.
Edgeworth cycles are distinguished from both sticky pricing and cost-based pricing. Sticky prices are typically found in markets with less aggressive price competition, so there are fewer or no cycles. Purely cost-based pricing occurs when retailers m
|
https://en.wikipedia.org/wiki/Shock%20diamond
|
Shock diamonds (also known as Mach diamonds or thrust diamonds) are a formation of standing wave patterns that appear in the supersonic exhaust plume of an aerospace propulsion system, such as a supersonic jet engine, rocket, ramjet, or scramjet, when it is operated in an atmosphere. The "diamonds" are actually a complex flow field made visible by abrupt changes in local density and pressure as the exhaust passes through a series of standing shock waves and expansion fans. Mach diamonds are named after Ernst Mach, the physicist who first described them.
Mechanism
Shock diamonds form when the supersonic exhaust from a propelling nozzle is slightly over-expanded, meaning that the static pressure of the gases exiting the nozzle is less than the ambient air pressure. The higher ambient pressure compresses the flow, and since the resulting pressure increase in the exhaust gas stream is adiabatic, a reduction in velocity causes its static temperature to be substantially increased. The exhaust is generally over-expanded at low altitudes, where air pressure is higher.
As the flow exits the nozzle, ambient air pressure will compress the flow. The external compression is caused by oblique shock waves inclined at an angle to the flow. The compressed flow is alternately expanded by Prandtl-Meyer expansion fans, and each "diamond" is formed by the pairing of an oblique shock with an expansion fan. When the compressed flow becomes parallel to the center line, a shock wave perpendicular to the flow forms, called a normal shock wave or Mach disk. This locates the first shock diamond, and the space between it and the nozzle is called the "zone of silence". The distance from the nozzle to the first shock diamond can be approximated by
x = 0.67 D0 √(P0 / P1),
where x is the distance, D0 is the nozzle diameter, P0 is flow pressure, and P1 is atmospheric pressure.
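A quick numerical check of that approximation (my own sketch; the nozzle diameter and pressures below are made-up values):

```python
import math

def first_diamond_distance(d0, p0, p1):
    """Approximate distance from the nozzle to the first Mach disk."""
    return 0.67 * d0 * math.sqrt(p0 / p1)

# Assumed 0.5 m nozzle, 300 kPa flow pressure, 101 kPa ambient pressure.
print(first_diamond_distance(0.5, 300e3, 101e3))   # ≈ 0.58 m
```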
As the exhaust passes through the normal shock wave, its temperature increases, igniting excess fuel and causing the glow that makes the shock
|
https://en.wikipedia.org/wiki/SIMNET
|
SIMNET was a wide area network with vehicle simulators and displays for real-time distributed combat simulation: tanks, helicopters and airplanes in a virtual battlefield. SIMNET was developed for and used by the United States military. SIMNET development began in the mid-1980s, was fielded starting in 1987, and was used for training until successor programs came online well into the 1990s.
SIMNET was perhaps the world's first fully operational virtual reality system and was the first real-time, networked simulator. It was not unlike today's massively multiplayer online games. It supported a variety of air and ground vehicles, some human-directed and others autonomous.
Origins and purpose
Jack Thorpe of the Defense Advanced Research Projects Agency (DARPA) saw the need for networked multi-user simulation. Interactive simulation equipment was very expensive, and reproducing training facilities was likewise expensive and time consuming. In the early 1980s, DARPA decided to create a prototype research system to investigate the feasibility of creating a real-time distributed simulator for combat simulation. SIMNET, the resulting application, was to prove both the feasibility and effectiveness of such a project.
Training using actual equipment was extremely expensive and dangerous. Being able to simulate certain combat scenarios, and to have participants remotely located rather than all in one place, hugely reduced the cost of training and the risk of personal injury. Long-haul networking for SIMNET was run originally across multiple 56 kbit/s dial-up lines, using parallel processors to compress packets over the data links. This traffic contained not only the vehicle data but also compressed voice.
Developers
SIMNET was developed by three companies: Delta Graphics, Inc.; Perceptronics, Inc.; and Bolt, Beranek and Newman (BBN), Inc. There was no prime contractor on SIMNET; independent contracts were made directly with each of these three companies. BBN developed
|
https://en.wikipedia.org/wiki/T-cell%20vaccination
|
T-cell vaccination is immunization with inactivated autoreactive T cells. The concept of T-cell vaccination is, at least partially, analogous to classical vaccination against infectious disease. However, the agents to be eliminated or neutralized are not foreign microbial agents but a pathogenic autoreactive T-cell population. Research on T-cell vaccination so far has focused mostly on multiple sclerosis and to a lesser extent on rheumatoid arthritis, Crohn's disease and AIDS.
|
https://en.wikipedia.org/wiki/Nielsen%20theory
|
Nielsen theory is a branch of mathematical research with its origins in topological fixed-point theory. Its central ideas were developed by Danish mathematician Jakob Nielsen, and bear his name.
The theory developed in the study of the so-called minimal number of a map f from a compact space to itself, denoted MF[f]. This is defined as:
MF[f] = min { #Fix(g) : g ~ f },
where ~ indicates homotopy of mappings, and #Fix(g) indicates the number of fixed points of g. The minimal number was very difficult to compute in Nielsen's time, and remains so today. Nielsen's approach is to group the fixed-point set into classes, which are judged "essential" or "nonessential" according to whether or not they can be "removed" by a homotopy.
Nielsen's original formulation is equivalent to the following:
We define an equivalence relation on the set of fixed points of a self-map f on a space X. We say that x is equivalent to y if and only if there exists a path c from x to y with f(c) homotopic to c as paths. The equivalence classes with respect to this relation are called the Nielsen classes of f, and the Nielsen number N(f) is defined as the number of Nielsen classes having non-zero fixed-point index sum.
Nielsen proved that $N(f) \leq MF[f]$,
making his invariant a good tool for estimating the much more difficult MF[f]. This leads immediately to what is now known as the Nielsen fixed-point theorem: Any map f has at least N(f) fixed points.
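As an illustration of these definitions, consider self-maps of the circle; the following sketch (a standard example assumed here, not drawn from the text above) is written in LaTeX.

% Assumed illustration: for a self-map f of the circle S^1 of degree d,
% the Nielsen number and the minimal number coincide:
\[
  N(f) \;=\; |1 - d| \;=\; MF[f].
\]
% Thus a degree-2 map must have at least one fixed point, while a degree-1
% map (for instance a small rotation) can be made fixed-point free,
% matching N(f) = 0.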
Because of its definition in terms of the fixed-point index, the Nielsen number is closely related to the Lefschetz number. Indeed, shortly after Nielsen's initial work, the two invariants were combined into a single "generalized Lefschetz number" (more recently called the Reidemeister trace) by Wecken and Reidemeister.
Bibliography
External links
Survey article on Nielsen theory by Robert F. Brown at Topology Atlas
Fixed-point theorems
Fixed points (mathematics)
Topology
|
https://en.wikipedia.org/wiki/Genetic%20Information%20Research%20Institute
|
The Genetic Information Research Institute (GIRI) is a non-profit institution that was founded in 1994 by Jerzy Jurka. The mission of the institute "is to understand biological processes which alter the genetic makeup of different organisms, as a basis for potential gene therapy and genome engineering techniques." The institute specializes in applying computer tools to the analysis of DNA and protein sequence information. GIRI develops and maintains Repbase Update, a database of prototypic sequences representing repetitive DNA from different eukaryotic species, and Repbase Reports, an electronic journal established in 2001. Repetitive DNA is primarily derived from transposable elements (TEs), which include DNA transposons belonging to around 20 superfamilies and retrotransposons that can also be sub-classified into subfamilies. The majority of known superfamilies of DNA transposons were discovered or co-discovered at GIRI, including Helitron, Academ, Dada, Ginger, Kolobok, Novosib, Sola, Transib, Zator, PIF/Harbinger and Polinton/Maverick. An ancient element from the Transib superfamily was identified as the evolutionary precursor of the Recombination activating gene. GIRI has hosted three international conferences devoted to the genomic impact of eukaryotic transposable elements.
|
https://en.wikipedia.org/wiki/Herzog%E2%80%93Sch%C3%B6nheim%20conjecture
|
In mathematics, the Herzog–Schönheim conjecture is a combinatorial problem in the area of group theory, posed by Marcel Herzog and Jochanan Schönheim in 1974.
Let $G$ be a group, and let $A = \{a_1 G_1, \ldots, a_k G_k\}$ be a finite system of left cosets of subgroups $G_1, \ldots, G_k$ of $G$. Herzog and Schönheim conjectured that if $A$ forms a partition of $G$ with $k > 1$, then the (finite) indices $[G:G_1], \ldots, [G:G_k]$ cannot be distinct. In contrast, if repeated indices are allowed, then partitioning a group into cosets is easy: if $H$ is any subgroup of $G$ with index $k = [G:H]$, then $G$ can be partitioned into $k$ left cosets of $H$.
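A minimal illustration of this easy direction (an assumed example, not part of the statement above), written in LaTeX:

% Partition of the integers into the two cosets of the index-2 subgroup 2Z:
\[
  \mathbb{Z} \;=\; 2\mathbb{Z} \,\cup\, \bigl(1 + 2\mathbb{Z}\bigr).
\]
% Both cosets come from the same subgroup, so the indices (2, 2) coincide;
% repeated indices are exactly what makes such partitions easy to build.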
Subnormal subgroups
In 2004, Zhi-Wei Sun proved an extended version of the Herzog–Schönheim conjecture in the case where $G_1, \ldots, G_k$ are subnormal in $G$. A basic lemma in Sun's proof states that if $G_1, \ldots, G_k$ are subnormal and of finite index in $G$, then
$$\bigl[G : G_1 \cap \cdots \cap G_k\bigr] \;\Big|\; \prod_{i=1}^{k} [G : G_i]$$
and hence
$$P\bigl([G : G_1 \cap \cdots \cap G_k]\bigr) \;\subseteq\; \bigcup_{i=1}^{k} P\bigl([G : G_i]\bigr),$$
where $P(n)$ denotes the set of prime divisors of $n$.
Mirsky–Newman theorem
When $G$ is the additive group of the integers, the cosets of its subgroups are the arithmetic progressions.
In this case, the Herzog–Schönheim conjecture states that every covering system, a family of arithmetic progressions that together cover all the integers, must either cover some integers more than once or include at least one pair of progressions that have the same difference as each other. This result was conjectured in 1950 by Paul Erdős and proved soon thereafter by Leon Mirsky and Donald J. Newman. However, Mirsky and Newman never published their proof. The same proof was also found independently by Harold Davenport and Richard Rado.
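A small example of what the theorem allows (assumed for illustration, not taken from the article):

% An exact cover of the integers by arithmetic progressions:
\[
  \mathbb{Z} \;=\; 2\mathbb{Z} \;\sqcup\; \bigl(1 + 4\mathbb{Z}\bigr) \;\sqcup\; \bigl(3 + 4\mathbb{Z}\bigr).
\]
% The common differences are 2, 4 and 4: two of them repeat, as the
% Mirsky–Newman theorem says must happen in any exact cover by two or more
% progressions.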
In 1970, a geometric coloring problem equivalent to the Mirsky–Newman theorem was given in the Soviet mathematical olympiad: suppose that the vertices of a regular polygon are colored in such a way that every color class itself forms the vertices of a regular polygon. Then, there exist two color classes that form congruent polygons.
|
https://en.wikipedia.org/wiki/Tire%20code
|
Automotive tires are described by an alphanumeric tire code (in North American English) or tyre code (in Commonwealth English), which is generally molded into the sidewall of the tire. This code specifies the dimensions of the tire, and some of its key limitations, such as load-bearing ability, and maximum speed. Sometimes the inner sidewall contains information not included on the outer sidewall, and vice versa.
The code has grown in complexity over the years, as is evident from the mix of SI and USC units, and ad-hoc extensions to lettering and numbering schemes. New automotive tires frequently have ratings for traction, treadwear, and temperature resistance, all collectively known as the Uniform Tire Quality Grading.
Most tire sizes are given using the ISO metric sizing system. However, some pickup trucks and SUVs use the Light Truck Numeric or Light Truck High Flotation system.
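As a rough illustration of how an ISO metric size string is structured, the following Python sketch splits a code such as "P215/65R15 95H" into its fields. The field names, the regular expression and the example string are assumptions made for illustration; this is not an official parser and it ignores several real-world variants (for example "ZR" sizes).

import re

# Simplified sketch of the ISO metric tire size notation (illustrative only).
TIRE_CODE = re.compile(
    r"^(?P<vehicle_class>P|LT|T|ST)?"   # optional vehicle-class prefix
    r"(?P<width_mm>\d{3})"              # section width in millimetres
    r"/(?P<aspect_ratio>\d{2,3})"       # sidewall height as a % of the width
    r"(?P<construction>[RDB])"          # R = radial, D = diagonal, B = belted bias
    r"(?P<rim_in>\d{2})"                # rim diameter in inches
    r"(?:\s+(?P<load_index>\d{2,3})(?P<speed_symbol>[A-Z]))?$"
)

def parse_tire_code(code: str) -> dict:
    match = TIRE_CODE.match(code.strip())
    if not match:
        raise ValueError(f"unrecognised tire code: {code!r}")
    return {k: v for k, v in match.groupdict().items() if v is not None}

print(parse_tire_code("P215/65R15 95H"))
# {'vehicle_class': 'P', 'width_mm': '215', 'aspect_ratio': '65',
#  'construction': 'R', 'rim_in': '15', 'load_index': '95', 'speed_symbol': 'H'}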
National technical standards regulations
DOT code
The DOT code is an alphanumeric character sequence molded into the sidewall of the tire and allows the identification of the tire and its age. The code is mandated by the U.S. Department of Transportation but is used worldwide. The DOT code is also useful in identifying tires subject to product recall or at end of life due to age.
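Because the DOT code is commonly used to judge a tire's age, the following Python sketch decodes the date portion: for tires made from the year 2000 onward, the final four digits give the week and two-digit year of manufacture. The example code string is invented for illustration.

# Decode the manufacture date from the last four digits of a DOT code
# (post-2000 tires use a two-digit week followed by a two-digit year).
def dot_manufacture_date(dot_code: str) -> tuple[int, int]:
    digits = dot_code.replace(" ", "")[-4:]
    week, year = int(digits[:2]), 2000 + int(digits[2:])
    return week, year

print(dot_manufacture_date("DOT 4B08 4DHR 3519"))  # (35, 2019): week 35 of 2019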
Since February 2021, UK vehicle regulations do not permit the use of tyres over 10 years old on the front steered axle(s) of heavy goods vehicles, buses and coaches. The ban also applies to all tyres in single configuration on minibuses. In addition, it is a requirement for all tyres on these vehicles to display a legible date code.
ETRTO and TRA
The European Tyre and Rim Technical Organisation (ETRTO) and the Tire and Rim Association (TRA) are two organizations that influence national tire standards. The objectives of the ETRTO include aligning national tire and rim standards in Europe. The Tire and Rim Association, formerly known as The Tire and Rim Association of America, Inc., is an American trade
|
https://en.wikipedia.org/wiki/XML%20for%20Analysis
|
XML for Analysis (XMLA) is an industry standard for data access in analytical systems, such as online analytical processing (OLAP) and data mining. XMLA is based on other industry standards such as XML, SOAP and HTTP. XMLA is maintained by XMLA Council with Microsoft, Hyperion and SAS Institute being the XMLA Council founder members.
History
The XMLA specification was first proposed by Microsoft as a successor for OLE DB for OLAP in April 2000. By January 2001 it was joined by Hyperion endorsing XMLA. The 1.0 version of the standard was released in April 2001, and in September 2001 the XMLA Council was formed. In April 2002 SAS joined Microsoft and Hyperion as founding member of XMLA Council. With time, more than 25 companies joined with their support for the standard.
API
XMLA consists of only two SOAP methods: Execute and Discover. It was designed this way to preserve simplicity.
Execute
The Execute method has two parameters:
Command - command to be executed. It can be MDX, DMX or SQL.
Properties - XML list of command properties such as Timeout, Catalog name, etc.
The result of an Execute call can be either a multidimensional dataset or a tabular rowset.
Discover
The Discover method was designed to model all the discovery methods possible in OLE DB, including the various schema rowsets, properties, keywords, etc. It allows users to specify both what needs to be discovered and the applicable restrictions or properties.
The result of a Discover call is a rowset.
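To complement the Execute request shown under Example below, here is a minimal sketch of a Discover call issued over HTTP/SOAP from Python. The endpoint URL is a hypothetical placeholder, and real deployments differ in authentication and transport details.

import requests

XMLA_ENDPOINT = "http://localhost/olap/msmdpump.dll"  # hypothetical endpoint

DISCOVER_REQUEST = """<soap:Envelope xmlns:soap="http://schemas.xmlsoap.org/soap/envelope/">
  <soap:Body>
    <Discover xmlns="urn:schemas-microsoft-com:xml-analysis">
      <RequestType>DISCOVER_DATASOURCES</RequestType>
      <Restrictions>
        <RestrictionList/>
      </Restrictions>
      <Properties>
        <PropertyList/>
      </Properties>
    </Discover>
  </soap:Body>
</soap:Envelope>"""

response = requests.post(
    XMLA_ENDPOINT,
    data=DISCOVER_REQUEST.encode("utf-8"),
    headers={
        "Content-Type": "text/xml",
        "SOAPAction": "urn:schemas-microsoft-com:xml-analysis:Discover",
    },
)
print(response.text)  # a SOAP response whose body contains the discovery rowset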
Query language
XMLA specifies MDXML as the query language. In the XMLA 1.1 version, the only construct in MDXML is an MDX statement enclosed in the <Statement> tag.
Example
Below is an example of an XMLA Execute request with an MDX query as the command.
<soap:Envelope>
<soap:Body>
<Execute xmlns="urn:schemas-microsoft-com:xml-analysis">
<Command>
<Statement>SELECT Measures.MEMBERS ON COLUMNS FROM Sales</Statement>
</Command>
<Properties>
<PropertyList>
<DataSourceInfo/>
<Catalog>FoodM
|
https://en.wikipedia.org/wiki/FASMI
|
Fast Analysis of Shared Multidimensional Information (FASMI) is an alternative term for OLAP. The term was coined by Nigel Pendse of The OLAP Report (now known as The BI Verdict), because he felt that the 12 rules that Ted Codd used to define OLAP were too controversial and biased (the rules were sponsored by Arbor Software, the company which developed Essbase). Also, Pendse considered that the list of 12 rules was too long, and the OLAP concept could be defined in only five rules.
|
https://en.wikipedia.org/wiki/Relativistic%20particle
|
In particle physics, a relativistic particle is an elementary particle with kinetic energy greater than or equal to its rest-mass energy given by Einstein's relation, $E = m_0 c^2$, or, specifically, one whose velocity is comparable to the speed of light $c$.
This condition is met by photons, and special relativity is needed to describe the behaviour of such particles. Several approaches exist for describing the motion of single and multiple relativistic particles, a prominent example being the Dirac equation for single-particle motion.
Since the energy–momentum relation of a particle can be written as
$$E^2 = (pc)^2 + \bigl(m_0 c^2\bigr)^2,$$
where $E$ is the energy, $p$ is the momentum, and $m_0$ is the rest mass, the relation collapses into a linear dispersion,
$$E \approx pc,$$
when the rest mass tends to zero, e.g. for a photon, or when the momentum is large, e.g. for a high-speed proton. This is different from the parabolic energy–momentum relation $E = p^2/2m$ of classical particles. Thus, in practice, the linearity or non-parabolicity of the energy–momentum relation is considered a key feature of relativistic particles. These two types of relativistic particles are referred to as massless and massive, respectively.
In experiments, massive particles are relativistic when their kinetic energy is comparable to or greater than the energy corresponding to their rest mass. In other words, a massive particle is relativistic when its total mass-energy is at least twice its rest energy. This condition implies that the speed of the particle is close to the speed of light. According to the Lorentz factor formula, this requires the particle to move at roughly 87% of the speed of light. Such relativistic particles are generated in particle accelerators and also occur naturally in cosmic radiation. In astrophysics, jets of relativistic plasma are produced by the centers of active galaxies and quasars.
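The speed threshold quoted above follows from the Lorentz factor; the short derivation below is a sketch of the standard calculation.

% Kinetic energy at least equal to the rest energy forces gamma >= 2:
\[
  (\gamma - 1)\,m_0 c^2 \;\ge\; m_0 c^2
  \;\Longrightarrow\; \gamma = \frac{1}{\sqrt{1 - v^2/c^2}} \;\ge\; 2
  \;\Longrightarrow\; v \;\ge\; \frac{\sqrt{3}}{2}\,c \;\approx\; 0.87\,c .
\]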
A charged relativistic particle crossing the interface
|
https://en.wikipedia.org/wiki/Percy%20John%20Heawood
|
Percy John Heawood (8 September 1861 – 24 January 1955) was a British mathematician, who concentrated on graph colouring.
Life
He was the son of the Rev. John Richard Heawood of Newport, Shropshire, and his wife Emily Heath, daughter of the Rev. Joseph Heath of Wigmore, Herefordshire; and a first cousin of Oliver Lodge, whose mother Grace was also a daughter of Joseph Heath. He was educated at Queen Elizabeth's School, Ipswich, and matriculated at Exeter College, Oxford in 1880, graduating B.A. in 1883 and M.A. in 1887.
Heawood spent his academic career at Durham University, where he was appointed Lecturer in 1885. He was, successively, Censor of St Cuthbert's Society between 1897 and 1901, succeeding Frank Byron Jevons in the role; Senior Proctor of the university from 1901; Professor in 1910; and Vice-Chancellor between 1926 and 1928. He was awarded an OBE, as Honorary Secretary of the Preservation Fund, for his part in raising £120,000 to prevent Durham Castle from collapsing into the River Wear.
Heawood was fond of country pursuits, and one of his interests was Hebrew. His nickname was "Pussy".
Durham University awards an annual Heawood Prize to a student graduating in Mathematics whose performance is outstanding in the final year.
Works
Heawood devoted himself to the four colour theorem and related questions. In 1890 he exposed a flaw in Alfred Kempe's proof, which had been considered valid for 11 years. With the four colour theorem an open question again, he established the weaker five colour theorem. The four colour theorem itself was finally established by a computer-based proof in 1976.
Heawood also studied colouring of maps on higher surfaces and established the upper bound on the chromatic number of such a graph in terms of the connectivity (genus, or number of handles) of the surface. This upper bound was proved only in 1968 to be the actual maximum.
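The bound in question is Heawood's well-known formula, reproduced here for reference (it is standard, although not quoted in the excerpt above):

% Maximum number of colours needed for a map on an orientable surface of
% genus g >= 1:
\[
  H(g) \;=\; \left\lfloor \frac{7 + \sqrt{1 + 48\,g}}{2} \right\rfloor .
\]
% For the torus (g = 1) this gives H(1) = 7, which is attained by the
% complete graph K_7.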
Writing in the Journal of the London Mathematical Society, G. A. Dirac wrote:
Family
Heawood m
|
https://en.wikipedia.org/wiki/Amaurote
|
Amaurote is a British video game for 8-bit computer systems that was released in 1987 by Mastertronic on their Mastertronic Added Dimension label. The music for the game was written by David Whittaker.
Plot
From the game's instructions:
The city of Amaurote has been invaded by huge, aggressive insects who have built colonies in each of the city's 25 sectors. As the only uninjured army officer left after the invasion (that'll teach you for hiding!) the job falls to you to destroy all the insect colonies.
Gameplay
The player controls an "Arachnus 4", an armoured fighting-machine that moves on four legs. The player must first select a sector to play in via a map screen and then control the Arachnus as it wanders an isometric (top-down in the Commodore 64 version) view of the cityscape attacking marauding insects and searching for the insect queen using a scanner. The Arachnus attacks by launching bouncing bombs. It can only launch one at a time so if a bomb misses its intended target the player will have to wait until it hits the scenery or bounces against the fence of the play area before firing again. Once the queen has been located, the player can radio-in a "supa-bomb" which can be used to destroy the queen. The player can also radio-in other supplies such as additional bombs and even ask to be pulled out of the combat zone. Extra weaponry costs the player "dosh", the in-game currency.
Reception
The game was favourably reviewed by Crash magazine who said it was graphically impressive, well designed and fun to play. It was given a 92% overall rating. Zzap!64 were less impressed by the Commodore 64 version which was criticised for dull gameplay and programming bugs. It was rated 39% overall.
|