https://en.wikipedia.org/wiki/Clockwork%20universe
|
In the history of science, the clockwork universe compares the universe to a mechanical clock. It continues ticking along, as a perfect machine, with its gears governed by the laws of physics, making every aspect of the machine predictable.
History
This idea was very popular among deists during the Enlightenment, when Isaac Newton derived his laws of motion, and showed that alongside the law of universal gravitation, they could predict the behaviour of both terrestrial objects and the Solar System.
A similar concept goes back to John of Sacrobosco's early 13th-century introduction to astronomy: On the Sphere of the World. In this widely popular medieval text, Sacrobosco spoke of the universe as the machina mundi, the machine of the world, suggesting that the reported eclipse of the Sun at the crucifixion of Jesus was a disturbance of the order of that machine.
Responding to Gottfried Leibniz, a prominent supporter of the theory, in the Leibniz–Clarke correspondence, Samuel Clarke wrote:
"The Notion of the World's being a great Machine, going on without the Interposition of God, as a Clock continues to go without the Assistance of a Clockmaker; is the Notion of Materialism and Fate, and tends, (under pretence of making God a Supra-mundane Intelligence,) to exclude Providence and God's Government in reality out of the World."
In 2009, artist Tim Wetherell created a large wall piece for Questacon (The National Science and Technology centre in Canberra, Australia) representing the concept of the clockwork universe. This steel artwork contains moving gears, a working clock, and a movie of the lunar terminator.
See also
Mechanical philosophy
Determinism
Eternalism (philosophy of time)
Orrery
Philosophy of space and time
Superdeterminism
|
https://en.wikipedia.org/wiki/Armstrong%20Audio
|
Armstrong Audio, originally called Armstrong Wireless and Television Ltd., was a British manufacturer of radios and other audio equipment based in London, England. It was founded by Claude Charles Jackson in 1932.
History
The company was initially created to manufacture portable radios; during World War II its factory was used to manufacture radios, public address systems, and various electronic parts. After the war, it began to produce television sets, as well as long-range radios for ships, but eventually ceased production of those lines to manufacture radios, amplifiers and tuners for home consumer use. In the 1950s, when the high-fidelity market began to take shape, the company name was changed to Armstrong Audio and its marketing and manufacturing were focused on becoming a hi-fi specialist.
During the 1960s and 1970s the company was extremely successful, creating several durable radio models which are still in use by consumers today, but by the end of the 1970s the lease on its factory ran out and it was decided not to invest in a new one. The building was torn down and the site was redeveloped by its owners. Using plans developed for a further radio model, some of the staff continued on as Armstrong Amplifiers, but due to a lack of capital and suitable manufacturing space, production did not last long.
What was once Armstrong Audio is now called Armstrong Hi-Fi and Video Services; based in Walthamstow, it provides maintenance contracts to a number of retail stores.
Armstrong 521
The Armstrong 521 was a stereo hi-fi amplifier from the Armstrong Audio company and was marketed as a 2 × 25 W amplifier.
It employed germanium AL102 transistors in its output stages; these had a reputation for failure and are now unobtainable, although it is possible, with modification, to replace them with newer silicon transistors. The amplifier was a single-rail design and employed an electrolytic output capacitor in the output stage. The amplifier featured inputs for tape, tuner and MM g
|
https://en.wikipedia.org/wiki/Sauroplites
|
Sauroplites (meaning "saurian hoplite") is a genus of herbivorous ankylosaurian dinosaur from the Early Cretaceous of China.
Discovery and naming
In 1930, during the Swedish-Chinese expeditions of Sven Hedin, the Swedish paleontologist Anders Birger Bohlin discovered an ankylosaurian fossil near Tebch in Inner Mongolia.
The type species Sauroplites scutiger was named and described by Bohlin in 1953. The generic name is derived from Greek sauros or saura, "lizard", and hoplites, "hoplite, armed foot soldier". The specific name is Neo-Latin for "shield bearer", in reference to the body armour.
At first generally accepted as valid, even though a diagnosis had originally not been provided, Sauroplites was later often considered a nomen dubium because it is based on fragmentary material. Some believed it might actually be a specimen of another ankylosaur, Shamosaurus. However, in 2014 Victoria Megan Arbour discovered a clear unique trait, an autapomorphy: the sacral or pelvic shield shows rosettes with a large central osteoderm surrounded by a single ring of smaller scutes. Other species have multiple or irregular rings. She concluded that Sauroplites was a valid taxon.
The specimens were not given an inventory number and are today lost, though some casts are present in the American Museum of Natural History as specimen AMNH 2074. They were found in a layer of the Zhidan Group, probably dating from the Barremian to Aptian stages. The carcass had been deposited on its back and the bones had been eroded away, apart from some ribs and perhaps a piece of an ischium, leaving parts of the body armour in a largely articulated position.
Description
The large central osteoderms of the rosettes are rather flat and have a diameter of ten centimetres. Towards the front, oval osteoderms with an asymmetric low keel and a length of up to forty centimetres cover the back. Thirty-centimetre osteoderms cover the sides.
Classification
Bohlin placed Sauroplites in the Ankylosauridae. H
|
https://en.wikipedia.org/wiki/Big%20Bang%20%28Singh%20book%29
|
Big Bang: The most important scientific discovery of all time and why you need to know about it is a book written by Simon Singh and published in 2004 by Fourth Estate.
Big Bang chronicles the history and development of the Big Bang model of the universe, from the ancient Greek scientists who first measured the distance to the sun to the 20th century detection of the cosmic radiation still echoing the dawn of time.
The book discusses how different theories of the universe evolved, along with a personal look at the people involved.
Before Big Bang theories
The book takes up how the inaccuracies of the theories of Copernicus and Galileo led them to be dismissed. Copernicus and Galileo used false arguments to persuade people that the Earth went in circles around the Sun, and that the Sun was the center of the universe. Both these statements were alien to the public at the time, and are still alien to a modern public. Only Johannes Kepler's finally mathematically correct interpretation made the theories accepted, within a single generation. As Singh points out, the old generation must die before a new theory can be accepted.
The Big Bang theory evolves
In parallel to the evolution of the Big Bang theory, the book tells the personal stories of the people who played a part in advancing it, both by hypothesis and by experiment. These include Albert Einstein, for his general relativity; Alexander Alexandrovich Friedman, for first discovering that this theory led to an expanding universe; Georges Lemaître, who independently of Friedman concluded that the universe is expanding, and then that the theory must lead to an initial event of creation, which is the Big Bang theory we know today; Edwin Hubble, for observing that the universe expanded, thereby confirming Friedman and Lemaître; and George Gamow, Ralph Asher Alpher, Robert Herman, Martin Ryle, Arno Allan Penzias and Robert Woodrow Wilson, among many others.
Another theme of the book is the scientific method itse
|
https://en.wikipedia.org/wiki/Next-to-Minimal%20Supersymmetric%20Standard%20Model
|
In particle physics, NMSSM is an acronym for Next-to-Minimal Supersymmetric Standard Model.
It is a supersymmetric extension to the Standard Model that adds an additional singlet chiral superfield to the MSSM and can be used to dynamically generate the μ term, solving the μ-problem. Articles about the NMSSM are available for review.
The Minimal Supersymmetric Standard Model does not explain why the μ parameter in the superpotential term μ Ĥu·Ĥd is at the electroweak scale. The idea behind the Next-to-Minimal Supersymmetric Standard Model is to promote the μ term to a gauge singlet, chiral superfield Ŝ. Note that the scalar superpartner of the singlino is denoted by S and the spin-1/2 singlino superpartner by S̃ in the following. The superpotential for the NMSSM is given by
W_NMSSM = W_Yuk + λ Ŝ Ĥu·Ĥd + (κ/3) Ŝ³,
where W_Yuk gives the Yukawa couplings for the Standard Model fermions. Since the superpotential has a mass dimension of 3, the couplings λ and κ are dimensionless; hence the μ-problem of the MSSM is solved in the NMSSM, the superpotential of the NMSSM being scale-invariant. The role of the λ term is to generate an effective μ term. This is done with the scalar component of the singlet S getting a vacuum-expectation value ⟨S⟩; that is, we have
μ_eff = λ⟨S⟩.
Without the κ term the superpotential would have a U(1)′ symmetry, the so-called Peccei–Quinn symmetry; see Peccei–Quinn theory. This additional symmetry would alter the phenomenology completely. The role of the κ term is to break this U(1)′ symmetry. The κ term is introduced trilinearly such that κ is dimensionless. However, there remains a discrete ℤ₃ symmetry, which is moreover broken spontaneously. In principle this leads to the domain wall problem. Introducing additional but suppressed terms, the ℤ₃ symmetry can be broken without changing the phenomenology at the electroweak scale.
It is assumed that the domain wall problem is circumvented in this way without any modifications except far beyond the electroweak scale.
Other models have been proposed which solve the μ-problem of the MSSM
|
https://en.wikipedia.org/wiki/Seedless%20fruit
|
A seedless fruit is a fruit developed to possess no mature seeds. Since eating seedless fruits is generally easier and more convenient, they are considered commercially valuable.
Most commercially produced seedless fruits have been developed from plants whose fruits normally contain numerous relatively large hard seeds distributed throughout the flesh of the fruit.
Varieties
Common varieties of seedless fruits include watermelons, tomatoes, and grapes (such as Termarina rossa). Additionally, there are numerous seedless citrus fruits, such as oranges, lemons and limes.
A recent development over the last twenty years has been that of seedless sweet peppers (Capsicum annuum). The seedless plant combines male sterility in the pepper plant (commonly occurring) with the ability to set seedless fruits (a natural fruit-setting without fertilization). In male sterile plants, the parthenocarpy expresses itself only sporadically on the plant with deformed fruits. It has been reported that plant hormones provided by the ovary seed (such as auxins and gibberellins) promote fruit set and growth to produce seedless fruits. Initially, without seeds in the fruit, vegetative propagation was essential. However, now – as with seedless watermelon – seedless peppers can be grown from seeds.
Biological description
Seedless fruits can develop in one of two ways: either the fruit develops without fertilization (parthenocarpy), or pollination triggers fruit development, but the ovules or embryos abort without producing mature seeds (stenospermocarpy). Seedless banana and watermelon fruits are produced on triploid plants, whose three sets of chromosomes make it very unlikely for meiosis to successfully produce spores and gametophytes. This is because one of the three copies of each chromosome cannot pair with another appropriate chromosome before separating into daughter cells, so these extra third copies end up randomly distributed between the two daughter cells from meiosis 1, resul
|
https://en.wikipedia.org/wiki/SCSI%20architectural%20model
|
The SCSI architectural model provides an abstract view of the way that SCSI devices communicate. It is intended to show how the different SCSI standards are inter-related. The main concepts and terminology of the SCSI architectural model are:
Only the externally observable behavior is defined in SCSI standards.
The relationship between SCSI devices is described by a client-server service-delivery model. The client is called a SCSI initiator and the server is called a SCSI target.
A SCSI domain consists of at least one SCSI device, at least one SCSI target and at least one SCSI initiator interconnected by a service delivery subsystem.
A SCSI device has one or more SCSI ports, and a SCSI port may have an optional SCSI port identifier (SCSI ID or PID).
A SCSI device can have an optional SCSI device name which must be unique within the SCSI domain in which the SCSI device has SCSI ports. This is often called a World Wide Name. Note that the "world" may only consist of a very small number of SCSI devices.
A SCSI target consists of one or more logical units (LUNs), which are identified by logical unit numbers.
A LUN may have dependent LUNs embedded within it. This can recur up to a maximum nesting depth of four addressable levels.
There are three types of SCSI ports: initiator ports, target ports and target/initiator ports. A SCSI device may contain any combination of initiator ports, target ports and target/initiator ports (a minimal sketch of these relationships appears at the end of this list).
SCSI distributed objects are considered to communicate in a three layer model:
The highest level of abstraction is the SCSI Application Layer (SAL) where an initiator and a target are considered to communicate using SCSI commands sent via the SCSI application protocol.
The SCSI Transport Protocol Layer (STPL) is where an initiator and a target are considered to communicate using a SCSI transport protocol. Examples of SCSI transport protocols are Fibre Channel, SSA, SAS, UAS, iSCSI and the SCSI Parallel Interface.
The lowest level i
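The object relationships described above (domain, device, port, and nested logical units) can be summarised in a short sketch. The class and field names below are illustrative assumptions for this article, not identifiers from any SCSI standard.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class LogicalUnit:
    lun: int                                                            # logical unit number
    dependent_luns: List["LogicalUnit"] = field(default_factory=list)   # nesting, up to four addressable levels

@dataclass
class ScsiPort:
    role: str                                   # "initiator", "target" or "target/initiator"
    port_id: Optional[int] = None               # optional SCSI port identifier (PID)

@dataclass
class ScsiDevice:
    ports: List[ScsiPort]
    device_name: Optional[str] = None           # optional World Wide Name, unique within the domain
    logical_units: List[LogicalUnit] = field(default_factory=list)      # non-empty when the device acts as a target

@dataclass
class ScsiDomain:
    devices: List[ScsiDevice]                   # devices interconnected by one service delivery subsystem

# Example: a target device exposing LUN 0, which embeds a dependent LUN 1.
# target = ScsiDevice(ports=[ScsiPort("target", 3)],
#                     logical_units=[LogicalUnit(0, [LogicalUnit(1)])])
# domain = ScsiDomain(devices=[target])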
|
https://en.wikipedia.org/wiki/Opportunistic%20encryption
|
Opportunistic encryption (OE) refers to any system that, when connecting to another system, attempts to encrypt communications channels, otherwise falling back to unencrypted communications. This method requires no pre-arrangement between the two systems.
Opportunistic encryption can be used to combat passive wiretapping. (An active wiretapper, on the other hand, can disrupt encryption negotiation to either force an unencrypted channel or perform a man-in-the-middle attack on the encrypted link.) It does not provide a strong level of security, as authentication may be difficult to establish and secure communications are not mandatory. However, it does make the encryption of most Internet traffic easy to implement, which removes a significant impediment to the mass adoption of Internet traffic security.
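The fallback behaviour can be illustrated at the application layer with SMTP's STARTTLS extension, itself a common form of opportunistic encryption. The sketch below is only illustrative and is separate from the IPsec-based mechanisms discussed later; the host name is hypothetical.

import smtplib
import ssl

def open_opportunistic_smtp(host: str, port: int = 25) -> smtplib.SMTP:
    """Connect to an SMTP server, upgrading to TLS when the peer offers it."""
    conn = smtplib.SMTP(host, port, timeout=10)
    conn.ehlo()
    if conn.has_extn("starttls"):                      # peer advertises STARTTLS
        conn.starttls(context=ssl.create_default_context())
        conn.ehlo()                                    # re-identify over the encrypted channel
    # else: carry on unencrypted -- no pre-arrangement is needed, but an
    # active attacker could have stripped the STARTTLS offer (see above).
    return conn

# Example (hypothetical host):
# conn = open_opportunistic_smtp("mail.example.org")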
Opportunistic encryption on the Internet is described in "Opportunistic Encryption using the Internet Key Exchange (IKE)", "Opportunistic Security: Some Protection Most of the Time", and in "Opportunistic Security for HTTP/2".
Routers
The FreeS/WAN project was one of the early proponents of OE. The effort is continued by the former freeswan developers now working on Libreswan. Libreswan aims to support different authentication hooks for Opportunistic Encryption with IPsec. Version 3.16, which was released in December 2015, had support for Opportunistic IPsec using AUTH-NULL which is based on RFC 7619. The Libreswan Project is currently working on (forward) Domain Name System Security Extensions (DNSSEC) and Kerberos support for Opportunistic IPsec.
Openswan has also been ported to the OpenWrt project. Openswan used reverse DNS records to facilitate the key exchange between the systems.
It is possible to use OpenVPN and networking protocols to set up dynamic VPN links which act similar to OE for specific domains.
Linux and Unix-like systems
The FreeS/WAN and forks such as Openswan and strongSwan offer VPNs that can also operate in OE mode using IPsec-based tech
|
https://en.wikipedia.org/wiki/Parched%20grain
|
Parched grain is grain that has been cooked by dry roasting. It is an ancient foodstuff and is thought to be one of the earliest ways in which the hunter gatherers in the Fertile Crescent ate grains. Historically, it was a common food in the Middle East, as attested by the following Bible quotes:
"On the day after the Passover, on that very day, they ate some of the produce of the land, unleavened cakes, and parched grain."
"Now Boaz said to her at mealtime, 'Come here, and eat of the bread, and dip your piece of bread in the vinegar.'" So she sat beside the reapers, and he passed parched grain to her; and she ate and was satisfied, and kept some back."
It is known in Hebrew as קָלִי (qālî). The grain has the same length of the normal grain, although somewhat thinner and darker with a green shade. It is served as a casserole hot dish, cooked with morsels of meat or poultry.
Use as a Camp Ration
A variety of parched grains have been used historically as a camp ration, both for military troops on maneuvers and civilian travelers on extended overland journeys. Because parching both cooked the grains, and removed most of the water content, it was useful as a way to have pre-cooked meals which could be stored or carried for extended periods, and weighed the same or slightly less than the uncooked grains. It also had the advantage that it could be eaten without re-heating it, either dry or by soaking in water, and so would both reduce cooking time in the field and allow troops to travel without any campfires at all if needed.
In particular, parched rice was widely used in South and East Asia for troops well into the 20th century, including by the Imperial Japanese Army during the Sino-Japanese Wars and World War II. It was a primary staple of the People's Liberation Army of China during the Long March as well, being one of the few items they were able to carry a significant supply of while on the move.
During the U.S. Civil War parched maize was used both as a grai
|
https://en.wikipedia.org/wiki/Reciprocal%20rule
|
In calculus, the reciprocal rule gives the derivative of the reciprocal of a function f in terms of the derivative of f. The reciprocal rule can be used to show that the power rule holds for negative exponents if it has already been established for positive exponents. Also, one can readily deduce the quotient rule from the reciprocal rule and the product rule.
The reciprocal rule states that if f is differentiable at a point x and f(x) ≠ 0, then g(x) = 1/f(x) is also differentiable at x and
g'(x) = -\frac{f'(x)}{f(x)^2}.
Proof
This proof relies on the premise that f is differentiable at x, and on the theorem that f is then also necessarily continuous there. Applying the definition of the derivative of g at x with g(x) = 1/f(x) gives
g'(x) = \lim_{h \to 0} \frac{1}{h}\left(\frac{1}{f(x+h)} - \frac{1}{f(x)}\right) = \lim_{h \to 0} \left(\frac{f(x) - f(x+h)}{h} \cdot \frac{1}{f(x+h)\,f(x)}\right).
The limit of this product exists and is equal to the product of the existing limits of its factors:
\left(\lim_{h \to 0} \frac{f(x) - f(x+h)}{h}\right) \cdot \left(\lim_{h \to 0} \frac{1}{f(x+h)\,f(x)}\right).
Because of the differentiability of f at x, the first limit equals -f'(x), and because of f(x) ≠ 0 and the continuity of f at x, the second limit equals 1/f(x)^2, thus yielding
g'(x) = -\frac{f'(x)}{f(x)^2}.
A weak reciprocal rule that follows algebraically from the product rule
It may be argued that, since
f(x) \cdot \frac{1}{f(x)} = 1,
an application of the product rule says that
f'(x) \cdot \frac{1}{f(x)} + f(x) \cdot \left(\frac{1}{f}\right)'(x) = 0,
and this may be algebraically rearranged to say
\left(\frac{1}{f}\right)'(x) = -\frac{f'(x)}{f(x)^2}.
However, this fails to prove that 1/f is differentiable at x; it is valid only when differentiability of 1/f at x is already established. In that way, it is a weaker result than the reciprocal rule proved above. However, in the context of differential algebra, in which there is nothing that is not differentiable and in which derivatives are not defined by limits, it is in this way that the reciprocal rule and the more general quotient rule are established.
Application to generalization of the power rule
Often the power rule, stating that \frac{d}{dx} x^n = n x^{n-1}, is proved by methods that are valid only when n is a nonnegative integer. This can be extended to negative integers n by letting n = -m, where m is a positive integer.
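As a brief worked illustration of this extension (a standard computation using the reciprocal rule and the substitution n = -m):
\frac{d}{dx} x^{-m} = \frac{d}{dx}\,\frac{1}{x^{m}} = -\frac{\frac{d}{dx} x^{m}}{\left(x^{m}\right)^{2}} = -\frac{m x^{m-1}}{x^{2m}} = -m\,x^{-m-1} = n\,x^{n-1}.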
Application to a proof of the quotient rule
The reciprocal rule is a special case of the quotient rule, which states that if f and g are differentiab
|
https://en.wikipedia.org/wiki/The%20Hero%3A%20Love%20Story%20of%20a%20Spy
|
The Hero: Love Story of a Spy is a 2003 Indian Hindi-language spy thriller film directed by Anil Sharma and produced by Time Magnetics. It stars Sunny Deol, Preity Zinta and Priyanka Chopra in her Bollywood film debut. Written by Shaktimaan, the film tells the story of an undercover Research and Analysis Wing (RAW) agent who must gather intelligence about cross-border terrorism and stop the terrorist responsible for it, and of his separation from his fiancée.
Sharma had long contemplated making a spy film but felt this was not economically viable for the Indian market because Indian films did not have sufficient budgets. He first planned a film about India's spy network set in the early 2000s but made the 2001 film Gadar: Ek Prem Katha, which became one of the highest-grossing Indian films of all time. Following the record-breaking success of that film, Sharma decided to make The Hero: Love Story of a Spy. The Shah Brothers were engaged to produce the film, which was touted to have a huge budget and scale, unlike previous Bollywood films. Aiming for high production values, a sizeable amount of money was spent on the film. Several large sets were created to give the film a feeling of grandeur, and international stunt experts were hired to coordinate action sequences new to Bollywood. Principal photography was done at Indian locations, including Kullu and Manali, and in locations in Canada and Switzerland. Uttam Singh composed the soundtrack with lyrics written by Anand Bakshi and Javed Akhtar.
The film's production cost was very high, with trades suggesting that it was the most expensive Indian film ever made at that point; this was the most talked-about aspect of the film. The Hero: Love Story of a Spy was released on 11 April 2003 to mixed to positive reviews from critics. It grossed over ₹451 million at the box office against a production and marketing budget of ₹350 million, becoming the third-highest-grossing film of the year. Chopra won the Stardust Award for Best S
|
https://en.wikipedia.org/wiki/Cochlear%20nucleus
|
The cochlear nuclear (CN) complex comprises two cranial nerve nuclei in the human brainstem, the ventral cochlear nucleus (VCN) and the dorsal cochlear nucleus (DCN).
The ventral cochlear nucleus is unlayered whereas the dorsal cochlear nucleus is layered. Auditory nerve fibers – fibers that travel through the auditory nerve (also known as the cochlear nerve or eighth cranial nerve) – carry information from the inner ear, the cochlea, on the same side of the head, to the nerve root in the ventral cochlear nucleus.
At the nerve root the fibers branch to innervate the ventral cochlear nucleus and the deep layer of the dorsal cochlear nucleus. All acoustic information thus enters the brain through the cochlear nuclei, where the processing of acoustic information begins. The outputs from the cochlear nuclei are received in higher regions of the auditory brainstem.
Structure
The cochlear nuclei (CN) are located at the dorso-lateral side of the brainstem, spanning the junction of the pons and medulla.
The ventral cochlear nucleus (VCN) lies on the ventral aspect of the brain stem, ventrolateral to the inferior peduncle.
The dorsal cochlear nucleus (DCN), also known as the tuberculum acusticum or acoustic tubercle, curves over the VCN and wraps around the cerebellar peduncle.
The VCN is further divided by the nerve root into the posteroventral cochlear nucleus (PVCN) and the anteroventral cochlear nucleus (AVCN).
Projections to the cochlear nuclei
The major input to the cochlear nucleus is from the auditory nerve, a part of cranial nerve VIII (the vestibulocochlear nerve). The auditory nerve fibers form a highly organized system of connections according to their peripheral innervation of the cochlea. Axons from the spiral ganglion cells of the lower frequencies innervate the ventrolateral portions of the ventral cochlear nucleus and lateral-ventral portions of the dorsal cochlear nucleus. The axons from the higher frequency organ of corti hair cells project to the dor
|
https://en.wikipedia.org/wiki/Khan%20Research%20Laboratories
|
The Dr. A. Q. Khan Research Laboratories, or KRL for short, is a federally funded, multi-program national research institute and national laboratory site primarily dedicated to uranium enrichment, supercomputing and fluid mechanics. It is managed by the Ministry of Energy for the Government of Pakistan via partnership between the universities through the security contractor Strategic Plans Division Force due to its sensitivity. The site is located in Kahuta, a short distance north-east of Rawalpindi, Punjab, Pakistan.
The site was organized to produce weapons-grade nuclear material, primarily weapons-grade uranium, as part of Pakistan's secretive atomic bomb program in the years after the Indo-Pakistani war of 1971. Chosen to be a top-secret location, it was built in secrecy by the Pakistan Army Corps of Engineers. It was commissioned under the Army engineers, with civilian scientists joining the site in late 1976. During the mid-1970s, the site was the cornerstone of the first stage of Pakistan's atomic bomb program, and is one of the many sites where classified scientific research on atomic bombs was undertaken.
The KRL is noted for conducting research and development towards producing highly enriched uranium (HEU) using the Zippe method of gas centrifugation – the other user of this method is the Urenco Group in the Netherlands. Since its inception, many technical staff have been employed, mostly physicists and mathematicians, assisted by engineers (both Army and civilian), chemists, and materials scientists. Professional scientists and engineers are delegated to visit this institute, after undergoing strict screening and background checks, to participate as visitors in scientific projects.
As of its current mission, KRL is one of the largest science and technology research sites in Pakistan, and conducts multidisciplinary research and development in fields such as national security, space exploration, and supercomputing.
History
As early a
|
https://en.wikipedia.org/wiki/Grain%20growth
|
In materials science, grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. The behavior of grain growth is analogous to the coarsening behavior of grains, which implies that both grain growth and coarsening may be dominated by the same physical mechanism.
Importance of grain growth
The practical performance of polycrystalline materials is strongly affected by their internal microstructure, which is largely determined by grain growth behavior. For example, most materials exhibit the Hall–Petch effect at room temperature and so display a higher yield stress when the grain size is reduced (assuming abnormal grain growth has not taken place). At high temperatures the opposite is true, since the open, disordered nature of grain boundaries means that vacancies can diffuse more rapidly along boundaries, leading to more rapid Coble creep. Since boundaries are regions of high energy, they make excellent sites for the nucleation of precipitates and other second phases, e.g. Mg–Si–Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question, this may have positive or negative effects.
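The Hall–Petch relation mentioned above is usually written as σ_y = σ₀ + k_y/√d. A minimal numerical sketch follows; the friction stress σ₀ and coefficient k_y used here are illustrative placeholder values (roughly of the order reported for mild steel), not data for any particular alloy.

import math

def hall_petch_yield_stress(grain_size_m: float,
                            sigma0_mpa: float = 70.0,    # assumed friction stress, MPa
                            k_y: float = 0.74) -> float: # assumed coefficient, MPa*m^0.5
    """Yield stress (MPa) predicted by the Hall-Petch relation for grain size d (in metres)."""
    return sigma0_mpa + k_y / math.sqrt(grain_size_m)

# Halving the grain size raises the predicted yield stress:
# print(hall_petch_yield_stress(32e-6), hall_petch_yield_stress(16e-6))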
Rules of grain growth
Grain growth has long been studied primarily by the examination of sectioned, polished and etched samples under the optical microscope. Although such methods enabled the collection of a great deal of empirical evidence, particularly with regard to factors such as temperature or composition, the lack of crystallographic information limited the development of an understanding of the fundamental physics. Nevertheless, the following became well-established features of grain growth:
Grain growth occurs by the movement
|
https://en.wikipedia.org/wiki/Baldwin%27s%20rules
|
Baldwin's rules in organic chemistry are a series of guidelines outlining the relative favorabilities of ring closure reactions in alicyclic compounds. They were first proposed by Jack Baldwin in 1976.
Baldwin's rules discuss the relative rates of ring closures of these various types. These terms are not meant to describe the absolute probability that a reaction will or will not take place, rather they are used in a relative sense. A reaction that is disfavoured (slow) does not have a rate that is able to compete effectively with an alternative reaction that is favoured (fast). However, the disfavoured product may be observed, if no alternate reactions are more favoured.
The rules classify ring closures in three ways:
the number of atoms in the newly formed ring
into exo and endo ring closures, depending on whether the bond broken during the ring closure is inside (endo) or outside (exo) the ring that is being formed
into tet, trig and dig geometry of the atom being attacked, depending on whether this electrophilic carbon is tetrahedral (sp3 hybridised), trigonal (sp2 hybridised) or diagonal (sp hybridised).
Thus, a ring closure reaction could be classified as, for example, a 5-exo-trig.
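As an illustration, the sketch below builds the descriptor from the three classification criteria and looks it up in a small, deliberately partial table of well-known cases; it does not reproduce the full set of Baldwin's rules.

# Partial, illustrative favourability table (a few well-known cases only).
FAVOURABILITY = {
    "3-exo-tet": "favoured",
    "5-exo-tet": "favoured",
    "5-exo-trig": "favoured",
    "6-endo-trig": "favoured",
    "5-endo-trig": "disfavoured",
    "5-endo-dig": "favoured",
}

def baldwin_descriptor(ring_size: int, mode: str, geometry: str) -> str:
    """Build a descriptor such as '5-exo-trig' from the three classification criteria."""
    assert mode in ("exo", "endo") and geometry in ("tet", "trig", "dig")
    return f"{ring_size}-{mode}-{geometry}"

# Example:
# d = baldwin_descriptor(5, "exo", "trig")            # '5-exo-trig'
# print(d, FAVOURABILITY.get(d, "not covered by this partial table"))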
Baldwin discovered that orbital overlap requirements for the formation of bonds favour only certain combinations of ring size and the exo/endo/dig/trig/tet parameters. Interactive 3D models of several of these transition states have been published online.
There are sometimes exceptions to Baldwin's rules. For example, cations often disobey Baldwin's rules, as do reactions in which a third-row atom is included in the ring. An expanded and revised version of the rules is available.
The rules apply when the nucleophile can attack the bond in question at an ideal angle. These angles are 180° (Walden inversion) for exo-tet reactions, 109° (Bürgi–Dunitz angle) for exo-trig reactions and 120° for endo-dig reactions. Angles for nucleophilic attack on alkynes were reviewed and
|
https://en.wikipedia.org/wiki/Wine%20cave
|
Wine caves are subterranean structures for the storage and the aging of wine. They are an integral component of the wine industry worldwide. The design and construction of wine caves represents a unique application of underground construction techniques.
The storage of wine in extensive underground space is an extension of the culture of wine cellar rooms, both offering the benefits of energy efficiency and optimum use of limited land area. Wine caves naturally provide both high humidity and cool temperatures, which are key to the storage and aging of wine.
History
The history of wine cave construction in the United States dates back to the 1860s in Sonoma, and the 1870s in the Napa Valley region. In 1857, Agoston Haraszthy founded Buena Vista Winery; its Press House was completed in 1862, and a second building, now called the Champagne Cellars, was completed in 1864. In total, Buena Vista Winery had five caves among the two buildings in operation in 1864. Jacob Schram, a German immigrant and barber, founded Schramsberg Vineyards near Calistoga, California in 1862. Eight years later, Schram found new employment for the Chinese laborers who had recently finished constructing tunnels and grades over the Sierra Nevada Mountains for the Union Pacific Transcontinental Railroad. He hired them to dig a network of caves through the soft Sonoma Volcanics Formation rock underlying his vineyard.
Another Chinese workforce took time away from their regular vineyard work to excavate a labyrinth of wine-aging caves beneath the Beringer Vineyards near St. Helena, California. These caves exceeded 1,200 ft (365 m) long, 17 ft (5 m) wide and 7 ft (2 m) high. The workers used pick-axes and shovels – and on occasion, chisel steel, double jacks and black powder – to break the soft rock. They worked by candlelight, and removed the excavated material in wicker baskets. At least 12 wine storage caves were constructed by these methods.
From the late 19th century to t
|
https://en.wikipedia.org/wiki/Superior%20olivary%20complex
|
The superior olivary complex (SOC) or superior olive is a collection of brainstem nuclei that functions in multiple aspects of hearing and is an important component of the ascending and descending auditory pathways of the auditory system. The SOC is intimately related to the trapezoid body: most of the cell groups of the SOC are dorsal (posterior in primates) to this axon bundle while a number of cell groups are embedded in the trapezoid body. Overall, the SOC displays a significant interspecies variation, being largest in bats and rodents and smaller in primates.
Physiology
The superior olivary nucleus plays a number of roles in hearing. The medial superior olive (MSO) is a specialized nucleus that is believed to measure the time difference of arrival of sounds between the ears (the interaural time difference or ITD). The ITD is a major cue for determining the azimuth of sounds, i.e., localising them on the azimuthal plane – their degree to the left or the right.
The lateral superior olive (LSO) is believed to be involved in measuring the difference in sound intensity between the ears (the interaural level difference or ILD). The ILD is a second major cue in determining the azimuth of high-frequency sounds.
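For intuition, the ITD for a source at azimuth θ is often approximated with Woodworth's spherical-head model, ITD ≈ (a/c)(sin θ + θ). The sketch below uses that textbook approximation with an assumed average head radius; it is illustrative and not drawn from this article.

import math

def interaural_time_difference(azimuth_deg: float,
                               head_radius_m: float = 0.0875,   # assumed average head radius
                               speed_of_sound: float = 343.0) -> float:
    """Approximate ITD in seconds using Woodworth's spherical-head model."""
    theta = math.radians(azimuth_deg)
    return (head_radius_m / speed_of_sound) * (math.sin(theta) + theta)

# A source directly to one side (90 degrees azimuth) gives an ITD of roughly 0.65 ms:
# print(interaural_time_difference(90.0))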
Relationship to auditory system
The superior olivary complex is generally located in the pons, but in humans extends from the rostral medulla to the mid-pons and receives projections predominantly from the anteroventral cochlear nucleus (AVCN) via the trapezoid body, although the posteroventral nucleus projects to the SOC via the intermediate acoustic stria. The SOC is the first major site of convergence of auditory information from the left and right ears.
Primary nuclei
The superior olivary complex is divided into three primary nuclei, the MSO, LSO, and the Medial nucleus of the trapezoid body, and several smaller periolivary nuclei. These three nuclei are the most studied, and therefore best understood. Typically, they are regarded as forming the asc
|
https://en.wikipedia.org/wiki/Vestibular%20nuclei
|
The vestibular nuclei (VN) are the cranial nuclei for the vestibular nerve located in the brainstem.
In Terminologia Anatomica they are grouped in both the pons and the medulla in the brainstem.
Structure
Path
The fibers of the vestibular nerve enter the medulla oblongata on the medial side of those of the cochlear, and pass between the inferior peduncle and the spinal tract of the trigeminal nerve.
They then divide into ascending and descending fibers. The latter end by arborizing around the cells of the medial nucleus, which is situated in the area acustica of the rhomboid fossa. The ascending fibers either end in the same manner or in the lateral nucleus, which is situated lateral to the area acustica and farther from the ventricular floor.
Some of the axons of the cells of the lateral nucleus, and possibly also of the medial nucleus, are continued upward through the inferior peduncle to the roof nuclei of the opposite side of the cerebellum, to which also other fibers of the vestibular root are prolonged without interruption in the nuclei of the medulla oblongata.
A second set of fibers from the medial and lateral nuclei end partly in the tegmentum, while the remainder ascend in the medial longitudinal fasciculus to arborize around the cells of the nuclei of the oculomotor nerve.
Fibers from the lateral vestibular nucleus also pass via the vestibulospinal tract, to anterior horn cells at many levels in the spinal cord, in order to co-ordinate head and trunk movements.
Subnuclei
There are 4 subnuclei; they are situated at the floor of the fourth ventricle.
See also
Vestibular nerve
Vestibulocerebellar syndrome
|
https://en.wikipedia.org/wiki/Deposition%20%28phase%20transition%29
|
Deposition is the phase transition in which gas transforms into solid without passing through the liquid phase. Deposition is a thermodynamic process. The reverse of deposition is sublimation and hence sometimes deposition is called desublimation.
Applications
Examples
One example of deposition is the process by which, in sub-freezing air, water vapour changes directly to ice without first becoming a liquid. This is how frost and hoar frost form on the ground or other surfaces, such as when frost forms on a leaf. For deposition to occur, thermal energy must be removed from the gas. When the air becomes cold enough, water vapour in the air surrounding the leaf loses enough thermal energy to change into a solid. Even though the air temperature may be below the dew point, the water vapour may not be able to condense spontaneously if there is no way to remove the latent heat. When the leaf is introduced, the supercooled water vapour immediately begins to condense, but by this point it is already past the freezing point, causing the water vapour to change directly into a solid.
Another example is the soot that is deposited on the walls of chimneys. Soot molecules rise from the fire in a hot and gaseous state. When they come into contact with the walls they cool, and change to the solid state, without formation of the liquid state. The process is made use of industrially in combustion chemical vapour deposition.
Industrial applications
There is an industrial coatings process, known as evaporative deposition, whereby a solid material is heated to the gaseous state in a low-pressure chamber, the gas molecules travel across the chamber space and then deposit to the solid state on a target surface, forming a smooth and thin layer on the target surface. Again, the molecules do not go through an intermediate liquid state when going from the gas to the solid. See also physical vapor deposition, which is a class of processes used to deposit thin films of various
|
https://en.wikipedia.org/wiki/Straw%20man%20%28dummy%29
|
A straw man, or ritual doll, is a dummy in the shape of a human, usually made entirely out of straw or created by stuffing straw into clothes.
Uses
Straw men are commonly used as scarecrows, combat training targets, swordsmiths' test targets, effigies to be burned, and as rodeo dummies to distract bulls.
Rodeo straw men
In the sport of rodeo, the straw man is a dummy, originally made of a shirt and pants stuffed with straw. The straw man is placed in the arena during bullriding events as a safety measure. It is intended to distract the bull after the rider has dismounted (or has been thrown), with the idea that the bull will attack the straw man rather than attack its former rider. Two so-called rodeo clowns – people dressed in bright colors whose job it is to distract the bull if the rider is injured – are in the ring as well and are usually far more effective than the straw man.
See also
Scarecrow
Voodoo doll
Dummies and mannequins
|
https://en.wikipedia.org/wiki/Latitudinal%20gradients%20in%20species%20diversity
|
Species richness, or biodiversity, increases from the poles to the tropics for a wide variety of terrestrial and marine organisms, often referred to as the latitudinal diversity gradient. The latitudinal diversity gradient is one of the most widely recognized patterns in ecology. It has been observed to varying degrees in Earth's past. A parallel trend has been found with elevation (elevational diversity gradient), though this is less well-studied.
Explaining the latitudinal diversity gradient has been called one of the great contemporary challenges of biogeography and macroecology (Willig et al. 2003, Pimm and Brown 2004, Cardillo et al. 2005). The question "What determines patterns of species diversity?" was among the 25 key research themes for the future identified in the 125th Anniversary issue of Science (July 2005). There is a lack of consensus among ecologists about the mechanisms underlying the pattern, and many hypotheses have been proposed and debated. A recent review noted that among the many conundrums associated with the latitudinal diversity gradient (or latitudinal biodiversity gradient), the causal relationship between rates of molecular evolution and speciation has yet to be demonstrated.
Understanding the global distribution of biodiversity is one of the most significant objectives for ecologists and biogeographers. Beyond purely scientific goals and satisfying curiosity, this understanding is essential for applied issues of major concern to humankind, such as the spread of invasive species, the control of diseases and their vectors, and the likely effects of global climate change on the maintenance of biodiversity (Gaston 2000). Tropical areas play prominent roles in the understanding of the distribution of biodiversity, as their rates of habitat degradation and biodiversity loss are exceptionally high.
Patterns in the past
The latitudinal diversity gradient is a noticeable pattern among modern organisms that has been described qualitatively and
|
https://en.wikipedia.org/wiki/Basis%20set%20superposition%20error
|
In quantum chemistry, calculations using finite basis sets are susceptible to basis set superposition error (BSSE). As the atoms of interacting molecules (or of different parts of the same molecule - intramolecular BSSE) approach one another, their basis functions overlap. Each monomer "borrows" functions from other nearby components, effectively increasing its basis set and improving the calculation of derived properties such as energy. If the total energy is minimised as a function of the system geometry, the short-range energies from the mixed basis sets must be compared with the long-range energies from the unmixed sets, and this mismatch introduces an error.
Other than using infinite basis sets, two methods exist to eliminate the BSSE. In the chemical Hamiltonian approach (CHA), basis set mixing is prevented a priori, by replacing the conventional Hamiltonian with one in which all the projector-containing terms that would allow mixing have been removed. In the counterpoise method (CP), the BSSE is calculated by re-performing all the calculations using the mixed basis sets, and the error is then subtracted a posteriori from the uncorrected energy. (The mixed basis sets are realised by introducing "ghost orbitals", basis set functions which have no electrons or protons. However, it has been shown that there is an inherent danger in using counterpoise-corrected energy surfaces, due to the inconsistent effect of the correction in different areas of the energy surface.) Though conceptually very different, the two methods tend to give similar results. It has also been shown that the error is often larger when using the CP method, since the central atoms in the system have much greater freedom to mix with all of the available functions compared to the outer atoms, whereas in the CHA model those orbitals have no greater intrinsic freedom and the correction therefore treats all fragments equally. The errors inherent in either BSSE correction disappear more rapidly
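As a concrete illustration of the counterpoise bookkeeping, the sketch below combines the five single-point energies involved: the dimer in the dimer basis, each monomer in its own basis, and each monomer in the full dimer basis with ghost functions on its partner. The function and argument names are illustrative; the energies themselves would come from whichever quantum chemistry package is being used.

def counterpoise_corrected_interaction(e_dimer_ab,   # E(AB) in the dimer basis
                                       e_a_monomer,  # E(A) in A's own basis
                                       e_b_monomer,  # E(B) in B's own basis
                                       e_a_ghost,    # E(A) in the dimer basis (ghost functions on B)
                                       e_b_ghost):   # E(B) in the dimer basis (ghost functions on A)
    """Return the uncorrected interaction energy, the BSSE estimate,
    and the counterpoise-corrected interaction energy (same units as the inputs)."""
    e_int_raw = e_dimer_ab - e_a_monomer - e_b_monomer
    bsse = (e_a_monomer - e_a_ghost) + (e_b_monomer - e_b_ghost)  # >= 0: ghost functions lower each monomer energy
    return e_int_raw, bsse, e_int_raw + bsse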
|
https://en.wikipedia.org/wiki/Electrocaloric%20effect
|
The electrocaloric effect is a phenomenon in which a material shows a reversible temperature change under an applied electric field. It is often considered to be the physical inverse of the pyroelectric effect. It should not be confused with the Thermoelectric effect (specifically, the Peltier effect), in which a temperature difference occurs when a current is driven through an electric junction with two dissimilar conductors.
The underlying mechanism of the effect is not fully established; in particular, different textbooks give conflicting explanations. However, as with any isolated (adiabatic) temperature change, the effect comes from the voltage raising or lowering the entropy of the system. (The magnetocaloric effect is an analogous, but better-known and understood, phenomenon.)
Electrocaloric materials were the focus of significant scientific interest in the 1960s and 1970s, but were not commercially exploited as the electrocaloric effects were insufficient for practical applications, the highest response being 2.5 degrees Celsius under an applied potential of 750 volts.
In March 2006 it was reported in the journal Science that thin films of the material PZT (a mixture of lead, titanium, oxygen and zirconium) showed the strongest electrocaloric response yet reported, with the materials cooling down by as much as ~12 K (12 °C) for an electric field change of 480 kV/cm, at an ambient temperature of 220 °C (430 °F). The device structure consisted of a thin film (PZT) on top of a much thicker substrate, but the figure of 12 K represents the cooling of the thin film only. The net cooling of such a device would be lower than 12 K due to the heat capacity of the substrate to which it is attached.
Along the same lines, in 2008, it was shown that a ferroelectric polymer can also achieve 12 K of cooling, nearer to room temperature (yet above 70 °C) than PZT.
With these new, larger responses, practical applications may be more likely, such as in computer cooling
|
https://en.wikipedia.org/wiki/Flags%20of%20the%20lieutenant%20governors%20of%20Canada
|
As the viceregal representative of the monarch of Canada, the lieutenant governors of the Canadian provinces have since Confederation been entitled to and have used a personal standard. Within a lieutenant governor's province, this standard has precedence over any other flag, including the national one, though it comes secondary to the Sovereign's Flag for Canada. The provincial viceregal flags are also subordinate to the governor general's personal standard, save for when the governor general is present as a guest of the lieutenant governor.
In 1980, a new design was introduced and is used by each province's lieutenant governor, except for Quebec and Nova Scotia. The common frame of each flag consists of the escutcheon of the arms of the province circled with ten gold maple leaves (representing the ten provinces) surmounted by a St. Edward's Crown on a field of blue. Though approved in 1980, most provinces adopted this new common design in 1981, with Newfoundland being the last in 1987. The personal standard is flown at the office or home of the lieutenant governor and from flagpoles of buildings where official duties are carried out to indicate the presence of the lieutenant governor. It is also attached to the front fender of the car or on the provincial landau that the lieutenant governor is riding in. The standard is never flown on a church or inside a church, nor is it ever lowered to half-mast. Should a lieutenant governor die while in office, the standard is taken down until a successor is sworn in.
Current
Historical
Many of the other provinces used a defaced Union Jack with the vice-regal arms in the centre.
See also
Flag of the governor general of Canada
List of Canadian flags
Royal standards of Canada
|
https://en.wikipedia.org/wiki/Bhaskar%E2%80%93Jagannathan%20syndrome
|
Bhaskar–Jagannathan syndrome is an extremely rare genetic disorder and there is a limited amount of information related to it. Similar or related medical conditions are arachnodactyly, aminoaciduria, congenital cataracts, cerebellar ataxia, and delayed developmental milestones.
Signs and symptoms
Bhaskar–Jagannathan syndrome has symptoms such as long, thin fingers, poor balance and incoordination, high levels of amino acids in the urine, cataracts during infancy, and ataxia. Ataxia is a neurological sign and symptom consisting of gross incoordination of muscle movements, and is itself a specific clinical manifestation.
Cause
Diagnosis
There are three different ways to diagnose Bhaskar–Jagannathan. This disorder may be diagnosed by a urine test, a blood test, and an X-ray of the eyes or other body parts.
Treatment
Treatment for this rare genetic disorder can include physical therapy; antibiotics have been found to be effective, and surgery has been found to be another solution.
|
https://en.wikipedia.org/wiki/Data%20transfer%20object
|
In the field of programming a data transfer object (DTO) is an object that carries data between processes. The motivation for its use is that communication between processes is usually done resorting to remote interfaces (e.g., web services), where each call is an expensive operation. Because the majority of the cost of each call is related to the round-trip time between the client and the server, one way of reducing the number of calls is to use an object (the DTO) that aggregates the data that would have been transferred by the several calls, but that is served by one call only.
The difference between data transfer objects and business objects or data access objects is that a DTO does not have any behavior except for storage, retrieval, serialization and deserialization of its own data (mutators, accessors, serializers and parsers). In other words,
DTOs are simple objects that should not contain any business logic but may contain serialization and deserialization mechanisms for transferring data over the wire.
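A minimal sketch of such an object in Python follows, assuming a hypothetical user record being sent over a remote interface: the class only carries data and knows how to serialize and deserialize itself, with no business logic.

from dataclasses import dataclass, asdict
import json

@dataclass
class UserDTO:
    """Carries user data across a process boundary; holds no business logic."""
    user_id: int
    name: str
    email: str

    def to_json(self) -> str:                        # serializer
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "UserDTO":   # parser
        return cls(**json.loads(payload))

# One aggregated transfer instead of several fine-grained remote calls:
# wire = UserDTO(1, "Ada", "ada@example.org").to_json()
# user = UserDTO.from_json(wire)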
This pattern is often incorrectly used outside of remote interfaces. This has triggered a response from its author where he reiterates that the whole purpose of DTOs is to shift data in expensive remote calls.
Terminology
A "Value Object" is not a DTO. The two terms have been conflated by Sun/Java community in the past.
|
https://en.wikipedia.org/wiki/Black%20Report
|
The Black Report was a 1980 document published by the Department of Health and Social Security (now the Department of Health and Social Care) in the United Kingdom, which was the report of the expert committee into health inequality chaired by Sir Douglas Black. It demonstrated that although, in general, health had improved since the introduction of the welfare state, there were widespread health inequalities. It also found that the main cause of these inequalities was economic inequality. The report showed that the death rate for men in social class V was twice that for men in social class I and that the gap between the two was increasing, not reducing as was expected.
Commissioning
The Black report was commissioned in March 1977 by David Ennals, Labour Secretary of State, following publication of a two-page article by Richard G. Wilkinson in New Society, on 16 December 1976, entitled Dear David Ennals. The report was nearly ready for publication in early 1979.
In the General Election on 3 May 1979, the Conservatives were elected. The Black Report was not issued until 1980, by the Conservative Government. It was published on the August Bank Holiday, with only 260 copies made available on the day for the media. The foreword, by Patrick Jenkin, rejected "the view that the causes of health inequalities are so deep rooted that only a major and wide-ranging programme of public expenditure is capable of altering the pattern." He made "it clear that additional expenditure on the scale which could result from the report’s recommendations – the amount involved could be upwards of £2 billion a year – is quite unrealistic in present or any foreseeable economic circumstances, quite apart from any judgement that may be formed of the effectiveness of such expenditure in dealing with the problems identified."
The report had a huge impact on political thought in the United Kingdom and overseas. It led to an assessment by the Office for Economic Co-Operation and Deve
|
https://en.wikipedia.org/wiki/Union-closed%20sets%20conjecture
|
The union-closed sets conjecture, also known as Frankl’s conjecture, is an open problem in combinatorics posed by Péter Frankl in 1979. A family of sets is said to be union-closed if the union of any two sets from the family belongs to the family. The conjecture states: For every finite union-closed family of sets, other than the family containing only the empty set, there exists an element that belongs to at least half of the sets in the family.
Professor Timothy Gowers has called this "one of the best known open problems in combinatorics" and has said that the conjecture "feels as though it ought to be easy (and as a result has attracted a lot of false proofs over the years). A good way to understand why it isn't easy is to spend an afternoon trying to prove it. That clever averaging argument you had in mind doesn't work ..."
Example
The family of sets {1}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3} consists of five different sets and is union-closed. The element 2 is contained in three of the five sets (and so is the element 3), thus the conjecture holds in this case.
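Such an example can be checked mechanically. The sketch below tests union-closure by brute force and counts how often each element occurs; it is illustrative only.

from itertools import combinations
from collections import Counter

def is_union_closed(family):
    """True if the union of any two sets in the family is itself in the family."""
    fam = [frozenset(s) for s in family]
    return all((a | b) in fam for a, b in combinations(fam, 2))

def element_frequencies(family):
    """Count how many sets of the family each element belongs to."""
    return Counter(x for s in family for x in s)

family = [{1}, {1, 2}, {1, 3}, {2, 3}, {1, 2, 3}]
# is_union_closed(family)      -> True
# element_frequencies(family)  -> Counter({1: 4, 2: 3, 3: 3}), so element 1 lies in at least half of the sets.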
Basic results
It is easy to show that if a union-closed family contains a singleton (as in the example above), then its element must occur in at least half of the sets of the family.
If there is a counterexample to the conjecture, then there is also a counterexample consisting only of finite sets. Therefore, without loss of generality, we will assume that all sets in the given union-closed family are finite.
Given a finite non-empty set U, the power set consisting of all subsets of U is union-closed. Each element of U is contained in exactly half of the subsets of U. Therefore, in general we cannot ask for an element contained in more than half of the sets of the family: the bound of the conjecture is sharp.
Equivalent forms
Intersection formulation
The union-closed set conjecture is true if and only if a set system S which is intersection-closed contains an element of U in at most half of the sets of S, where U is the universe set, i.e. the
|
https://en.wikipedia.org/wiki/MMARP
|
The Multicast MAnet Routing Protocol (MMARP) aims to provide multicast routing in Mobile Ad Hoc Networks (MANETs), taking into account interoperation with fixed IP networks with support for the IGMP/MLD protocol. This is achieved by the Multicast Internet Gateway (MIG), which is an ad hoc node itself and is responsible for notifying access routers about the interest revealed by ordinary ad hoc nodes. Any of these nodes may become a MIG at any time but needs to be one hop away from the network access router. Once it self-configures as a MIG, it periodically broadcasts its address as that of the default multicast gateway. However, besides this proactive advertisement, the protocol also specifies a reactive component through which the ad hoc mesh is created and maintained.
When a source node has multicast traffic to send, it broadcasts a message informing potential receivers of such data. Receivers then manifest their interest by sending a Join message towards the source, creating a multicast shortest path. In the same way, the MIG informs all the ad hoc nodes about the path towards multicast sources in the fixed network.
See also
List of ad hoc routing protocols
|
https://en.wikipedia.org/wiki/Tactile%20transducer
|
A tactile transducer or "bass shaker" is a device built on the principle that low bass frequencies can be felt as well as heard. It can be compared to a common loudspeaker in which the diaphragm is missing; instead, another object is used as a diaphragm. A shaker transmits low-frequency vibrations into various surfaces so that they can be felt by people. This is called tactile sound. Tactile transducers may augment or in some cases substitute for a subwoofer. One benefit of tactile transducers is that they produce little or no noise, if properly installed, as compared with a subwoofer speaker enclosure.
Applications
A bass-shaker is meant to be firmly attached to some surface such as a seat, couch or floor. The shaker houses a small weight which is driven by a voice coil similar to those found in dynamic loudspeakers. The voice-coil is driven by a low-frequency audio signal from an amplifier; common shakers typically handle 25 to 50 watts of amplifier power. The voice coil exerts force on both the weight and the body of the shaker, with the latter forces being transmitted into the mounting surface. Tactile transducers may be used in a home theater, a video gaming chair or controller, a commercial movie theater, or for special effects in an arcade game, amusement park ride or other application.
Related to bass shakers are a newer type of transducer referred to as linear actuators. These piston-like electromagnetic devices transmit motion in a direct fashion by lifting home theater seating in the vertical plane rather than transferring vibrations (by mounting within a seat, platform or floor). This technology is said to transmit a high-fidelity sound-motion augmentation, whereas "Shakers" may require heavy equalization and/or multiple units to approach a realistic effect.
Virtual reality
There are other products which employ hydraulic (long-throw) linear actuators and outboard motion processors for home applications as popularized in "virtual reality" ride
|
https://en.wikipedia.org/wiki/Stronsay%20Beast
|
The Stronsay Beast was a large globster that washed ashore on the island of Stronsay (at the time spelled Stronsa), in the Orkney Islands, Scotland, after a storm on 25 September 1808. The carcass measured 55 ft (16.8 m) in length, without part of its tail. The Natural History Society (Wernerian Society) of Edinburgh could not identify the carcass and decided it was a new species, probably a sea serpent. The Scottish naturalist Patrick Neill gave it the scientific name Halsydrus pontoppidani (Pontoppidan's sea-snake) in honor of Erik Pontoppidan, who described sea serpents in a work published half a century before. The anatomist Sir Everard Home in London later dismissed the measurement, declaring it must have been around 30 ft (9 m), and deemed it to be a decayed basking shark. In 1849, Scottish professor John Goodsir in Edinburgh came to the same conclusion.
The Stronsay Beast was measured by a carpenter and two farmers. It was 4 ft (1.2 m) wide and had a circumference of about 10 ft (3.1 m). It had three pairs of appendages described as 'paws' or 'wings'. Its skin was smooth when stroked head to tail and rough when stroked tail to head. Its fins were edged with bristles and it had a row of bristles down its back, which glowed in the dark when wet. Its stomach contents were red.
See also
Zuiyo-maru carcass
|
https://en.wikipedia.org/wiki/European%20Genetics%20Foundation
|
The European Genetics Foundation (EGF) is a non-profit organization, dedicated to the training of young geneticists active in medicine, to continuing education in genetics/genomics and to the promotion of public understanding of genetics. Its main office is located in Bologna, Italy.
Background
In 1988 Prof. Giovanni Romeo, President of the European Genetics Foundation (EGF) and professor of Medical Genetics at the University of Bologna and Prof. Victor A. McKusick founded together the European School of Genetic Medicine (ESGM).
Since that time ESGM has taught genetics to postgraduate students (young M.D. and PhD) from some 70 different countries. Most of the courses are presented at ESGM's Main Training Center (MTC) in Bertinoro di Romagna (Italy), and are also available via webcast at authorized Remote Training Centers (RTC) in various countries in Europe and the Mediterranean area (Hybrid Courses). In the Netherlands and Switzerland, medical geneticists must attend at least one ESGM course before admission to their Board examinations.
For these reasons, the School has been able to expand and to obtain funding from the European Commission and from other international organizations.
Presentation of the Ronzano Project
The European School of Genetic Medicine was founded in 1988 and saw rapid success, which necessitated that an administrative body be formed. To this end the European Genetics Foundation was born in Genoa on 20 November 1995, with the following aims:
to run the ESGM, promoting the advanced scientific and professional training of young European Geneticists, with particular attention to the applications in the field of preventive medicine;
to promote public education about genetics discoveries;
to organize conferences, courses, international prizes and initiatives aimed at bringing together the scientific and humanistic disciplines.
The ESGM began receiving funding from the European Union and from other international organizations including the Eu
|
https://en.wikipedia.org/wiki/History%20of%20the%20Amiga
|
The Amiga is a family of home computers that were designed and sold by the Amiga Corporation (and later by Commodore Computing International) from 1985 to 1994.
Amiga Corporation
The Amiga's Original Chip Set, code-named Lorraine, was designed by the Amiga Corporation during the end of the first home video game boom. Development of the Lorraine project was done using a Sage IV machine nicknamed "Agony", which had 64-kbit memory modules with a capacity of 1 Mbit and an 8 MHz Motorola 68000 CPU. Amiga Corp. funded the development of the Lorraine by manufacturing game controllers, and later with an initial bridge loan from Atari Inc. while seeking further investors. The chipset was to be used in a video game machine, but following the video game crash of 1983, the Lorraine was reconceived as a multi-tasking multi-media personal computer.
The company demonstrated a prototype at the January 1984 Consumer Electronics Show (CES) in Chicago, attempting to attract investors. The Sage acted as the CPU, and BYTE described "big steel boxes" substituting for the chipset that did not yet exist. The magazine reported in April 1984 that Amiga Corporation "is developing a 68000-based home computer with a custom graphics processor. With 128K bytes of RAM and a floppy-disk drive, the computer will reportedly sell for less than $1000 late this year."
Further presentations were made at the following CES in June 1984, to Sony, HP, Philips, Apple, Silicon Graphics, and others. Steve Jobs of Apple, who had just introduced the Macintosh in January, was shown the original prototype for the first Amiga and stated that there was too much hardware – even though the newly redesigned board consisted of just three silicon chips which had yet to be shrunk down. Investors became increasingly wary of new computer companies in an industry dominated by the IBM PC. Jay Miner, co-founder, lead engineer and architect, took out a second mortgage on his home to keep the company from going bankrupt.
In July 1984, Atari Inc
|
https://en.wikipedia.org/wiki/Verifiable%20secret%20sharing
|
In cryptography, a secret sharing scheme is verifiable if auxiliary information is included that allows players to verify their shares as consistent. More formally, verifiable secret sharing ensures that even if the dealer is malicious there is a well-defined secret that the players can later reconstruct. (In standard secret sharing, the dealer is assumed to be honest.)
The concept of verifiable secret sharing (VSS) was first introduced in 1985 by Benny Chor, Shafi Goldwasser, Silvio Micali and Baruch Awerbuch.
In a VSS protocol a distinguished player who wants to share the secret is referred to as the dealer. The protocol consists of two phases: a sharing phase and a reconstruction phase.
Sharing: Initially the dealer holds secret as input and each player holds an independent random input. The sharing phase may consist of several rounds. At each round each player can privately send messages to other players and can also broadcast a message. Each message sent or broadcast by a player is determined by its input, its random input and messages received from other players in previous rounds.
Reconstruction: In this phase each player provides its entire view from the sharing phase and a reconstruction function is applied and is taken as the protocol's output.
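A minimal sketch of one way to realize these two phases, in the style of Feldman's discrete-logarithm-based VSS (a later scheme, not the original Chor–Goldwasser–Micali–Awerbuch construction); the toy parameters below are assumptions for illustration and are far too small to be secure:

import random

# p = 2q + 1 with q prime; g generates the order-q subgroup of Z_p*.
p, q, g = 2039, 1019, 4

def share(secret, n, t):
    """Dealer: split `secret` (an element of Z_q) into n shares, threshold t,
    and publish commitments to the polynomial coefficients."""
    coeffs = [secret] + [random.randrange(q) for _ in range(t - 1)]
    shares = [(i, sum(c * pow(i, j, q) for j, c in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    commitments = [pow(g, c, p) for c in coeffs]   # broadcast by the dealer
    return shares, commitments

def verify(i, s_i, commitments):
    """Player i: check its share against the public commitments."""
    lhs = pow(g, s_i, p)
    rhs = 1
    for j, C in enumerate(commitments):
        rhs = rhs * pow(C, pow(i, j, q), p) % p
    return lhs == rhs

def reconstruct(shares):
    """Reconstruction: Lagrange interpolation at x = 0 over Z_q."""
    secret = 0
    for i, s_i in shares:
        num = den = 1
        for j, _ in shares:
            if j != i:
                num = num * (-j) % q
                den = den * (i - j) % q
        secret = (secret + s_i * num * pow(den, q - 2, q)) % q
    return secret

shares, comms = share(secret=123, n=5, t=3)
assert all(verify(i, s, comms) for i, s in shares)
assert reconstruct(shares[:3]) == 123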
An alternative definition given by Oded Goldreich defines VSS as a secure multi-party protocol for computing the randomized functionality corresponding to some (non-verifiable) secret sharing scheme. This definition is stronger than that of the other definitions and is very convenient to use in the context of general secure multi-party computation.
Verifiable secret sharing is important for secure multiparty computation. Multiparty computation is typically accomplished by making secret shares of the inputs, and manipulating the shares to compute some function. To handle "active" adversaries (that is, adversaries that corrupt nodes and then make them deviate from the protocol), the secret sharing scheme needs
|
https://en.wikipedia.org/wiki/Wadim%20Zudilin
|
Wadim Zudilin (Вадим Валентинович Зудилин) is a Russian mathematician and number theorist who is active in studying hypergeometric functions and zeta constants. He studied under Yuri V. Nesterenko and worked at Moscow State University, the Steklov Institute of Mathematics, the Max Planck Institute for Mathematics and the University of Newcastle, Australia. He now works at the Radboud University Nijmegen, the Netherlands.
He has reproved Apéry's theorem that ζ(3) is irrational, and expanded it. Zudilin proved that at least one of the four numbers ζ(5), ζ(7), ζ(9), or ζ(11) is irrational. For that accomplishment he won the Distinguished Award of the Hardy-Ramanujan Society in 2001.
With Doron Zeilberger, Zudilin improved the upper bound on the irrationality measure of π, which as of November 2022 is the best known estimate.
|
https://en.wikipedia.org/wiki/Factory%20Interface%20Network%20Service
|
FINS, Factory Interface Network Service, is a network protocol used by Omron PLCs, over different physical networks like Ethernet, Controller Link, DeviceNet and RS-232C.
The FINS communications service was developed by Omron to provide a consistent way for PLCs and computers on various networks to communicate. Compatible network types include Ethernet, Host Link, Controller Link, SYSMAC LINK, SYSMAC WAY, and Toolbus. FINS allows communications between nodes up to three network levels. A direct connection between a computer and a PLC via Host Link is not considered a network level.
|
https://en.wikipedia.org/wiki/Asteroid%20body
|
An asteroid body is a microscopic finding seen within the giant cells of granulomas in diseases such as sarcoidosis and foreign-body giant cell reactions.
There is controversy about their composition. Traditionally, they were thought to be cytoskeletal elements consisting primarily of vimentin. However, more recent research suggested that this was incorrect and that they may be composed of lipids arranged into bilayer membranes.
They were also once thought to be related to centrioles, an organelle involved in cell division in eukaryotes.
See also
Asteroid
Centriole
Schaumann body
Granulomatous diseases
Sarcoidosis
Additional images
|
https://en.wikipedia.org/wiki/CPUID
|
In the x86 architecture, the CPUID instruction (identified by a CPUID opcode) is a processor supplementary instruction (its name derived from CPU Identification) allowing software to discover details of the processor. It was introduced by Intel in 1993 with the launch of the Pentium and SL-enhanced 486 processors.
A program can use the CPUID to determine processor type and whether features such as MMX/SSE are implemented.
History
Prior to the general availability of the CPUID instruction, programmers would write esoteric machine code which exploited minor differences in CPU behavior in order to determine the processor make and model. With the introduction of the 80386 processor, the EDX register indicated the revision on reset, but this was only readable immediately after reset and there was no standard way for applications to read the value.
Outside the x86 family, developers are mostly still required to use esoteric processes (involving instruction timing or CPU fault triggers) to determine the variations in CPU design that are present.
In the Motorola 680x0 family, which never had a CPUID instruction of any kind, certain specific instructions required elevated privileges. These could be used to tell various CPU family members apart. In the Motorola 68010 the instruction MOVE from SR became privileged. This notable instruction (and state machine) change allowed the 68010 to meet the Popek and Goldberg virtualization requirements. Because the 68000 offered an unprivileged MOVE from SR, the two CPUs could be told apart by whether a CPU error condition was triggered.
While the CPUID instruction is specific to the x86 architecture, other architectures (like ARM) often provide on-chip registers which can be read in prescribed ways to obtain the same sorts of information provided by the x86 CPUID instruction.
Calling CPUID
The CPUID opcode is 0F A2.
In assembly language, the CPUID instruction takes no parameters as CPUID implicitly uses the EAX register to determine the main category
|
https://en.wikipedia.org/wiki/Erd%C5%91s%20conjecture%20on%20arithmetic%20progressions
|
Erdős' conjecture on arithmetic progressions, often referred to as the Erdős–Turán conjecture, is a conjecture in arithmetic combinatorics (not to be confused with the Erdős–Turán conjecture on additive bases). It states that if the sum of the reciprocals of the members of a set A of positive integers diverges, then A contains arbitrarily long arithmetic progressions.
Formally, the conjecture states that if A is a large set in the sense that
$$\sum_{n \in A} \frac{1}{n} = \infty,$$
then A contains arithmetic progressions of any given length, meaning that for every positive integer k there are an integer a and a non-zero integer c such that $a, a+c, a+2c, \ldots, a+(k-1)c \in A$.
History
In 1936, Erdős and Turán made the weaker conjecture that any set of integers with positive natural density contains infinitely many 3 term arithmetic progressions. This was proven by Klaus Roth in 1952, and generalized to arbitrarily long arithmetic progressions by Szemerédi in 1975 in what is now known as Szemerédi's theorem.
In a 1976 talk titled "To the memory of my lifelong friend and collaborator Paul Turán," Paul Erdős offered a prize of US$3000 for a proof of this conjecture. As of 2008 the problem is worth US$5000.
Progress and related results
Erdős' conjecture on arithmetic progressions can be viewed as a stronger version of Szemerédi's theorem. Because the sum of the reciprocals of the primes diverges, the Green–Tao theorem on arithmetic progressions is a special case of the conjecture.
The weaker claim that A must contain infinitely many arithmetic progressions of length 3 is a consequence of an improved bound in Roth's theorem. A 2016 paper by Bloom proved that if contains no non-trivial three-term arithmetic progressions then .
In 2020 a preprint by Bloom and Sisask improved the bound to for some absolute constant .
In 2023 a preprint by Kelley and Meka gave a new bound of and four days later Bloom and Sisask simplified the result and with a little improvement to .
See also
Problems involving arithmetic progressions
List of sums of reciprocals
|
https://en.wikipedia.org/wiki/Texas%20Math%20and%20Science%20Coaches%20Association
|
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation can be done with a certain trick or shortcut that makes it easier.
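For illustration, two shortcuts of the kind such questions reward (these particular tricks are examples, not official TMSCA material):

# 99 + 98: work from 100, so the sum is 200 - 3.
a = (100 - 1) + (100 - 2)
# 1001 x 1938: multiplying by 1001 gives n*1000 + n.
b = 1938 * 1000 + 1938
assert a == 99 + 98 == 197
assert b == 1001 * 1938 == 1939938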
The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version.
The grading for this event is particularly stringent: errors such as writing over a line or crossing out potential answers are counted as incorrect answers.
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thinking to figure out. Every problem correct is worth 5 points, and for every problem incorrect, 2 points are deducted. Tiebreakers are determined by the person that misses the first problem and by percent accuracy.
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
|
https://en.wikipedia.org/wiki/PowerHouse%20%28programming%20language%29
|
PowerHouse is a byte-compiled fourth-generation programming language (or 4GL) originally produced by Quasar Corporation (later renamed Cognos Incorporated) for the Hewlett-Packard HP3000 mini-computer, as well as Data General and DEC VAX/VMS systems. It was initially composed of five components:
QDD, or Quasar Data Dictionary: for building a central data dictionary used by all other components
QDesign: a character-based screen generator
Quick: an interactive, character-based screen processor (running screens generated by QDesign)
Quiz: a report writer
QTP: a batch transaction processor.
History
PowerHouse was introduced in 1982 and bundled together in a single product Quiz and Quick/QDesign, both of which had been previously available separately, with a new batch processor QTP. In 1983, Quasar changed its name to Cognos Corporation and began porting their application development tools to other platforms, notably Digital Equipment Corporation's VMS, Data General's AOS/VS II, and IBM's OS/400, along with the UNIX platforms from these vendors. Cognos also began extending their product line with add-ons to PowerHouse (for example, Architect) and end-user applications written in PowerHouse (for example, MultiView). Subsequent development of the product added support for platform-specific relational databases, such as HP's Allbase/SQL, DEC's Rdb, and Microsoft's SQL Server, as well as cross-platform relational databases such as Oracle, Sybase, and IBM's DB2.
The PowerHouse language represented a considerable achievement. Compared with languages like COBOL, Pascal and PL/1, PowerHouse substantially cut the amount of labour required to produce useful applications on its chosen platforms. It achieved this through the use of a central data-dictionary, a compiled file that extended the attributes of data fields natively available in the DBMS with frequently used programming idioms such as:
display masks
help and message strings
range and pattern checks
help an
|
https://en.wikipedia.org/wiki/Born%20rule
|
The Born rule (also called Born's rule) is a postulate of quantum mechanics which gives the probability that a measurement of a quantum system will yield a given result. In its simplest form, it states that the probability density of finding a system in a given state, when measured, is proportional to the square of the amplitude of the system's wavefunction at that state. It was formulated and published by German physicist Max Born in July, 1926.
Details
The Born rule states that if an observable corresponding to a self-adjoint operator $A$ with discrete spectrum is measured in a system with normalized wave function $|\psi\rangle$ (see Bra–ket notation), then:
the measured result will be one of the eigenvalues $\lambda$ of $A$, and
the probability of measuring a given eigenvalue $\lambda_i$ will equal $\langle\psi|P_i|\psi\rangle$, where $P_i$ is the projection onto the eigenspace of $A$ corresponding to $\lambda_i$.
(In the case where the eigenspace of $A$ corresponding to $\lambda_i$ is one-dimensional and spanned by the normalized eigenvector $|\lambda_i\rangle$, $P_i$ is equal to $|\lambda_i\rangle\langle\lambda_i|$, so the probability $\langle\psi|P_i|\psi\rangle$ is equal to $|\langle\lambda_i|\psi\rangle|^2$. Since the complex number $\langle\lambda_i|\psi\rangle$ is known as the probability amplitude that the state vector assigns to the eigenvector $|\lambda_i\rangle$, it is common to describe the Born rule as saying that probability is equal to the amplitude-squared (really the amplitude times its own complex conjugate). Equivalently, the probability can be written as $\|P_i|\psi\rangle\|^2$.)
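As a minimal illustration (not part of the original article), consider a qubit: for $|\psi\rangle = \alpha|0\rangle + \beta|1\rangle$ with $|\alpha|^2 + |\beta|^2 = 1$, a measurement in the $\{|0\rangle, |1\rangle\}$ basis yields the outcome $0$ with probability $|\langle 0|\psi\rangle|^2 = |\alpha|^2$ and the outcome $1$ with probability $|\beta|^2$.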
In the case where the spectrum of $A$ is not wholly discrete, the spectral theorem proves the existence of a certain projection-valued measure $Q$, the spectral measure of $A$. In this case:
the probability that the result of the measurement lies in a measurable set $M$ is given by $\langle\psi|Q(M)|\psi\rangle$.
A wave function $\psi$ for a single structureless particle in position space implies that the probability density function $p(\mathbf{x}, t)$ for a measurement of the particle's position at time $t$ is $p(\mathbf{x}, t) = |\psi(\mathbf{x}, t)|^2$.
In some applications, this treatment of the Born rule is generalized using positive-operator-valued measures. A POVM is a measure whose values are positive semi-definite operators on a Hilbert space. POV
|
https://en.wikipedia.org/wiki/Round%20function
|
In topology and in calculus, a round function is a scalar function $f\colon M \to \mathbb{R}$,
over a manifold $M$, whose critical points form one or several connected components, each homeomorphic to the circle
$S^1$, also called critical loops. They are special cases of Morse-Bott functions.
For instance
For example, let be the torus. Let
Then we know that a map
given by
is a parametrization for almost all of . Now, via the projection
we get the restriction
is a function whose critical sets are determined by
this is if and only if .
These two values for give the critical sets
which represent two extremal circles over the torus .
Observe that the Hessian for this function clearly has rank equal to one
at the tagged circles, making the critical points degenerate; that is, the critical points are not isolated.
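A concrete instance consistent with this construction (the parametrization and projection below are assumptions, since the original formulas are not recoverable here): parametrize the torus by
$$(\theta,\phi) \mapsto \big((R + r\cos\phi)\cos\theta,\ (R + r\cos\phi)\sin\theta,\ r\sin\phi\big), \qquad 0 \le \theta, \phi < 2\pi,$$
and take $f$ to be the restriction of the projection $(x,y,z) \mapsto z$, so that $f(\theta,\phi) = r\sin\phi$. Then $\partial f/\partial\theta = 0$ everywhere and $\partial f/\partial\phi = r\cos\phi$ vanishes exactly on the two circles $\phi = \pi/2$ and $\phi = 3\pi/2$, where the Hessian $\operatorname{diag}(0, -r\sin\phi)$ has rank one.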
Round complexity
Mimicking Lusternik–Schnirelmann category theory, one can define the round complexity of a manifold by asking whether round functions exist on it and, if so, what the minimum number of critical loops is.
|
https://en.wikipedia.org/wiki/IETF%20Administrative%20Support%20Activity
|
The full name of the IETF is the Internet Engineering Task Force, which is the premier Internet standards body. It develops open standards through open, collaborative processes.
The IETF Administrative Support Activity (IASA) is an activity housed within the Internet Society (ISOC).
The IASA is described in an IETF Request for Comments document released in April 2005.
See also
Computer-supported collaboration
|
https://en.wikipedia.org/wiki/Apperception
|
Apperception (from the Latin ad-, "to, toward" and percipere, "to perceive, gain, secure, learn, or feel") is any of several aspects of perception and consciousness in such fields as psychology, philosophy and epistemology.
Meaning in philosophy
The term originates with René Descartes in the form of the word apercevoir in his book Traité des passions. Leibniz introduced the concept of apperception into the more technical philosophical tradition, in his work Principes de la nature fondés en raison et de la grâce; although he used the word practically in the sense of the modern attention, by which an object is apprehended as "not-self" and yet in relation to the self.
Immanuel Kant distinguished transcendental apperception from empirical apperception. The first is the perception of an object as involving the consciousness of the pure self as subject – "the pure, original, unchangeable consciousness that is the necessary condition of experience and the ultimate foundation of the unity of experience". The second is "the consciousness of the concrete actual self with its changing states", the so-called "inner sense" (Otto F. Kraushaar in Runes).
The German philosopher Theodor Lipps distinguished the terms perception and apperception in his 1902 work Vom Fühlen, Wollen und Denken. Perception, for Lipps, is a generic term that covers such psychic occurrences as auditory and tactile sensations, recollections, visual representations in memory, etc. But these perceptions do not always hold one's conscious attention – perception is not always consciously noticed. Lipps uses the term apperception, then, to refer to attentive perception, wherein, in addition to merely perceiving an object, either one also consciously attends to the perceived object or one also attends to the very perception of the object.
Meaning in psychology
In psychology, apperception is "the process by which new experience is assimilated to and transformed by the residuum of past experience of an individ
|
https://en.wikipedia.org/wiki/WHOIS
|
WHOIS (pronounced as the phrase "who is") is a query and response protocol that is used for querying databases that store an Internet resource's registered users or assignees. These resources include domain names, IP address blocks and autonomous systems, but it is also used for a wider range of other information. The protocol stores and delivers database content in a human-readable format. The current iteration of the WHOIS protocol was drafted by the Internet Society, and is documented in .
Whois is also the name of the command-line utility on most UNIX systems used to make WHOIS protocol queries. In addition, WHOIS has a sister protocol called Referral Whois (RWhois).
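A minimal sketch of the query/response exchange (whois.iana.org is used only as an example server; referral following and error handling are omitted):

import socket

def whois(query, server="whois.iana.org", port=43):
    """Open a TCP connection to port 43, send the query terminated by CRLF,
    and read the reply until the server closes the connection."""
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall((query + "\r\n").encode())
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

print(whois("example.com"))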
History
Elizabeth Feinler and her team (who had created the Resource Directory for ARPANET) were responsible for creating the first WHOIS directory in the early 1970s. Feinler set up a server in Stanford's Network Information Center (NIC) which acted as a directory that could retrieve relevant information about people or entities. She and the team created domains, with Feinler's suggestion that domains be divided into categories based on the physical address of the computer.
The process of registration was established in . WHOIS was standardized in the early 1980s to look up domains, people, and other resources related to domain and number registrations. As all registration was done by one organization at that time, one centralized server was used for WHOIS queries. This made looking up such information very easy.
At the time of the emergence of the internet from the ARPANET, the only organization that handled all domain registrations was the Defense Advanced Research Projects Agency (DARPA) of the United States government (created in 1958). The responsibility of domain registration remained with DARPA as the ARPANET became the Internet during the 1980s. UUNET began offering domain registration service; however, they simply handled the paperwork which they forwarded to the DARPA Network In
|
https://en.wikipedia.org/wiki/Radio%20Reconnaissance%20Platoon
|
The Radio Reconnaissance Platoon is a specially trained Marine Corps Intelligence element of a United States Marine Corps Radio Battalion. A Radio Reconnaissance Team (RRT) was assigned as the tactical signals intelligence collection element for the Marine Corps Special Operations Command, Detachment One. Regular RRTs also participate in SOC operations during Marine Expeditionary Unit (Special Operations Capable), or MEU(SOC), deployments.
Mission
The mission of the Radio Reconnaissance Platoon is to conduct tactical signals intelligence and electronic warfare operations in support of the Marine Air-Ground Task Force (MAGTF) commander during advance force, pre-assault, and deep post-assault operations, as well as maritime special purpose operations.
The RRT is used when the use of conventionally-trained radio battalion elements is inappropriate or not feasible.
While deployed with a MEU (SOC), the Radio Reconnaissance Team is also a part of the Maritime Special Purpose Force (MSPF) as a unit of the Reconnaissance & Surveillance Element (MSPF). The MSPF is a sub-element of the MEU(SOC), as a whole, and is responsible for performing specialized maritime missions. These missions include, but are not limited to:
Direct Action Missions
Maritime interdiction Operations (MIO)
Deep reconnaissance
Capabilities
Indications and warnings
Limited electronic warfare
Communications support
Reconnaissance and surveillance via NATO format
Insertion/Extraction Techniques
Patrolling
Helicopter Touchdown
Helocast
Small Boat (Hard Duck, Soft Duck, Rolled Duck)
Rappel
Fast Rope
Special Patrol Insertion/Extraction (SPIE)
Wet
Dry
Static Line
Over-the-Horizon Combat Rubber Raiding Craft (CRRC)
SIGINT
Foreign languages
Arabic
Russian
Korean
Turkish
Spanish
Persian
Croatian/Serbian/Bosnian
Morse Code intercept (>20 GPM)
Analysis and reporting
Training
RRP begins with completion of Army Airborne School, which is followed by the Basic Reconnaissance Course,
|
https://en.wikipedia.org/wiki/Warm%20dark%20matter
|
Warm dark matter (WDM) is a hypothesized form of dark matter that has properties intermediate between those of hot dark matter and cold dark matter, causing structure formation to occur bottom-up from above their free-streaming scale, and top-down below their free streaming scale. The most common WDM candidates are sterile neutrinos and gravitinos. The WIMPs (weakly interacting massive particles), when produced non-thermally, could be candidates for warm dark matter. In general, however, the thermally produced WIMPs are cold dark matter candidates.
keVins and GeVins
One possible WDM candidate particle with a mass of a few keV comes from introducing two new, zero charge, zero lepton number fermions to the Standard Model of Particle Physics: "keV-mass inert fermions" (keVins) and "GeV-mass inert fermions" (GeVins). keVins are overproduced if they reach thermal equilibrium in the early universe, but in some scenarios the entropy production from the decays of unstable heavier particles may suppress their abundance to the correct value. These particles are considered "inert" because they only have suppressed interactions with the Z boson.
Sterile neutrinos with masses of a few keV are possible candidates for keVins.
At temperatures below the electroweak scale their only interactions with standard model particles are weak interactions due to their mixing with ordinary neutrinos. Due to the smallness of the mixing angle they are not overproduced because they freeze out before reaching thermal equilibrium. Their properties are consistent with astrophysical bounds coming from structure formation and the Pauli principle if their mass is larger than 1-8 keV.
In February 2014, several analyses extracted a monochromatic signal at around 3.5 keV from the spectrum of X-ray emissions observed by XMM-Newton. This signal comes from different galaxy clusters (such as Perseus and Centaurus), and several scenarios of warm dark matter can justify such a line. We can cite, for ex
|
https://en.wikipedia.org/wiki/Lebesgue%27s%20decomposition%20theorem
|
In mathematics, more precisely in measure theory, Lebesgue's decomposition theorem states that for every two σ-finite signed measures $\mu$ and $\nu$ on a measurable space $(\Omega, \Sigma)$, there exist two σ-finite signed measures $\nu_0$ and $\nu_1$ such that $\nu = \nu_0 + \nu_1$ and:
$\nu_0 \ll \mu$ (that is, $\nu_0$ is absolutely continuous with respect to $\mu$)
$\nu_1 \perp \mu$ (that is, $\nu_1$ and $\mu$ are singular).
These two measures are uniquely determined by $\mu$ and $\nu$.
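For instance (an illustrative example, not part of the theorem's statement): on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ with $\mu$ the Lebesgue measure and $\nu = f\,d\mu + \delta_0$ for some integrable $f \ge 0$, the decomposition is $\nu_0 = f\,d\mu \ll \mu$ and $\nu_1 = \delta_0 \perp \mu$, since the Dirac mass at $0$ is concentrated on the $\mu$-null set $\{0\}$.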
Refinement
Lebesgue's decomposition theorem can be refined in a number of ways.
First, the decomposition of a regular Borel measure $\nu$ on the real line can be refined as
$$\nu = \nu_{\mathrm{cont}} + \nu_{\mathrm{sing}} + \nu_{\mathrm{pp}},$$
where
νcont is the absolutely continuous part
νsing is the singular continuous part
νpp is the pure point part (a discrete measure).
Second, absolutely continuous measures are classified by the Radon–Nikodym theorem, and discrete measures are easily understood. Hence (singular continuous measures aside), Lebesgue decomposition gives a very explicit description of measures. The Cantor measure (the probability measure on the real line whose cumulative distribution function is the Cantor function) is an example of a singular continuous measure.
Related concepts
Lévy–Itō decomposition
The analogous decomposition for stochastic processes is the Lévy–Itō decomposition: given a Lévy process $X$, it can be decomposed as a sum of three independent Lévy processes, $X = X^{(1)} + X^{(2)} + X^{(3)}$, where:
$X^{(1)}$ is a Brownian motion with drift, corresponding to the absolutely continuous part;
$X^{(2)}$ is a compound Poisson process, corresponding to the pure point part;
$X^{(3)}$ is a square integrable pure jump martingale that almost surely has a countable number of jumps on a finite interval, corresponding to the singular continuous part.
See also
Decomposition of spectrum
Hahn decomposition theorem and the corresponding Jordan decomposition theorem
Citations
|
https://en.wikipedia.org/wiki/Axostyle
|
An axostyle is a sheet of microtubules found in certain protists. It arises from the bases of the flagella, sometimes projecting beyond the end of the cell, and is often flexible or contractile; it may thus be involved in movement and provide support for the cell. Axostyles originate in association with a flagellar microtubular root and occur in two groups, the oxymonads and the parabasalids; they have different structures and are not homologous. Within trichomonads, the axostyle has been theorised to participate in locomotion and cell adhesion, as well as in karyokinesis during cell division.
|
https://en.wikipedia.org/wiki/Morinosaurus
|
Morinosaurus (meaning "Morini lizard", for an ancient people of northern France) was a genus of sauropod dinosaur from an unnamed formation of Kimmeridgian-age Upper Jurassic rocks from Boulogne-sur-Mer, Département du Pas-de-Calais, France. It is an obscure tooth genus sometimes referred to the Lower Cretaceous English wastebasket taxon Pelorosaurus.
History and taxonomy
The French paleontologist H. E. Sauvage based this genus on a single worn tooth, apparently now lost, which he compared to those of Hypselosaurus. Oddly, despite illustrations of the tooth, and the implications of comparing it to a titanosaur with narrow-crowned teeth, it was included as a synonym of Pelorosaurus in two major reviews. Pelorosaurus, being a putative brachiosaurid, is assumed to have had broad-crowned teeth.
Age, however, was not an issue, because it was referred to the possible Pelorosaurus species P. manseli (="Ischyrosaurus"), which was also from the Upper Jurassic (the question of whether "Ischyrosaurus" or any Jurassic species should be included in Pelorosaurus at all is another issue). The most recent review considers it to be a nomen dubium without further comment.
Sauvage also suggested that a partial right humerus belonged to the type individual.
Paleobiology
Morinosaurus would have been a large, quadrupedal herbivore. Having titanosaur-like teeth may suggest more titanosaurian- or diplodocoid-like feeding habits, but this is speculative. The tooth crown was 50 mm (1.97 in) tall and had a cross-section of .
|
https://en.wikipedia.org/wiki/Milliken%27s%20tree%20theorem
|
In mathematics, Milliken's tree theorem in combinatorics is a partition theorem generalizing Ramsey's theorem to infinite trees, objects with more structure than sets.
Let T be a finitely splitting rooted tree of height ω, n a positive integer, and the collection of all strongly embedded subtrees of T of height n. In one of its simple forms, Milliken's tree theorem states that if then for some strongly embedded infinite subtree R of T, for some i ≤ r.
This immediately implies Ramsey's theorem; take the tree T to be a linear ordering on ω vertices.
Define where T ranges over finitely splitting rooted trees of height ω. Milliken's tree theorem says that not only is partition regular for each n < ω, but that the homogeneous subtree R guaranteed by the theorem is strongly embedded in T.
Strong embedding
Call T an α-tree if each branch of T has cardinality α. Define Succ(p, P)= , and to be the set of immediate successors of p in P. Suppose S is an α-tree and T is a β-tree, with 0 ≤ α ≤ β ≤ ω. S is strongly embedded in T if:
, and the partial order on S is induced from T,
if is nonmaximal in S and , then ,
there exists a strictly increasing function from to , such that
Intuitively, for S to be strongly embedded in T,
S must be a subset of T with the induced partial order
S must preserve the branching structure of T; i.e., if a nonmaximal node in S has n immediate successors in T, then it has n immediate successors in S
S preserves the level structure of T; all nodes on a common level of S must be on a common level in T.
|
https://en.wikipedia.org/wiki/MicroVAX
|
The MicroVAX is a discontinued family of low-cost minicomputers developed and manufactured by Digital Equipment Corporation (DEC). The first model, the MicroVAX I, was introduced in 1983. They used processors that implemented the VAX instruction set architecture (ISA) and were succeeded by the VAX 4000. Many members of the MicroVAX family had corresponding VAXstation variants, which primarily differ by the addition of graphics hardware. The MicroVAX family supports Digital's VMS and ULTRIX operating systems. Prior to VMS V5.0, MicroVAX hardware required a dedicated version of VMS named MicroVMS.
MicroVAX I
The MicroVAX I, code-named Seahorse, introduced in October 1984, was one of DEC's first VAX computers to use very-large-scale integration (VLSI) technology. The KA610 CPU module (also known as the KD32) contained two custom chips which implemented the ALU and FPU while TTL chips were used for everything else. Two variants of the floating point chips were supported, with the chips differing by the type of floating point instructions supported, F and G, or F and D. The system was implemented on two quad-height Q-bus cards - a Data Path Module (DAP) and Memory Controller (MCT). The MicroVAX I used Q-bus memory cards, which limited the maximum memory to 4MiB. The performance of the MicroVAX I was rated at 0.3 VUPs, equivalent to the earlier VAX-11/730.
MicroVAX II
The MicroVAX II, code-named Mayflower, was a mid-range MicroVAX introduced in May 1985 and shipped shortly thereafter. It ran VAX/VMS or, alternatively, ULTRIX, the DEC native Unix operating system. At least one non-DEC operating system was available, BSD Unix from MtXinu.
It used the KA630-AA CPU module, a quad-height Q22-Bus module, which featured a MicroVAX 78032 microprocessor and a MicroVAX 78132 floating-point coprocessor operating at 5 MHz (200 ns cycle time). Two gate arrays on the module implemented the external interface for the microprocessor, Q22-bus interface and the scatter-gather map for DM
|
https://en.wikipedia.org/wiki/Baltimore%20classification
|
Baltimore classification is a system used to classify viruses based on their manner of messenger RNA (mRNA) synthesis. By organizing viruses based on their manner of mRNA production, it is possible to study viruses that behave similarly as a distinct group. Seven Baltimore groups are described that take into consideration whether the viral genome is made of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA), whether the genome is single- or double-stranded, and whether the sense of a single-stranded RNA genome is positive or negative.
Baltimore classification also closely corresponds to the manner of replicating the genome, so Baltimore classification is useful for grouping viruses together for both transcription and replication. Certain subjects pertaining to viruses are associated with multiple, specific Baltimore groups, such as specific forms of translation of mRNA and the host range of different types of viruses. Structural characteristics such as the shape of the viral capsid, which stores the viral genome, and the evolutionary history of viruses are not necessarily related to Baltimore groups.
Baltimore classification was created in 1971 by virologist David Baltimore. Since then, it has become common among virologists to use Baltimore classification alongside standard virus taxonomy, which is based on evolutionary history. In 2018 and 2019, Baltimore classification was partially integrated into virus taxonomy based on evidence that certain groups were descended from common ancestors. Various realms, kingdoms, and phyla now correspond to specific Baltimore groups.
Overview
Baltimore classification groups viruses together based on their manner of mRNA synthesis. Characteristics directly related to this include whether the genome is made of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA), the strandedness of the genome, which can be either single- or double-stranded, and the sense of a single-stranded genome, which is either positive or negative. The
|
https://en.wikipedia.org/wiki/Function%20point
|
The function point is a "unit of measurement" to express the amount of business functionality an information system (as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.
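A hedged sketch of that idea (the project figures below are hypothetical):

# Derive a cost rate (hours per function point) from past projects and
# apply it to the size of a new system.
past_projects = [
    {"function_points": 120, "effort_hours": 960},
    {"function_points": 200, "effort_hours": 1700},
]
hours_per_fp = (sum(p["effort_hours"] for p in past_projects)
                / sum(p["function_points"] for p in past_projects))
estimated_hours = 150 * hours_per_fp   # new system sized at 150 function points
print(round(hours_per_fp, 2), "h/FP ->", round(estimated_hours), "h")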
Standards
There are several recognized standards and/or public specifications for sizing software based on Function Point.
1. ISO Standards
FiSMA: ISO/IEC 29881:2010 Information technology – Systems and software engineering – FiSMA 1.1 functional size measurement method.
IFPUG: ISO/IEC 20926:2009 Software and systems engineering – Software measurement – IFPUG functional size measurement method.
Mark-II: ISO/IEC 20968:2002 Software engineering – MkII Function Point Analysis – Counting Practices Manual
Nesma: ISO/IEC 24570:2018 Software engineering – Nesma functional size measurement method version 2.3 – Definitions and counting guidelines for the application of Function Point Analysis
COSMIC: ISO/IEC 19761:2011 Software engineering. A functional size measurement method.
OMG: ISO/IEC 19515:2019 Information technology — Object Management Group Automated Function Points (AFP), 1.0
The first five standards are implementations of the over-arching standard for Functional Size Measurement, ISO/IEC 14143. The OMG Automated Function Point (AFP) specification, led by the Consortium for IT Software Quality, provides a standard for automating the Function Point counting according to the guidelines of the International Function Point User Group (IFPUG). However, the current implementations of this standard have a limitation in being able to distinguish External Outputs (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.
Introduction
Function points were defined in 1979 in Measuring Application Development Productivity by Allan Albrecht at IBM. The functional user requirements of the software are identified and each one is c
|
https://en.wikipedia.org/wiki/Quantum%20reflection
|
Quantum reflection is a uniquely quantum phenomenon in which an object, such as a neutron or a small molecule, reflects smoothly and in a wavelike fashion from a much larger surface, such as a pool of mercury. A classically behaving neutron or molecule will strike the same surface much like a thrown ball, hitting only at one atomic-scale location where it is either absorbed or scattered. Quantum reflection provides a powerful experimental demonstration of particle-wave duality, since it is the extended quantum wave packet of the particle, rather than the particle itself, that reflects from the larger surface. It is similar to reflection high-energy electron diffraction, where electrons reflect and diffract from surfaces, and grazing incidence atom scattering, where the fact that atoms (and ions) can also be waves is used to diffract from surfaces.
Definition
In a workshop about quantum reflection, the following definition of quantum reflection was suggested:
Quantum reflection is a classically counterintuitive phenomenon whereby the motion of particles is reverted "against the force" acting on them. This effect manifests the wave nature of particles and influences collisions of ultracold atoms and interaction of atoms with solid surfaces.
Observation of quantum reflection has become possible thanks to recent advances in trapping and cooling atoms.
Reflection of slow atoms
Although the principles of quantum mechanics apply to any particles, usually the term "quantum reflection" means reflection of atoms from a surface of condensed matter (liquid or solid). The full potential experienced by the incident atom does become repulsive at a very small distance from the surface (of order of size of atoms). This is when the atom becomes aware of the discrete character of material. This repulsion is responsible for the classical scattering one would expect for particles incident on a surface. Such scattering can be diffuse rather than specular, so this component of the
|
https://en.wikipedia.org/wiki/International%20Year%20of%20Planet%20Earth
|
The United Nations General Assembly declared 2008 as the International Year of Planet Earth to increase awareness of the importance of Earth sciences for the advancement of sustainable development. UNESCO was designated as the lead agency. The Year's activities spanned the three years 2006–2009.
Goals
The Year aimed to raise $20 million from industry and governments, of which half was to be spent on co-funding research, and half on "outreach" activities. It was intended to be the biggest ever international effort to promote the Earth sciences.
Apart from researchers, who were expected to benefit under the Year's Science Programme, the principal target groups for the Year's broader messages were:
Decision makers and politicians, to better inform them about how Earth scientific knowledge can be used for sustainable development
The voting public, to communicate to them how Earth scientific knowledge can contribute to a better society
Geoscientists, to help them use their knowledge of various aspects of the Earth for the benefit of the world’s population.
The research themes of the year, set out in ten science prospectuses, were chosen for their societal relevance, multidisciplinary nature, and outreach potential. The Year had twelve founding partners, 23 associate partners, and was backed politically by 97 countries representing 87% of the world’s population. The Year was promoted politically at UNESCO and at the United Nations in New York by the United Republic of Tanzania.
The Year encouraged contributions from researchers within ten separate themes. The outreach programme worked in a similar way, receiving bids for support from individuals and organisations worldwide.
The Year's Project Leader was former IUGS President Professor Eduardo F J de Mulder. The Year's Science Committee was chaired by Professor Edward Derbyshire (Royal Holloway) and its Outreach Committee by Dr Ted Nield (Geological Society of London).
The International Year of Planet Eart
|
https://en.wikipedia.org/wiki/Vagusstoff
|
Vagusstoff (literally translated from German as "Vagus Substance") refers to the substance released by stimulation of the vagus nerve which causes a reduction in the heart rate. Discovered in 1921 by physiologist Otto Loewi, vagusstoff was the first confirmation of chemical synaptic transmission and the first neurotransmitter ever discovered. It was later confirmed to be acetylcholine, which was first identified by Sir Henry Hallett Dale in 1914. Because of his pioneering experiments, in 1936 Loewi was awarded the Nobel Prize in Physiology or Medicine, which he shared with Dale.
The discovery of Vagusstoff
By the time Loewi began his experiments there was much discussion among scientists whether communication between nerves and muscles was chemical or electrical by nature. Experiments by Luigi Galvani in the 18th century had demonstrated that electrical stimulation of the frog sciatic nerve resulted in twitching of the leg muscles, and from this he developed the concept of bioelectricity. This led to the idea that direct electrical contact between nerves and muscles mediated transmission of excitation. However, work by John Newport Langley had suggested that in the autonomic nervous system communication in the ciliary ganglion was chemical. Loewi's experiments, published in 1921, finally settled the issue, proving that synaptic transmission was chemical.
Loewi performed a very simple yet elegant experiment. Using an isolated frog heart he had previously found that stimulation of the vagus nerve resulted in a slowing of the heart rate, while stimulation of the sympathetic nerve caused the heart rate to speed up (Figure 1). He reasoned that stimulation of either the vagus or sympathetic nerve would cause the nerve terminal to release a substance which would either slow or accelerate the heart rate. To prove this, he took a frog heart, which had been cannulated in order to perfuse the fluid surrounding the heart, and electrically stimulated the vagus nerve un
|
https://en.wikipedia.org/wiki/Fill%20device
|
A fill device or key loader is a module used to load cryptographic keys into electronic encryption machines. Fill devices are usually hand held and electronic ones are battery operated.
Older mechanical encryption systems, such as rotor machines, were keyed by setting the positions of wheels and plugs from a printed keying list. Electronic systems required some way to load the necessary cryptovariable data. In the 1950s and 1960s, systems such as the U.S. National Security Agency KW-26 and the Soviet Union's Fialka used punched cards for this purpose. Later NSA encryption systems incorporated a serial port fill connector and developed several common fill devices (CFDs) that could be used with multiple systems. A CFD was plugged in when new keys were to be loaded. Newer NSA systems allow "over the air rekeying" (OTAR), but a master key often must still be loaded using a fill device.
NSA uses two serial protocols for key fill, DS-101 and DS-102. Both employ the same U-229 6-pin connector type used for U.S. military audio handsets, with the DS-101 being the newer of the two serial fill protocols. The DS-101 protocol can also be used to load cryptographic algorithms and software updates for crypto modules.
Besides encryption devices, systems that can require key fill include IFF, GPS and frequency hopping radios such as Have Quick and SINCGARS.
Common fill devices employed by NSA include:
KYK-28 pin gun used with the NESTOR (encryption) system
KYK-13 Electronic Transfer Device
KYX-15 Net Control Device
MX-10579 ECCM Fill Device (SINCGARS)
KOI-18 paper tape reader. Can read 8-level paper or PET tape, which is manually pulled through the reader slot by the operator. It is battery powered and has no internal storage, so it can load keys of different lengths, including the 128-bit keys used by more modern systems. The KOI-18 can also be used to load keys into other fill devices that do have internal storage, such as the KYK-13 and AN/CYZ-10. The KOI-18 only supp
|
https://en.wikipedia.org/wiki/Eisosome
|
Eisosomes ('eis' meaning into or portal and 'soma' meaning body) are large, heterodimeric, immobile protein complexes at the plasma membrane which mark the site of endocytosis in some eukaryotes; they were discovered in the yeast Saccharomyces cerevisiae in 2006. Currently, seven genes (Pil1, Lsp1, Sur7, Eis1, Seg1, Ygr130C and Seg2) are annotated to the formation of the proteins identified in eisosomes. These organelle-like structures have put to rest the idea that sites of endocytosis in cells are chosen at random. Eisosomes have a profound role in regulating plasma membrane architecture and organization in yeast. Microscopic and genetic analyses link these stable, ultrastructural assemblies to the endocytosis of both lipid and protein cargoes in cells.
There are approximately 50–100 eisosomes in each mature yeast cell distributed uniformly across the cell surface periphery in a characteristic dotted pattern with each eisosome containing approximately 2000–5000 copies of Pil1 and Lsp1 proteins, as well as, integral membrane protein Sur7. Only a few of the eisosomes present in a cell are active at any one time, suggesting that eisosomes function by using reversible phosphorylation and are regulated portals that govern both location and magnitude of membrane traffic into the cell.
Endocytosis in yeast
The yeast plasma membrane consists of three compartments:
Membrane compartment containing Can1 (MCC)
Membrane compartment containing Pma1 (MCP)
Membrane compartment containing TORC2 (MCT)
The MCC, a furrow in the plasma membrane, is generated by eisosomes; it disappears in cells lacking Pil1, which is one of the main eisosome components.
Structural classification
These are large protein complexes composed primarily of subunits of two Bin-Amphiphysin-RVS (BAR) domain containing proteins Pil1 and Lsp1. These two paralogue proteins self-assemble in higher order structure helices and bind preferentially to phosphoinositide-containing membrane. It is also found
|
https://en.wikipedia.org/wiki/Peter%20Mosses
|
Peter David Mosses (born 1948) is a British computer scientist.
Peter Mosses studied mathematics as an undergraduate at Trinity College, Oxford, and went on to undertake a DPhil supervised by Christopher Strachey in the Programming Research Group while at Wolfson College, Oxford in the early 1970s. He was the last student to submit his thesis under Strachey before Strachey's death.
In 1978, Mosses published his compiler-compiler, the Semantic Implementation System (SIS), which uses a denotational semantics description of the input language.
Mosses has spent most of his career at BRICS in Denmark. He returned to a chair at Swansea University, Wales. His main contribution has been in the area of formal program semantics. In particular, with David Watt he developed action semantics, a combination of denotational, operational and algebraic semantics.
Currently, Mosses is a visitor at TU Delft, working with the Programming Languages Group.
|
https://en.wikipedia.org/wiki/Token%20economy
|
A token economy is a system of contingency management based on the systematic reinforcement of target behavior. The reinforcers are symbols or tokens that can be exchanged for other reinforcers. A token economy is based on the principles of operant conditioning and behavioral economics and can be situated within applied behavior analysis. In applied settings token economies are used with children and adults; however, they have been successfully modeled with pigeons in lab settings.
Basic requirements
Three requirements are basic for a token economy: tokens, back-up reinforcers, and specified target behaviours.
Tokens
Tokens must be used as reinforcers to be effective. A token is an object or symbol that can be exchanged for material reinforcers, services, or privileges (back-up reinforcers). In applied settings, a wide range of tokens have been used: coins, checkmarks, images of small suns or stars, points on a counter, and checkmarks on a poster. These symbols and objects are comparably worthless outside of the patient-clinician or teacher-student relationship, but their value lies in the fact that they can be exchanged for other things. Technically speaking, tokens are not primary reinforcers, but secondary or learned reinforcers. Much research has been conducted on token reinforcement, including animal studies.
Back-up reinforcers
Tokens have no intrinsic value, but can be exchanged for other valued reinforcing events: back-up reinforcers, which act as rewards. Most token economies offer a choice of differing back-up reinforcers that can be virtually anything. Some possible reinforcers might be:
Material reinforcers: candy, cigarettes, journals, money
Services: breakfast in bed, room cleaned, enjoyable activities
Privileges and other extras: passes for leaving a building or area, permission to stay in bed, phone calls, having one's name or picture on a wall.
Back-up reinforcers are chosen according to the individual or group for which the token econ
|
https://en.wikipedia.org/wiki/David%20Watt%20%28computer%20scientist%29
|
David Anthony Watt (born 5 November 1946) is a British computer scientist.
Watt is a professor at the University of Glasgow, Scotland. With Peter Mosses he developed action semantics, a combination of denotational semantics, operational and algebraic semantics. He currently teaches a third year programming languages course, and a postgraduate course on algorithms and data structures. He is recognisable around campus for his more formal attire compared to the department's normally casual dress code.
|
https://en.wikipedia.org/wiki/Clausius%20theorem
|
The Clausius theorem (1855), also known as the Clausius inequality, states that for a thermodynamic system (e.g. heat engine or heat pump) exchanging heat with external thermal reservoirs and undergoing a thermodynamic cycle, the following inequality holds:
$$-\oint \frac{\delta Q}{T_{\text{surr}}} = \Delta S_{\text{res}} \geq 0,$$
where $\Delta S_{\text{res}}$ is the total entropy change in the external thermal reservoirs (surroundings), $\delta Q$ is an infinitesimal amount of heat that is taken from the reservoirs and absorbed by the system ($\delta Q > 0$ if heat from the reservoirs is absorbed by the system, and $\delta Q < 0$ if heat is leaving the system to the reservoirs) and $T_{\text{surr}}$ is the common temperature of the reservoirs at a particular instant in time. The closed integral is carried out along a thermodynamic process path from the initial/final state to the same initial/final state (thermodynamic cycle). In principle, the closed integral can start and end at an arbitrary point along the path.
The Clausius theorem or inequality obviously implies $\Delta S_{\text{res}} \geq 0$ per thermodynamic cycle, meaning that the entropy of the reservoirs increases or does not change, and never decreases, per cycle.
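As an illustration (not part of the original text): for a reversible Carnot cycle that absorbs heat $Q_H$ at temperature $T_H$ and rejects heat $Q_C$ at $T_C$, the cyclic integral $\oint \frac{\delta Q}{T_{\text{surr}}}$ reduces to
$$\frac{Q_H}{T_H} - \frac{Q_C}{T_C} = 0,$$
since $Q_C/Q_H = T_C/T_H$ for the reversible cycle; any irreversibility increases the rejected heat and makes the integral strictly negative (equivalently, $\Delta S_{\text{res}} > 0$).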
For multiple thermal reservoirs with different temperatures interacting with a thermodynamic system undergoing a thermodynamic cycle, the Clausius inequality can be written as follows for clarity:
$$-\oint \sum_i \frac{\delta Q_i}{T_i} = \Delta S_{\text{res}} \geq 0,$$
where $\delta Q_i$ is an infinitesimal amount of heat taken from reservoir $i$ by the system.
In the special case of a reversible process, the equality holds, and the reversible case is used to introduce the state function known as entropy. This is because in a cyclic process the variation of a state function is zero per cycle, so the fact that this integral is equal to zero per cycle in a reversible process implies that there is some function (entropy) whose infinitesimal change is $dS = \frac{\delta Q_{\text{rev}}}{T}$.
The generalized "inequality of Clausius"
for as an infinitesimal change in entropy of a system (denoted by sys) under consideration applies not only to cyclic processes, but to any process that occurs in a closed system.
The Clausius inequality
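The entry breaks off above. As a numerical illustration of the cyclic inequality, the sketch below sums $\delta Q/T_{\text{surr}}$ over one cycle of a simple two-reservoir engine, once for a reversible (Carnot) engine and once for an irreversible engine that rejects extra waste heat. The temperatures and heat values are made-up illustrative numbers, not taken from the article.

# Minimal numerical check of the Clausius inequality for a two-reservoir engine.
# All values are illustrative assumptions.
T_hot, T_cold = 500.0, 300.0      # reservoir temperatures in kelvin
Q_in = 1000.0                     # heat absorbed from the hot reservoir per cycle (J)

def clausius_sum(Q_rejected):
    """Cyclic sum of deltaQ/T_surr, with heat absorbed by the system counted positive."""
    return Q_in / T_hot + (-Q_rejected) / T_cold

Q_rej_reversible = Q_in * T_cold / T_hot          # Carnot relation: Q_C/Q_H = T_C/T_H
Q_rej_irreversible = Q_rej_reversible + 100.0     # extra waste heat from irreversibility

print(clausius_sum(Q_rej_reversible))    # ~0.0 : equality for the reversible cycle
print(clausius_sum(Q_rej_irreversible))  # < 0  : strict inequality for the irreversible cycle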
|
https://en.wikipedia.org/wiki/Grapefruit%20mercaptan
|
Grapefruit mercaptan is the common name for a natural organic compound found in grapefruit. It is a monoterpenoid that contains a thiol (also known as a mercaptan) functional group. Structurally, a hydroxy group of terpineol is replaced by the thiol in grapefruit mercaptan, so it is also called thioterpineol. Volatile thiols typically have very strong, often unpleasant odors that can be detected by humans in very low concentrations. Grapefruit mercaptan has a very potent, but not unpleasant, odor, and it is the chemical constituent primarily responsible for the aroma of grapefruit. This characteristic aroma is a property of only the R enantiomer.
Pure grapefruit mercaptan, or citrus-derived oils rich in grapefruit mercaptan, are sometimes used in perfumery and the flavor industry to impart citrus aromas and flavors. However, both industries actively seek substitutes for grapefruit mercaptans for use as a grapefruit flavorant, since its decomposition products are often highly disagreeable to the human sense of smell.
The detection threshold for the (+)-(R) enantiomer of grapefruit mercaptan is 2×10⁻⁵ ppb, or equivalently a concentration of 2×10⁻¹⁴. This corresponds to being able to detect 2×10⁻⁵ mg in one metric ton of water, one of the lowest detection thresholds ever recorded for a naturally occurring compound.
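The three figures quoted above are mutually consistent, as a short unit-conversion check shows (a sketch that simply restates the threshold in different units):

# Consistency check of the stated detection threshold (illustrative only).
threshold_ppb = 2e-5                  # parts per billion by mass
mass_fraction = threshold_ppb * 1e-9  # 1 ppb = 1e-9 as a mass fraction
print(mass_fraction)                  # 2e-14, the stated concentration

tonne_in_mg = 1e9                     # 1 metric ton = 1e6 g = 1e9 mg
print(mass_fraction * tonne_in_mg)    # 2e-5 mg of mercaptan in one ton of water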
See also
Nootkatone, another aroma compound in grapefruit
Terpineol, where a hydroxyl is in place of the thiol
|
https://en.wikipedia.org/wiki/University%20of%20Waterloo%20Faculty%20of%20Mathematics
|
The Faculty of Mathematics is one of six faculties of the University of Waterloo in Waterloo, Ontario, offering more than 500 courses in mathematics, statistics and computer science. The faculty also houses the David R. Cheriton School of Computer Science, formerly the faculty's computer science department. There are more than 31,000 alumni.
History
The faculty was founded on January 1, 1967, a successor to the University of Waterloo's Department of Mathematics, which had grown to be the largest department in the Faculty of Arts under the chairmanship of Ralph Stanton (and included such influential professors as W. T. Tutte). Initially located in the Physics building, the faculty was moved in May 1968 into the newly constructed Mathematics and Computing (MC) Building. Inspired by Stanton's famously gaudy ties, the students draped a large pink tie over the MC Building on the occasion of its opening, which later became a symbol of the faculty.
At the time of its founding, the faculty included five departments: Applied Analysis and Computer Science, Applied Mathematics, Combinatorics and Optimization, Pure Mathematics, and Statistics. In 1975 the Department of Applied Analysis and Computer Science became simply the Department of Computer Science; in 2005 it became the David R. Cheriton School of Computer Science. The Statistics Department also was later renamed the Department of Statistics and Actuarial Science. The Department of Combinatorics and Optimization is the only academic department in the world devoted to combinatorics.
The second building occupied by the Mathematics faculty was the Davis Centre, which was completed in 1988. This building includes a plethora of offices, along with various lecture halls and meeting rooms. (The Davis Centre is also home to the library originally known as the Engineering, Math, and Science [EMS] Library, which was originally housed on the fourth floor of the MC building.)
The Faculty of Mathematics finished construction of
|
https://en.wikipedia.org/wiki/Nutritional%20science
|
Nutritional science (also nutrition science, sometimes short nutrition, dated trophology) is the science that studies the physiological process of nutrition (primarily human nutrition), interpreting the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism.
History
Before nutritional science emerged as an independent study discipline, mainly chemists worked in this area. The chemical composition of food was examined. Macronutrients, especially protein, fat and carbohydrates, have been the focus of the study of (human) nutrition since the 19th century. Until the discovery of vitamins and vital substances, the quality of nutrition was measured exclusively by the intake of nutritional energy.
The early years of the 20th century were summarized by Kenneth John Carpenter in his Short History of Nutritional Science as "the vitamin era". The first vitamin was isolated and chemically defined in 1926 (thiamine). The isolation of vitamin C followed in 1932, and its effect on health, the protection against scurvy, was scientifically documented for the first time.
At the instigation of the British physiologist John Yudkin at the University of London, the degrees Bachelor of Science and Master of Science in nutritional science were established in the 1950s.
Nutritional science as a separate discipline was institutionalized in Germany in November 1956 when Hans-Diedrich Cremer was appointed to the chair for human nutrition in Giessen. The Institute for Nutritional Science was initially located at the Academy for Medical Research and Further Education, which was transferred to the Faculty of Human Medicine when the Justus Liebig University was reopened. Over time, seven other universities with similar institutions followed in Germany.
From the 1950s to 1970s, a focus of nutritional science was on dietary fat and sugar. From the 1970s to the 1990s, attention was put on diet-related chronic diseas
|
https://en.wikipedia.org/wiki/Geometry%20Center
|
The Geometry Center was a mathematics research and education center at the University of Minnesota. It was established by the National Science Foundation in the late 1980s and closed in 1998. The focus of the center's work was the use of computer graphics and visualization for research and education in pure mathematics and geometry.
The center's founding director was Al Marden. Richard McGehee directed the center during its final years. The center's governing board was chaired by David P. Dobkin.
Geomview
Much of the work done at the center was for the development of Geomview, a three-dimensional interactive geometry program. This focused on mathematical visualization with options to allow hyperbolic space to be visualised. It was originally written for Silicon Graphics workstations, and has been ported to run on Linux systems; it is available for installation in most Linux distributions through the package management system. Geomview can run under Windows using Cygwin and under Mac OS X. Geomview has a web site at .
Geomview is built on the Object Oriented Graphics Library (OOGL). The displayed scene and the attributes of the objects in it may be manipulated by the graphical command language (GCL) of Geomview. Geomview may be set as a default 3-D viewer for Mathematica.
Videos
Geomview was used in the construction of several mathematical movies including:
Not Knot, exploring hyperbolic space rendering of knot complements.
Outside In, a movie about sphere eversion.
The shape of space, exploring possible three dimensional spaces.
Other software
Other programs developed at the Center included:
WebEQ, a web browser plugin allowing mathematical equations to be viewed and edited.
Kali, to explore plane symmetry groups.
The Orrery, a Solar System visualizer.
SaVi, a satellite visualisation tool for examining the orbits and coverage of satellite constellations.
Crafter, for structural design of spacecraft.
Surface Evolver, to explore minimal surfaces.
SnapP
|
https://en.wikipedia.org/wiki/David%20Harel
|
David Harel (born 12 April 1950) is a computer scientist, currently serving as President of the Israel Academy of Sciences and Humanities. He has been on the faculty of the Weizmann Institute of Science in Israel since 1980, and holds the William Sussman Professorial Chair of Mathematics. Born in London, England, he was Dean of the Faculty of Mathematics and Computer Science at the institute for seven years.
Biography
Harel is best known for his work on dynamic logic, computability, database theory, software engineering and modelling biological systems. In the 1980s he invented the graphical language of Statecharts for specifying and programming reactive systems, which has been adopted as part of the UML standard. Since the late 1990s he has concentrated on a scenario-based approach to programming such systems, launched by his co-invention (with W. Damm) of Live Sequence Charts. He has published expository accounts of computer science, such as his award winning 1987 book "Algorithmics: The Spirit of Computing" and his 2000 book "Computers Ltd.: What They Really Can’t do", and has presented series on computer science for Israeli radio and television. He has also worked on other diverse topics, such as graph layout, computer science education, biological modeling and the analysis and communication of odors.
Harel completed his PhD at MIT between 1976 and 1978. In 1987, he co-founded the software company I-Logix, which in 2006 became part of IBM. He has advocated building a full computer model of the Caenorhabditis elegans nematode, which was the first multicellular organism to have its genome completely sequenced. The eventual completeness of such a model depends on his updated version of the Turing test. He is a fellow of the ACM, the IEEE, the AAAS, and the EATCS, and a member of several international academies. Harel is active in a number of peace and human rights organizations in Israel.
Awards and honors
1986 Stevens Award for Software Development Methods
|
https://en.wikipedia.org/wiki/Kulkarni%E2%80%93Nomizu%20product
|
In the mathematical field of differential geometry, the Kulkarni–Nomizu product (named for Ravindra Shripad Kulkarni and Katsumi Nomizu) is defined for two (0,2)-tensors and gives as a result a (0,4)-tensor.
Definition
If h and k are symmetric (0,2)-tensors, then the product is defined via:
where the Xj are tangent vectors and |·| is the matrix determinant. Note that the product is symmetric in h and k, as is clear from the second expression.
With respect to a basis of the tangent space, it takes the compact form
where denotes the total antisymmetrisation symbol.
The Kulkarni–Nomizu product is a special case of the product in the graded algebra
where, on simple elements,
( denotes the symmetric product).
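The explicit formulas were lost in this copy of the article; for reference, the form usually given for the product of two symmetric (0,2)-tensors h and k, writing the product as $h \owedge k$, is the following (a standard statement, whose sign and ordering conventions may differ slightly from the original):

\[
(h \owedge k)(X_1,X_2,X_3,X_4)
  = h(X_1,X_3)\,k(X_2,X_4) + h(X_2,X_4)\,k(X_1,X_3)
  - h(X_1,X_4)\,k(X_2,X_3) - h(X_2,X_3)\,k(X_1,X_4)
  = \begin{vmatrix} h(X_1,X_3) & h(X_1,X_4) \\ k(X_2,X_3) & k(X_2,X_4) \end{vmatrix}
  + \begin{vmatrix} k(X_1,X_3) & k(X_1,X_4) \\ h(X_2,X_3) & h(X_2,X_4) \end{vmatrix}.
\]

From the first expression, the symmetry of the product in h and k noted above is immediate.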
Properties
The Kulkarni–Nomizu product of a pair of symmetric tensors has the algebraic symmetries of the Riemann tensor. For instance, on space forms (i.e. spaces of constant sectional curvature) and two-dimensional smooth Riemannian manifolds, the Riemann curvature tensor has a simple expression in terms of the Kulkarni–Nomizu product of the metric with itself; namely, if we denote by
the -curvature tensor and by
the Riemann curvature tensor with , then
where is the scalar curvature and
is the Ricci tensor, which in components reads .
Expanding the Kulkarni–Nomizu product using the definition from above, one obtains
This is the same expression as stated in the article on the Riemann curvature tensor.
For this very reason, it is commonly used to express the contribution that the Ricci curvature (or rather, the Schouten tensor) and the Weyl tensor each makes to the curvature of a Riemannian manifold. This so-called Ricci decomposition is useful in differential geometry.
When there is a metric tensor g, the Kulkarni–Nomizu product of g with itself is the identity endomorphism of the space of 2-forms, Ω2(M), under the identification (using the metric) of the endomorphism ring End(Ω2(M)) with the tensor product Ω2(M) ⊗ Ω2(M).
A Riemannian manifold has constant sectional curvature k if and only
|
https://en.wikipedia.org/wiki/Ipsilon%20Networks
|
Ipsilon Networks was a computer networking company which specialised in IP switching during the 1990s.
The first product called the IP Switch ATM 1600 was announced in March 1996 for US$46,000.
Its switch used Asynchronous Transfer Mode (ATM) hardware combined with Internet Protocol routing.
The company had a role in the development of the Multiprotocol Label Switching (MPLS) network protocol. The company published early proposals related to label switching, but did not manage to achieve the market share hoped for and was purchased for $120 million by Nokia in December 1997. The president at the time was Brian NeSmith, and it was located in Sunnyvale, California.
|
https://en.wikipedia.org/wiki/Filtered%20category
|
In category theory, filtered categories generalize the notion of directed set understood as a category (hence called a directed category; while some use directed category as a synonym for a filtered category). There is a dual notion of cofiltered category, which will be recalled below.
Filtered categories
A category J is filtered when
it is not empty,
for every two objects j and j′ in J there exists an object k and two arrows f : j → k and f′ : j′ → k in J,
for every two parallel arrows u, v : i → j in J, there exists an object k and an arrow w : j → k such that w ∘ u = w ∘ v.
A filtered colimit is a colimit of a functor F : J → C where J is a filtered category.
Cofiltered categories
A category J is cofiltered if the opposite category J^op is filtered. In detail, a category J is cofiltered when
it is not empty,
for every two objects j and j′ in J there exists an object k and two arrows f : k → j and f′ : k → j′ in J,
for every two parallel arrows u, v : j → i in J, there exists an object k and an arrow w : k → j such that u ∘ w = v ∘ w.
A cofiltered limit is a limit of a functor F : J → C where J is a cofiltered category.
Ind-objects and pro-objects
Given a small category , a presheaf of sets that is a small filtered colimit of representable presheaves, is called an ind-object of the category . Ind-objects of a category form a full subcategory in the category of functors (presheaves) . The category of pro-objects in is the opposite of the category of ind-objects in the opposite category .
κ-filtered categories
There is a variant of "filtered category" known as a "κ-filtered category", defined as follows. This begins with the following observation: the three conditions in the definition of filtered category above say respectively that there exists a cocone over any diagram in of the form , , or . The existence of cocones for these three shapes of diagrams turns out to imply that cocones exist for any finite diagram; in other words, a category is filtered (according to the above definition) if and only if there is a cocone over any finite diagram .
Extending this, given a regular cardinal κ, a
|
https://en.wikipedia.org/wiki/Vakarel%20radio%20transmitter
|
The Vakarel Transmitter was a large broadcasting facility for long- and medium wave near Vakarel, Bulgaria. The Vakarel Transmitter was inaugurated in 1937. It had one directional antenna consisting of three guyed masts and another consisting of two masts.
The most remarkable mast of the Vakarel Transmitter was the Blaw-Knox tower, built in 1937 by the company Telefunken. Along with Lakihegy Tower, Hungary, Riga LVRTC Transmitter, Latvia and Lisnagarvey Radio Mast, Northern Ireland it was one of the few Blaw-Knox towers in Europe until its demolition on 16 September 2020.
The transmitter was shut down at 22:00 UTC on 31 December 2014.
Transmitter internal structure
The modulation method used by the transmitter in Vakarel is called a tube voltage modulation and was successfully used in all powerful AM transmitters at that time. The Vakarel transmitter is supplied with electricity from a substation in Samokov via a medium voltage transmission line. The transmitter uses six stages of amplification. The first stage contains a single radio tube, which generates alternating current at a carrier frequency of 850 kHz. The electrical oscillations of the anode circuit in the tube are coupled in series to the second and third stage. The signals in these three stages are only amplified, without any other changes.
In the special fourth modulation stage, the form of signals is modulated with speech or music. The audio recordings are sent to the transmitter with an underground communication cable from the main radio studio in Sofia. Due to the large distance of almost , the audio signal is amplified at both ends by separate blocks of amplifiers.
The fifth stage consists of six transmitting tubes, two of which are in reserve, and four others can be switched on, if necessary. All of them are water-cooled.
The final sixth stage consists of four high-power transmitting tubes amplifying the final output up to 100 kW. The energy is filtered by a high-power tuned circuit and sent
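As an illustration of the amplitude modulation that the fourth stage performs on the 850 kHz carrier, the following sketch modulates a carrier with a single audio tone. The sampling rate, tone frequency and modulation index are assumptions chosen for demonstration, not the transmitter's actual operating parameters.

# Illustrative amplitude modulation of an 850 kHz carrier with a 1 kHz tone.
import numpy as np

fs = 10_000_000                       # sampling rate (Hz), well above the carrier
t = np.arange(0, 0.002, 1 / fs)       # 2 ms of signal

f_carrier = 850_000                   # carrier frequency used at Vakarel (Hz)
f_audio = 1_000                       # a single tone standing in for programme audio (Hz)
m = 0.8                               # assumed modulation index

audio = np.sin(2 * np.pi * f_audio * t)
carrier = np.sin(2 * np.pi * f_carrier * t)
am_signal = (1 + m * audio) * carrier # double-sideband full-carrier AM

print(am_signal.max(), am_signal.min())   # envelope swings between roughly ±(1 + m)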
|
https://en.wikipedia.org/wiki/Mineral%20absorption
|
In plants and animals, mineral absorption, also called mineral uptake, is the way in which minerals enter the cellular material, typically following the same pathway as water. In plants, the entrance portal for mineral uptake is usually through the roots. Some mineral ions diffuse in-between the cells. In contrast to water, some minerals are actively taken up by plant cells. Mineral nutrient concentration in roots may be 10,000 times more than in surrounding soil. During transport throughout a plant, minerals can exit xylem and enter cells that require them. Mineral ions cross plasma membranes by a chemiosmotic mechanism. Plants absorb minerals in ionic form: nitrate (NO3⁻), phosphate (HPO4²⁻) and potassium ions (K⁺); all have difficulty crossing a charged plasma membrane.
It has long been known that plants expend energy to actively take up and concentrate mineral ions. A proton pump hydrolyzes adenosine triphosphate (ATP) to transport H⁺ ions out of the cell; this sets up an electrochemical gradient that causes positive ions to flow into cells. Negative ions are carried across the plasma membrane in conjunction with H⁺ ions as H⁺ ions diffuse down their concentration gradient.
In animals, minerals found in small amounts are microminerals, while the seven elements that are required in large quantity are known as macrominerals; these are Ca, P, Mg, Na, K, Cl, and S. In most cases, minerals that enter the blood pass through the epithelial cells which line the gastrointestinal mucosa of the small intestine. Minerals can diffuse through the pores of the tight junction in paracellular absorption if there is an electrochemical gradient. Through the process of solvent drag, minerals can also enter with water when solubilized by dipole-ion interactions. Furthermore, the absorption of trace elements can be enhanced by the presence of amino acids that are covalently bonded to the mineral.
|
https://en.wikipedia.org/wiki/Semantic%20service-oriented%20architecture
|
A Semantic Service Oriented Architecture (SSOA) is an architecture that allows for scalable and controlled Enterprise Application Integration solutions. SSOA describes an approach to enterprise-scale IT infrastructure. It leverages rich, machine-interpretable descriptions of data, services, and processes to enable software agents to autonomously interact to perform critical mission functions. SSOA is technically founded on three notions:
The principles of Service-oriented architecture (SOA);
Standard Based Design (SBD); and
Semantics-based computing.
SSOA combines and implements these computer science concepts into a robust, extensible architecture capable of enabling complex, powerful functions.
Applications
In the health care industry, the SSOA of HL7 has long been implemented. Other protocols include LOINC, PHIN, and HIPAA-related standards. There is a series of SSOA-related ISO standards published for financial services, which can be found on the ISO's website. Some financial sectors also adopt EMV standards to serve European consumers. Parts of SSOA concerning transport and trade are in ISO sections 03.220.20 and 35.240.60. Some general guidelines of the technology and the standards in other fields are partially located at 25.040.40 and 35.240.99.
See also
Cyber security standards
ISO/IEC 7816
ISO 8583
ISO/IEC 8859
ISO 9241
ISO 9660
ISO/IEC 11179
ISO/IEC 15408
ISO/IEC 17799
ISO/IEC 27000-series
Service component architecture
Semantic web
EMML
Business Intelligence 2.0 (BI 2.0)
|
https://en.wikipedia.org/wiki/Ristocetin
|
Ristocetin is a glycopeptide antibiotic, obtained from Amycolatopsis lurida, previously used to treat staphylococcal infections. It is no longer used clinically because it caused thrombocytopenia and platelet agglutination. It is now used solely to assay those functions in vitro in the diagnosis of conditions such as von Willebrand disease (vWD) and Bernard–Soulier syndrome. Platelet agglutination caused by ristocetin can occur only in the presence of von Willebrand factor multimers, so if ristocetin is added to blood lacking the factor (or its receptor—see below), the platelets will not clump.
Through an unknown mechanism, the antibiotic ristocetin causes von Willebrand factor to bind the platelet receptor glycoprotein Ib (GpIb), so when ristocetin is added to normal blood, it causes agglutination.
In some types of vWD (types 2B and platelet-type), even very small amounts of ristocetin cause platelet aggregation when the patient's platelet-rich plasma is used. This paradox is explained by these types having gain-of-function mutations which cause the vWD high molecular-weight multimers to bind more tightly to their receptors on platelets (the alpha chains of glycoprotein Ib (GPIb) receptors). In the case of type 2B vWD, the gain-of-function mutation involves von Willebrand's factor (VWF gene), and in platelet-type vWD, the receptor is the object of the mutation (GPIb). This increased binding causes vWD because the high-molecular weight multimers are removed from circulation in plasma since they remain attached to the patient's platelets. Thus, if the patient's platelet-poor plasma is used, the ristocetin cofactor assay will not agglutinate standardized platelets (i.e., pooled platelets from normal donors that are fixed in formalin), similar to the other types of vWD.
In all forms of the ristocetin assay, the platelets are fixed in formalin prior to the assay to prevent von Willebrand's factor stored in platelet granules from being released and participating
|
https://en.wikipedia.org/wiki/J-integral
|
The J-integral represents a way to calculate the strain energy release rate, or work (energy) per unit fracture surface area, in a material. The theoretical concept of J-integral was developed in 1967 by G. P. Cherepanov and independently in 1968 by James R. Rice, who showed that an energetic contour path integral (called J) was independent of the path around a crack.
Experimental methods were developed using the integral that allowed the measurement of critical fracture properties in sample sizes that are too small for Linear Elastic Fracture Mechanics (LEFM) to be valid. These experiments allow the determination of fracture toughness from the critical value of fracture energy JIc, which defines the point at which large-scale plastic yielding during propagation takes place under mode I loading.
The J-integral is equal to the strain energy release rate for a crack in a body subjected to monotonic loading. This is generally true, under quasistatic conditions, only for linear elastic materials. For materials that experience small-scale yielding at the crack tip, J can be used to compute the energy release rate under special circumstances such as monotonic loading in mode III (antiplane shear). The strain energy release rate can also be computed from J for pure power-law hardening plastic materials that undergo small-scale yielding at the crack tip.
The quantity J is not path-independent for monotonic mode I and mode II loading of elastic-plastic materials, so only a contour very close to the crack tip gives the energy release rate. Also, Rice showed that J is path-independent in plastic materials when there is no non-proportional loading. Unloading is a special case of this, but non-proportional plastic loading also invalidates the path-independence. Such non-proportional loading is the reason for the path-dependence for the in-plane loading modes on elastic-plastic materials.
Two-dimensional J-integral
The two-dimensional J-integral was originally defined a
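The entry is cut off above; for orientation, the contour form in which the two-dimensional J-integral is usually quoted (standard notation, given here as a reference rather than as the article's own statement) is

\[
J = \int_{\Gamma} \left( W \, dy - T_i \frac{\partial u_i}{\partial x} \, ds \right),
\]

where Γ is a counterclockwise contour surrounding the crack tip, W is the strain energy density, T_i = σ_ij n_j are the components of the traction vector on the contour, u_i are the displacement components, and ds is an element of arc length along Γ; the crack is taken to lie along the x-axis.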
|
https://en.wikipedia.org/wiki/Immune%20complex
|
An immune complex, sometimes called an antigen-antibody complex or antigen-bound antibody, is a molecule formed from the binding of multiple antigens to antibodies. The bound antigen and antibody act as a unitary object, effectively an antigen of its own with a specific epitope. After an antigen-antibody reaction, the immune complexes can be subject to any of a number of responses, including complement deposition, opsonization, phagocytosis, or processing by proteases. Red blood cells carrying CR1-receptors on their surface may bind C3b-coated immune complexes and transport them to phagocytes, mostly in liver and spleen, and return to the general circulation.
The ratio of antigen to antibody determines size and shape of immune complex. This, in turn, determines the effect of the immune complex. Many innate immune cells have FcRs, which are membrane-bound receptors that bind the constant regions of antibodies. Most FcRs on innate immune cells have low affinity for a singular antibody, and instead need to bind to an immune complex containing multiple antibodies in order to begin their intracellular signaling pathway and pass along a message from outside to inside of the cell. Additionally, the grouping and binding together of multiple immune complexes allows for an increase in the avidity, or strength of binding, of the FcRs. This allows innate immune cells to get multiple inputs at once and prevents them from being activated early.
Immune complexes may themselves cause illness when they are deposited in organs, for example, in certain forms of vasculitis. This is the third form of hypersensitivity in the Gell-Coombs classification, called type III hypersensitivity. Such hypersensitivity progressing to disease states produces the immune complex diseases.
Immune complex deposition is a prominent feature of several autoimmune diseases, including rheumatoid arthritis, scleroderma and Sjögren's syndrome. An inability to degrade immune complexes in the lysosome and subs
|
https://en.wikipedia.org/wiki/Moore%20neighborhood
|
In cellular automata, the Moore neighborhood is defined on a two-dimensional square lattice and is composed of a central cell and the eight cells that surround it.
Name
The neighborhood is named after Edward F. Moore, a pioneer of cellular automata theory.
Importance
It is one of the two most commonly used neighborhood types, the other one being the von Neumann neighborhood, which excludes the corner cells. The well known Conway's Game of Life, for example, uses the Moore neighborhood. It is similar to the notion of 8-connected pixels in computer graphics.
The Moore neighbourhood of a cell is the cell itself and the cells at a Chebyshev distance of 1.
The concept can be extended to higher dimensions, for example forming a 26-cell cubic neighborhood for a cellular automaton in three dimensions, as used by 3D Life. In dimension d, the size of the neighborhood is 3^d − 1.
In two dimensions, the number of cells in an extended Moore neighbourhood of range r is (2r + 1)^2.
Algorithm
The idea behind the formulation of Moore neighborhood is to find the contour of a given graph. This idea was a great challenge for most analysts of the 18th century, and as a result an algorithm was derived from the Moore graph which was later called the Moore Neighborhood algorithm.
The pseudocode for the Moore-Neighbor tracing algorithm is
Input: A square tessellation, T, containing a connected component P of black cells.
Output: A sequence B (b1, b2, ..., bk) of boundary pixels i.e. the contour.
Define M(a) to be the Moore neighborhood of pixel a.
Let p denote the current boundary pixel.
Let c denote the current pixel under consideration i.e. c is in M(p).
Let b denote the backtrack of c (i.e. neighbor pixel of p that was previously tested)
Begin
Set B to be empty.
From bottom to top and left to right scan the cells of T until a black pixel, s, of P is found.
Insert s in B.
Set the current boundary point p to s i.e. p=s
Let b = the pixel from which s
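The pseudocode above is cut off. The sketch below is a compact, runnable Python version of the same idea; it is a simplified variant (it scans top-to-bottom rather than bottom-to-top, and stops when the start pixel is revisited instead of using Jacob's stopping criterion), so the function name and details are illustrative rather than a transcription of the algorithm as stated.

# Simplified Moore-neighbour contour tracing on a binary grid (1 = black, 0 = white).
# Assumes a single 4-connected blob of black pixels.
def trace_boundary(grid):
    rows, cols = len(grid), len(grid[0])

    def is_black(r, c):
        return 0 <= r < rows and 0 <= c < cols and grid[r][c] == 1

    # Find a starting boundary pixel s by scanning top-to-bottom, left-to-right.
    start = next(((r, c) for r in range(rows) for c in range(cols) if grid[r][c] == 1), None)
    if start is None:
        return []

    # Moore neighbourhood offsets in clockwise order, beginning with "west".
    nbrs = [(0, -1), (-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1)]

    boundary = [start]
    p = start
    backtrack = (start[0], start[1] - 1)      # white pixel we "entered" the start from
    while True:
        i = nbrs.index((backtrack[0] - p[0], backtrack[1] - p[1]))
        found = None
        for k in range(1, 9):                 # walk clockwise around p from the backtrack
            cand = (p[0] + nbrs[(i + k) % 8][0], p[1] + nbrs[(i + k) % 8][1])
            if is_black(*cand):
                found = cand
                break
            backtrack = cand                  # last white pixel seen becomes the new backtrack
        if found is None or found == start:
            break                             # isolated pixel, or the contour has closed
        boundary.append(found)
        p = found
    return boundary

# Example: the contour of a 2x2 block inside a 4x4 grid.
demo = [[0, 0, 0, 0],
        [0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0]]
print(trace_boundary(demo))    # [(1, 1), (1, 2), (2, 2), (2, 1)]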
|
https://en.wikipedia.org/wiki/OldVersion.com
|
OldVersion.com is an archive website that stores and distributes older versions of primarily Internet-related IBM PC compatible and Apple Macintosh freeware and shareware application software. Alex Levine and Igor Dolgalev founded the site in 2001.
Levine created the site because "Companies make a lot of new versions. They're not always better for the consumer." As reported in The Wall Street Journal, 'Users often try to downgrade when they find confusing changes in a new version or encounter software bugs, or just decide they want to go back to a more familiar version,' said David Smith, an analyst at research firm Gartner. 'Often, they discover that the downgrade process is complicated, if not impossible.'
When OldVersion.com was launched it offered 80 versions of 14 programs.
By 2005, over 500 versions were posted.
By 28 August 2007, this had grown to 2388 versions of 179 programs, in categories such as "graphics", "file-sharing", "security" and "enterprise". The site also carries 600+ versions of 35 Macintosh programs.
In 2007, PC World labeled the site "a treasure trove ... of older-but-better software";
In 2005, National Review called OldVersion.com a "champion" for "software conservatives".
According to Alexander Levine's own words, he has received threats from proprietary software developers for running an archive of obsolete internet browsers with known critical security flaws.
See also
Abandonware
Legacy code
Planned obsolescence
Technology acceptance model
Switching barriers
|
https://en.wikipedia.org/wiki/Convulxin
|
Convulxin is a snake venom toxin found in a tropical rattlesnake known as Crotalus durissus terrificus. It belongs to the family of hemotoxins, which destroy red blood cells or, as is the case with convulxin, induce blood coagulation.
It causes platelet activation in the blood, forming clots and a buildup of pressure. Convulxin acts as an agonist of the GPVI receptor, the major signalling receptor for collagen. This can cause blood vessels to burst, or the heart or brain to lose blood supply, thus resulting in death. It is a tetrameric C-type lectin with an oligomeric structure, made up of heterodimeric subunits.
Family
Convulxin is part of the snake venom C-type lectin family, a group of hemorrhagic toxins that disrupt body homeostasis. The name describes their similarity in structure to C-type lectins from other animals, proteins that bind calcium to induce various signalling pathways.
Proteins of about 130 amino acids in length, C-type lectins contain at least a carbohydrate recognition domain (CRD), which mediates sugar and calcium binding. They are involved in various biological activities, ranging from cell-cell adhesion, serum glycoprotein turnover, to immune responses and cell apoptosis.
History
The toxin was first described in detail in 1969 by two Brazilian researchers from University of Campinas, Júlia Prado-Franceschi and Oswaldo Vital-Brazil.
The snake C-type lectin convulxin was reported to activate platelets in a similar way to collagen in the late 1970s, but this was only announced after the discovery of the association with the FcR γ-chain and after it was recognized to mediate activation through GPVI.
Now this toxin is widely used to study mammalian platelet receptors.
Structure
Convulxin is a heterodimer made up of α- (13.9 kDa) and β- (12.6 kDa) subunits, with 38% sequence identity and homologous structures. The subunits are connected by disulfide bridges to form a cyclic, ring-like α4β4 structure.
Its function arises from its ability to bind
|
https://en.wikipedia.org/wiki/BigDog
|
BigDog is a dynamically stable quadruped military robot that was created in 2005 by Boston Dynamics with Foster-Miller, the NASA Jet Propulsion Laboratory, and the Harvard University Concord Field Station. It was funded by DARPA, but the project was shelved after the BigDog was deemed too loud for combat.
History
BigDog was funded by the Defense Advanced Research Projects Agency (DARPA) in the hopes that it would be able to serve as a robotic pack mule to accompany soldiers in terrain too rough for conventional vehicles. Instead of wheels or treads, BigDog uses four legs for movement, allowing it to move across surfaces that would defeat wheels. The legs contain a variety of sensors, including joint position and ground contact. BigDog also features a laser gyroscope and a stereo vision system.
BigDog is long, stands tall, and weighs , making it about the size of a small mule. It is capable of traversing difficult terrain, running at , carrying , and climbing a 35 degree incline. Locomotion is controlled by an onboard computer that receives input from the robot's various sensors. Navigation and balance are also managed by the control system.
BigDog's walking pattern is controlled through four legs, each equipped with four low-friction hydraulic cylinder actuators that power the joints. BigDog's locomotion behaviors can vary greatly. It can stand up, sit down, walk with a crawling gait that lifts one leg at a time, walk with a trotting gait lifting diagonal legs, or trot with a running gait. The travel speed of BigDog varies from a crawl to a trot.
The BigDog project was headed by Dr. Martin Buehler, who received the Joseph Engelberger Award from the Robotics Industries Association in 2012 for the work. Dr. Buehler while previously a professor at McGill University, headed the robotics lab there, developing four-legged walking and running robots.
Built onto the actuators are sensors for joint position and force, and movement is ultimately controlled through
|
https://en.wikipedia.org/wiki/Geometric%20analysis
|
Geometric analysis is a mathematical discipline where tools from differential equations, especially elliptic partial differential equations (PDEs), are used to establish new results in differential geometry and differential topology. The use of linear elliptic PDEs dates at least as far back as Hodge theory. More recently, it refers largely
to the use of nonlinear partial differential equations to study geometric and topological properties of spaces, such as submanifolds of Euclidean space, Riemannian manifolds, and symplectic manifolds. This approach dates back to the work by Tibor Radó and Jesse Douglas on minimal surfaces, John Forbes Nash Jr. on isometric embeddings of Riemannian manifolds into Euclidean space, work by Louis Nirenberg on the Minkowski problem and the Weyl problem, and work by Aleksandr Danilovich Aleksandrov and Aleksei Pogorelov on convex hypersurfaces. In the 1980s fundamental contributions by Karen Uhlenbeck, Clifford Taubes, Shing-Tung Yau, Richard Schoen, and Richard Hamilton launched a particularly exciting and productive era of geometric analysis that continues to this day. A celebrated achievement was the solution to the Poincaré conjecture by Grigori Perelman, completing a program initiated and largely carried out by Richard Hamilton.
Scope
The scope of geometric analysis includes both the use of geometrical methods in the study of partial differential equations (when it is also known as "geometric PDE"), and the application of the theory of partial differential equations to geometry. It incorporates problems involving curves and surfaces, or domains with curved boundaries, but also the study of Riemannian manifolds in arbitrary dimension. The calculus of variations is sometimes regarded as part of geometric analysis, because differential equations arising from variational principles have a strong geometric content. Geometric analysis also includes global analysis, which concerns the study of differential equations on manifolds, and t
|
https://en.wikipedia.org/wiki/Ziauddin%20Ahmad
|
Sir Ziauddin Ahmad (born Ziauddin Ahmed Zuberi; 13 February 1873 – 23 December 1947) was an Indian mathematician, parliamentarian, logician, natural philosopher, politician, political theorist, educationist and a scholar. He was a member of the Aligarh Movement and was a professor, principal of MAO College, first pro vice-chancellor, vice chancellor and rector of Aligarh Muslim University, India.
He served as vice chancellor of Aligarh Muslim University for three terms.
In 1917, he was appointed a member of the Calcutta University Commission also known as the Sadler Commission. He was also a member of Skeen Committee also known as Indian Sandhurst Committee and Shea Commission for the Indianisation of the British Indian Army.
Early life
He was born on 13 February 1873, in Meerut, Uttar Pradesh, British India. His primary education was at a madrasa and later joined Muhammadan Anglo-Oriental College, Aligarh.
Ahmad's association with Aligarh began in 1889, when at the age of 16 years, he joined the 'first year' at the M.A.O. College School. He passed high school in first division and was awarded the Lang Medal and a government scholarship. He had to join the Government College, Allahabad, as science courses were not available at Muhammadan Anglo-Oriental College. He returned to Aligarh and passed his B.A. in 1895 in first division, standing first among science students, and was awarded the Starchy Gold Medal. Soon after passing B.A., he was appointed assistant lecturer in mathematics at Muhammadan Anglo-Oriental College.
On the basis of merit, he was nominated for the post of deputy collector, but Ahmad declined the offer and elected to continue in the service of the college. Sir Syed offered him a permanent appointment in the grade of Rs , provided he signed a bond to serve for a period of five years. He responded by undertaking to serve for his entire life. A highly impressed Sir Syed tore up the bond.
Education
Ahmad completed his BA in mathematics (with distin
|
https://en.wikipedia.org/wiki/Test%20engineer
|
A test engineer is a professional who determines how to create a process that would best test a particular product in manufacturing and related disciplines, in order to assure that the product meets applicable specifications. Test engineers are also responsible for determining the best way a test can be performed in order to achieve adequate test coverage. Often test engineers also serve as a liaison between manufacturing, design engineering, sales engineering and marketing communities as well.
Test engineer expertises
Test engineers can have different expertise, depending on which test processes they are most familiar with, although many test engineers are familiar with the full range, from PCB-level processes like ICT, JTAG, and AXI to PCBA- and system-level processes like board functional test (BFT or FT), burn-in test, and system level test (ST). Some of the processes used in manufacturing where a test engineer is needed are:
In-circuit test (ICT)
Stand-alone JTAG test
Automated x-ray inspection (AXI) (also known as X-ray test)
Automated optical inspection (AOI) test
Center of Gravity (CG) test
Continuity or flying probe test
Electromagnetic compatibility or EMI test
(Board) functional test (BFT/FT)
Burn-in test
Environmental stress screening (ESS) test
Highly Accelerated Life Test (HALT)
Highly accelerated stress screening (HASS) test
Insulation test
Ongoing reliability test (ORT)
Regression test
System test (ST)
Vibration test
Final quality audit process (FQA) test
Early project involvement from design phase
Ideally, a test engineer's involvement with a product begins with the very early stages of the engineering design process, i.e. the requirements engineering stage and the design engineering stage. Depending on the culture of the firm, these early stages could involve a Product Requirements Document (PRD) and Marketing Requirements Document (MRD)—some of the earliest work done during a new product introduction (NPI).
By working with or as part o
|
https://en.wikipedia.org/wiki/Wine%20competition
|
A wine competition is an organized event in which trained judges or consumers competitively rate different vintages, categories, and/or brands of wine. Wine competitions generally use blind tasting of wine to prevent bias by the judges.
Types of wine competitions
The common goal of all wine competitions is to obtain valid comparisons of wines by trained experts. Wine competitions can vary widely in their characteristics, and are sometimes geared toward a specific audience (i.e., consumers vs. industry professionals). One of the ways wine competitions can vary is how the wines are ranked. In most competitions, medals are given to individual wines in various categories on the basis of the blind tasting. The awards are frequently bronze, silver, gold, and double gold medals. In other competitions, ribbons of various colors are sometimes used. In these competitions, it is common for more than one wine to receive any given medal. These competitions often also include a "Best in Class" award, producing a clear category winner among those vintages awarded any particular medal, as seen in the Los Angeles International Wine & Spirits Competition, the New York International Wine Competition, and The Decanter World Wine Awards. In still other competitions, instead of giving numerous awards, the wines in each wine category are ranked by number from high to low, a process known as ordinal ranking. In these competitions, there is only one first-place winner, one second place, one third place, and so on down to the lowest place. Medal rankings are different from the 100 point scales that are used by many journalistic publications, such as Wine Spectator. These "scores" are obtained when wine journalists blind taste the wines and score them on an individual basis, as opposed to when the wines are being tasted side by side and competing against one another in a competition setting.
There are critics who argue that the results of such competitions may be misleading and should not b
|
https://en.wikipedia.org/wiki/Hann%20function
|
The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing. The function, with length and amplitude is given by:
For digital signal processing, the function is sampled symmetrically (with spacing and amplitude ):
which is a sequence of samples, and can be even or odd. It is also known as the raised cosine window, Hann filter, von Hann window, etc.
Fourier transform
The Fourier transform of is given by:
Discrete transforms
The Discrete-time Fourier transform (DTFT) of the length, time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation:
The truncated sequence is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression:
An N-length DFT of the window function samples the DTFT at frequencies for integer values of From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution.
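A quick numerical check of the claim that only three DFT coefficients are non-zero, and that they are real-valued, using the periodic ("DFT-even") form of the window (a sketch with an arbitrarily chosen length N = 16):

# The DFT of a periodic (DFT-even) Hann window has only three non-zero bins, all real.
import numpy as np

N = 16
n = np.arange(N)
w = 0.5 * (1 - np.cos(2 * np.pi * n / N))   # periodic Hann window of length N

W = np.fft.fft(w)
W[np.abs(W) < 1e-12] = 0                    # suppress floating-point noise
print(np.round(W, 6))
# Only bins 0, 1 and N-1 survive: N/2 = 8 and -N/4 = -4, all (essentially) real.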
Name
The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. The confusion arose from the similar Hamming function, named after Richard Hamming.
See also
Window function
Apod
|
https://en.wikipedia.org/wiki/Postzygotic%20mutation
|
A postzygotic mutation (or post-zygotic mutation) is a change in an organism's genome that is acquired during its lifespan, instead of being inherited from its parent(s) through fusion of two haploid gametes. Mutations that occur after the zygote has formed can be caused by a variety of sources that fall under two classes: spontaneous mutations and induced mutations. How detrimental a mutation is to an organism is dependent on what the mutation is, where it occurred in the genome and when it occurred.
Causes
Postzygotic changes to a genome can be caused by small mutations that affect a single base pair, or large mutations that affect entire chromosomes and are divided into two classes, spontaneous mutations and induced mutations.
Spontaneous Mutations
Most spontaneous mutations are the result of naturally occurring lesions to DNA and errors during DNA replication without direct exposure to an agent. A few common spontaneous mutations are:
Depurination- The loss of a purine (A or G) base to form an apurinic site. An apurinic site, also known as an AP site, is the location in a genetic sequence that does not contain a purine base. During replication, the affected double-stranded DNA will produce one double-stranded daughter containing the missing purine, resulting in an unchanged sequence. The other strand will produce a shorter strand, missing the purine and its complementary base.
Deamination- The amine group on a base is changed to a keto group. This results in cytosine being changed to uracil and adenine being changed to hypoxanthine which can result in incorrect DNA replication and repair.
Tautomerization- The hydrogen atom on a nucleotide base is repositioned causing altered hydrogen bonding pattern and incorrect base pairing during replication. For example, the keto tautomer of thymine normally pairs with adenine, however the enol tautomer of thymine can bind with guanine. This results in an incorrect base pair match. Similarly there are amino and imi
|
https://en.wikipedia.org/wiki/Dodgem
|
Dodgem is a simple abstract strategy game invented by Colin Vout in 1972 while he was a mathematics student at the University of Cambridge as described in the book Winning Ways. It is played on an n×n board with n-1 cars for each player—two cars each on a 3×3 board is enough for an interesting game, but larger sizes are also possible.
Play
The board is initially set up with n-1 blue cars along the left edge and n-1 red cars along the bottom edge, the bottom left square remaining empty. Turns alternate: player 1 ("Left")'s turn is to move any one of the blue cars one space forwards (right) or sideways (up or down). Player 2 ("Right")'s turn is to move any one of the red cars one space forwards (up) or sideways (left or right).
Cars may not move onto occupied spaces. They may leave the board, but only by a forward move. A car which leaves the board is out of the game. There are no captures. A player must always leave their opponent a legal move or else forfeit the game.
The winner is the player who first gets all their pieces off the board, or has all their cars blocked in by their opponent.
The game can also be played in misère form, where the aim is to force your opponent to move all of their pieces off the board.
Theory
The 3×3 game can be completely analyzed (strongly solved) and is a win for the first player—a table showing who wins from every possible position is given in Winning Ways, and given this information it is easy to read off a winning strategy.
David des Jardins showed in 1996 that the 4×4 and 5×5 games never end with perfect play—both players get stuck shuffling their cars from side to side to prevent the other from winning. He conjectures that this is true for all larger boards.
For a 3x3 board, there are 56 reachable positions. Out of the 56 reachable positions, 8 of them are winning, 4 of them are losing, and 44 are draws.
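As an illustration of the movement rules described above, here is a minimal move generator in Python. It is a sketch: the coordinate convention and function name are my own, and the rule that a move must leave the opponent at least one legal move is not enforced here.

# Move generator for Dodgem on an n x n board.
# Coordinates are (col, row) with (0, 0) at the bottom-left square.
def legal_moves(n, blue, red, player):
    """blue, red: sets of (col, row) car positions; player: 'blue' or 'red'.
    Returns (car, destination) pairs; destination None means the car leaves the board."""
    own = blue if player == 'blue' else red
    occupied = blue | red
    forward = (1, 0) if player == 'blue' else (0, 1)            # blue: right, red: up
    sideways = [(0, 1), (0, -1)] if player == 'blue' else [(1, 0), (-1, 0)]

    moves = []
    for (x, y) in own:
        fx, fy = x + forward[0], y + forward[1]                 # forward move
        if not (0 <= fx < n and 0 <= fy < n):
            moves.append(((x, y), None))                        # leaves the board (forward only)
        elif (fx, fy) not in occupied:
            moves.append(((x, y), (fx, fy)))
        for dx, dy in sideways:                                 # sideways moves stay on the board
            sx, sy = x + dx, y + dy
            if 0 <= sx < n and 0 <= sy < n and (sx, sy) not in occupied:
                moves.append(((x, y), (sx, sy)))
    return moves

# Initial 3x3 position: blue on the left edge, red on the bottom edge, corner empty.
blue, red = {(0, 1), (0, 2)}, {(1, 0), (2, 0)}
print(legal_moves(3, blue, red, 'blue'))   # three legal opening moves for blue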
|
https://en.wikipedia.org/wiki/Cytochrome%20b
|
Cytochrome b, within both molecular and cell biology, is a protein found in the mitochondria of eukaryotic cells. It functions as part of the electron transport chain and is the main subunit of the transmembrane cytochrome bc1 and b6f complexes.
Function
In the mitochondrion of eukaryotes and in aerobic prokaryotes, cytochrome b is a component of respiratory chain complex III () — also known as the bc1 complex or ubiquinol-cytochrome c reductase. In plant chloroplasts and cyanobacteria, there is an analogous protein, cytochrome b6, a component of the plastoquinone-plastocyanin reductase (), also known as the b6f complex. These complexes are involved in electron transport, the pumping of protons to create a proton-motive force (PMF). This proton gradient is used for the generation of ATP. These complexes play a vital role in cells.
Structure
Cytochrome b/b6 is an integral membrane protein of approximately 400 amino acid residues that probably has 8 transmembrane segments. In plants and cyanobacteria, cytochrome b6 consists of two protein subunits encoded by the petB and petD genes. Cytochrome b/b6 non-covalently binds two heme groups, known as b562 and b566. Four conserved histidine residues are postulated to be the ligands of the iron atoms of these two heme groups.
Use in phylogenetics
Cytochrome b is commonly used as a region of mitochondrial DNA for determining phylogenetic relationships between organisms, due to its sequence variability. It is considered to be most useful in determining relationships within families and genera. Comparative studies involving cytochrome b have resulted in new classification schemes and have been used to assign newly described species to a genus as well as to deepen the understanding of evolutionary relationships.
Clinical significance
Mutations in cytochrome b primarily result in exercise intolerance in human patients; though more rare, severe multi-system pathologies have also been reported.
Single-point mutations in cy
|
https://en.wikipedia.org/wiki/Jeffrey%20Lagarias
|
Jeffrey Clark Lagarias (born November 16, 1949 in Pittsburgh, Pennsylvania, United States) is a mathematician and professor at the University of Michigan.
Education
While in high school in 1966, Lagarias studied astronomy at the Summer Science Program.
He completed an S.B. and S.M. in Mathematics at the Massachusetts Institute of Technology in 1972. The title of his thesis was "Evaluation of certain character sums". He was a Putnam Fellow at MIT in 1970. He received his Ph.D. in Mathematics from MIT for his thesis "The 4-part of the class group of a quadratic field", in 1974. His advisor for both his masters and Ph.D was Harold Stark.
Career
In 1975, he joined AT&T Bell Laboratories and eventually became Distinguished Member of Technical Staff. Since 1995, he has been a Technology Consultant at AT&T Research Laboratories. In 2002, he moved to Michigan to work at the University and settle down with his family.
While his recent work has been in theoretical computer science, his original training was in analytic algebraic number theory. He has since worked in many areas, both pure and applied, and considers himself a mathematical generalist.
Lagarias discovered an elementary problem that is equivalent to the Riemann hypothesis, namely whether,
for all n > 0, we have
$$\sigma(n) \leq H_n + e^{H_n} \ln H_n,$$
with equality only when n = 1. Here Hn is the nth harmonic number, the sum of the reciprocals of the first n positive integers, and σ(n) is the divisor function, the sum of the positive divisors of n.
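The inequality is easy to test numerically for small n; the short sketch below does so (purely illustrative, since a finite check says nothing about the Riemann hypothesis itself).

# Check sigma(n) <= H_n + exp(H_n) * ln(H_n) for n = 1..20.
from math import exp, log

def sigma(n):                      # sum of the positive divisors of n
    return sum(d for d in range(1, n + 1) if n % d == 0)

H = 0.0
for n in range(1, 21):
    H += 1 / n                     # harmonic number H_n
    bound = H + exp(H) * log(H)    # log(H_1) = 0, so equality holds at n = 1
    print(n, sigma(n), round(bound, 3), sigma(n) <= bound)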
He disproved Keller's conjecture in dimensions at least 10. Lagarias has also done work on the Collatz conjecture and Li's criterion and has written several highly cited papers in symbolic computation with Dave Bayer.
Awards and honors
He received a Lester R. Ford Award from the Mathematical Association of America in 1986 and again in 2007.
In 2012, he became a fellow of the American Mathematical Society.
|
https://en.wikipedia.org/wiki/Bethe%20ansatz
|
In physics, the Bethe ansatz is an ansatz for finding the exact wavefunctions of certain quantum many-body models, most commonly for one-dimensional lattice models. It was first used by Hans Bethe in 1931 to find the exact eigenvalues and eigenvectors of the one-dimensional antiferromagnetic isotropic (XXX) Heisenberg model.
Since then the method has been extended to other spin chains and statistical lattice models.
"Bethe ansatz problems" were one of the topics featuring in the "To learn" section of Richard Feynman's blackboard at the time of his death.
Discussion
In the framework of many-body quantum mechanics, models solvable by the Bethe ansatz can be contrasted with free fermion models. One can say that the dynamics of a free model is one-body reducible: the many-body wave function for fermions (bosons) is the anti-symmetrized (symmetrized) product of one-body wave functions. Models solvable by the Bethe ansatz are not free: the two-body sector has a non-trivial scattering matrix, which in general depends on the momenta.
On the other hand, the dynamics of the models solvable by the Bethe ansatz is two-body reducible: the many-body scattering matrix is a product of two-body scattering matrices. Many-body collisions happen as a sequence of two-body collisions and the many-body wave function can be represented in a form which contains only elements from two-body wave functions. The many-body scattering matrix is equal to the product of pairwise scattering matrices.
The generic form of the (coordinate) Bethe ansatz for a many-body wavefunction is
in which N is the number of particles, x_j their positions, S_N is the set of all permutations of the integers 1, ..., N, (−1)^[P] is the parity of the permutation P, taking values either positive or negative one, k_j is the (quasi-)momentum of the j-th particle, θ is the scattering phase shift function and sgn is the sign function. This form is universal (at least for non-nested systems), with the momentum and scattering functions being model-dep
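The displayed formula was lost in this copy; a commonly quoted form, consistent with the symbols just listed but with notational conventions that vary between sources, is

\[
\Psi(x_1,\dots,x_N)
  = \prod_{N \ge j > k \ge 1} \operatorname{sgn}(x_j - x_k)
    \sum_{P \in S_N} (-1)^{[P]}
    \exp\!\Big( i \sum_{j=1}^{N} k_{P_j} x_j
      + \tfrac{i}{2} \sum_{j > k} \operatorname{sgn}(x_j - x_k)\, \theta(k_{P_j}, k_{P_k}) \Big).
\]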
|
https://en.wikipedia.org/wiki/ProCurve
|
HP ProCurve was the name of the networking division of Hewlett-Packard from 1998 to 2010 and was associated with the products that it sold. The name of the division was changed to HP Networking in September 2010 after HP bought 3Com Corporation.
History
The HP division that became the HP ProCurve division began in Roseville, California, in 1979. Originally it was part of HP's Data Systems Division (DSD) and known as DSD-Roseville. Later, it was called the Roseville Networks Division (RND), then the Workgroup Networks Division (WND), before becoming the ProCurve Networking Business (PNB). The trademark filing date for the ProCurve name was February 25, 1998. On August 11, 2008, HP announced the acquisition of Colubris Networks, a maker of wireless networking products. This was completed on October 1, 2008. In November 2008, HP ProCurve was moved into HP's largest business division, the Technology Services Group organization, with HP Enterprise Account Managers being compensated for sales.
In November 2009, HP announced its intent to acquire 3Com for $2.7 billion. In April 2010, HP completed its acquisition.
At Interop Las Vegas in April 2010, HP began publicly using HP Networking as the name for its networking division. Following HP's 2015 acquisition of Aruba Networks and the company's subsequent split later that year, HP Networking was combined with Aruba to form HPE's "Intelligent Edge" business unit under the Aruba Networks brand.
Products
A variety of different networking products have been made by HP. The first products were named EtherTwist while printer connectivity products carried the JetDirect name. As the EtherTwist name faded, most of HP's networking products were given AdvanceStack names. Later, the then-ProCurve division began to offer LAN switches, Core, Datacenter, Distribution, Edge, Web managed and Unmanaged Switches. The ProCurve was also used with Network Management, Routing and Security products.
Notable uses
The International Space Station m
|
https://en.wikipedia.org/wiki/Valentino%27s%20syndrome
|
Valentino's syndrome is pain presenting in the right lower quadrant of the abdomen caused by a duodenal ulcer with perforation through the retroperitoneum.
It is named after Rudolph Valentino, an Italian actor, who presented with right lower quadrant pain in New York, which turned out to be a perforated peptic ulcer. He subsequently died from infection and organ dysfunction in spite of surgery to repair the perforation. Due to his popularity, his case received much attention at the time, and the presentation is still considered rare.
However, the degree of peritoneal findings is strongly influenced by a number of factors, including the size of perforation, amount of bacterial and gastric contents contaminating the abdominal cavity, time between perforation and presentation, and spontaneous sealing of perforation.
Signs and symptoms
Patients with Valentino's syndrome usually present with a sudden onset of severe, sharp abdominal pain in the right lower quadrant (RLQ), similar to that of acute appendicitis. Most patients describe generalized pain; a few present with severe epigastric pain, located in the upper abdomen. Because even slight movement can dramatically worsen the pain, these patients often assume a fetal position. They may also demonstrate signs and symptoms of septic shock, such as tachycardia (increased heart rate), hypotension (low blood pressure), and anuria (absence of urine production). Notably, these indicators of shock may be absent in elderly or immunocompromised patients, or in those with diabetes. Patients also experience nausea, vomiting, decreased appetite, and sweating.
Cause
Valentino's syndrome is caused by a perforated ulcer located in the duodenum. This occurs when an ulcer that has gone untreated for a long period of time burns through the duodenal wall. Risk factors for a perforated ulcer include bacterial infection, such as H. pylori, and routine use of nonsteroidal anti-inflammatory drugs (NSAIDs).
|
https://en.wikipedia.org/wiki/Carbaminohemoglobin
|
Carbaminohemoglobin (carbaminohaemoglobin BrE) (CO2Hb, also known as carbhemoglobin and carbohemoglobin) is a compound of hemoglobin and carbon dioxide, and is one of the forms in which carbon dioxide exists in the blood. Twenty-three percent of carbon dioxide is carried in blood this way (70% is converted into bicarbonate by carbonic anhydrase and then carried in plasma, 7% carried as free CO2, dissolved in plasma).
Synthesis
When the tissues release carbon dioxide into the bloodstream, around 10% is dissolved into the plasma. The rest of the carbon dioxide is carried either directly or indirectly by hemoglobin. Approximately 10% of the carbon dioxide carried by hemoglobin is in the form of carbaminohemoglobin. This carbaminohemoglobin is formed by the reaction between carbon dioxide and an amino (-NH2) residue from the globin molecule, resulting in the formation of a carbamino residue (-NH.COO−). The rest of the carbon dioxide is transported in the plasma as bicarbonate anions.
Mechanism
When carbon dioxide binds to hemoglobin, carbaminohemoglobin is formed, lowering hemoglobin's affinity for oxygen via the Bohr effect. The reaction occurs between a carbon dioxide molecule and an amino residue. In the absence of oxygen, unbound hemoglobin molecules have a greater chance of becoming carbaminohemoglobin. The Haldane effect refers to the increased affinity of deoxygenated hemoglobin for CO2: offloading of oxygen to the tissues thus results in increased affinity of the hemoglobin for carbon dioxide and for H+, which the body needs to get rid of; the bound CO2 can then be transported to the lungs for removal. Because the formation of this compound generates hydrogen ions, haemoglobin is needed to buffer them.
Hemoglobin can bind to four molecules of carbon dioxide. The carbon dioxide molecules form a carbamate with the four terminal amine groups of the four protein chains in the deoxy form of the molecule. Thus, one hemoglobin molecule can transport four carbon dioxide molecules.
|
https://en.wikipedia.org/wiki/Samsung%20Q1
|
The Samsung Q1 (known as Samsung SENS Q1 in South Korea) was a family of ultra-mobile PCs produced by Samsung Electronics starting in 2007. They had a 7" (18 cm) LCD and were made in several different versions with either Windows XP Tablet PC Edition or Windows Vista Home Premium.
Variations
Q1 series
Samsung Q1
Intel Celeron M ULV (Ultra Low Voltage) 353 running at 900 MHz
40 GB 1.8" Hard Drive (ZIF interface)
512 MB DDR2-533
Maximum memory 2 GB DDR2-533
Mobile Intel 915GMS Express Chipset
7-inch WVGA (800×480) resistive (single-touch) touch screen (usable with finger or stylus); the included "Easy Display Manager" software allows the user to switch the display to 1024×600 or 1024×768 with a few button presses.
VGA port
Weighs 0.78 kg
3-cell battery (up to 3 hours) or 6-cell battery (up to 6 hours)
WLAN 802.11b/g
LAN port, 100 Mbit/s
CompactFlash port Type II
Stereo speakers
Array mics
AVS mode using Windows XP embedded
Bluetooth enabled
Digital Multimedia Broadcasting
2 USB ports
The Q1 is one of the first ultra-mobile PCs (UMPCs) produced under Microsoft's "Origami" project. The Q1 can boot into two different modes: typical Windows XP (the OS can be replaced), and AVS mode running Windows XP Embedded. AVS mode runs in a separate partition and boots directly to a music, photo, and video player with no Windows Explorer interface. The AVS feature is unique to the Q1.
Samsung Q1 SSD
The SSD version is identical to the Q1 except that the 40 GB hard disk drive has been replaced by Samsung's 32 GB solid-state drive. At release, the SSD version was about twice as expensive as the normal Q1.
Samsung Q1b
The Q1b was Samsung's second UMPC device, with a much improved battery life and 30% brighter screen compared to the Q1. The CF card slot and the Ethernet port were removed on this version. It also had a mono speaker and a single microphone.
VIA C7-M ULV @ 1 GHz
5 Hour Battery Life (using standard 3-cell battery)
30% Brighter Screen (LED backlight)
Wi-Fi (802.11 b/g support)
B
|
https://en.wikipedia.org/wiki/Holland%27s%20schema%20theorem
|
Holland's schema theorem, also called the fundamental theorem of genetic algorithms, is an inequality that results from coarse-graining an equation for evolutionary dynamics. The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms. However, this interpretation of its implications has been criticized in several publications, in which the Schema Theorem is shown to be a special case of the Price equation with the schema indicator function as the macroscopic measurement.
A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, and hence form a topological space.
Description
Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length is the distance between the first and last specific positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms, the schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation:
\[
\operatorname{E}[m(H,t+1)] \geq \frac{m(H,t)\,f(H)}{a_t}\,[1-p],
\]
where \(m(H,t)\) is the number of strings belonging to schema \(H\) at generation \(t\), \(f(H)\) is the observed average fitness of schema \(H\), \(a_t\) is the observed average fitness at generation \(t\), and \(p\) is the probability that crossover or mutation disrupts the schema.
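The schema bookkeeping used above (order, defining length, and membership) is simple to compute. The following Python sketch is illustrative only; the helper names are hypothetical and not part of the article.

```python
def schema_order(schema):
    """Number of fixed (non-wildcard) positions in the schema."""
    return sum(1 for s in schema if s != '*')

def defining_length(schema):
    """Distance between the first and last fixed positions."""
    fixed = [i for i, s in enumerate(schema) if s != '*']
    return fixed[-1] - fixed[0] if fixed else 0

def matches(schema, string):
    """True if the binary string is an instance of the schema."""
    return all(s == '*' or s == c for s, c in zip(schema, string))

# The example from the text: 1*10*1 has order 4 and defining length 5.
assert schema_order("1*10*1") == 4
assert defining_length("1*10*1") == 5
assert matches("1*10*1", "101001")
```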
|
https://en.wikipedia.org/wiki/Generalized%20minimal%20residual%20method
|
In mathematics, the generalized minimal residual method (GMRES) is an iterative method for the numerical solution of an indefinite nonsymmetric system of linear equations. The method approximates the solution by the vector in a Krylov subspace with minimal residual. The Arnoldi iteration is used to find this vector.
The GMRES method was developed by Yousef Saad and Martin H. Schultz in 1986. It is a generalization and improvement of the MINRES method due to Paige and Saunders in 1975. The MINRES method requires that the matrix is symmetric, but has the advantage that it only requires handling of three vectors. GMRES is a special case of the DIIS method developed by Peter Pulay in 1980. DIIS is applicable to non-linear systems.
The method
Denote the Euclidean norm of any vector v by \(\|v\|\). Denote the (square) system of linear equations to be solved by
\[
Ax = b.
\]
The matrix A is assumed to be invertible of size m-by-m. Furthermore, it is assumed that b is normalized, i.e., that \(\|b\| = 1\).
The n-th Krylov subspace for this problem is
\[
K_n = K_n(A, r_0) = \operatorname{span}\{\, r_0,\ Ar_0,\ A^2 r_0,\ \ldots,\ A^{n-1} r_0 \,\},
\]
where \(r_0 = b - Ax_0\) is the initial residual given an initial guess \(x_0 \neq 0\). Clearly \(r_0 = b\) if \(x_0 = 0\).
GMRES approximates the exact solution of \(Ax = b\) by the vector \(x_n \in x_0 + K_n\) that minimizes the Euclidean norm of the residual \(r_n = b - Ax_n\).
The vectors \(r_0, Ar_0, \ldots, A^{n-1}r_0\) might be close to linearly dependent, so instead of this basis, the Arnoldi iteration is used to find orthonormal vectors \(q_1, q_2, \ldots, q_n\) which form a basis for \(K_n\). In particular, \(q_1 = \|r_0\|^{-1} r_0\).
Therefore, the vector \(x_n\) can be written as \(x_n = x_0 + Q_n y_n\) with \(y_n \in \mathbb{R}^n\), where \(Q_n\) is the m-by-n matrix formed by \(q_1, \ldots, q_n\). In other words, finding the n-th approximation of the solution (i.e., \(x_n\)) is reduced to finding the vector \(y_n\), which is determined via minimizing the residual as described below.
The Arnoldi process also constructs \(\tilde{H}_n\), an (\(n+1\))-by-\(n\) upper Hessenberg matrix which satisfies
\[
AQ_n = Q_{n+1}\tilde{H}_n,
\]
an equality which is used to simplify the calculation of \(y_n\) (see below). Note that, for symmetric matrices, a symmetric tri-diagonal matrix is actually achieved, resulting in the MINRES method.
Because the columns of \(Q_n\) are orthonormal, we have
\[
\|r_n\| = \|b - Ax_n\| = \|\beta e_1 - \tilde{H}_n y_n\|,
\]
where
\[
e_1 = (1, 0, 0, \ldots, 0)^{\mathsf T}
\]
is the first vector in the standard basis of \(\mathbb{R}^{n+1}\), and \(\beta = \|r_0\|\).
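The steps above translate almost directly into code. The following Python/NumPy sketch is illustrative only (no restarts, no Givens rotations, no preconditioning); the function name and tolerances are choices made here rather than part of the article, and in practice one would typically rely on a library routine such as scipy.sparse.linalg.gmres.

```python
import numpy as np

def gmres_sketch(A, b, x0=None, n_iters=20, tol=1e-10):
    """Minimal GMRES sketch: Arnoldi iteration plus a small least-squares solve.

    Minimizes ||b - A x|| over x in x0 + K_n(A, r0).
    """
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    m = b.shape[0]
    x0 = np.zeros(m) if x0 is None else np.asarray(x0, dtype=float)
    r0 = b - A @ x0
    beta = np.linalg.norm(r0)
    if beta < tol:
        return x0

    Q = np.zeros((m, n_iters + 1))        # orthonormal Krylov basis q_1, ..., q_{n+1}
    H = np.zeros((n_iters + 1, n_iters))  # upper Hessenberg matrix H~_n
    Q[:, 0] = r0 / beta

    for n in range(n_iters):
        # Arnoldi step: expand the Krylov subspace and orthonormalize.
        v = A @ Q[:, n]
        for j in range(n + 1):
            H[j, n] = Q[:, j] @ v
            v -= H[j, n] * Q[:, j]
        H[n + 1, n] = np.linalg.norm(v)
        breakdown = H[n + 1, n] <= 1e-14  # exact solution lies in current subspace
        if not breakdown:
            Q[:, n + 1] = v / H[n + 1, n]

        # Solve the small least-squares problem min_y ||beta*e1 - H~ y||.
        e1 = np.zeros(n + 2)
        e1[0] = beta
        y, *_ = np.linalg.lstsq(H[:n + 2, :n + 1], e1, rcond=None)
        residual = np.linalg.norm(H[:n + 2, :n + 1] @ y - e1)
        if residual < tol or breakdown:
            return x0 + Q[:, :n + 1] @ y

    return x0 + Q[:, :n_iters] @ y
```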
|
https://en.wikipedia.org/wiki/Ecological%20facilitation
|
Ecological facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Facilitations can be categorized as mutualisms, in which both species benefit, or commensalisms, in which one species benefits and the other is unaffected. This article addresses both the mechanisms of facilitation and the increasing information available concerning the impacts of facilitation on community ecology.
Categories
There are two basic categories of facilitative interactions:
Mutualism is an interaction between species that is beneficial to both. A familiar example of a mutualism is the relationship between flowering plants and their pollinators. The plant benefits from the spread of pollen between flowers, while the pollinator receives some form of nourishment, either from nectar or the pollen itself.
Commensalism is an interaction in which one species benefits and the other species is unaffected. Epiphytes (plants growing on other plants, usually trees) have a commensal relationship with their host plant because the epiphyte benefits in some way (e.g., by escaping competition with terrestrial plants or by gaining greater access to sunlight) while the host plant is apparently unaffected.
Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments is often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources.
Mechanisms
The beneficial effects of species on one another are realized in various ways, including refuge from physical stress, predation, and competi
|
https://en.wikipedia.org/wiki/Exertion
|
Exertion is the physical or perceived use of energy. Exertion traditionally connotes a strenuous or costly effort, resulting in generation of force, initiation of motion, or in the performance of work. It often relates to muscular activity and can be quantified, empirically and by measurable metabolic response.
Physical
In physics, exertion is the expenditure of energy against inertia or other resisting influences, as described by Isaac Newton's laws of motion. The work done by an exerted force is the product of that force and the displacement along its direction. Work can be positive or negative depending on the direction of exertion relative to the motion: for example, the upward force used to lift an object does positive work on that object, while gravity simultaneously does negative work on it.
Exertion often results in the generation of force, a contributing dynamic of general motion. In mechanics it describes the application of force to a body along the direction of its motion (see vector).
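As a worked illustration of the sign of work (not from the source text; the function name and the numbers are hypothetical), consider lifting a 2 kg object by 1.5 m:

```python
import math

def work_done(force_newtons, displacement_metres, angle_degrees=0.0):
    """Work W = F * d * cos(theta); positive when force and motion align."""
    return force_newtons * displacement_metres * math.cos(math.radians(angle_degrees))

# Lifting a 2 kg object 1.5 m at constant speed (g ~ 9.81 m/s^2):
lift = work_done(2 * 9.81, 1.5)          # lifting force: about +29.4 J
gravity = work_done(2 * 9.81, 1.5, 180)  # gravity opposes the motion: about -29.4 J
```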
Physiological
Physiologically, exertion can be described as the initiation of exercise, or as intensive and exhaustive physical activity that causes cardiovascular stress or a sympathetic nervous response. It can be continuous or intermittent.
Exertion requires the body to modify its oxygen uptake, increase its heart rate, and autonomically monitor blood lactate concentrations. Mediators of physical exertion include cardio-respiratory and musculoskeletal strength, as well as metabolic capability. It often corresponds to an output of force followed by a refractory period of recovery. Exertion is limited by cumulative load and by repetitive motions.
Muscular energy reserves, the stores for biomechanical exertion, stem from the immediate metabolic production of ATP and from increased oxygen consumption. The muscular exertion generated depends on the muscle's length and the velocity at which it is able to shorten, or contract.
Perceived exertion can be described as a subjective experience that mediates the response to somatic sensations and mechanisms. A rating of pe