source | text
---|---
https://en.wikipedia.org/wiki/Geiriadur%20Prifysgol%20Cymru
|
Geiriadur Prifysgol Cymru (GPC) (The University of Wales Dictionary) is the only standard historical dictionary of the Welsh language, aspiring to be "comparable in method and scope to the Oxford English Dictionary". Vocabulary is defined in Welsh, and English equivalents are given. Detailed attention is given to variant forms, collocations, and etymology.
The first edition was published in four volumes between 1967 and 2002, containing 7.3 million words of text in 3,949 pages and documenting 106,000 headwords. There are almost 350,000 dated citations, ranging from the year 631 up to 2000, with 323,000 Welsh definitions and 290,000 English equivalents; 85,000 entries include etymologies.
History
In 1921, a small team at the National Library of Wales, Aberystwyth, organised by the Rev. J. Bodvan Anwyl, arranged for volunteer readers to record words. The task of editing the dictionary was undertaken by R. J. Thomas in the 1948/49 academic year.
The first edition of Volume I appeared in 1967, followed by Volume II in 1987, Volume III in 1998, and Volume IV, edited by Gareth A. Bevan and P. J. Donovan, in December 2002. Work then immediately began on a second edition.
Following the retirement of the previous editors Gareth A. Bevan and Patrick J. Donovan, Andrew Hawke was appointed as managing editor in January 2008.
The second edition is based not only on the Dictionary's own collection of citation slips, but also on a wide range of electronic resources such as the Welsh Prose 1300-1425 website (Cardiff University), JISC Historic Books including EEBO and EECO (The British Library), Welsh Newspapers Online and Welsh Journals Online (National Library of Wales), and the National Terminology Portal (Bangor University).
In 2011, collaborative work began to convert the Dictionary data so that it could be used in the XML-based iLEX dictionary writing system, as well as to produce an online dictionary. After three years of work, on 26 June 2014, GPC Online was launched.
|
https://en.wikipedia.org/wiki/Standard%20components%20%28food%20processing%29
|
Standard component is a food technology term: when manufacturers buy in a standard component, they use a pre-made product in the production of their food.
Standard components help keep products consistent and are quick and easy to use in the batch production of food.
Some examples are pre-made stock cubes, marzipan, icing, and ready-made pastry.
Usage
Manufacturers use standard components because they save time, often cost less, and help keep products consistent.
If a manufacturer uses a standard component from another supplier, it is essential that the manufacturer produce a precise and accurate specification so that the component meets the standards the manufacturer has set.
Advantages
Saves preparation time.
Fewer steps in the production process
Less effort and skill required by staff
Less machinery and equipment needed
Good quality
Can save money overall
Can be bought in bulk
High-quality consistency
Food preparation is hygienic
Disadvantages
Have to rely on other manufacturers to supply products
Fresh ingredients may taste better
May require special storage conditions
May be less reliable than making the component yourself
May cost more than making it yourself
Can't control the nutritional value of the product
There is a larger risk of cross contamination.
GCSE food technology
References
Food Technology, Nelson Thornes, 2001, p. 144.
Components
Food industry
Food ingredients
|
https://en.wikipedia.org/wiki/Semiconductor%20fabrication%20plant
|
In the microelectronics industry, a semiconductor fabrication plant (commonly called a fab; sometimes foundry) is a factory for semiconductor device fabrication.
Fabs require many expensive devices to function. Estimates put the cost of building a new fab at over one billion U.S. dollars, with values as high as $3–4 billion not uncommon. TSMC invested $9.3 billion in its Fab15 300 mm wafer manufacturing facility in Taiwan. The same company estimates that a future fab might cost $20 billion. A foundry model emerged in the 1990s: companies that manufactured their own designs were known as integrated device manufacturers (IDMs). Companies that farmed out manufacturing of their designs to foundries were termed fabless semiconductor companies. Those foundries, which did not create their own designs, were called pure-play semiconductor foundries.
The central part of a fab is the clean room, an area where the environment is controlled to eliminate all dust, since even a single speck can ruin a microcircuit, which has nanoscale features much smaller than dust particles. The clean room must also be damped against vibration to enable nanometer-scale alignment of machines and must be kept within narrow bands of temperature and humidity. Vibration control may be achieved by using deep piles in the cleanroom's foundation that anchor the cleanroom to the bedrock, careful selection of the construction site, and/or using vibration dampers. Controlling temperature and humidity is critical for minimizing static electricity. Corona discharge sources can also be used to reduce static electricity. Often, a fab will be constructed in the following manner (from top to bottom): the roof, which may contain air handling equipment that draws, purifies and cools outside air; an air plenum for distributing the air to several floor-mounted fan filter units, which are also part of the cleanroom's ceiling; the cleanroom itself, which may or may not have more than one story; a re
|
https://en.wikipedia.org/wiki/Cell%20signaling
|
In biology, cell signaling (cell signalling in British English) or cell communication is the ability of a cell to receive, process, and transmit signals with its environment and with itself. Cell signaling is a fundamental property of all cellular life in prokaryotes and eukaryotes. Signals that originate from outside a cell (or extracellular signals) can be physical agents like mechanical pressure, voltage, temperature, light, or chemical signals (e.g., small molecules, peptides, or gas). Cell signaling can occur over short or long distances, and as a result can be classified as autocrine, juxtacrine, intracrine, paracrine, or endocrine. Signaling molecules can be synthesized from various biosynthetic pathways and released through passive or active transports, or even from cell damage.
Receptors play a key role in cell signaling as they are able to detect chemical signals or physical stimuli. Receptors are generally proteins located on the cell surface or within the interior of the cell such as the cytoplasm, organelles, and nucleus. Cell surface receptors usually bind with extracellular signals (or ligands), which causes a conformational change in the receptor that leads it to initiate enzymic activity, or to open or close ion channel activity. Some receptors do not contain enzymatic or channel-like domains but are instead linked to enzymes or transporters. Other intracellular receptors like nuclear receptors have a different mechanism such as changing their DNA binding properties and cellular localization to the nucleus.
Signal transduction begins with the transformation (or transduction) of a signal into a chemical one, which can directly activate an ion channel (ligand-gated ion channel) or initiate a second messenger system cascade that propagates the signal through the cell. Second messenger systems can amplify a signal, in which activation of a few receptors results in multiple secondary messengers being activated, thereby amplifying the initial sig
|
https://en.wikipedia.org/wiki/Logical%20matrix
|
A logical matrix, binary matrix, relation matrix, Boolean matrix, or (0, 1)-matrix is a matrix with entries from the Boolean domain $B = \{0, 1\}$. Such a matrix can be used to represent a binary relation between a pair of finite sets. It is an important tool in combinatorial mathematics and theoretical computer science.
Matrix representation of a relation
If R is a binary relation between the finite indexed sets X and Y (so $R \subseteq X \times Y$), then R can be represented by the logical matrix M whose row and column indices index the elements of X and Y, respectively, such that the entries of M are defined by
$$m_{i,j} = \begin{cases} 1 & (x_i, y_j) \in R, \\ 0 & (x_i, y_j) \notin R. \end{cases}$$
In order to designate the row and column numbers of the matrix, the sets X and Y are indexed with positive integers: i ranges from 1 to the cardinality (size) of X, and j ranges from 1 to the cardinality of Y. See the article on indexed sets for more detail.
Example
The binary relation R on the set $\{1, 2, 3, 4\}$ is defined so that aRb holds if and only if a divides b evenly, with no remainder. For example, 2R4 holds because 2 divides 4 without leaving a remainder, but 3R4 does not hold because when 3 divides 4, there is a remainder of 1. The following set is the set of pairs for which the relation R holds.
{(1, 1), (1, 2), (1, 3), (1, 4), (2, 2), (2, 4), (3, 3), (4, 4)}.
The corresponding representation as a logical matrix is
$$M_R = \begin{pmatrix} 1 & 1 & 1 & 1 \\ 0 & 1 & 0 & 1 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{pmatrix},$$
which includes a diagonal of ones, since each number divides itself.
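The construction of such a matrix from a relation is direct. The following Python sketch (not part of the article; the helper name is illustrative) builds the divisibility matrix above from the relation:

```python
# Sketch: building the logical matrix of the divisibility relation on {1, 2, 3, 4}.
def logical_matrix(X, Y, relation):
    """M[i][j] = 1 if relation(X[i], Y[j]) holds, else 0."""
    return [[1 if relation(x, y) else 0 for y in Y] for x in X]

elements = [1, 2, 3, 4]
M = logical_matrix(elements, elements, lambda a, b: b % a == 0)
for row in M:
    print(row)   # [1, 1, 1, 1], [0, 1, 0, 1], [0, 0, 1, 0], [0, 0, 0, 1]
```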
Other examples
A permutation matrix is a (0, 1)-matrix, all of whose columns and rows each have exactly one nonzero element.
A Costas array is a special case of a permutation matrix.
An incidence matrix in combinatorics and finite geometry has ones to indicate incidence between points (or vertices) and lines of a geometry, blocks of a block design, or edges of a graph.
A design matrix in analysis of variance is a (0, 1)-matrix with constant row sums.
A logical matrix may represent an adjacency matrix in graph theory: non-symmetric matrices correspond to directed graphs, symmetric matrices to ordinary grap
|
https://en.wikipedia.org/wiki/Nacrite
|
Nacrite Al2Si2O5(OH)4 is a clay mineral that is polymorphous (or polytypic) with kaolinite. It crystallizes in the monoclinic system. X-ray diffraction analysis is required for positive identification.
Nacrite was first described in 1807 for an occurrence in Saxony, Germany. The name is from nacre in reference to the dull luster of the surface of nacrite masses scattering light with slight iridescences resembling those of the mother of pearls secreted by oysters.
References
Clay minerals group
Polymorphism (materials science)
Monoclinic minerals
Minerals in space group 9
|
https://en.wikipedia.org/wiki/Cartan%20model
|
In mathematics, the Cartan model is a differential graded algebra that computes the equivariant cohomology of a space.
References
Stefan Cordes, Gregory Moore, Sanjaye Ramgoolam, Lectures on 2D Yang-Mills Theory, Equivariant Cohomology and Topological Field Theories, 1994.
Algebraic topology
|
https://en.wikipedia.org/wiki/Fan-in
|
Fan-in is the number of inputs a logic gate can handle. For instance, a three-input AND gate has a fan-in of 3. Physical logic gates with a large fan-in tend to be slower than those with a small fan-in, because the added input circuitry increases the input capacitance of the device. Using logic gates with a higher fan-in can reduce the depth of a logic circuit; how much it helps depends on the target logic family, since a nominally large fan-in gate may be realized internally as several smaller fan-in gates chained together, in which case the effective depth is not reduced.
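As a rough illustration of this depth trade-off, the following Python sketch (hypothetical, not from the article) compares the depth of an N-input AND realized as a single wide gate, as a serial chain of 2-input gates, and as a balanced tree of 2-input gates:

```python
# Sketch: circuit depth of an N-input AND under three realizations.
import math

def depth_single_gate(n_inputs: int) -> int:
    """One gate with fan-in n_inputs: depth 1."""
    return 1

def depth_chain(n_inputs: int) -> int:
    """2-input ANDs chained in series: depth n - 1."""
    return n_inputs - 1

def depth_balanced_tree(n_inputs: int) -> int:
    """2-input ANDs arranged as a balanced tree: depth ceil(log2 n)."""
    return math.ceil(math.log2(n_inputs))

for n in (3, 8, 16):
    print(n, depth_single_gate(n), depth_chain(n), depth_balanced_tree(n))
```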
Fan-in tree of a node refers to a collection of signals that contribute to the input signal of that node.
In quantum logic gates the fan-in always has to be equal to the number of outputs, the Fan-out. Gates for which the numbers of inputs and outputs differ would not be reversible (unitary) and are therefore not allowed.
See also
Fan-out, a related concept, which is the number of inputs that a given logic output drives.
References
Logic gates
|
https://en.wikipedia.org/wiki/Wigner%20distribution%20function
|
The Wigner distribution function (WDF) is used in signal processing as a transform in time-frequency analysis.
The WDF was first proposed in physics to account for quantum corrections to classical statistical mechanics in 1932 by Eugene Wigner, and it is of importance in quantum mechanics in phase space (see, by way of comparison: Wigner quasi-probability distribution, also called the Wigner function or the Wigner–Ville distribution).
Given the shared algebraic structure between position-momentum and time-frequency conjugate pairs, it also usefully serves in signal processing, as a transform in time-frequency analysis, the subject of this article. Compared to a short-time Fourier transform, such as the Gabor transform, the Wigner distribution function provides the highest possible temporal vs frequency resolution which is mathematically possible within the limitations of the uncertainty principle. The downside is the introduction of large cross terms between every pair of signal components and between positive and negative frequencies, which makes the original formulation of the function a poor fit for most analysis applications. Subsequent modifications have been proposed which preserve the sharpness of the Wigner distribution function but largely suppress cross terms.
Mathematical definition
There are several different definitions for the Wigner distribution function. The definition given here is specific to time-frequency analysis. Given the time series $x[t]$, its non-stationary auto-covariance function is given by
$$C_x(t_1, t_2) = \left\langle \left(x[t_1] - \mu[t_1]\right)\left(x[t_2] - \mu[t_2]\right)^{*} \right\rangle,$$
where $\langle \cdots \rangle$ denotes the average over all possible realizations of the process and $\mu(t)$ is the mean, which may or may not be a function of time. The Wigner function $W_x(t, f)$ is then given by first expressing the autocorrelation function in terms of the average time $t = (t_1 + t_2)/2$ and the time lag $\tau = t_1 - t_2$, and then Fourier transforming the lag:
$$W_x(t, f) = \int_{-\infty}^{\infty} C_x\!\left(t + \frac{\tau}{2},\, t - \frac{\tau}{2}\right) e^{-2\pi i \tau f}\, d\tau.$$
So for a single (mean-zero) time series, the Wigner function is simply given by
$$W_x(t, f) = \int_{-\infty}^{\infty} x\!\left(t + \frac{\tau}{2}\right) x^{*}\!\left(t - \frac{\tau}{2}\right) e^{-2\pi i \tau f}\, d\tau.$$
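For sampled data the integral becomes a sum over discrete lags. The following Python sketch is only an illustration of that discretization, assuming NumPy and SciPy are available; the function name, the use of the analytic signal, and the chirp test signal are choices of this sketch, not part of the article:

```python
# Sketch: a discrete Wigner-Ville distribution following the mean-zero formula above.
import numpy as np
from scipy.signal import hilbert

def wigner_ville(x, fs=1.0):
    """Return (W, freqs) with W[t, k] evaluated on a (len(x), len(x)) grid."""
    z = hilbert(np.asarray(x, dtype=float))   # analytic signal reduces cross terms with negative frequencies
    n = len(z)
    W = np.zeros((n, n))
    for t in range(n):
        tau_max = min(t, n - 1 - t)           # largest half-lag keeping t +/- tau in range
        taus = np.arange(-tau_max, tau_max + 1)
        kernel = np.zeros(n, dtype=complex)
        # instantaneous autocorrelation z(t + tau) z*(t - tau), stored FFT-style
        kernel[taus % n] = z[t + taus] * np.conj(z[t - taus])
        W[t] = np.fft.fft(kernel).real
    freqs = np.arange(n) * fs / (2 * n)       # half-lag steps double the frequency axis
    return W, freqs

if __name__ == "__main__":
    fs = 256.0
    t = np.arange(0, 1, 1 / fs)
    x = np.cos(2 * np.pi * (20 * t + 30 * t ** 2))   # linear chirp test signal
    W, freqs = wigner_ville(x, fs)
    print(W.shape)   # energy concentrates along the chirp's instantaneous frequency
```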
The motivation for the Wigner function is that it reduces to
|
https://en.wikipedia.org/wiki/Integrated%20injection%20logic
|
Integrated injection logic (IIL, I2L, or I²L) is a class of digital circuits built with multiple-collector bipolar junction transistors (BJTs). When introduced it had speed comparable to TTL yet was almost as low power as CMOS, making it ideal for use in VLSI (and larger) integrated circuits. The gates can be made smaller with this logic family than with CMOS because complementary transistors are not needed. Although the logic voltage levels are very close (high: 0.7 V, low: 0.2 V), I2L has high noise immunity because it operates by current instead of voltage. I2L was developed in 1971 by Siegfried K. Wiedmann and Horst H. Berger, who originally called it merged-transistor logic (MTL).
A disadvantage of this logic family is that the gates draw power when not switching unlike with CMOS.
Construction
The I2L inverter gate is constructed with a PNP common-base current source transistor and an NPN common-emitter open-collector inverter transistor (i.e., its emitter is connected to ground). On a wafer, these two transistors are merged. A small voltage (around 1 volt) is supplied to the emitter of the current source transistor to control the current supplied to the inverter transistor. Transistors are used for current sources on integrated circuits because they are much smaller than resistors.
Because the inverter is open collector, a wired AND operation may be performed by connecting an output from each of two or more gates together. Thus the fan-out of an output used in such a way is one. However, additional outputs may be produced by adding more collectors to the inverter transistor. The gates can be constructed very simply with just a single layer of interconnect metal.
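A toy illustration (not from the article) of this wired-AND behaviour, treating each open-collector output as either pulling the shared node low or letting the injector pull it high:

```python
# Sketch: the node shared by several open-collector outputs is high only when
# every output floats, so it carries the AND of the connected outputs.
def wired_and(outputs):
    """outputs: iterable of logic levels (0 pulls the node low, 1 floats)."""
    return int(all(outputs))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, wired_and([a, b]))
```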
In a discrete implementation of an I2L circuit, bipolar NPN transistors with multiple collectors can be replaced with multiple discrete 3-terminal NPN transistors connected in parallel having their bases connected together and their emitters connected likewise. The current source transistor may be replaced
|
https://en.wikipedia.org/wiki/Mining%20simulator
|
A mining simulator is a type of simulation used for entertainment as well as for training purposes by mining companies. These simulators replicate elements of real-world mining operations using surrounding screens displaying three-dimensional imagery, motion platforms, and scale models of typical and atypical mining environments and machinery. The simulations can build greater competence in on-site safety, which can lead to greater efficiency and a decreased risk of accidents.
Training
Mining simulators are used to replicate real-world conditions of mining, assessing real-time responses from the trainee operator to react to what tasks or obstacles appear around them. This is often achieved through the use of surrounding three-dimensional imagery, motion platforms, and realistic replicas of actual mining equipment. Trainee operator employees are often taught in a program where they are scored against both their peers and an expert benchmark to produce a final evaluation of competence with the tasks they may need to complete in real-life.
Criticism
Mining companies that have implemented mining simulators into their training have shown greater employee competence in on-site safety, leading to an overall more productive working environment and a higher chance of long-run profitability for the company by decreasing the risk of accidents, injuries, or deaths on the site through prior education. Being able to simulate real-world mining hazards in a safe and controlled environment has also been shown to help prepare employees on proper procedure and protocol in the event of an on-site accident without the need to physically experience one, which often cannot be safely taught in the real world. Simulating mining environments further helps to familiarize employees with mining equipment and vehicles before entering a real job site, leading to increased productivity, and a chance to correct inefficiencies while still in traini
|
https://en.wikipedia.org/wiki/NProtect%20GameGuard
|
nProtect GameGuard (sometimes called GG) is an anti-cheating rootkit developed by INCA Internet. It is widely installed in many online games to block possibly malicious applications and prevent common methods of cheating. nProtect GameGuard provides B2B2C (Business to Business to Consumer) security services for online game companies and portal sites. The software is considered to be one of three software programs which "dominate the online game security market".
GameGuard uses rootkits to proactively prevent cheat software from running. GameGuard hides the game application process, monitors the entire memory range, terminates applications defined by the game vendor and INCA Internet to be cheats (QIP for example), blocks certain calls to DirectX functions and Windows APIs, keylogs keyboard input, and auto-updates itself to change as new possible threats surface.
Since GameGuard essentially works like a rootkit, players may experience unintended and potentially unwanted side effects. If set, GameGuard blocks any installation or activation of hardware and peripherals (e.g., a mouse) while the program is running. Since GameGuard monitors any changes in the computer's memory, it will cause performance issues when the protected game loads multiple or large resources all at once.
Additionally, some versions of GameGuard had an unpatched privilege escalation bug, allowing any program to issue commands as if they were running under an Administrator account.
GameGuard possesses a database on game hacks based on security references from more than 260 game clients. Some editions of GameGuard are now bundled with INCA Internet's Tachyon anti-virus/anti-spyware library, and others with nProtect Key Crypt, an anti-key-logger software that protects the keyboard input information.
List of online games using GameGuard
GameGuard is used in many online games.
9Dragons
Atlantica Online
Blackshot
Blade & Soul
Cabal Online
City Racer
Combat Arms: Reloaded
Combat Arms: Th
|
https://en.wikipedia.org/wiki/Digital%20Compression%20System
|
Digital Compression System, or DCS, is a sound system developed by Williams Electronics. This advanced sound board was used in Williams and Bally pinball games, coin-op arcade video games by Midway Manufacturing, and mechanical and video slot machines by Williams Gaming. This sound system became the standard for these game platforms.
The DCS Sound system was created by Williams sound engineers Matt Booty and Ed Keenan, and further developed by Andrew Eloff.
Versions of DCS
DCS ROM-based mono: The first version of DCS used an Analog Devices ADSP2105 DSP (clocked at 10 MHz) and a DMA-driven DAC, outputting in mono. This was used for the majority of Williams and Midway's pinball games (starting with 1993's Indiana Jones: The Pinball Adventure), as well as Midway's video games, up until the late 1990s. The pinball game, The Twilight Zone, was originally supposed to use the DCS System, but because the DCS board was still in development at the time, all of the music and sounds for this game were reprogrammed for the Yamaha YM2151 / Harris CVSD sound board.
DCS-95: This was a revised version of the original DCS System (allowing for 16MB of data instead of 8MB to be addressed), used for Williams and Midway's WPC-95 pinball system.
DCS2 ROM-based stereo: This version used the ADSP2104 DSP and two DMA-driven DACs, outputting in stereo. This was used in Midway's Zeus-based hardware, and in the short-lived Pinball 2000 platform.
DCS2 RAM-based stereo: This version used the ADSP2115 DSP and two DMA-driven DACs, outputting in stereo. This was used in Midway's 3DFX-based hardware (NFL Blitz, etc.). This system would be adopted by Atari Games, following their acquisition by WMS Industries.
DCS2 RAM-based multi-channel: This version used the ADSP2181 DSP and up to six DMA-driven DACs, outputting in multichannel sound.
Pinball games using DCS
Attack From Mars (1995) (DCS95)
Cactus Canyon (1998) (DCS95)
The Champion Pub (1998) (DCS95)
Cirqus Voltaire (1997) (DCS95)
Congo (1995) (DC
|
https://en.wikipedia.org/wiki/X.75
|
X.75 is an International Telecommunication Union (ITU) (formerly CCITT) standard specifying the interface for interconnecting two X.25 networks. X.75 is almost identical to X.25. The significant difference is that while X.25 specifies the interface between a subscriber (Data Terminal Equipment (DTE)) and the network (Data Circuit-terminating Equipment (DCE)), X.75 specifies the interface between two networks (Signalling Terminal Equipment (STE)), and refers to these two STE as STE-X and STE-Y. This gives rise to some subtle differences in the protocol compared with X.25. For example, X.25 only allows network-generated reset and clearing causes to be passed from the network (DCE) to the subscriber (DTE), and not the other way around, since the subscriber is not a network. However, at the interconnection of two X.25 networks, either network might reset or clear an X.25 call, so X.75 allows network-generated reset and clearing causes to be passed in either direction.
Although outside the scope of both X.25 and X.75, which define external interfaces to an X.25 network, X.75 can also be found as the protocol operating between switching nodes inside some X.25 networks.
Further reading
External links
ITU-T Recommendation X.75
Network layer protocols
Wide area networks
ITU-T recommendations
ITU-T X Series Recommendations
X.25
|
https://en.wikipedia.org/wiki/Ohio%20Scientific
|
Ohio Scientific, Inc. (OSI, originally Ohio Scientific Instruments, Inc.), was a privately owned American computer company based in Ohio that built and marketed computer systems, expansions, and software from 1975 to 1986. Their best-known products were the Challenger series of microcomputers and Superboard single-board computers. The company was the first to market microcomputers with hard disk drives in 1977.
The company was incorporated as Ohio Scientific Instruments in Hiram, Ohio, by husband and wife Mike and Charity Cheiky and business associate Dale A. Dreisbach in 1975. Originally a maker of electronic teaching aids, the company leaned quickly into microcomputer production, after their original educational products failed in the marketplace while their computer-oriented products sparked high interest in the hobbyist community. The company moved to Aurora, Ohio, occupying a 72,000-square-foot factory. The company reached the $1 million revenue mark in 1976; by the end of 1980, the company generated $18 million in revenue. Ohio Scientific's manufacturing presence likewise expanded into greater Ohio as well as California and Puerto Rico.
In 1980, the company was acquired by telecommunications conglomerate M/A-COM of Burlington, Massachusetts, for $5 million. M/A-COM soon consolidated the company's product lines in order to focus its new subsidiary on manufacturing business systems. During its tenure under M/A-COM, Ohio Scientific was renamed M/A-COM Office Systems. M/A-COM struggled financially itself and sold the division in 1983 to Kendata Inc. of Trumbull, Connecticut, who immediately renamed it back to Ohio Scientific. Kendata, previously only a corporate reseller of computer systems, failed to maintain Ohio Scientific's manufacturing lines and subsequently sold the division to AB Fannyudde of Sweden. The flagship Aurora factory, by then employing only 16 people, was finally shut down in October 1983.
Beginnings (1975–1976)
Ohio Scientific was
|
https://en.wikipedia.org/wiki/Digital%20Control%20Bus
|
DCB (Digital Control Bus; Digital Connection Bus or Digital Communication Bus in some sources) was a proprietary data interchange interface by Roland Corporation, developed in 1981 and introduced in 1982 in their Roland Juno-60 and Roland Jupiter-8 products. DCB's functions were basically the same as MIDI's, but unlike MIDI (which is capable of transmitting a wide array of information), DCB could provide note on/off, program change and VCF/VCA control only. DCB-to-MIDI adapters were produced for a number of early Roland products. The DCB interface was made in two variants: the earlier one used 20-pin sockets and cables; later versions switched to a 14-pin Amphenol DDK connector vaguely resembling a parallel port.
Supporting equipment
DCB was quickly replaced by MIDI in the early 1980s which Roland helped co-develop with Sequential Circuits. The only DCB-equipped instruments produced were the Roland Jupiter-8 and JUNO-60; Roland produced at least two DCB sequencers, the JSQ-60 and the MSQ-700. The latter was capable of saving eight sequences, or a total of 3000 notes, and was capable of transmitting and receiving data via MIDI (though it could not convert signals between DCB and MIDI, nor could it use both protocols simultaneously). Roland later released the MD-8, a rather large black box capable of converting MIDI signals to DCB and vice versa. While this allows note on/off to be sent to a JUNO-60 by MIDI, the solution pales in comparison to the full MIDI implementation on the JUNO-60's successor, the Roland Juno-106. A few other companies offer similar conversion boxes to connect DCB instruments to regular MIDI systems for the support of vintage synthesizers in modern sound production environments; one of the more fully-featured devices being the Kenton PRO-DCB Mk3 which has some bi-directional control limited to a few parameters.
Implementation
The following information comes from the Roland JUNO-60 Service Notes, First Edition, pages 17–19.
Physical connection
DCB
|
https://en.wikipedia.org/wiki/PComb3H
|
pComb3H, a derivative of pComb3 optimized for expression of human fragments, is a phagemid used to express proteins such as zinc finger proteins and antibody fragments on phage pili for the purpose of phage display selection.
For the purpose of phage production, it contains the bacterial ampicillin resistance gene (for β-lactamase), allowing the growth of only transformed bacteria.
References
Molecular biology
Plasmids
|
https://en.wikipedia.org/wiki/Initial%20algebra
|
In mathematics, an initial algebra is an initial object in the category of $F$-algebras for a given endofunctor $F$. This initiality provides a general framework for induction and recursion.
Examples
Functor $1 + (-)$
Consider the endofunctor $F$ sending $X$ to $1 + X$, where $1$ is the one-point (singleton) set, the terminal object in the category. An algebra for this endofunctor is a set $X$ (called the carrier of the algebra) together with a function $f \colon 1 + X \to X$. Defining such a function amounts to defining a point $x \in X$ and a function $X \to X$.
Define
$$\mathrm{zero}\colon 1 \to \mathbf{N}, \qquad \mathrm{zero}(*) = 0,$$
and
$$\mathrm{succ}\colon \mathbf{N} \to \mathbf{N}, \qquad \mathrm{succ}(n) = n + 1.$$
Then the set $\mathbf{N}$ of natural numbers together with the function $[\mathrm{zero}, \mathrm{succ}]\colon 1 + \mathbf{N} \to \mathbf{N}$ is an initial $F$-algebra. The initiality (the universal property for this case) is not hard to establish; the unique homomorphism to an arbitrary $F$-algebra $(A, [e, f])$, for $e$ an element of $A$ and $f$ a function on $A$, is the function sending the natural number $n$ to $f^n(e)$, that is, $f(f(\cdots f(e)\cdots))$, the $n$-fold application of $f$ to $e$.
The set of natural numbers is the carrier of this initial algebra: the point is zero and the function is the successor function.
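As an informal illustration (plain Python standing in for the category of sets; the helper name is this sketch's own), the unique homomorphism out of this initial algebra is the familiar fold over the natural numbers:

```python
# Sketch: the unique F-algebra homomorphism N -> A for F(X) = 1 + X sends n to
# the n-fold application of f to e, for an arbitrary algebra (A, [e, f]).
from typing import Callable, TypeVar

A = TypeVar("A")

def fold_nat(e: A, f: Callable[[A], A], n: int) -> A:
    result = e          # image of zero
    for _ in range(n):  # each successor becomes one more application of f
        result = f(result)
    return result

# Example algebra (int, [1, double]): the natural number n is sent to 2**n.
print(fold_nat(1, lambda a: 2 * a, 10))  # 1024
```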
Functor $1 + \mathbf{N} \times (-)$
For a second example, consider the endofunctor $1 + \mathbf{N} \times (-)$ on the category of sets, where $\mathbf{N}$ is the set of natural numbers. An algebra for this endofunctor is a set $X$ together with a function $1 + \mathbf{N} \times X \to X$. To define such a function, we need a point $x \in X$ and a function $\mathbf{N} \times X \to X$. The set of finite lists of natural numbers, $\mathrm{List}(\mathbf{N})$, is an initial algebra for this functor. The point is the empty list, and the function is cons, taking a number and a finite list, and returning a new finite list with the number at the head.
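Analogously (again an informal Python sketch, not part of the article), the unique homomorphism out of this initial algebra replaces the empty list by a chosen point and cons by a chosen function, which is a right fold:

```python
# Sketch: the unique algebra homomorphism List(N) -> A for F(X) = 1 + N x X,
# for an arbitrary algebra (A, [e, g]).
from typing import Callable, TypeVar

A = TypeVar("A")

def fold_list(e: A, g: Callable[[int, A], A], xs: list[int]) -> A:
    result = e
    for x in reversed(xs):   # cons builds from the right, so fold from the right
        result = g(x, result)
    return result

# Example algebra (int, [0, add]): a list is sent to the sum of its elements.
print(fold_list(0, lambda n, acc: n + acc, [1, 2, 3, 4]))  # 10
```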
In categories with binary coproducts, the definitions just given are equivalent to the usual definitions of a natural number object and a list object, respectively.
Final coalgebra
Dually, a final coalgebra is a terminal object in the category of $F$-coalgebras. The finality provides a general framework for coinduction and corecursion.
For example, using the same functor $1 + (-)$ as before, a coalgebra is defined as a set $X$ together with a function $f\colon X \to 1 + X$. Defining such a function amounts to defining a partial funct
|
https://en.wikipedia.org/wiki/ETwinning
|
The eTwinning action is an initiative of the European Commission that aims to encourage European schools to collaborate using Information and Communication Technologies (ICT) by providing the necessary infrastructure (online tools, services, support). Teachers registered in the eTwinning action are enabled to form partnerships and develop collaborative, pedagogical school projects in any subject area with the sole requirements to employ ICT to develop their project and collaborate with teachers from other European countries.
Formation
The project was founded in 2005 under the European Union's e-Learning program and it has been integrated in the Lifelong Learning program since 2007. eTwinning is part of Erasmus+, the EU program for education, training, and youth.
History
The eTwinning action was launched in January 2005. Its main objectives complied with the decision by the Barcelona European Council in March 2002 to promote school twinning as an opportunity for all students to learn and practice ICT skills and to promote awareness of the multicultural European model of society.
More than 13,000 schools were involved in eTwinning within its first year. By 2008, over 50,000 teachers and 4,000 projects had been registered, and a new eTwinning platform was launched. As of January 2018, over 70,000 projects were running in classrooms across Europe. By 2021, more than 226,000 schools had taken part in this work.
In early 2009, the eTwinning motto changed from "School partnerships in Europe" to "The community for schools in Europe".
In 2022, eTwinning moved to a new platform.
Participating countries
Member States of the European Union are part of eTwinning: Austria, Belgium, Bulgaria, Croatia, Cyprus, Czech Republic, Denmark, Estonia, Finland, France, Germany, Greece, Hungary, Ireland, Italy, Latvia, Lithuania, Luxembourg, Malta, Poland, Portugal, Romania, Slovakia, Slovenia, Spain, Sweden and The Netherlands. Overseas territories and countries are also eligible. I
|
https://en.wikipedia.org/wiki/Acoustic%20cleaning
|
Acoustic cleaning is a maintenance method used in material-handling and storage systems that handle bulk granular or particulate materials, such as grain elevators, to remove the buildup of material on surfaces. An acoustic cleaning apparatus, usually built into the material-handling equipment, works by generating powerful sound waves which shake particulates loose from surfaces, reducing the need for manual cleaning.
History and design
An acoustic cleaner consists of a sound source similar to an air horn found on trucks and trains, attached to the material-handling equipment, which directs a loud sound into the interior. It is powered by compressed air rather than electricity so there is no danger of sparking, which could set off an explosion. It consists of two parts:
The acoustic driver. In the driver, compressed air escaping past a diaphragm causes it to vibrate, generating the sound. It is usually made from solid machined stainless steel. The diaphragm, the only moving part, is usually manufactured from special aerospace grade titanium to ensure performance and longevity.
The bell, a flaring horn, usually made from spun 316 grade stainless steel. The bell serves as a sound resonator, and its flaring shape couples the sound efficiently to the air, increasing the volume of sound radiated.
The overall length of acoustic cleaner horns ranges from 430 mm to over 3 metres. The device can operate over a pressure range of 4.8 to 6.2 bars (70 to 90 psi). The resultant sound pressure level will be around 200 dB.
There are generally 4 ways to control the operation of an acoustic cleaner:
The most common is by a simple timer
Supervisory control and data acquisition (SCADA)
Programmable logic controller (PLC)
Manually by ball valve
An acoustic cleaner will typically sound for 10 seconds and then wait for a further 500 seconds before sounding again. This ratio for on/off is approximately proportional to the working life of the diaphragm. Provided the operatin
|
https://en.wikipedia.org/wiki/Neovascularization
|
Neovascularization is the natural formation of new blood vessels (neo- + vascular + -ization), usually in the form of functional microvascular networks, capable of perfusion by red blood cells, that form to serve as collateral circulation in response to local poor perfusion or ischemia.
Growth factors that stimulate neovascularization include those that affect endothelial cell division and differentiation. These growth factors often act in a paracrine or autocrine fashion; they include fibroblast growth factor, placental growth factor, insulin-like growth factor, hepatocyte growth factor, and platelet-derived endothelial growth factor.
There are three different pathways that comprise neovascularization: (1) vasculogenesis, (2) angiogenesis, and (3) arteriogenesis.
Three pathways of neovascularization
Vasculogenesis
Vasculogenesis is the de novo formation of blood vessels. This primarily occurs in the developing embryo with the development of the first primitive vascular plexus, but also occurs to a limited extent with post-natal vascularization. Embryonic vasculogenesis occurs when endothelial cells precursors (hemangioblasts) begin to proliferate and migrate into avascular areas. There, they aggregate to form the primitive network of vessels characteristic of embryos. This primitive vascular system is necessary to provide adequate blood flow to cells, supplying oxygen and nutrients, and removing metabolic wastes.
Angiogenesis
Angiogenesis is the most common type of neovascularization seen in development and growth, and is important to both physiological and pathological processes. Angiogenesis occurs through the formation of new vessels from pre-existing vessels. This occurs through the sprouting of new capillaries from post-capillary venules, requiring precise coordination of multiple steps and the participation and communication of multiple cell types. The complex process is initiated in response to local tissue ischemia or hypoxia, leading to the release of
|
https://en.wikipedia.org/wiki/Conditional%20random%20field
|
Conditional random fields (CRFs) are a class of statistical modeling methods often applied in pattern recognition and machine learning and used for structured prediction. Whereas a classifier predicts a label for a single sample without considering "neighbouring" samples, a CRF can take context into account. To do so, the predictions are modelled as a graphical model, which represents the presence of dependencies between the predictions. What kind of graph is used depends on the application. For example, in natural language processing, "linear chain" CRFs are popular, for which each prediction is dependent only on its immediate neighbours. In image processing, the graph typically connects locations to nearby and/or similar locations to enforce that they receive similar predictions.
Other examples where CRFs are used are: labeling or parsing of sequential data for natural language processing or biological sequences, part-of-speech tagging, shallow parsing, named entity recognition, gene finding, peptide critical functional region finding, and object recognition and image segmentation in computer vision.
Description
CRFs are a type of discriminative undirected probabilistic graphical model.
Lafferty, McCallum and Pereira define a CRF on observations $\boldsymbol{X}$ and random variables $\boldsymbol{Y}$ as follows:
Let $G = (V, E)$ be a graph such that $\boldsymbol{Y} = (\boldsymbol{Y}_v)_{v \in V}$, so that $\boldsymbol{Y}$ is indexed by the vertices of $G$.
Then $(\boldsymbol{X}, \boldsymbol{Y})$ is a conditional random field when each random variable $\boldsymbol{Y}_v$, conditioned on $\boldsymbol{X}$, obeys the Markov property with respect to the graph; that is, its probability is dependent only on its neighbours in $G$:
$$P(\boldsymbol{Y}_v \mid \boldsymbol{X}, \{\boldsymbol{Y}_w : w \neq v\}) = P(\boldsymbol{Y}_v \mid \boldsymbol{X}, \{\boldsymbol{Y}_w : w \sim v\}),$$
where $w \sim v$ means that $w$ and $v$ are neighbors in $G$.
What this means is that a CRF is an undirected graphical model whose nodes can be divided into exactly two disjoint sets and , the observed and output variables, respectively; the conditional distribution is then modeled.
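For the linear-chain CRFs mentioned above, the highest-scoring label sequence can be found exactly by dynamic programming. The following Python sketch is illustrative only (the score matrices and names are this sketch's own, not from the article) and assumes the score of a sequence is the sum of per-position emission scores and pairwise transition scores:

```python
# Sketch: Viterbi decoding for a linear-chain CRF with emission and transition scores.
import numpy as np

def viterbi(emissions: np.ndarray, transitions: np.ndarray) -> list[int]:
    """emissions: (T, K) scores; transitions: (K, K) scores; returns the best label path."""
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each label at position 0
    backptr = np.zeros((T, K), dtype=int)
    for t in range(1, T):
        # cand[i, j] = best score ending in i at t-1, plus transition i->j, plus emission of j at t
        cand = score[:, None] + transitions + emissions[t][None, :]
        backptr[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]           # best final label
    for t in range(T - 1, 0, -1):          # follow back-pointers to recover the path
        path.append(int(backptr[t, path[-1]]))
    return path[::-1]

# Toy example with 3 positions and 2 labels.
em = np.array([[2.0, 0.5], [0.2, 1.5], [1.0, 1.2]])
tr = np.array([[0.5, -0.5], [-0.3, 0.8]])
print(viterbi(em, tr))
```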
Inference
For general graphs, the problem of exact inference in CRFs is intractable. The inference problem for a CRF is basically the same as for an MR
|
https://en.wikipedia.org/wiki/Global%20Descriptor%20Table
|
The Global Descriptor Table (GDT) is a data structure used by Intel x86-family processors starting with the 80286 in order to define the characteristics of the various memory areas used during program execution, including the base address, the size, and access privileges like executability and writability. These memory areas are called segments in Intel terminology.
Description
The GDT is a table of 8-byte entries. Each entry may refer to a segment descriptor, Task State Segment (TSS), Local Descriptor Table (LDT), or call gate. Call gates were designed for transferring control between x86 privilege levels, although this mechanism is not used on most modern operating systems.
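The excerpt does not spell out the bit layout of an entry; as an illustration, the following Python sketch packs one 8-byte segment descriptor using the classic 80386 layout (limit bits 0–15, base bits 0–15, base bits 16–23, access byte, flags plus limit bits 16–19, base bits 24–31). The helper name and example values are this sketch's own choices:

```python
# Sketch: packing a GDT segment descriptor (8 bytes, little-endian).
import struct

def make_descriptor(base: int, limit: int, access: int, flags: int) -> bytes:
    """Return the 8-byte descriptor for the given base, limit, access byte and flags nibble."""
    assert base < 2**32 and limit < 2**20 and access < 2**8 and flags < 2**4
    return struct.pack(
        "<HHBBBB",
        limit & 0xFFFF,                        # limit bits 0-15
        base & 0xFFFF,                         # base bits 0-15
        (base >> 16) & 0xFF,                   # base bits 16-23
        access,                                # present, DPL, type, ...
        (flags << 4) | ((limit >> 16) & 0xF),  # granularity/size flags + limit bits 16-19
        (base >> 24) & 0xFF,                   # base bits 24-31
    )

# A flat 4 GiB ring-0 code segment (access 0x9A; flags 0xC: 4 KiB granularity, 32-bit).
print(make_descriptor(0x00000000, 0xFFFFF, 0x9A, 0xC).hex())  # ffff0000009acf00
```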
There is also a Local Descriptor Table (LDT). Multiple LDTs can be defined in the GDT, but only one is current at any one time: usually associated with the current Task. While the LDT contains memory segments which are private to a specific process, the GDT contains global segments. The x86 processors have facilities for automatically switching the current LDT on specific machine events, but no facilities for automatically switching the GDT.
Every memory access performed by a process always goes through a segment. On the 80386 processor and later, because of 32-bit segment offsets and limits, it is possible to make segments cover the entire addressable memory, which makes segment-relative addressing transparent to the user.
In order to reference a segment, a program must use its index inside the GDT or the LDT. Such an index is called a segment selector (or selector). The selector must be loaded into a segment register to be used. Apart from the machine instructions which allow one to set/get the position of the GDT, and of the Interrupt Descriptor Table (IDT), in memory, every machine instruction referencing memory has an implicit segment register, occasionally two. Most of the time this segment register can be overridden by adding a segment prefix before the instruction.
Loading a select
|
https://en.wikipedia.org/wiki/Psychology%20of%20programming
|
The psychology of programming (PoP) is the field of research that deals with the psychological aspects of writing programs (often computer programs). The field has also been called the empirical studies of programming (ESP). It covers research into computer programmers' cognition, tools and methods for programming-related activities, and programming education.
Psychologically, computer programming is a human activity which involves cognitions such as reading and writing computer language, learning, problem solving, and reasoning.
History
The history of the psychology of programming dates back to the late 1970s and early 1980s, when researchers realized that computational power should not be the only thing evaluated in programming tools and technologies, but also their usability for the people who use them. In the first Workshop on Empirical Studies of Programmers, Ben Shneiderman listed several important destinations for researchers. These destinations include refining the use of current languages, improving present and future languages, developing special purpose languages, and improving tools and methods. Two important workshop series have been devoted to the psychology of programming in the last two decades: the Workshop on Empirical Studies of Programmers (ESP), based primarily in the US, and the Psychology of Programming Interest Group Workshop (PPIG), which has a European character. ESP has a broader scope than pure psychology in programming, while PPIG is more focused on the field of PoP. However, PPIG workshops and the PPIG organization itself are informal in nature: PPIG is a group of people interested in PoP who come together and publish their discussions.
Goals and purposes
It is desirable to achieve a programming performance such that creating a program meets its specifications, is on schedule, is adaptable for the future and runs efficiently. Being able to satisfy all these goals at a low cost is a difficult and common problem in software engineering a
|
https://en.wikipedia.org/wiki/Zero-crossing%20rate
|
The zero-crossing rate (ZCR) is the rate at which a signal changes from positive to zero to negative or from negative to zero to positive. Its value has been widely used in both speech recognition and music information retrieval, being a key feature to classify percussive sounds.
ZCR is defined formally as
$$\mathrm{zcr} = \frac{1}{T-1} \sum_{t=1}^{T-1} \mathbb{1}_{\mathbb{R}_{<0}}\left(s_t s_{t-1}\right),$$
where $s$ is a signal of length $T$ and $\mathbb{1}_{\mathbb{R}_{<0}}$ is an indicator function, equal to 1 when its argument is negative and 0 otherwise.
In some cases only the "positive-going" or "negative-going" crossings are counted, rather than all the crossings, since between a pair of adjacent positive zero-crossings there must be a single negative zero-crossing.
For monophonic tonal signals, the zero-crossing rate can be used as a primitive pitch detection algorithm. Zero crossing rates are also used for Voice activity detection (VAD), which determines whether human speech is present in an audio segment or not.
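A minimal sketch of the formula above (the helper name and the test signal are this sketch's own choices, not from the article):

```python
# Sketch: zero-crossing rate as the fraction of consecutive sample pairs with a sign change.
import numpy as np

def zero_crossing_rate(s) -> float:
    s = np.asarray(s, dtype=float)
    return float(np.mean(s[:-1] * s[1:] < 0))

# 100 Hz sine sampled at 8 kHz: roughly 2 * 100 / 8000 = 0.025.
t = np.arange(8000) / 8000.0
print(zero_crossing_rate(np.sin(2 * np.pi * 100 * t)))
```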
See also
Zero crossing
Digital signal processing
References
Signal processing
Rates
|
https://en.wikipedia.org/wiki/Q-difference%20polynomial
|
In combinatorial mathematics, the q-difference polynomials or q-harmonic polynomials are a polynomial sequence defined in terms of the q-derivative. They are a generalized type of Brenke polynomial, and generalize the Appell polynomials. See also Sheffer sequence.
Definition
The q-difference polynomials satisfy the relation
$$\left(\frac{d}{dz}\right)_q p_n(z) = \frac{q^n - 1}{q - 1}\, p_{n-1}(z) = [n]_q\, p_{n-1}(z),$$
where the derivative symbol on the left is the q-derivative. In the limit $q \to 1$, this becomes the definition of the Appell polynomials:
$$\frac{d}{dz}\, p_n(z) = n\, p_{n-1}(z).$$
Generating function
The generalized generating function for these polynomials is of the type of generating function for Brenke polynomials, namely
$$A(w)\, e_q(zw) = \sum_{n=0}^{\infty} \frac{p_n(z)}{[n]_q!}\, w^n,$$
where $e_q(t)$ is the q-exponential:
$$e_q(t) = \sum_{n=0}^{\infty} \frac{t^n}{[n]_q!} = \sum_{n=0}^{\infty} \frac{t^n (1-q)^n}{(q;q)_n}.$$
Here, $[n]_q!$ is the q-factorial and
$$(q;q)_n = (1 - q^n)(1 - q^{n-1}) \cdots (1 - q)$$
is the q-Pochhammer symbol. The function $A(w)$ is arbitrary but assumed to have an expansion
$$A(w) = \sum_{n=0}^{\infty} a_n w^n, \qquad a_0 \neq 0.$$
Any such $A(w)$ gives a sequence of q-difference polynomials.
References
A. Sharma and A. M. Chak, "The basic analogue of a class of polynomials", Riv. Mat. Univ. Parma, 5 (1954) 325–337.
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263. (Provides a very brief discussion of convergence.)
Q-analogs
Polynomials
|
https://en.wikipedia.org/wiki/Weinstein%20conjecture
|
In mathematics, the Weinstein conjecture refers to a general existence problem for periodic orbits of Hamiltonian or Reeb vector flows. More specifically, the conjecture claims that on a compact contact manifold, its Reeb vector field should carry at least one periodic orbit.
By definition, a level set of contact type admits a contact form obtained by contracting the Hamiltonian vector field into the symplectic form. In this case, the Hamiltonian flow is a Reeb vector field on that level set. It is a fact that any contact manifold (M,α) can be embedded into a canonical symplectic manifold, called the symplectization of M, such that M is a contact type level set (of a canonically defined Hamiltonian) and the Reeb vector field is a Hamiltonian flow. That is, any contact manifold can be made to satisfy the requirements of the Weinstein conjecture. Since, as is trivial to show, any orbit of a Hamiltonian flow is contained in a level set, the Weinstein conjecture is a statement about contact manifolds.
It has been known that any contact form is isotopic to a form that admits a closed Reeb orbit; for example, for any contact manifold there is a compatible open book decomposition, whose binding is a closed Reeb orbit. This is not enough to prove the Weinstein conjecture, though, because the Weinstein conjecture states that every contact form admits a closed Reeb orbit, while an open book determines a closed Reeb orbit for a form which is only isotopic to the given form.
The conjecture was formulated in 1978 by Alan Weinstein. In several cases, the existence of a periodic orbit was known. For instance, Rabinowitz showed that on star-shaped level sets of a Hamiltonian function on a symplectic manifold, there were always periodic orbits (Weinstein independently proved the special case of convex level sets). Weinstein observed that the hypotheses of several such existence theorems could be subsumed in the condition that the level set be of contact type. (Weinstein's ori
|
https://en.wikipedia.org/wiki/Q-exponential
|
In combinatorial mathematics, a q-exponential is a q-analog of the exponential function,
namely the eigenfunction of a q-derivative. There are many q-derivatives, for example, the classical q-derivative, the Askey–Wilson operator, etc. Therefore, unlike the classical exponential, q-exponentials are not unique. For example, $e_q(z)$ (defined below) is the q-exponential corresponding to the classical q-derivative, while other q-exponentials are eigenfunctions of the Askey–Wilson operators.
Definition
The q-exponential $e_q(z)$ is defined as
$$e_q(z) = \sum_{n=0}^{\infty} \frac{z^n}{[n]_q!} = \sum_{n=0}^{\infty} \frac{z^n (1-q)^n}{(q;q)_n},$$
where $[n]_q!$ is the q-factorial and
$$(q;q)_n = (1 - q^n)(1 - q^{n-1}) \cdots (1 - q)$$
is the q-Pochhammer symbol. That this is the q-analog of the exponential follows from the property
$$\left(\frac{d}{dz}\right)_q e_q(z) = e_q(z),$$
where the derivative on the left is the q-derivative. The above is easily verified by considering the q-derivative of the monomial
$$\left(\frac{d}{dz}\right)_q z^n = z^{n-1}\, \frac{1 - q^n}{1 - q} = [n]_q\, z^{n-1}.$$
Here, $[n]_q$ is the q-bracket.
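As a small numerical illustration of the defining property (a sketch under the series definition above; the helper names are this sketch's own), one can compare the q-derivative of a truncated series against the series itself:

```python
# Sketch: checking (d/dz)_q e_q(z) = e_q(z) numerically with a truncated series.
def q_bracket(n: int, q: float) -> float:
    """[n]_q = (1 - q**n) / (1 - q)."""
    return (1 - q**n) / (1 - q)

def q_factorial(n: int, q: float) -> float:
    out = 1.0
    for k in range(1, n + 1):
        out *= q_bracket(k, q)
    return out

def e_q(z: float, q: float, terms: int = 60) -> float:
    """Truncated series: sum over n of z**n / [n]_q!."""
    return sum(z**n / q_factorial(n, q) for n in range(terms))

def q_derivative(f, z: float, q: float) -> float:
    """(D_q f)(z) = (f(q z) - f(z)) / ((q - 1) z)."""
    return (f(q * z) - f(z)) / ((q - 1) * z)

q, z = 0.5, 0.7   # z lies inside the disk |z| < 1/(1 - q) = 2
print(q_derivative(lambda w: e_q(w, q), z, q), e_q(z, q))   # the two values agree closely
```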
For other definitions of the q-exponential function, see the literature on q-analogs.
Properties
For real $q > 1$, the function $e_q(z)$ is an entire function of $z$. For $q < 1$, $e_q(z)$ is regular in the disk $|z| < 1/(1-q)$.
Note the inverse, .
Addition Formula
The analogue of $\exp(x)\exp(y) = \exp(x+y)$ does not hold for real numbers $x$ and $y$. However, if these are operators satisfying the commutation relation $yx = qxy$, then $e_q(x+y) = e_q(x)\,e_q(y)$ holds true.
Relations
For , a function that is closely related is It is a special case of the basic hypergeometric series,
Clearly,
Relation with Dilogarithm
has the following infinite product representation:
On the other hand, holds.
When ,
By taking the limit ,
where is the dilogarithm.
In physics
The q-exponential function is also known as the quantum dilogarithm.
References
Q-analogs
Exponentials
|
https://en.wikipedia.org/wiki/Windows%20Mobility%20Center
|
Windows Mobility Center is a component of Microsoft Windows, introduced in Windows Vista, that centralizes information and settings most relevant to mobile computing.
History
A mobility center that displayed device settings pertinent to mobile devices was first shown during the Windows Hardware Engineering Conference of 2004. It was based on the Activity Center user interface design that originated with Microsoft's abandoned Windows "Neptune" project, and was slated for inclusion in Windows Vista, then known by its codename Longhorn.
Overview
The Windows Mobility Center user interface consists of square tiles that each contain information and settings related to a component, such as audio settings, battery life and power schemes, display brightness, and wireless network strength and status. The tiles that appear within the interface depend on the hardware of the system and device drivers.
Windows Mobility Center is located in the Windows Control Panel and can also be launched by pressing the Windows key + X in Windows Vista and Windows 7. By default, WMC is inaccessible on desktop computers, but this limitation can be bypassed by modifying the Windows Registry.
Windows Mobility Center is extensible; original equipment manufacturers can customize the interface with additional tiles and company branding. Though not supported by Microsoft, it is possible for individual developers to create tiles for the interface as well.
See also
Features new to Windows Vista
References
Mobile computers
Mobility Center
Windows Vista
|
https://en.wikipedia.org/wiki/CDC%206000%20series
|
The CDC 6000 series is a discontinued family of mainframe computers manufactured by Control Data Corporation in the 1960s. It consisted of the CDC 6200, CDC 6300, CDC 6400, CDC 6500, CDC 6600 and CDC 6700 computers, which were all extremely rapid and efficient for their time. Each is a large, solid-state, general-purpose, digital computer that performs scientific and business data processing as well as multiprogramming, multiprocessing, Remote Job Entry, time-sharing, and data management tasks under the control of the operating system called SCOPE (Supervisory Control Of Program Execution). By 1970 there also was a time-sharing oriented operating system named KRONOS. They were part of the first generation of supercomputers. The 6600 was the flagship of Control Data's 6000 series.
Overview
The CDC 6000 series computers are composed of four main functional devices:
the central memory
one or two high-speed central processors
ten peripheral processors (Peripheral Processing Unit, or PPU) and
a display console.
The 6000 series has a distributed architecture.
The family's members differ primarily by the number and kind of central processor(s):
The CDC 6600 is a single CPU with 10 functional units that can operate in parallel, each working on an instruction at the same time.
The CDC 6400 is a single CPU with an identical instruction set, but with a single unified arithmetic function unit that can only do one instruction at a time.
The CDC 6500 is a dual-CPU system with two 6400 central processors
The CDC 6700 is also a dual-CPU system, with a 6600 and a 6400 central processor.
Certain features and nomenclature had also been used in the earlier CDC 3000 series:
Arithmetic was ones complement.
The name COMPASS was used by CDC for the assembly languages on both families.
The name SCOPE was used for its operating system implementations on the 3000 and 6000 series.
The only currently (as of 2018) running CDC 6000 series machine, a 6500, has been restored by Liv
|
https://en.wikipedia.org/wiki/2000%20Today
|
2000 Today was an internationally broadcast television special to commemorate the beginning of the Year 2000. This program included New Year's Eve celebrations, musical performances, and other features from participating nations.
Most international broadcasts, such as Olympic Games coverage, originate from a limited area for worldwide distribution. 2000 Today was rare in that its live and taped programming originated from member countries and represented all continents, including Europe, Asia, Africa, South America, North America, and Oceania.
Development
2000 Today was conceived as part of the Millennium celebrations, given the numerical significance of the change from 1999 to 2000. 2000 Today was commissioned by the BBC as one of the five main millennium projects that were broadcast across TV, radio and online services throughout 1999 and 2000.
Most nations that observe the Islamic calendar were not involved in 2000 Today. However, a few predominantly Muslim nations were represented among the programme's worldwide broadcasters such as Egypt (ERTU) and Indonesia (RCTI). Africa was minimally represented in 2000 Today. The only participating nations from that continent were Egypt and South Africa. Portugal-based RTP África distributed the programme to some African nations.
Antarctica was mentioned on the programme schedule, although it was unclear if 2000 Today coverage was recorded or live.
Production
The programme was produced and televised by an international consortium of 60 broadcasters, headed by the BBC in the United Kingdom and WGBH (Now known as GBH) in Boston, United States. The editorial board also included representatives from ABC (Australia), CBC (Canada), CCTV (China), ETC (Egypt), RTL (Germany), SABC (South Africa), TF1 (France), TV Asahi (Japan), TV Globo (Brazil) and ABC (USA). The BBC provided the production hub for receiving and distributing the 78 international satellite feeds required for this broadcast. The idents for the programme were de
|
https://en.wikipedia.org/wiki/SS-50%20bus
|
The SS-50 bus was an early computer bus designed as a part of the SWTPC 6800 Computer System that used the Motorola 6800 CPU. The SS-50 motherboard would have around seven 50-pin connectors for CPU and memory boards plus eight 30-pin connectors for I/O boards. The I/O section was sometimes called the SS-30 bus.
Southwest Technical Products Corporation introduced this bus in November 1975 and soon other companies were selling add-in boards. Some of the early boards were floppy disk systems from Midwest Scientific Instruments, Smoke Signal Broadcasting, and Percom Data; an EPROM programmer from the Micro Works; video display boards from Gimix; memory boards from Seals. By 1978 there were a dozen SS-50 board suppliers and several compatible SS-50 computers.
In 1979 SWTPC modified the SS-50 bus to support the new Motorola MC6809 processor. These changes were compatible with most existing boards and this upgrade gave the SS-50 Bus a long life. SS-50 based computers were made until the late 1980s.
The SS-50C bus, the S/09 version of the SS-50 bus, extended the address by four address lines to 20 address lines to allow up to a megabyte of memory in a system.
Boards for the SS-50 bus were typically 9 inches wide and 5.5 inches high. The board had Molex 0.156 inch connectors while the motherboard had the pins. This arrangement made for low cost printed circuit boards that did not need gold plated edge connectors. The tin plated Molex connectors were only rated for a few insertions and were sometimes a problem in hobbyist systems where the boards were being swapped often. Later systems would often come with gold plated Molex connectors.
The SS-30 I/O bus had the address decoding on the motherboard. Each slot was allocated 4 addresses (the later MC6809 version upped this to 16 addresses). This made for very simple I/O boards; the Motorola peripheral chips connected directly to this bus. Cards designed using the SS-30 bus often had their external connectors mounted such that
|
https://en.wikipedia.org/wiki/Modularity%20%28biology%29
|
Modularity refers to the ability of a system to organize discrete, individual units that can collectively increase the efficiency of network activity and, in a biological sense, facilitate the action of selective forces on the network. Modularity is observed in all model systems, and can be studied at nearly every scale of biological organization, from molecular interactions all the way up to the whole organism.
Evolution of Modularity
The exact evolutionary origins of biological modularity have been debated since the 1990s. In the mid-1990s, Günter Wagner argued that modularity could have arisen and been maintained through the interaction of four evolutionary modes of action:
[1] Selection for the rate of adaptation: If different complexes evolve at different rates, then those evolving more quickly reach fixation in a population faster than other complexes. Thus, common evolutionary rates could be forcing the genes for certain proteins to evolve together while preventing other genes from being co-opted unless there is a shift in evolutionary rate.
[2] Constructional selection: When a gene exists in many duplicated copies, it may be maintained because of the many connections it has (also termed pleiotropy). There is evidence that this is so following whole genome duplication, or duplication at a single locus. However, the direct relationship that duplication processes have with modularity has yet to be directly examined.
[3] Stabilizing selection: While seeming antithetical to forming novel modules, Wagner maintains that it is important to consider the effects of stabilizing selection as it may be "an important counter force against the evolution of modularity". Stabilizing selection, if ubiquitously spread across the network, could then be a "wall" that makes the formation of novel interactions more difficult and maintains previously established interactions. Against such strong positive selection, other evolutionary forces acting on the network must exist, with gaps of relaxed
|
https://en.wikipedia.org/wiki/Computer%20network
|
A computer network is a set of computers sharing resources located on or provided by network nodes. Computers use common communication protocols over digital interconnections to communicate with each other. These interconnections are made up of telecommunication network technologies based on physically wired, optical, and wireless radio-frequency methods that may be arranged in a variety of network topologies.
The nodes of a computer network can include personal computers, servers, networking hardware, or other specialized or general-purpose hosts. They are identified by network addresses and may have hostnames. Hostnames serve as memorable labels for the nodes and are rarely changed after initial assignment. Network addresses serve for locating and identifying the nodes by communication protocols such as the Internet Protocol.
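The mapping from memorable hostnames to the network addresses that protocols actually use to locate nodes can be illustrated with a short sketch; Python's standard library is assumed here and the hostname is only a placeholder.

```python
# Resolve a hostname to the addresses a protocol such as IP uses to locate the node.
import socket

for family, _type, _proto, _canon, sockaddr in socket.getaddrinfo(
        "example.com", 80, proto=socket.IPPROTO_TCP):
    print(family.name, sockaddr[0])  # address family and the resolved address
```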
Computer networks may be classified by many criteria, including the transmission medium used to carry signals, bandwidth, communications protocols to organize network traffic, the network size, the topology, traffic control mechanisms, and organizational intent.
Computer networks support many applications and services, such as access to the World Wide Web, digital video and audio, shared use of application and storage servers, printers and fax machines, and use of email and instant messaging applications.
History
Computer networking may be considered a branch of computer science, computer engineering, and telecommunications, since it relies on the theoretical and practical application of the related disciplines. Computer networking was influenced by a wide array of technology developments and historical milestones.
In the late 1950s, a network of computers was built for the U.S. military Semi-Automatic Ground Environment (SAGE) radar system using the Bell 101 modem. It was the first commercial modem for computers, released by AT&T Corporation in 1958. The modem allowed digital data to be transmitted over regular unconditioned telephone
|
https://en.wikipedia.org/wiki/Cyrillic%20Projector
|
The Cyrillic Projector is a sculpture created by American artist Jim Sanborn in the early 1990s, and purchased by the University of North Carolina at Charlotte in 1997. It is currently installed between the campus's Friday and Fretwell Buildings.
An encrypted trilogy
The encrypted sculpture Cyrillic Projector is part of an encrypted family of three intricate puzzle-sculptures by Sanborn, the other two named Kryptos and Antipodes. The Kryptos sculpture (located at CIA headquarters in Langley, Virginia) has text which is duplicated on Antipodes. Antipodes has two sides — one with the Latin alphabet and one with Cyrillic. The Latin side is similar to Kryptos. The Cyrillic side is similar to the Cyrillic Projector.
Solution
The encrypted text of the Cyrillic Projector was first reportedly solved by Frank Corr in early July 2003, followed by an equivalent decryption by Mike Bales in September of the same year. Both endeavors gave results in the Russian language. The first English translation of the text was led by Elonka Dunin.
The sculpture includes two messages. The first is a Russian text that explains the use of psychological control to develop and maintain potential sources of information. The second is a partial quote about the Soviet dissident and Nobel Peace Prize-winning scientist Andrei Sakharov. The text is from a classified KGB memo, detailing concerns that his report at the 1982 Pugwash conference was going to be used by the U.S. for anti-Soviet propaganda purposes.
Notes
References
External links
Transcript of Cyrillic Projector text
Kryptos website
Kryptos Group press release, 2003, about the solution
Игры разума, подвижного как ртуть, September 30, 2003:Computerra
History of cryptography
Outdoor sculptures in North Carolina
Art in North Carolina
1993 sculptures
University of North Carolina at Charlotte
Sculptures by Jim Sanborn
Buildings and structures in Charlotte, North Carolina
Bronze sculptures in North Carolina
|
https://en.wikipedia.org/wiki/Hermitian%20symmetric%20space
|
In mathematics, a Hermitian symmetric space is a Hermitian manifold which at every point has an inversion symmetry preserving the Hermitian structure. First studied by Élie Cartan, they form a natural generalization of the notion of Riemannian symmetric space from real manifolds to complex manifolds.
Every Hermitian symmetric space is a homogeneous space for its isometry group and has a unique decomposition as a product of irreducible spaces and a Euclidean space. The irreducible spaces arise in pairs as a non-compact space that, as Borel showed, can be embedded as an open subspace of its compact dual space. Harish Chandra showed that each non-compact space can be realized as a bounded symmetric domain in a complex vector space. The simplest case involves the groups SU(2), SU(1,1) and their common complexification SL(2,C). In this case the non-compact space is the unit disk, a homogeneous space for SU(1,1). It is a bounded domain in the complex plane C. The one-point compactification of C, the Riemann sphere, is the dual space, a homogeneous space for SU(2) and SL(2,C).
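The simplest case mentioned above can be written out explicitly; the notation below is a standard rendering added for illustration rather than a quotation from the original text:

$$D = \{\, z \in \mathbb{C} : |z| < 1 \,\} \cong SU(1,1)/U(1), \qquad \begin{pmatrix} a & b \\ \bar b & \bar a \end{pmatrix}\cdot z = \frac{a z + b}{\bar b z + \bar a}, \quad |a|^2 - |b|^2 = 1,$$

and the embedding of the non-compact space into its compact dual is simply the open inclusion

$$D \subset \mathbb{C} \subset \mathbb{C}\cup\{\infty\} = \mathbb{CP}^1 \cong SU(2)/U(1),$$

on which $SL(2,\mathbb{C})$ acts by Möbius transformations.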
Irreducible compact Hermitian symmetric spaces are exactly the homogeneous spaces of simple compact Lie groups by maximal closed connected subgroups which contain a maximal torus and have center isomorphic to the circle group. There is a complete classification of irreducible spaces, with four classical series, studied by Cartan, and two exceptional cases; the classification can be deduced from Borel–de Siebenthal theory, which classifies closed connected subgroups containing a maximal torus. Hermitian symmetric spaces appear in the theory of Jordan triple systems, several complex variables, complex geometry, automorphic forms and group representations, in particular permitting the construction of the holomorphic discrete series representations of semisimple Lie groups.
Hermitian symmetric spaces of compact type
Definition
Let H be a connected compact semisimple Lie group, σ an automorphism of H
|
https://en.wikipedia.org/wiki/Urban%20prairie
|
Urban prairie is a term describing vacant urban land that has reverted to green space. Previous structures occupying the urban lots have been demolished, leaving patchy areas of green space that are usually untended and unmanaged, forming an involuntary park. Sometimes, however, the prairie spaces are intentionally created to provide amenities such as green belts, community gardens and wildlife reserve habitats.
History
Urban prairies can result from several factors. The value of aging buildings may fall too low to provide financial incentives for their owners to maintain them. Vacant properties may have resulted from deurbanization or crime, or may have been seized by local government as a response to unpaid property taxes. Since vacant structures can pose health and safety threats (such as fire hazards), or be used as a location for criminal activity, cities often demolish them.
Sometimes areas are cleared of buildings as part of a revitalization plan with the intention of redeveloping the land. In flood-prone areas, government agencies may purchase developed lots and then demolish the structures to improve drainage during floods. Some neighborhoods near major industrial or environmental clean-up sites are acquired and leveled to create a buffer zone and minimize the risks associated with pollution or industrial accidents. Such areas may become nothing more than fields of overgrown vegetation, which then provide habitat for wildlife. Sometimes it is possible for residents of the city to fill up the unplanned empty space with urban parks or community gardens.
Urban prairie is sometimes planned by the government or non-profit groups for community gardens and conservation, to restore or reintroduce a wildlife habitat, help the environment, and educate people about the prairie. Detroit, Michigan is one particular city that has many urban prairies.
In the case of the city of Christchurch in New Zealand, earthquake damage from the 2011 earthquake caused the und
|
https://en.wikipedia.org/wiki/Krull%27s%20theorem
|
In mathematics, and more specifically in ring theory, Krull's theorem, named after Wolfgang Krull, asserts that a nonzero ring has at least one maximal ideal. The theorem was proved in 1929 by Krull, who used transfinite induction. The theorem admits a simple proof using Zorn's lemma, and in fact is equivalent to Zorn's lemma, which in turn is equivalent to the axiom of choice.
Variants
For noncommutative rings, the analogues for maximal left ideals and maximal right ideals also hold.
For pseudo-rings, the theorem holds for regular ideals.
A slightly stronger (but equivalent) result, which can be proved in a similar fashion, is as follows:
Let R be a ring, and let I be a proper ideal of R. Then there is a maximal ideal of R containing I.
This result implies the original theorem, by taking I to be the zero ideal (0). Conversely, applying the original theorem to R/I leads to this result.
To prove the stronger result directly, consider the set S of all proper ideals of R containing I. The set S is nonempty since I ∈ S. Furthermore, for any chain T of S, the union of the ideals in T is an ideal J, and a union of ideals not containing 1 does not contain 1, so J ∈ S. By Zorn's lemma, S has a maximal element M. This M is a maximal ideal containing I.
Krull's Hauptidealsatz
Another theorem commonly referred to as Krull's theorem:
Let $R$ be a Noetherian ring and $x$ an element of $R$ which is neither a zero divisor nor a unit. Then every minimal prime ideal containing $x$ has height 1.
Notes
References
Ideals (ring theory)
|
https://en.wikipedia.org/wiki/Dot%20blot
|
A dot blot (or slot blot) is a technique in molecular biology used to detect proteins. It represents a simplification of the western blot method, with the exception that the proteins to be detected are not first separated by electrophoresis. Instead, the sample is applied directly on a membrane in a single spot, and the blotting procedure is performed.
The technique offers significant savings in time, as chromatography or gel electrophoresis, and the complex blotting procedures for the gel are not required. However, it offers no information on the size of the target protein.
Uses
Performing a dot blot is similar in idea to performing a western blot, with the advantage of faster speed and lower cost.
Dot blots are also performed to screen the binding capabilities of an antibody.
Methods
A general dot blot protocol involves spotting 1–2 microliters of a sample onto a nitrocellulose or PVDF membrane and letting it air dry. Samples can be in the form of tissue culture supernatants, blood serum, cell extracts, or other preparations.
The membrane is incubated in blocking buffer to prevent non-specific binding of antibodies. It is then incubated with a primary antibody followed by a detection antibody or a primary antibody conjugated to a detection molecule (commonly HRP or alkaline phosphatase). After antibody binding, the membrane is incubated with a chemiluminescent substrate and imaged.
Apparatus
Dot blot is conventionally performed on a piece of nitrocellulose or PVDF membrane. After the protein samples are spotted onto the membrane, the membrane is placed in a plastic container and sequentially incubated in blocking buffer, antibody solutions, or rinsing buffer on a shaker. Finally, for chemiluminescence imaging, the piece of membrane needs to be wrapped in a transparent plastic film filled with enzyme substrate.
Vacuum-assisted dot blot apparatus has been used to facilitate the rinsing and incubating process by using vacuum to extract the solution
|
https://en.wikipedia.org/wiki/Cyclic%20module
|
In mathematics, more specifically in ring theory, a cyclic module or monogenous module is a module over a ring that is generated by one element. The concept is a generalization of the notion of a cyclic group, that is, an Abelian group (i.e. Z-module) that is generated by one element.
Definition
A left R-module M is called cyclic if M can be generated by a single element, i.e. $M = Rx = \{\, rx \mid r \in R \,\}$ for some x in M. Similarly, a right R-module N is cyclic if $N = yR$ for some y in N.
Examples
2Z as a Z-module is a cyclic module.
In fact, every cyclic group is a cyclic Z-module.
Every simple R-module M is a cyclic module since the submodule generated by any non-zero element x of M is necessarily the whole module M. In general, a module is simple if and only if it is nonzero and is generated by each of its nonzero elements.
If the ring R is considered as a left module over itself, then its cyclic submodules are exactly its left principal ideals as a ring. The same holds for R as a right R-module, mutatis mutandis.
If R is F[x], the ring of polynomials over a field F, and V is an R-module which is also a finite-dimensional vector space over F, then the Jordan blocks of x acting on V are cyclic submodules. (The Jordan blocks are all isomorphic to $F[x]/(x - \lambda)^k$; there may also be other cyclic submodules with different annihilators; see below.)
Properties
Given a cyclic R-module M that is generated by x, there exists a canonical isomorphism between M and $R/\operatorname{Ann}_R(x)$, where $\operatorname{Ann}_R(x)$ denotes the annihilator of x in R.
Every module is a sum of cyclic submodules.
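Both properties can be made concrete in the simplest case; the following worked example with $\mathbb{Z}$-modules is an illustrative addition:

$$M = \mathbb{Z}/n\mathbb{Z} = \mathbb{Z}\,\bar{1}, \qquad \operatorname{Ann}_{\mathbb{Z}}(\bar{1}) = n\mathbb{Z}, \qquad M \cong \mathbb{Z}/\operatorname{Ann}_{\mathbb{Z}}(\bar{1}),$$

while an arbitrary abelian group $A$ (that is, a $\mathbb{Z}$-module) is the sum of its cyclic submodules, $A = \sum_{a \in A} \mathbb{Z}a$.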
See also
Finitely generated module
References
Module theory
|
https://en.wikipedia.org/wiki/Fura-2-acetoxymethyl%20ester
|
Fura-2-acetoxymethyl ester, often abbreviated Fura-2AM, is a membrane-permeant derivative of the ratiometric calcium indicator Fura-2 used in biochemistry to measure cellular calcium concentrations by fluorescence. When added to cells, Fura-2AM crosses cell membranes and, once inside the cell, the acetoxymethyl groups are removed by cellular esterases. Removal of the acetoxymethyl esters regenerates "Fura-2", the pentacarboxylate calcium indicator. Measurement of Ca2+-induced fluorescence at both 340 nm and 380 nm allows calculation of calcium concentrations based on the 340/380 ratio. The use of the ratio automatically cancels out certain variables, such as local differences in fura-2 concentration or cell thickness, that would otherwise lead to artifacts when attempting to image calcium concentrations in cells.
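The ratio-to-concentration conversion is usually done with the Grynkiewicz ratio equation; the sketch below assumes that equation and uses placeholder calibration values, so it is illustrative rather than a prescribed protocol.

```python
def calcium_nM(R, R_min, R_max, sf_380, sb_380, Kd=224.0):
    """Estimate [Ca2+] in nM from a fura-2 340/380 ratio R.
    R_min, R_max, sf_380 (380 nm signal of Ca2+-free dye), sb_380 (380 nm signal
    of Ca2+-bound dye) and the dissociation constant Kd come from calibration;
    the defaults and example numbers here are placeholders only."""
    return Kd * (R - R_min) / (R_max - R) * (sf_380 / sb_380)

# Hypothetical calibration values, for illustration only.
print(round(calcium_nM(R=1.2, R_min=0.3, R_max=8.5, sf_380=5.0, sb_380=1.0), 1))
```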
References
Biochemistry methods
Cell culture reagents
Cell imaging
Fluorescent dyes
Oxazoles
Benzofuran ethers at the benzene ring
Acetate esters
Formals
Glycol ethers
Anilines
|
https://en.wikipedia.org/wiki/Homeotopy
|
In algebraic topology, an area of mathematics, a homeotopy group of a topological space is a homotopy group of the group of self-homeomorphisms of that space.
Definition
The homotopy group functors $\pi_k$ assign to each path-connected topological space $X$ the group $\pi_k(X)$ of homotopy classes of continuous maps $S^k \to X.$
Another construction on a space $X$ is the group of all self-homeomorphisms of $X$, denoted $\operatorname{Homeo}(X).$ If X is a locally compact, locally connected Hausdorff space then a fundamental result of R. Arens says that $\operatorname{Homeo}(X)$ will in fact be a topological group under the compact-open topology.
Under the above assumptions, the homeotopy groups for $X$ are defined to be:
$$HME_k(X) = \pi_k(\operatorname{Homeo}(X)).$$
Thus $HME_0(X) = \pi_0(\operatorname{Homeo}(X))$ is the mapping class group for $X.$ In other words, the mapping class group is the set of connected components of $\operatorname{Homeo}(X),$ as specified by the functor $\pi_0.$
Example
According to the Dehn-Nielsen theorem, if $X$ is a closed surface then $HME_0(X) \cong \operatorname{Out}(\pi_1(X)),$ i.e., the zeroth homotopy group of the automorphisms of a space is the same as the outer automorphism group of its fundamental group.
References
Algebraic topology
Homeomorphisms
|
https://en.wikipedia.org/wiki/Dynamin
|
Dynamin is a GTPase responsible for endocytosis in the eukaryotic cell. Dynamin is part of the "dynamin superfamily", which includes classical dynamins, dynamin-like proteins, Mx proteins, OPA1, mitofusins, and GBPs. Members of the dynamin family are principally involved in the scission of newly formed vesicles from the membrane of one cellular compartment and their targeting to, and fusion with, another compartment, both at the cell surface (particularly caveolae internalization) as well as at the Golgi apparatus. Dynamin family members also play a role in many processes including division of organelles, cytokinesis and microbial pathogen resistance.
Structure
Dynamin itself is a 96 kDa enzyme, and was first isolated when researchers were attempting to isolate new microtubule-based motors from the bovine brain. Dynamin has been extensively studied in the context of clathrin-coated vesicle budding from the cell membrane. Beginning from the N-terminus, Dynamin consists of a GTPase domain connected to a helical stalk domain via a flexible neck region containing a Bundle Signalling Element and GTPase Effector Domain. At the opposite end of the stalk domain is a loop that links to a membrane-binding Pleckstrin homology domain. The protein strand then loops back towards the GTPase domain and terminates with a Proline Rich Domain that binds to the Src Homology domains of many proteins.
Function
During clathrin-mediated endocytosis, the cell membrane invaginates to form a budding vesicle. Dynamin binds to and assembles around the neck of the endocytic vesicle, forming a helical polymer arranged such that the GTPase domains dimerize in an asymmetric manner across helical rungs. The polymer constricts the underlying membrane upon GTP binding and hydrolysis via conformational changes emanating from the flexible neck region that alters the overall helical symmetry. Constriction around the vesicle neck leads to the formation of a hemi-fission membrane state that ultima
|
https://en.wikipedia.org/wiki/Charles%20H.%20Bennett%20%28illustrator%29
|
Charles Henry Bennett (26 July 1828 – 2 April 1867) was a British Victorian illustrator who pioneered techniques in comic illustration.
Beginnings
Charles Henry Bennett was born at 3 Tavistock Row in Covent Garden on 26 July 1828 and was baptised a month later in St. Paul's Church, Covent Garden. He was the eldest of the three children of Charles and Harriet Bennett, originally from Teston in Kent. His father was a boot-maker. Little is known of Charles' childhood, although some speculate that he received some education, possibly at St. Clement Dane's School.
At the age of twenty, Charles married Elizabeth Toon, the daughter of a Shoreditch warehouseman, on Christmas Day 1848, also in St. Paul's Church. Their first son, whom they named after Charles, was born a year later and by 1851 the family was settled in Lyon's Inn in the Strand. At the time of their wedding, Charles was attempting to support his family by selling newspapers; however, in the 1851 census three years later he described himself as an artist and portrait painter.
By 1861, Charles and Elizabeth had six children and lived in Wimbledon. Charles, the eldest, was by this time at school, while the youngest, George, was just seven months old.
Early career
As a child, Charles developed a passion for art, drawing for his inspiration the motley crowds he saw daily in the market. His father did not support Charles' artwork and considered it a waste of time.
As an adult, Charles became part of the London bohemian scene, and was a founder member of the Savage Club, each member of which was "a working man in literature or art". As well as socializing over convivial dinners, members of the club published a magazine called The Train. Charles Bennett contributed many illustrations, signed 'Bennett' rather than with his CHB monogram, but the magazine was short-lived.
The mid-nineteenth century saw the launch of many cheap, mostly short-lived periodicals in London, and Bennett contributed small illustrat
|
https://en.wikipedia.org/wiki/Romance%20of%20the%20Three%20Kingdoms%20II
|
Romance of the Three Kingdoms II is the second game in the Romance of the Three Kingdoms series of turn-based strategy games produced by Koei and based on the historical novel Romance of the Three Kingdoms.
Gameplay
Upon starting the game, players choose from one of six scenarios that determine the initial layout of power in ancient China. The scenarios loosely depict allegiances and territories controlled by the warlords as according to the novel, although gameplay does not follow events in the novel after the game begins.
The six scenarios are listed as follows:
Dong Zhuo seizes control of Luoyang (AD 189)
Warlords struggle for power (AD 194)
Liu Bei seeks shelter in Jing Province (AD 201)
Cao Cao covets supremacy over China (AD 208)
The empire divides into three (AD 215)
Rise of Wei, Wu and Shu (AD 220)
After choosing the scenario, players determine which warlord(s) they will control. Custom characters may be inserted into territories unoccupied by other forces, as well. A total of 41 different provinces exist, as well as over 200 unique characters. Each character has three statistics, which range from 10 to 100 (the higher the better). A warlord's Intelligence, War Ability and Charm influence how successful he or she will be when performing certain tasks, such as dueling or increasing land value in a province.
The player wins the game by conquering all territories in China. This is accomplished by being in control of every province on the map.
New features
A reputation system that affects the rate of officers' loyalties towards their lords
Added treasures and special items that can increase an officer's stats
Advisers can help their lords predict the chances of success in executing a plan. An adviser with an Intelligence stat of 100 will always accurately predict the result.
Intercepting messengers
Ability to create new lords on the map based on custom characters created by players
Reception
Computer Gaming World stated that Romance of the Three Kingdoms II "did a better job of simulating
|
https://en.wikipedia.org/wiki/Dasymeter
|
A dasymeter was initially meant as a device to demonstrate the buoyant effect of gases like air. A dasymeter which allows weighing acts as a densimeter used to measure the density of gases.
Principle
Archimedes' principle makes it possible to derive a formula which does not rely on any information about volume: a sample of known mass density (the sphere) is weighed in vacuum and then immersed into the gas and weighed again, so that the buoyant force reduces its apparent weight to
$$W_{\text{gas}} = W_{\text{vacuum}} - \rho_{\text{gas}}\, V g.$$
(The above formula is simply the buoyancy relation and still has to be solved for the density of the gas.)
From the known mass density of the sample (sphere), $\rho_{\text{sample}} = m/V$ with $W_{\text{vacuum}} = m g$, and its two weight values, the mass density of the gas can be calculated as
$$\rho_{\text{gas}} = \frac{W_{\text{vacuum}} - W_{\text{gas}}}{W_{\text{vacuum}}}\,\rho_{\text{sample}}.$$
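A short numerical sketch of that calculation follows; the figures are hypothetical and only illustrate the arithmetic.

```python
# rho_gas = rho_sample * (W_vacuum - W_gas) / W_vacuum
def gas_density(rho_sample, w_vacuum, w_gas):
    return rho_sample * (w_vacuum - w_gas) / w_vacuum

# Hypothetical: a 2500 kg/m^3 glass sphere whose weight reading drops by 0.05 %
# when weighed in the gas rather than in vacuum.
print(gas_density(2500.0, 1.0000, 0.9995))  # ~1.25 kg/m^3
```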
Construction and use
It consists of a thin sphere made of glass, ideally with an average density close to that of the gas to be investigated. This sphere is immersed in the gas and weighed.
History of the dasymeter
The dasymeter was invented in 1650 by Otto von Guericke. Archimedes used a pair of scales which he immersed into water to demonstrate the buoyant effect of water. A dasymeter can be seen as a variant of that pair of scales, only immersed into gas.
External links
Volume Conversion
Measuring instruments
Laboratory equipment
Laboratory glassware
|
https://en.wikipedia.org/wiki/Signage
|
Signage is the design or use of signs and symbols to communicate a message. Signage also refers to signs collectively, or to signs considered as a group. The term signage is documented to have been popularized between 1975 and 1980.
Signs are any kind of visual graphics created to display information to a particular audience. This is typically manifested in the form of wayfinding information in places such as streets or on the inside and outside buildings. Signs vary in form and size based on location and intent, from more expansive banners, billboards, and murals, to smaller street signs, street name signs, sandwich boards and lawn signs. Newer signs may also use digital or electronic displays.
The main purpose of signs is to communicate, to convey information designed to assist the receiver with decision-making based on the information provided. Alternatively, promotional signage may be designed to persuade receivers of the merits of a given product or service. Signage is distinct from labeling, which conveys information about a particular product or service.
Definition and etymology
The term 'sign' comes from the Old French signe (noun) and signer (verb), meaning a gesture or a motion of the hand. This, in turn, stems from the Latin signum, indicating an "identifying mark, token, indication, symbol; proof; military standard, ensign; a signal, an omen; sign in the heavens, constellation." In English, the term is also associated with a flag or ensign. In France, a banner not infrequently took the place of signs or sign boards in the Middle Ages. Signs, however, are best known in the form of painted or carved signboards for shops, inns, cinemas, etc. They are one of various emblematic methods for publicly calling attention to the place to which they refer.
The term, 'signage' appears to have come into use in the 20th century as a collective noun used to describe a class of signs, especially advertising and promotional signs which came to prominence in the first decades of the twentieth century.
|
https://en.wikipedia.org/wiki/Darboux%27s%20theorem%20%28analysis%29
|
In mathematics, Darboux's theorem is a theorem in real analysis, named after Jean Gaston Darboux. It states that every function that results from the differentiation of another function has the intermediate value property: the image of an interval is also an interval.
When ƒ is continuously differentiable (ƒ in C1([a,b])), this is a consequence of the intermediate value theorem. But even when ƒ′ is not continuous, Darboux's theorem places a severe restriction on what it can be.
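A standard example, added here for concreteness, is

$$f(x) = \begin{cases} x^2 \sin(1/x), & x \neq 0,\\ 0, & x = 0, \end{cases} \qquad f'(x) = \begin{cases} 2x\sin(1/x) - \cos(1/x), & x \neq 0,\\ 0, & x = 0, \end{cases}$$

where ƒ′ is not continuous at 0 and yet, being a derivative, still takes every value between any two of its values on an interval.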
Darboux's theorem
Let $I$ be a closed interval and $f\colon I \to \mathbb{R}$ a real-valued differentiable function. Then $f'$ has the intermediate value property: if $a$ and $b$ are points in $I$ with $a < b$, then for every $y$ between $f'(a)$ and $f'(b)$, there exists an $x$ in $[a,b]$ such that $f'(x) = y$.
Proofs
Proof 1. The first proof is based on the extreme value theorem.
If $y$ equals $f'(a)$ or $f'(b)$, then setting $x$ equal to $a$ or $b$, respectively, gives the desired result. Now assume that $y$ is strictly between $f'(a)$ and $f'(b)$, and in particular that $f'(a) > y > f'(b)$. Let $\varphi(t) = f(t) - yt$, so that $\varphi'(t) = f'(t) - y$. If it is the case that $f'(a) < y < f'(b)$, we adjust our proof below, instead asserting that $\varphi$ has its minimum on $[a,b]$.
Since $\varphi$ is continuous on the closed interval $[a,b]$, the maximum value of $\varphi$ on $[a,b]$ is attained at some point in $[a,b]$, according to the extreme value theorem.
Because $\varphi'(a) = f'(a) - y > 0$, we know $\varphi$ cannot attain its maximum value at $a$. (If it did, then $\bigl(\varphi(t) - \varphi(a)\bigr)/(t - a) \leq 0$ for all $t \in (a,b]$, which implies $\varphi'(a) \leq 0$.)
Likewise, because $\varphi'(b) = f'(b) - y < 0$, we know $\varphi$ cannot attain its maximum value at $b$.
Therefore, $\varphi$ must attain its maximum value at some interior point $x \in (a,b)$. Hence, by Fermat's theorem, $\varphi'(x) = 0$, i.e. $f'(x) = y$.
Proof 2. The second proof is based on combining the mean value theorem and the intermediate value theorem.
Define $c = \tfrac{1}{2}(a + b)$.
For $a \leq t \leq c$, define $\alpha(t) = a$ and $\beta(t) = 2t - a$.
And for $c \leq t \leq b$, define $\alpha(t) = 2t - b$ and $\beta(t) = b$.
Thus, for $t \in (a,b)$ we have $a \leq \alpha(t) < \beta(t) \leq b$.
Now, define $g(t) = \dfrac{f(\beta(t)) - f(\alpha(t))}{\beta(t) - \alpha(t)}$ with $t \in (a,b)$.
$g$ is continuous in $(a,b)$.
Furthermore, $g(t) \to f'(a)$ when $t \to a$ and $g(t) \to f'(b)$ when $t \to b$; therefore, from the Intermediate Value Theorem, if $y \in \bigl(f'(a), f'(b)\bigr)$ then there exists $t_0 \in (a,b)$ such that $g(t_0) = y$.
Let's fix $t_0$.
From the Mean Value Theorem, there exists a point $x \in \bigl(\alpha(t_0), \beta(t_0)\bigr)$ such that $f'(x) = g(t_0)$.
Hence, $f'(x) = y$.
Darboux function
A Darboux function is a real-valued function ƒ which has the "intermediate value property": for
|
https://en.wikipedia.org/wiki/Saltation%20%28biology%29
|
In biology, saltation () is a sudden and large mutational change from one generation to the next, potentially causing single-step speciation. This was historically offered as an alternative to Darwinism. Some forms of mutationism were effectively saltationist, implying large discontinuous jumps.
Speciation, such as by polyploidy in plants, can sometimes be achieved in a single and in evolutionary terms sudden step. Evidence exists for various forms of saltation in a variety of organisms.
History
Prior to Charles Darwin most evolutionary scientists had been saltationists. Jean-Baptiste Lamarck was a gradualist but similar to other scientists of the period had written that saltational evolution was possible. Étienne Geoffroy Saint-Hilaire endorsed a theory of saltational evolution that "monstrosities could become the founding fathers (or mothers) of new species by instantaneous transition from one form to the next." Geoffroy wrote that environmental pressures could produce sudden transformations to establish new species instantaneously. In 1864 Albert von Kölliker revived Geoffroy's theory that evolution proceeds by large steps, under the name of heterogenesis.
With the publication of On the Origin of Species in 1859 Charles Darwin wrote that most evolutionary changes proceeded gradually but he did not deny the existence of jumps.
From 1860 to 1880 saltation had a minority interest, but by 1890 it had become a major interest to scientists. In their paper on evolutionary theories in the 20th century, Levit et al. wrote:
The advocates of saltationism deny the Darwinian idea of slowly and gradually growing divergence of character as the only source of evolutionary progress. They would not necessarily completely deny gradual variation, but claim that cardinally new ‘body plans’ come into being as a result of saltations (sudden, discontinuous and crucial changes, for example, the series of macromutations). The latter are responsible for the sudden appearance of new higher
|
https://en.wikipedia.org/wiki/Peter%20B.%20Andrews
|
Peter Bruce Andrews (born 1937) is an American mathematician and Professor of Mathematics, Emeritus at Carnegie Mellon University in Pittsburgh, Pennsylvania, and the creator of the mathematical logic Q0. He received his Ph.D. from Princeton University in 1964 under the tutelage of Alonzo Church. He received the Herbrand Award in 2003. His research group designed the TPS automated theorem prover. A subsystem ETPS (Educational Theorem Proving System) of TPS is used to help students learn logic by interactively constructing natural deduction proofs.
Publications
Andrews, Peter B. (1965). A Transfinite Type Theory with Type Variables. North Holland Publishing Company, Amsterdam.
Andrews, Peter B. (1971). "Resolution in type theory". Journal of Symbolic Logic 36, 414–432.
Andrews, Peter B. (1981). "Theorem proving via general matings". J. Assoc. Comput. Mach. 28, no. 2, 193–214.
Andrews, Peter B. (1986). An introduction to mathematical logic and type theory: to truth through proof. Computer Science and Applied Mathematics. Academic Press, Inc., Orlando, FL.
Andrews, Peter B. (1989). "On connections and higher-order logic". J. Automat. Reason. 5, no. 3, 257–291.
Andrews, Peter B.; Bishop, Matthew; Issar, Sunil; Nesmith, Dan; Pfenning, Frank; Xi, Hongwei (1996). "TPS: a theorem-proving system for classical type theory". J. Automat. Reason. 16, no. 3, 321–353.
Andrews, Peter B. (2002). An introduction to mathematical logic and type theory: to truth through proof. Second edition. Applied Logic Series, 27. Kluwer Academic Publishers, Dordrecht.
References
External links
Peter B. Andrews
1937 births
Living people
20th-century American mathematicians
21st-century American mathematicians
American logicians
Mathematical logicians
Carnegie Mellon University faculty
Princeton University alumni
|
https://en.wikipedia.org/wiki/International%20Society%20of%20Biometeorology
|
The International Society of Biometeorology (ISB) is a professional society for scientists interested in biometeorology, specifically environmental and ecological aspects of the interaction of the atmosphere and biosphere. The organization's stated purpose is: "to provide one international organization for the promotion of interdisciplinary collaboration of meteorologists, physicians, physicists, biologists, climatologists, ecologists and other scientists and to promote the development of Biometeorology".
The International Society of Biometeorology was founded in 1956 at UNESCO headquarters in Paris, France, by S. W. Tromp, a Dutch geologist, H. Ungeheuer, a German meteorologist, and several human physiologists of which F. Sargent II of the United States became the first President of the society.
ISB affiliated organizations include: the International Association for Urban Climate, the International Society for Agricultural Meteorology, the International Union of Biological Sciences, the World Health Organization, and the World Meteorological Organization. ISB affiliate members include: the American Meteorological Society, the Centre for Renewable Energy Sources, the German Meteorological Society, the Society for the Promotion of Medicine-Meteorological Research e.V., International Society of Medical Hydrology and Climatology, and the UK Met Office.
Publications
ISB publishes the following journals:
Bulletin of the Society of Biometeorology
International Journal of Biometeorology
References
External links
Biometeorology
International scientific organizations
Meteorological societies
Climatological research organizations
Biology organizations
International medical associations
|
https://en.wikipedia.org/wiki/BitLocker
|
BitLocker is a full volume encryption feature included with Microsoft Windows versions starting with Windows Vista. It is designed to protect data by providing encryption for entire volumes. By default, it uses the Advanced Encryption Standard (AES) algorithm in cipher block chaining (CBC) or "xor–encrypt–xor (XEX)-based Tweaked codebook mode with ciphertext Stealing" (XTS) mode with a 128-bit or 256-bit key. CBC is not used over the whole disk; it is applied to each individual sector.
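The per-sector use of a tweakable mode can be sketched as follows; this example uses the pyca/cryptography package and a tweak derived directly from the sector number, which illustrates XTS-mode sector encryption in general and is not BitLocker's actual key management or tweak derivation.

```python
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

SECTOR_SIZE = 512

def encrypt_sector(xts_key: bytes, sector_number: int, plaintext: bytes) -> bytes:
    # AES-256-XTS takes a double-length (64-byte) key and a 16-byte tweak;
    # here the tweak is simply the little-endian sector number.
    assert len(plaintext) == SECTOR_SIZE
    tweak = sector_number.to_bytes(16, "little")
    encryptor = Cipher(algorithms.AES(xts_key), modes.XTS(tweak)).encryptor()
    return encryptor.update(plaintext) + encryptor.finalize()

key = bytes(range(64))  # placeholder key, for illustration only
ciphertext = encrypt_sector(key, 42, b"\x00" * SECTOR_SIZE)
```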
History
BitLocker originated as a part of Microsoft's Next-Generation Secure Computing Base architecture in 2004 as a feature tentatively codenamed "Cornerstone" and was designed to protect information on devices, particularly if a device was lost or stolen. Another feature, titled "Code Integrity Rooting", was designed to validate the integrity of Microsoft Windows boot and system files. When used in conjunction with a compatible Trusted Platform Module (TPM), BitLocker can validate the integrity of boot and system files before decrypting a protected volume; an unsuccessful validation will prohibit access to a protected system. BitLocker was briefly called Secure Startup before Windows Vista's release to manufacturing.
BitLocker is available on:
Enterprise and Ultimate editions of Windows Vista and Windows 7
Pro and Enterprise editions of Windows 8 and 8.1
Windows Server 2008 and later
Pro, Enterprise, and Education editions of Windows 10
Pro, Enterprise, and Education editions of Windows 11
Features
Initially, the graphical BitLocker interface in Windows Vista could only encrypt the operating system volume. Starting with Windows Vista with Service Pack 1 and Windows Server 2008, volumes other than the operating system volume could be encrypted using the graphical tool. Still, some aspects of BitLocker (such as turning autolocking on or off) had to be managed through a command-line tool called manage-bde.wsf.
The version of BitLocker included in Windows 7 and Windows S
|
https://en.wikipedia.org/wiki/Uranium%20Information%20Centre
|
The Uranium Information Centre (UIC) was an Australian organisation primarily concerned with increasing the public understanding of uranium mining and nuclear electricity generation.
Founded in 1978, the Centre worked for many years to provide information about the development of the Australian uranium industry, the contribution it can make to world energy supplies and the benefits it can bring Australia. It was a broker of information on all aspects of the mining and processing of uranium, the nuclear fuel cycle, and the role of nuclear energy in helping to meet world electricity demand.
The Centre was funded by companies involved in uranium exploration, mining and export in Australia.
In 1995 Ian Hore-Lacy assumed the role of General Manager of the UIC, a position he held until 2001. The UIC's website was established in the year of his appointment. After leaving the UIC, Ian Hore-Lacy went on to work for the World Nuclear Association (WNA) as Director of Public Information for 12 years and as of 2015 he continues to work there as a Senior Research Analyst. In the late 2000s, the UIC's main information-providing function was assumed by the WNA and World Nuclear News (WNN), based in London, UK.
In 2008 the UIC's purely domestic function was taken over by the Australian Uranium Association, and was subsequently absorbed by the Minerals Council of Australia's uranium portfolio in 2013.
See also
List of uranium mines
World Uranium Hearing
Uranium mining debate
External links
World Nuclear Association Homepage
World Nuclear News Homepage
Australian Uranium Association Homepage
Australian educational websites
Organizations established in 1978
1978 establishments in Australia
Nuclear organizations
Uranium mining in Australia
|
https://en.wikipedia.org/wiki/Specific%20strength
|
The specific strength is a material's (or muscle's) strength (force per unit area at failure) divided by its density. It is also known as the strength-to-weight ratio or strength/weight ratio or strength-to-mass ratio. In fiber or textile applications, tenacity is the usual measure of specific strength. The SI unit for specific strength is Pa⋅m3/kg, or N⋅m/kg, which is dimensionally equivalent to m2/s2, though the latter form is rarely used. Specific strength has the same units as specific energy, and is related to the maximum specific energy of rotation that an object can have without flying apart due to centrifugal force.
Another way to describe specific strength is breaking length, also known as self support length: the maximum length of a vertical column of the material (assuming a fixed cross-section) that could suspend its own weight when supported only at the top. For this measurement, the definition of weight is the force of gravity at the Earth's surface (standard gravity, 9.80665 m/s2) applying to the entire length of the material, not diminishing with height. This usage is more common with certain specialty fiber or textile applications.
The materials with the highest specific strengths are typically fibers such as carbon fiber, glass fiber and various polymers, and these are frequently used to make composite materials (e.g. carbon fiber-epoxy). These materials and others such as titanium, aluminium, magnesium and high strength steel alloys are widely used in aerospace and other applications where weight savings are worth the higher material cost.
Note that strength and stiffness are distinct. Both are important in design of efficient and safe structures.
Calculations of breaking length
$$L = \frac{\sigma}{\rho g},$$
where $L$ is the length, $\sigma$ is the tensile strength, $\rho$ is the density and $g$ is the acceleration due to gravity ($\approx 9.8$ m/s$^2$).
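A minimal sketch of this calculation, with rough, illustrative material figures rather than authoritative values:

```python
G = 9.80665  # standard gravity, m/s^2

def breaking_length_km(tensile_strength_pa: float, density_kg_m3: float) -> float:
    # L = sigma / (rho * g), converted to kilometres
    return tensile_strength_pa / (density_kg_m3 * G) / 1000.0

# e.g. a hypothetical steel of 2 GPa strength and 7850 kg/m^3 density
print(round(breaking_length_km(2.0e9, 7850.0), 1), "km")  # roughly 26 km
```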
Examples
The data in this table are taken from best cases and are intended only to give a rough figure.
Note: Multiwalled carbon nanotubes have the h
|
https://en.wikipedia.org/wiki/Laurel%20wreath
|
A laurel wreath is a round wreath made of connected branches and leaves of the bay laurel (Laurus nobilis), an aromatic broadleaf evergreen, or later from spineless butcher's broom (Ruscus hypoglossum) or cherry laurel (Prunus laurocerasus). It is a symbol of triumph and is worn as a chaplet around the head, or as a garland around the neck.
Wreaths and crowns in antiquity, including the laurel wreath, trace back to Ancient Greece. In Greek mythology, the god Apollo, who is patron of lyrical poetry, musical performance
and skill-based athletics, is conventionally depicted wearing a laurel wreath on his head in all three roles. Wreaths were awarded to victors in athletic competitions, including the ancient Olympics; for victors in athletics they were made of the wild olive known as "kotinos" (sc. at Olympia), and the same for winners of musical and poetic competitions. In Rome they were symbols of martial victory, crowning a successful commander during his triumph. Whereas ancient laurel wreaths are most often depicted as a horseshoe shape, modern versions are usually complete rings.
In common modern idiomatic usage, a laurel wreath or "crown" refers to a victory. The expression "resting on one's laurels" refers to someone relying entirely on long-past successes for continued fame or recognition, where to "look to one's laurels" means to be careful of losing rank to competition.
Background
Apollo, the patron of sport, is associated with the wearing of a laurel wreath. This association arose from the ancient Greek mythology story of Apollo and Daphne. Apollo mocked the god of love, Eros (Cupid), for his use of bow and arrow, since Apollo is also patron of archery. The insulted Eros then prepared two arrows—one of gold and one of lead. He shot Apollo with the gold arrow, instilling in the god a passionate love for the river nymph Daphne. He shot Daphne with the lead arrow, instilling in her a hatred of Apollo. Apollo pursued Daphne until she begged to be free of him and w
|
https://en.wikipedia.org/wiki/Li%C3%A9nard%20equation
|
In mathematics, more specifically in the study of dynamical systems and differential equations, a Liénard equation is a second order differential equation, named after the French physicist Alfred-Marie Liénard.
During the development of radio and vacuum tube technology, Liénard equations were intensely studied as they can be used to model oscillating circuits. Under certain additional assumptions Liénard's theorem guarantees the uniqueness and existence of a limit cycle for such a system. A Liénard system with piecewise-linear functions can also contain homoclinic orbits.
Definition
Let $f$ and $g$ be two continuously differentiable functions on $\mathbb{R}$, with $f$ an even function and $g$ an odd function. Then the second order ordinary differential equation of the form
$$\frac{d^2x}{dt^2} + f(x)\frac{dx}{dt} + g(x) = 0$$
is called a Liénard equation.
Liénard system
The equation can be transformed into an equivalent two-dimensional system of ordinary differential equations. We define
$$F(x) := \int_0^x f(\xi)\,d\xi, \qquad x_1 := x, \qquad x_2 := \frac{dx}{dt} + F(x),$$
then
$$\begin{cases} \dot{x}_1 = x_2 - F(x_1) \\ \dot{x}_2 = -g(x_1) \end{cases}$$
is called a Liénard system.
Alternatively, since the Liénard equation itself is also an autonomous differential equation, the substitution $v = \frac{dx}{dt}$ leads the Liénard equation to become a first order differential equation:
$$v\frac{dv}{dx} + f(x)v + g(x) = 0,$$
which is an Abel equation of the second kind.
Example
The Van der Pol oscillator
$$\frac{d^2x}{dt^2} - \mu(1 - x^2)\frac{dx}{dt} + x = 0$$
is a Liénard equation, with $f(x) = -\mu(1 - x^2)$ and $g(x) = x$. The solution of a Van der Pol oscillator has a limit cycle. Such a cycle has a solution of a Liénard equation with $f(x)$ negative at small $|x|$ and positive otherwise. The Van der Pol equation has no exact, analytic solution. Such a solution for a limit cycle exists if $f(x)$ is a constant piecewise function.
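A numerical sketch of the Van der Pol oscillator written as a first-order system follows; SciPy and the parameter value μ = 1 are assumptions of this illustration.

```python
from scipy.integrate import solve_ivp

MU = 1.0

def van_der_pol(t, y):
    x, v = y
    # x' = v,  v' = mu*(1 - x^2)*v - x
    return [v, MU * (1.0 - x**2) * v - x]

sol = solve_ivp(van_der_pol, (0.0, 50.0), [0.1, 0.0], max_step=0.05)
print(sol.y[:, -3:])  # late-time states lie on the limit cycle
```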
Liénard's theorem
A Liénard system has a unique and stable limit cycle surrounding the origin if it satisfies the following additional properties:
g(x) > 0 for all x > 0;
F(x) has exactly one positive root at some value p, where F(x) < 0 for 0 < x < p and F(x) > 0 and monotonic for x > p.
See also
Autonomous differential equation
Abel equation of the second kind
Biryukov equation
Footnotes
External links
Dynamical systems
D
|
https://en.wikipedia.org/wiki/Norton%20Zinder
|
Norton David Zinder (November 7, 1928 – February 3, 2012) was an American biologist famous for his discovery of genetic transduction. Zinder was born in New York City, received his A.B. from Columbia University in 1947, Ph.D. from the University of Wisconsin–Madison in 1952, and became a member of the National Academy of Sciences in 1969. He led a lab at Rockefeller University until shortly before his death.
In 1966 he was awarded the NAS Award in Molecular Biology from the National Academy of Sciences.
Genetic transduction and RNA bacteriophage
Working as a graduate student with Joshua Lederberg, Zinder discovered that a bacteriophage can carry genes from one bacterium to another. Initial experiments were carried out using Salmonella. Zinder and Lederberg named this process of genetic exchange transduction.
Later, Zinder discovered the first bacteriophage that contained RNA as its genetic material. At that time, Harvey Lodish (now of the Massachusetts Institute of Technology and Whitehead Institute for Biomedical Research) worked in his lab.
Norton Zinder died in 2012 of pneumonia after a long illness.
References
Further reading
Papers authored by Norton Zinder
Laboratory of Genetics at Rockefeller University
Historical plaque at UW–Madison noting Zinder's contribution to molecular genetics
Biography of Norton Zinder
1928 births
2012 deaths
American microbiologists
Rockefeller University faculty
The Bronx High School of Science alumni
University of Wisconsin–Madison alumni
Phage workers
Members of the United States National Academy of Sciences
Human Genome Project scientists
Scientists from New York City
Columbia College (New York) alumni
|
https://en.wikipedia.org/wiki/Fire%20protection
|
Fire protection is the study and practice of mitigating the unwanted effects of potentially destructive fires. It involves the study of the behaviour, compartmentalisation, suppression and investigation of fire and its related emergencies, as well as the research and development, production, testing and application of mitigating systems. In structures, be they land-based, offshore or even ships, the owners and operators are responsible to maintain their facilities in accordance with a design-basis that is rooted in laws, including the local building code and fire code, which are enforced by the authority having jurisdiction.
Buildings must be constructed in accordance with the version of the building code that is in effect when an application for a building permit is made. Building inspectors check on compliance of a building under construction with the building code. Once construction is complete, a building must be maintained in accordance with the current fire code, which is enforced by the fire prevention officers of a local fire department. In the event of fire emergencies, firefighters, fire investigators, and other fire prevention personnel are called to mitigate, investigate and learn from the damage of a fire. Lessons learned from fires are applied to the authoring of both building codes and fire codes.
Classifying fires
When deciding on what fire protection is appropriate for any given situation, it is important to assess the types of fire hazards that may be faced.
Some jurisdictions operate systems of classifying fires using code letters. Whilst these may agree on some classifications, they also vary. Below is a table showing the standard operated in Europe and Australia against the system used in the United States.
1 Technically there is no such thing as a "Class E" fire, as electricity itself does not burn. However, it is considered a dangerous and very deadly complication to a fire, therefore using the incorrect extinguishing method can result in
|
https://en.wikipedia.org/wiki/Acidophobe
|
An acidophobe is an organism that is intolerant of acidic environments. The terms acidophobia, acidophoby and acidophobic are also used. The term acidophobe is variously applied to plants, bacteria, protozoa, animals, chemical compounds, etc. The antonymous term is acidophile.
Plants are known to be well-defined with respect to their pH tolerance, and only a small number of species thrive well under a broad range of acidity. Therefore, the categorization acidophile/acidophobe is well-defined. Sometimes a complementary classification is used (calcicole/calcifuge, with calcicoles being "lime-loving" plants). In gardening, soil pH is a measure of the acidity or alkalinity of soil, with pH = 7 indicating neutral soil. Therefore, acidophobes prefer a pH above 7. Acid intolerance of plants may be mitigated by lime addition and by calcium and nitrogen fertilizers.
Acidophobic species are used as a natural instrument of monitoring the degree of acidifying contamination of soil and watercourses. For example, when monitoring vegetation, a decrease of acidophobic species would be indicative of acid rain increase in the area. A similar approach is used with aquatic species.
Acidophobes
Whiteworms (Enchytraeus albidus), a popular live food for aquarists, are acidophobes.
Acidophobic compounds are the ones which are unstable in acidic media.
Acidophobic crops: alfalfa, clover
References
Physiology
|
https://en.wikipedia.org/wiki/T/TCP
|
T/TCP (Transactional Transmission Control Protocol) was a variant of the Transmission Control Protocol (TCP).
It was an experimental TCP extension for efficient transaction-oriented (request/response) service.
It was developed to fill the gap between TCP and UDP, by Bob Braden in 1994.
Its definition can be found in RFC 1644 (which obsoletes RFC 1379). It is faster than TCP, and its delivery reliability is comparable to that of TCP.
T/TCP suffers from several major security problems as described by Charles Hannum in September 1996. It has not gained widespread popularity.
RFC 1379 and RFC 1644 that define T/TCP were moved to Historic Status in May 2011 by RFC 6247 for security reasons.
Alternatives
TCP Fast Open is a more recent alternative.
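A hedged client-side sketch of TCP Fast Open on Linux follows, assuming a Python build that exposes socket.MSG_FASTOPEN and a kernel and server configured for TFO; host, port and payload are placeholders.

```python
import socket

def tfo_request(host: str, port: int, payload: bytes) -> bytes:
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    try:
        # sendto with MSG_FASTOPEN combines the connect with the first data,
        # letting the payload travel in the SYN when a TFO cookie is cached.
        s.sendto(payload, socket.MSG_FASTOPEN, (host, port))
        return s.recv(4096)
    finally:
        s.close()
```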
See also
TCP Cookie Transactions
Further reading
Richard Stevens, Gary Wright, "TCP/IP Illustrated: TCP for transactions, HTTP, NNTP, and the UNIX domain protocols" (Volume 3 of TCP/IP Illustrated) // Addison-Wesley, 1996 (), 2000 (). Part 1 "TCP for Transactions". Chapters 1-12, pages 1–159
References
External links
Example exploit of T/TCP in a post to Bugtraq by Vasim Valejev
TCP extensions
Internet Standards
|
https://en.wikipedia.org/wiki/Operation%20%28mathematics%29
|
In mathematics, an operation is a function which takes zero or more input values (also called "operands" or "arguments") to a well-defined output value. The number of operands is the arity of the operation.
The most commonly studied operations are binary operations (i.e., operations of arity 2), such as addition and multiplication, and unary operations (i.e., operations of arity 1), such as additive inverse and multiplicative inverse. An operation of arity zero, or nullary operation, is a constant. The mixed product is an example of an operation of arity 3, also called ternary operation.
Generally, the arity is taken to be finite. However, infinitary operations are sometimes considered, in which case the "usual" operations of finite arity are called finitary operations.
A partial operation is defined similarly to an operation, but with a partial function in place of a function.
Types of operation
There are two common types of operations: unary and binary. Unary operations involve only one value, such as negation and trigonometric functions. Binary operations, on the other hand, take two values, and include addition, subtraction, multiplication, division, and exponentiation.
Operations can involve mathematical objects other than numbers. The logical values true and false can be combined using logic operations, such as and, or, and not. Vectors can be added and subtracted. Rotations can be combined using the function composition operation, performing the first rotation and then the second. Operations on sets include the binary operations union and intersection and the unary operation of complementation. Operations on functions include composition and convolution.
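A small sketch of such operations treated as functions of a given arity; the names are ad hoc and chosen only for this illustration.

```python
from typing import Callable, TypeVar

T = TypeVar("T")

def compose(f: Callable[[T], T], g: Callable[[T], T]) -> Callable[[T], T]:
    """Binary operation on functions: apply g first, then f."""
    return lambda x: f(g(x))

def union(a: frozenset, b: frozenset) -> frozenset:       # binary operation on sets
    return a | b

def complement(a: frozenset, universe=frozenset(range(10))) -> frozenset:  # unary operation
    return universe - a

print(union(frozenset({1, 2}), frozenset({2, 3})))   # frozenset({1, 2, 3})
print(complement(frozenset({0, 1, 2})))              # the remaining elements of the universe
double_then_increment = compose(lambda x: x + 1, lambda x: 2 * x)
print(double_then_increment(3))                      # 7
```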
An operation may not be defined for every possible value of its domain. For example, in the real numbers one cannot divide by zero or take square roots of negative numbers. The values for which an operation is defined form a set called its domain of definition or active domain. The set which contains the
|
https://en.wikipedia.org/wiki/Bonox
|
Bonox is a beef extract made in Australia, currently owned by Bega Cheese after it acquired the brand from Kraft Heinz in 2017. It is primarily a drink but can also be used as stock in cooking.
History
Bonox was invented by Camron Thomas for Fred Walker of Fred Walker & Co. in 1918. Bonox was launched the following year.
The Walker company was purchased by Kraft Foods Inc. sometime after Walker's death in 1935. The product was produced by Kraft (from 2012 Kraft Foods, from 2015 Kraft Heinz) until 2017, when Bonox, along with other brands, was sold to Bega Cheese. It kept the same recipe and jar designs.
Bonox continues to be produced by Bega.
Nutritional information
This concentrated beef extract contains iron and niacin. It is a thick dark brown liquid paste which can be added to soups or stews for flavoring and can also be added to hot water and served as a beverage.
Approximate per 100g
Energy, including dietary fiber 401 kJ
Moisture 56.6 g
Protein 16.6 g
Nitrogen 2.66 g
Fat 0.2 g
Ash 19.8 g
Starch 6.5 g
Available carbohydrate, without sugar alcohols 6.5 g
Available carbohydrate, with sugar alcohols 6.5 g
Minerals
Calcium (Ca) 110 mg
Copper (Cu) 0.11 mg
Fluoride (F) 190 ug
Iron (Fe) 2 mg
Magnesium (Mg) 60 mg
Manganese (Mn) 0.13 mg
Phosphorus (P) 360 mg
Potassium (K) 690 mg
Selenium (Se) 4 ug
Sodium (Na) 6660 mg
Sulphur (S) 160 mg
Zinc (Zn) 1.5 mg
Vitamins
Thiamin (B1) 0.36 mg
Riboflavin (B2) 0.27 mg
Niacin (B3) 5.4 mg
Niacin Equivalents 8.17 mg
Pantothenic acid (B5) 0.38 mg
Pyridoxine (B6) 0.23 mg
Biotin (B7) 12 ug
See also
Bovril
References
Products introduced in 1919
Food ingredients
Australian brands
|
https://en.wikipedia.org/wiki/Biodemography%20of%20human%20longevity
|
Biodemography is a multidisciplinary approach, integrating biological knowledge (studies on human biology and animal models) with demographic research on human longevity and survival. Biodemographic studies are important for understanding the driving forces of the current longevity revolution (dramatic increase in human life expectancy), forecasting the future of human longevity, and identification of new strategies for further increase in healthy and productive life span.
Theory
Biodemographic studies have found a remarkable similarity in survival dynamics between humans and laboratory animals. Specifically, three general biodemographic laws of survival are found:
Gompertz–Makeham law of mortality
Compensation law of mortality
Late-life mortality deceleration (now disputed)
The Gompertz–Makeham law states that death rate is a sum of an age-independent component (Makeham term) and an age-dependent component (Gompertz function), which increases exponentially with age.
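In symbols, using conventional parameter names added here for illustration, the law expresses the death rate at age $x$ as

$$\mu(x) = A + R\,e^{\alpha x},$$

where $A$ is the age-independent Makeham term and $R\,e^{\alpha x}$ is the exponentially increasing Gompertz component.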
The compensation law of mortality (late-life mortality convergence) states that the relative differences in death rates between different populations of the same biological species are decreasing with age, because the higher initial death rates are compensated by lower pace of their increase with age.
The disputed late-life mortality deceleration law states that death rates stop increasing exponentially at advanced ages and level off to the late-life mortality plateau. A consequence of this deceleration is that there would be no fixed upper limit to human longevity, that is, no fixed number which separates possible and impossible values of lifespan. If true, this would challenge the common belief in the existence of a fixed maximum human life span.
Biodemographic studies have found that even genetically identical laboratory animals kept in constant environment have very different lengths of life, suggesting a crucial role of chance and early-life developmental noise in longevity determination. This leads
|
https://en.wikipedia.org/wiki/High%20Assurance%20Internet%20Protocol%20Encryptor
|
A High Assurance Internet Protocol Encryptor (HAIPE) is a Type 1 encryption device that complies with the National Security Agency's HAIPE IS (formerly the HAIPIS, the High Assurance Internet Protocol Interoperability Specification). The cryptography used is Suite A and Suite B, also specified by the NSA as part of the Cryptographic Modernization Program. HAIPE IS is based on IPsec with additional restrictions and enhancements. One of these enhancements includes the ability to encrypt multicast data using a "preplaced key" (see definition in List of cryptographic key types). This requires loading the same key on all HAIPE devices that will participate in the multicast session in advance of data transmission. A HAIPE is typically a secure gateway that allows two enclaves to exchange data over an untrusted or lower-classification network.
Examples of HAIPE devices include:
L3Harris Technologies' Encryption Products
KG-245X 10Gbit/s (HAIPE IS v3.1.2 and Foreign Interoperable),
KG-245A fully tactical 1 Gbit/s (HAIPE IS v3.1.2 and Foreign Interoperable)
RedEagle
ViaSat's AltaSec Products
KG-250, and
KG-255 [1 Gbit/s]
General Dynamics Mission Systems TACLANE Products
FLEX (KG-175F)
10G (KG-175X)
Nano (KG-175N)
Airbus Defence & Space ECTOCRYP Transparent Cryptography
Three of these devices are compliant with the HAIPE IS v3.0.2 specification, while the remaining devices use HAIPE IS version 1.3.5, which has a couple of notable limitations: limited support for routing protocols and for open network management.
A HAIPE is an IP encryption device, looking up the destination IP address of a packet in its internal Security Association Database (SAD) and picking the encrypted tunnel based on the appropriate entry. For new communications, HAIPEs use the internal Security Policy Database (SPD) to set up new tunnels with the appropriate algorithms and settings. Due to lack of support for modern commercial routing protocols the HAIPEs often must be preprogrammed with sta
|
https://en.wikipedia.org/wiki/Predictive%20analytics
|
Predictive analytics is a form of business analytics applying machine learning to generate a predictive model for certain business applications. As such, it encompasses a variety of statistical techniques from predictive modeling and machine learning that analyze current and historical facts to make predictions about future or otherwise unknown events. It represents a major subset of machine learning applications; in some contexts, it is synonymous with machine learning.
In business, predictive models exploit patterns found in historical and transactional data to identify risks and opportunities. Models capture relationships among many factors to allow assessment of risk or potential associated with a particular set of conditions, guiding decision-making for candidate transactions.
The defining functional effect of these technical approaches is that predictive analytics provides a predictive score (probability) for each individual (customer, employee, healthcare patient, product SKU, vehicle, component, machine, or other organizational unit) in order to determine, inform, or influence organizational processes that pertain across large numbers of individuals, such as in marketing, credit risk assessment, fraud detection, manufacturing, healthcare, and government operations including law enforcement.
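As a minimal, hypothetical sketch of producing such a score per individual (using scikit-learn; the feature names, data, and the churn scenario are invented for illustration):

```python
# Hypothetical sketch: score each customer with a probability of churning.
# Assumes scikit-learn and NumPy are installed; the data below is invented.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Historical records: [tenure_months, support_tickets]; label 1 = churned.
X_train = np.array([[2, 5], [30, 0], [5, 3], [48, 1], [1, 4], [36, 0]])
y_train = np.array([1, 0, 1, 0, 1, 0])

model = LogisticRegression().fit(X_train, y_train)

# Predictive score (probability of churn) for two new customers.
X_new = np.array([[3, 4], [40, 0]])
scores = model.predict_proba(X_new)[:, 1]
print(scores)  # higher score for the first customer, lower for the second
```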
Definition
Predictive analytics is a set of business intelligence (BI) technologies that uncovers relationships and patterns within large volumes of data that can be used to predict behavior and events. Unlike other BI technologies, predictive analytics is forward-looking, using past events to anticipate the future. Predictive analytics statistical techniques include data modeling, machine learning, AI, deep learning algorithms and data mining. Often the unknown event of interest is in the future, but predictive analytics can be applied to any type of unknown whether it be in the past, present or future. For example, identifying suspects after a crime has been committ
|
https://en.wikipedia.org/wiki/HTML%20email
|
HTML email is the use of a subset of HTML to provide formatting and semantic markup capabilities in email that are not available with plain text: Text can be linked without displaying a URL, or breaking long URLs into multiple pieces. Text is wrapped to fit the width of the viewing window, rather than uniformly breaking each line at 78 characters (defined in RFC 5322, which was necessary on older text terminals). It allows in-line inclusion of images, tables, as well as diagrams or mathematical formulae as images, which are otherwise difficult to convey (typically using ASCII art).
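In practice, HTML email is usually sent together with a plain-text fallback in a multipart/alternative MIME message, so clients that do not render HTML still show something readable. A minimal sketch using Python's standard library (addresses and content are placeholders):

```python
# Minimal sketch: build a multipart/alternative message with a plain-text
# part and an HTML part. Addresses and content are placeholders.
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Monthly report"
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"

# Plain-text fallback for clients that only support text.
msg.set_content("The report is ready: https://example.com/report")

# HTML alternative; HTML-capable clients will prefer this part.
msg.add_alternative(
    "<p>The <a href='https://example.com/report'>report</a> is ready.</p>",
    subtype="html",
)

# The message can now be handed to smtplib.SMTP(...).send_message(msg).
print(msg.as_string()[:200])
```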
Adoption
Most graphical email clients support HTML email, and many default to it. Many of these clients include both a GUI editor for composing HTML emails and a rendering engine for displaying received HTML emails.
Since its conception, a number of people have vocally opposed all HTML email (and even MIME itself), for a variety of reasons. For instance, the ASCII Ribbon Campaign advocated that all email should be sent in ASCII text format. The campaign was unsuccessful and was abandoned in 2013. While still considered inappropriate in many newsgroup postings and mailing lists, its adoption for personal and business mail has only increased over time. Some of those who strongly opposed it when it first came out now see it as mostly harmless.
According to surveys by online marketing companies, adoption of HTML-capable email clients is now nearly universal, with less than 3% reporting that they use text-only clients. The majority of users prefer to receive HTML emails over plain text.
Compatibility
Email software that complies with RFC 2822 is only required to support plain text, not HTML formatting. Sending HTML formatted emails can therefore lead to problems if the recipient's email client does not support it. In the worst case, the recipient will see the HTML code instead of the intended message.
Among those email clients that do support HTML, some do not render it consistently
|
https://en.wikipedia.org/wiki/IEEE%20P802.1p
|
IEEE P802.1p was a task group active from 1995 to 1998, responsible for adding traffic class expediting and dynamic multicast filtering to the IEEE 802.1D standard. The task group developed a mechanism for implementing quality of service (QoS) at the media access control (MAC) level. Although this technique is commonly referred to as IEEE 802.1p, the group's work with the new priority classes and Generic Attribute Registration Protocol (GARP) was not published separately but was incorporated into a major revision of the standard, IEEE 802.1D-1998, which subsequently was incorporated into the IEEE 802.1Q-2014 standard. The work also required a short amendment extending the frame size of the Ethernet standard by four bytes, which was published as IEEE 802.3ac in 1998.
The QoS technique developed by the working group, also known as class of service (CoS), is a 3-bit field called the Priority Code Point (PCP) within an Ethernet frame header when using VLAN tagged frames as defined by IEEE 802.1Q. It specifies a priority value of between 0 and 7 inclusive that can be used by QoS disciplines to differentiate traffic.
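For illustration, the PCP can be read directly from the 16-bit Tag Control Information (TCI) field of an 802.1Q tag, in which it occupies the three most significant bits (a sketch; frame parsing and byte-order handling are omitted):

```python
# Sketch: decode the 802.1Q Tag Control Information (TCI) field.
# TCI layout: PCP (3 bits) | DEI (1 bit) | VLAN ID (12 bits).
def decode_tci(tci: int):
    pcp = (tci >> 13) & 0x7   # Priority Code Point, 0..7
    dei = (tci >> 12) & 0x1   # Drop Eligible Indicator
    vid = tci & 0xFFF         # VLAN identifier
    return pcp, dei, vid

# Example: TCI 0xA00C decodes to PCP 5, DEI 0, VLAN 12.
print(decode_tci(0xA00C))
```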
Priority levels
Eight different classes of service are available as expressed through the 3-bit PCP field in an IEEE 802.1Q header added to the frame. The way traffic is treated when assigned to any particular class is undefined and left to the implementation. The IEEE, however, has made some broad recommendations:
Note that the above recommendations have been in force since IEEE 802.1Q-2005 and were revised from the original recommendations in IEEE 802.1D-2004 to better accommodate differentiated services for IP networking.
See also
IEEE 802.1
IEEE 802.11e
IEEE 802.3
Type of service (ToS)
Ethernet priority flow control
References
External links
IEEE 802.1D-2004 (contains original 802.1p changes - now part of 802.1Q-2014)
IEEE 802.1Q-2014 (incorporates 802.1D)
Quality of service
IEEE 802.1p
Working groups
Ethernet standards
|
https://en.wikipedia.org/wiki/MIK%20%28character%20set%29
|
MIK (МИК) is an 8-bit Cyrillic code page used with DOS. It is based on the character set used in the Bulgarian Pravetz 16 IBM PC compatible system. Kermit calls this character set "BULGARIA-PC" / "bulgaria-pc". In Bulgaria, it was sometimes incorrectly referred to as code page 856 (which clashes with IBM's definition for a Hebrew code page). This code page is known by FreeDOS as Code page 3021.
This is the most widespread DOS/OEM code page used in Bulgaria, rather than CP 808, CP 855, CP 866 or CP 872.
Almost every DOS program created in Bulgaria, which has Bulgarian strings in it, was using MIK as encoding, and many such programs are still in use.
Character set
Each character is shown with its equivalent Unicode code point and its decimal code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as ASCII.
Notes for implementors of mapping tables to Unicode
Implementors of mapping tables to Unicode should note that the MIK Code page unifies some characters:
Binary character manipulations
The MIK code page maintains in alphabetical order all Cyrillic letters which enables very easy character manipulation in binary form:
10xx xxxx - is a Cyrillic Letter
100x xxxx - is an Upper-case Cyrillic Letter
101x xxxx - is a Lower-case Cyrillic Letter
In such case testing and character manipulating functions as:
IsAlpha(), IsUpper(), IsLower(), ToUpper() and ToLower(),
are bit operations and sorting is by simple comparison of character values.
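A sketch of how such functions can be implemented for MIK bytes with plain bit operations (the helper names are illustrative, not from any particular library):

```python
# Illustrative helpers for MIK-encoded bytes (code points 128-255).
# Cyrillic letters are 10xxxxxx: upper case 100xxxxx, lower case 101xxxxx.
def is_cyrillic(b: int) -> bool:
    return (b & 0xC0) == 0x80        # 10xx xxxx

def is_upper(b: int) -> bool:
    return (b & 0xE0) == 0x80        # 100x xxxx

def is_lower(b: int) -> bool:
    return (b & 0xE0) == 0xA0        # 101x xxxx

def to_upper(b: int) -> int:
    return b & ~0x20 if is_lower(b) else b   # clear bit 5

def to_lower(b: int) -> int:
    return b | 0x20 if is_upper(b) else b    # set bit 5

# Because the letters are stored in alphabetical order, sorting Cyrillic
# text is a plain numeric comparison of byte values, e.g. sorted(data).
```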
See also
Hardware code page
References
External links
https://www.unicode.org/Public/MAPPINGS/VENDORS/IBM/IBM_conversions.html Unicode Consortium's mappings between IBM's code pages and Unicode
http://www.cl.cam.ac.uk/~mgk25/unicode.html#conv UTF-8 and Unicode FAQ for Unix/Linux by Markus Kuhn
DOS code pages
Character encoding
|
https://en.wikipedia.org/wiki/P-bodies
|
In cellular biology, P-bodies, or processing bodies, are distinct foci formed by phase separation within the cytoplasm of a eukaryotic cell consisting of many enzymes involved in mRNA turnover. P-bodies are highly conserved structures and have been observed in somatic cells originating from vertebrates and invertebrates, plants and yeast. To date, P-bodies have been demonstrated to play fundamental roles in general mRNA decay, nonsense-mediated mRNA decay, adenylate-uridylate-rich element mediated mRNA decay, and microRNA (miRNA) induced mRNA silencing. Not all mRNAs which enter P-bodies are degraded, as it has been demonstrated that some mRNAs can exit P-bodies and re-initiate translation. Purification and sequencing of the mRNA from purified processing bodies showed that these mRNAs are largely translationally repressed upstream of translation initiation and are protected from 5' mRNA decay.
P-bodies were originally proposed to be the sites of mRNA degradation in the cell and involved in decapping and digestion of mRNAs earmarked for destruction. Later work called this into question suggesting P bodies store mRNA until needed for translation.
In neurons, P-bodies are moved by motor proteins in response to stimulation. This is likely tied to local translation in dendrites.
History
P-bodies were first described in the scientific literature by Bashkirov et al. in 1997, who describe "small granules… discrete, prominent foci" as the cytoplasmic location of the mouse exoribonuclease mXrn1p. It wasn’t until 2002 that a glimpse into the nature and importance of these cytoplasmic foci was published, when researchers demonstrated that multiple proteins involved with mRNA degradation localize to the foci. Their importance was recognized after experimental evidence was obtained pointing to P-bodies as the sites of mRNA degradation in the cell. The researchers named these structures processing bodies or "P bodies". During this time, many descriptive names w
|
https://en.wikipedia.org/wiki/Very-high-density%20cable%20interconnect
|
A very-high-density cable interconnect (VHDCI) is a 68-pin connector that was introduced in the SPI-2 document of SCSI-3. The VHDCI connector is a very small connector that allows placement of four wide SCSI connectors on the back of a single PCI card slot. Physically, it looks like a miniature Centronics type connector. It uses the regular 68-contact pin assignment. The male connector (plug) is used on the cable and the female connector ("receptacle") on the device.
Other uses
Apart from the standardized use with the SCSI interface, several vendors have also used VHDCI connectors for other types of interfaces:
Nvidia: for an external PCI Express 8-lane interconnect, and used in Quadro Plex VCS and in Quadro NVS 420 as a display port connector
ATI Technologies: on the FireMV 2400 to convey two DVI and two VGA signals on a single connector, and ganging two of these connectors side by side in order to allow the FireMV 2400 to be a low-profile quad display card. The Radeon X1950 XTX Crossfire Edition also used a pair of the connectors to grant more inter-card bandwidth than the PCI Express bus allowed at the time for Crossfire.
AMD: Some Visiontek variants of the Radeon HD 7750 use a VHDCI connector alongside a Mini DisplayPort to allow a 5 (breakout to 4 HDMI+1 mDP) display Eyefinity array on a low profile card. VisionTek also released a similar Radeon HD 5570, though it lacked a Mini DisplayPort.
Juniper Networks: for their 12- and 48-port 100BASE-TX PICs (physical interface cards). The cable connects to the VHDCI connector on the PIC on one end, via an RJ-21 connector on the other end, to an RJ-45 patch panel.
Cisco: 3750 StackWise stacking cables
National Instruments: on their high-speed digital I/O cards.
AudioScience uses VHDCI to carry multiple analog balanced audio and digital AES/EBU audio streams, and clock and GPIO signals.
See also
SCSI connector
References
Electrical signal connectors
Analog video connectors
Digital display connectors
Networkin
|
https://en.wikipedia.org/wiki/Knowledge%20integration
|
Knowledge integration is the process of synthesizing multiple knowledge models (or representations) into a common model (representation).
Compared to information integration, which involves merging information having different schemas and representation models, knowledge integration focuses more on synthesizing the understanding of a given subject from different perspectives.
For example, multiple interpretations are possible of a set of student grades, typically each from a certain perspective. An overall, integrated view and understanding of this information can be achieved if these interpretations can be put under a common model, say, a student performance index.
The Web-based Inquiry Science Environment (WISE), from the University of California at Berkeley has been developed along the lines of knowledge integration theory.
Knowledge integration has also been studied as the process of incorporating new information into a body of existing knowledge with an interdisciplinary approach. This process involves determining how the new information and the existing knowledge interact, how existing knowledge should be modified to accommodate the new information, and how the new information should be modified in light of the existing knowledge.
A learning agent that actively investigates the consequences of new information can detect and exploit a variety of learning opportunities; e.g., to resolve knowledge conflicts and to fill knowledge gaps. By exploiting these learning opportunities the learning agent is able to learn beyond the explicit content of the new information.
The machine learning program KI, developed by Murray and Porter at the University of Texas at Austin, was created to study the use of automated and semi-automated knowledge integration to assist knowledge engineers constructing a large knowledge base.
A possible technique which can be used is semantic matching. More recently, a technique useful to minimize the effort in mapping validation and vi
|
https://en.wikipedia.org/wiki/Shiplap
|
Shiplap is a type of wooden board used commonly as exterior siding in the construction of residences, barns, sheds, and outbuildings.
Exterior walls
Shiplap is either rough-sawn or milled pine or similarly inexpensive wood between wide with a rabbet on opposite sides of each edge. The rabbet allows the boards to overlap in this area. The profile of each board partially overlaps that of the board next to it creating a channel that gives shadow line effects, provides excellent weather protection and allows for dimensional movement.
Useful for its strength as a supporting member, and its ability to form a relatively tight seal when lapped, shiplap is usually used as a type of siding for buildings that do not require extensive maintenance and must withstand cold and aggressive climates. Rough-sawn shiplap is attached vertically in post and beam construction, usually with 51–65 mm (6d–8d) common nails, while milled versions, providing a tighter seal, are more commonly placed horizontally, more suited to two-by-four frame construction.
Small doors and shutters such as those found in barns and sheds are often constructed of shiplap cut directly from the walls, with only thin members framing or crossing the back for support. Shiplap is also used indoors for the rough or rustic look that it creates when used as paneling or a covering for a wall or ceiling. Shiplap is often used to describe any rabbeted siding material that overlaps in a similar fashion.
Interior design
In interior design, shiplap is a style of wooden wall siding characterized by long planks, normally painted white, that are mounted horizontally with a slight gap between them in a manner that evokes exterior shiplap walls. A disadvantage of the style is that the gaps are prone to accumulating dust.
Installing shiplap horizontally in a room can help carry the eye around the space, making it feel larger. Installing it vertically helps emphasize the height of the room, making it feel taller. Rectangu
|
https://en.wikipedia.org/wiki/List%20of%20RFCs
|
This is a partial list of RFCs (request for comments memoranda). A Request for Comments (RFC) is a publication in a series from the principal technical development and standards-setting bodies for the Internet, most prominently the Internet Engineering Task Force (IETF).
While there are over 9,150 RFCs as of February 2022, this list consists of RFCs that have related articles. A complete list is available from the IETF website.
Numerical list
Topical list
Obsolete RFCs are indicated with struck-through text.
References
External links
RFC-Editor - Document Retrieval - search engine
RFC Database - contains various lists of RFCs
RFC Bibliographic Listing - Listing of bibliographic entries for all RFCs. Also notes when an RFC has been made obsolete.
Internet Standards
Internet-related lists
|
https://en.wikipedia.org/wiki/Year%20zero
|
A year zero does not exist in the Anno Domini (AD) calendar year system commonly used to number years in the Gregorian calendar (nor in its predecessor, the Julian calendar); in this system, the year 1 BC is followed directly by year AD 1. However, there is a year zero in both the astronomical year numbering system (where it coincides with the Julian year 1 BC), and the ISO 8601:2004 system, the interchange standard for all calendar numbering systems (where year zero coincides with the Gregorian year 1 BC; see conversion table). There is also a year zero in most Buddhist and Hindu calendars.
History
The Anno Domini era was introduced in 525 by Scythian monk Dionysius Exiguus (c. 470 – c. 544), who used it to identify the years on his Easter table. He introduced the new era to avoid using the Diocletian era, based on the accession of Roman Emperor Diocletian, as he did not wish to continue the memory of a persecutor of Christians. In the preface to his Easter table, Dionysius stated that the "present year" was "the consulship of Probus Junior" which was also 525 years "since the incarnation of our Lord Jesus Christ". How he arrived at that number is unknown.
Dionysius Exiguus did not use 'AD' years to date any historical event. This practice began with the English cleric Bede (c. 672–735), who used AD years in his Ecclesiastical History of the English People (731), popularizing the era. Bede also used - only once - a term similar to the modern English term 'before Christ', though the practice did not catch on until nearly a thousand years later, when books by Dionysius Petavius treating calendar science gained popularity. Bede did not sequentially number days of the month, weeks of the year, or months of the year. However, he did number many of the days of the week using the counting origin one in Ecclesiastical Latin.
Previous Christian histories used several titles for dating events: ("in the year of the world") beginning on the purported first day of creation; or ("in the year of Adam") beginning at the creation of Adam fiv
|
https://en.wikipedia.org/wiki/Iron%20fertilization
|
Iron fertilization is the intentional introduction of iron-containing compounds (like iron sulfate) to iron-poor areas of the ocean surface to stimulate phytoplankton production. This is intended to enhance biological productivity and/or accelerate carbon dioxide () sequestration from the atmosphere. Iron is a trace element necessary for photosynthesis in plants. It is highly insoluble in sea water and in a variety of locations is the limiting nutrient for phytoplankton growth. Large algal blooms can be created by supplying iron to iron-deficient ocean waters. These blooms can nourish other organisms.
Ocean iron fertilization is an example of a geoengineering technique. Iron fertilization attempts to encourage phytoplankton growth, which removes carbon from the atmosphere for at least a period of time. This technique is controversial because there is limited understanding of its complete effects on the marine ecosystem, including side effects and possibly large deviations from expected behavior. Such effects potentially include release of nitrogen oxides, and disruption of the ocean's nutrient balance. Controversy remains over the effectiveness of atmospheric sequestration and ecological effects. Since 1990, 13 major large-scale experiments have been carried out to evaluate the efficiency and possible consequences of iron fertilization in ocean waters. A study in 2017 determined that the method is unproven: sequestration efficiency is low, sometimes no effect was seen, and the amount of iron that would need to be deposited to make even a small cut in carbon emissions is on the order of millions of tons per year.
Approximately 25 per cent of the ocean surface has ample macronutrients, with little plant biomass (as defined by chlorophyll). The production in these high-nutrient low-chlorophyll (HNLC) waters is primarily limited by micronutrients, especially iron. The cost of distributing iron over large ocean areas is large compared with the expected value of carbon credits. Research in the
|
https://en.wikipedia.org/wiki/Haplogroup%20Z
|
In human mitochondrial genetics, Haplogroup Z is a human mitochondrial DNA (mtDNA) haplogroup.
Origin
Haplogroup Z is believed to have arisen in Central Asia, and is a descendant of haplogroup CZ.
Distribution
The greatest clade diversity of haplogroup Z is found in East Asia and Central Asia. However, its greatest frequency appears in some peoples of Russia, such as Evens from Kamchatka (8/39 Z1a2a, 3/39 Z1a3, 11/39 = 28.2% Z total) and from Berezovka, Srednekolymsky District, Sakha Republic (3/15 Z1a3, 1/15 Z1a2a, 4/15 = 26.7% Z total), and among the Saami people of northern Scandinavia. With the exception of three Khakasses who belong to Z4, two Yakut who belong to Z3a1, two Yakut, a Yakutian Evenk, a Buryat, and an Altai Kizhi who belong to Z3(xZ3a, Z3c), and the presence of the Z3c clade among populations of Altai Republic, nearly all members of haplogroup Z in North Asia and Europe belong to subclades of Z1. The TMRCA of Z1 is 20,400 [95% CI 7,400 <-> 34,000] ybp according to Sukernik et al. 2012, 20,400 [95% CI 7,800 <-> 33,800] ybp according to Fedorova et al. 2013, or 19,600 [95% CI 12,500 <-> 29,300] ybp according to YFull. Among the members (Z1, Z2, Z3, Z4, and Z7) of haplogroup Z, Nepalese populations were characterized by rare clades Z3a1a and Z7, of which Z3a1a was the most frequent sub-clade in Newar, with a frequency of 16.5%. Z3, found in East Asia, North Asia, and MSEA, is the oldest member of haplogroup Z with an estimated age of ~ 25.4 Kya. Haplogroup Z3a1a is also detected in other Nepalese populations, such as Magar (5.4%), Tharu, Kathmandu (mixed population) and Nepali-other (mixed population from Kathmandu and Eastern Nepal). Z3a1a1, detected in Tibet, Myanmar, Nepal, India, Thai-Laos and Vietnam, traces its ancestral roots to China with a coalescent age of ~ 8.4 Kya.
Fedorova et al. 2013 have reported finding Z*(xZ1a, Z3, Z4) in 1/388 Turks and 1/491 Kazakhs. These individuals should belong to Z1* (elsewhere observed in a Tofalar),
|
https://en.wikipedia.org/wiki/Computers%2C%20Freedom%20and%20Privacy%20Conference
|
The Computers, Freedom and Privacy Conference (or CFP, or the Conference on Computers, Freedom and Privacy) is an annual academic conference held in the United States or Canada about the intersection of computer technology, freedom, and privacy issues. The conference was founded in 1991, and since at least 1999, it has been organized under the aegis of the Association for Computing Machinery. It was originally sponsored by CPSR.
CFP91
The first CFP was held in 1991 in Burlingame, California.
CFP92
The second CFP was held on March 18–20, 1992 in Washington, DC. It was the first under the auspices of the Association for Computing Machinery. The conference chair was Lance Hoffman. The entire proceedings are available from the Association for Computing Machinery at https://dl.acm.org/doi/proceedings/10.1145/142652.
CFP99
The Computers, Freedom and Privacy 99 Conference, sponsored by the Association for Computing Machinery, the 9th annual CFP, was held in Washington, DC from 6 April 1999 to 8 April 1999.
CFP99 focused on international Internet regulation and privacy protection. There were close to 500 registered participants and attendees included high-level government officials, grassroots advocates and programmers.
The conference chair for CFP99 was Marc Rotenberg and the program coordinator was Ross Stapleton-Gray.
Keynote speakers at CFP99 were Tim Berners-Lee, director of the World Wide Web Consortium,
Vint Cerf, president of the Internet Society and FTC Commissioner Mozelle Thompson.
Others who spoke at CFP99 included:
David Banisar, policy director at the Electronic Privacy Information Center;
US Representative Bob Barr former federal prosecutor and Georgia Republican;
Colin Bennett, a privacy expert at Canada's University of Victoria;
Paula Breuning, a lawyer for the National Telecommunications and Information Administration in the United States Department of Commerce;
Becky Burr, head of the Commerce Department unit
|
https://en.wikipedia.org/wiki/Urban%20mining
|
An urban mine is the stockpile of rare metals in the discarded waste electrical and electronic equipment (WEEE) of a society. Urban mining is the process of recovering these rare metals through mechanical and chemical treatments. In 1997, recycled gold accounted for approximately 20% of the 2700 tons of gold supplied to the market.
The name was coined in the 1980s by Professor Hideo Nanjyo of the Research Institute of Mineral Dressing and Metallurgy at Tohoku University and the idea has gained significant traction in Japan (and in other parts of Asia) in the 21st century.
Research published by the Japanese government's National Institute of Materials Science in 2010 estimated that there were 6,800 tonnes of gold recoverable from used electronic equipment in Japan.
References
Sources
Further reading
Electronic waste
Mining
Recycling
|
https://en.wikipedia.org/wiki/Petrick%27s%20method
|
In Boolean algebra, Petrick's method (also known as Petrick function or branch-and-bound method) is a technique described by Stanley R. Petrick (1931–2006) in 1956 for determining all minimum sum-of-products solutions from a prime implicant chart. Petrick's method is very tedious for large charts, but it is easy to implement on a computer. The method was improved by Insley B. Pyne and Edward Joseph McCluskey in 1962.
Algorithm
Reduce the prime implicant chart by eliminating the essential prime implicant rows and the corresponding columns.
Label the rows of the reduced prime implicant chart , , , , etc.
Form a logical function P which is true when all the columns are covered. P consists of a product of sums, with one sum term per column, where each sum term contains the labels of the rows that cover that column.
Expand P into a sum of products by multiplying out (using the distributive law) and minimize by applying the absorption law X + XY = X.
Each term in the result represents a solution, that is, a set of rows which covers all of the minterms in the table. To determine the minimum solutions, first find those terms which contain a minimum number of prime implicants.
Next, for each of the terms found in step five, count the number of literals in each prime implicant and find the total number of literals.
Choose the term or terms composed of the minimum total number of literals, and write out the corresponding sums of prime implicants.
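A compact sketch of steps 3–5 follows (the row labels and the reduced chart used here are illustrative, not necessarily those of the worked example below):

```python
# Sketch of Petrick's method, steps 3-5: expand the product of sums into a
# sum of products and keep the minimum covers. Row labels are illustrative.
from itertools import product

def petrick(columns):
    """columns: one set of row labels per column of the reduced chart.
    Returns all minimum-cardinality sets of rows covering every column."""
    # Every way of picking one covering row per column yields a product
    # term; using frozensets applies the identity X*X = X automatically.
    terms = {frozenset(choice) for choice in product(*columns)}
    # Absorption law X + XY = X: drop any term that contains a smaller term.
    absorbed = [t for t in terms if not any(o < t for o in terms)]
    # Keep only the solutions built from the fewest prime implicants.
    fewest = min(len(t) for t in absorbed)
    return [set(t) for t in absorbed if len(t) == fewest]

# Hypothetical reduced chart: each entry lists the rows covering one column.
chart = [{"K", "L"}, {"K", "M"}, {"L", "N"}, {"M", "P"}, {"N", "Q"}, {"P", "Q"}]
print(petrick(chart))   # e.g. [{'K', 'N', 'P'}, {'L', 'M', 'Q'}]
```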
Example of Petrick's method
Following is the function we want to reduce:
The prime implicant chart from the Quine-McCluskey algorithm is as follows:
{| class="wikitable" style="text-align:center;"
|-
! || 0 || 1 || 2 || 5 || 6 || 7 || ⇒ || A || B || C
|-
| style="a1;" | K = m(0,1) || || || || || || || ⇒ || 0 || 0 ||
|-
| style="a1;" | L = m(0,2) || || || || || || || ⇒ || 0 || || 0
|-
| style="a1;" | M
|
https://en.wikipedia.org/wiki/Laminar%20flow%20cabinet
|
A laminar flow cabinet or tissue culture hood is a carefully enclosed bench designed to prevent contamination of semiconductor wafers, biological samples, or any particle sensitive materials. Air is drawn through a HEPA filter and blown in a very smooth, laminar flow towards the user. Due to the direction of air flow, the sample is protected from the user but the user is not protected from the sample. The cabinet is usually made of stainless steel with no gaps or joints where spores might collect.
Such hoods exist in both horizontal and vertical configurations, and there are many different types of cabinets with a variety of airflow patterns and acceptable uses.
Laminar flow cabinets may have a UV-C germicidal lamp to sterilize the interior and contents before usage to prevent contamination of the experiment. Germicidal lamps are usually kept on for fifteen minutes to sterilize the interior before the cabinet is used. The light must be switched off when the cabinet is being used, to limit exposure to skin and eyes as stray ultraviolet light emissions can cause cancer and cataracts.
See also
Asepsis
Biosafety cabinet
Fume hood
References
External links
NSF/ANSI Standard 49
Laboratory equipment
Microbiology equipment
Ventilation
|
https://en.wikipedia.org/wiki/Storage%20virtualization
|
In computer science, storage virtualization is "the process of presenting a logical view of the physical storage resources to" a host computer system, "treating all storage media (hard disk, optical disk, tape, etc.) in the enterprise as a single pool of storage."
A "storage system" is also known as a storage array, disk array, or filer. Storage systems typically use special hardware and software along with disk drives in order to provide very fast and reliable storage for computing and data processing. Storage systems are complex, and may be thought of as a special purpose computer designed to provide storage capacity along with advanced data protection features. Disk drives are only one element within a storage system, along with hardware and special purpose embedded software within the system.
Storage systems can provide either block accessed storage, or file accessed storage. Block access is typically delivered over Fibre Channel, iSCSI, SAS, FICON or other protocols. File access is often provided using NFS or SMB protocols.
Within the context of a storage system, there are two primary types of virtualization that can occur:
Block virtualization used in this context refers to the abstraction (separation) of logical storage (partition) from physical storage so that it may be accessed without regard to physical storage or heterogeneous structure. This separation allows the administrators of the storage system greater flexibility in how they manage storage for end users.
File virtualization addresses the NAS challenges by eliminating the dependencies between the data accessed at the file level and the location where the files are physically stored. This provides opportunities to optimize storage use and server consolidation and to perform non-disruptive file migrations.
Block virtualization
Address space remapping
Virtualization of storage helps achieve location independence by abstracting the physical location of the data. The virtualization system p
|
https://en.wikipedia.org/wiki/Amateur%20radio%20repeater
|
An amateur radio repeater is an electronic device that receives a weak or low-level amateur radio signal and retransmits it at a higher level or higher power, so that the signal can cover longer distances without degradation. Many repeaters are located on hilltops or on tall buildings as the higher location increases their coverage area, sometimes referred to as the radio horizon, or "footprint". Amateur radio repeaters are similar in concept to those used by public safety entities (police, fire department, etc.), businesses, government, military, and more. Amateur radio repeaters may even use commercially packaged repeater systems that have been adjusted to operate within amateur radio frequency bands, but more often amateur repeaters are assembled from receivers, transmitters, controllers, power supplies, antennas, and other components, from various sources.
Introduction
In amateur radio, repeaters are typically maintained by individual hobbyists or local groups of amateur radio operators. Many repeaters are provided openly to other amateur radio operators and typically not used as a remote base station by a single user or group. In some areas multiple repeaters are linked together to form a wide-coverage network, such as the linked system provided by the Independent Repeater Association which covers most of western Michigan, or the Western Intertie Network System ("WINsystem") that now covers a great deal of California, and is in 17 other states, including Hawaii, along with parts of four other countries, Australia, Canada, Great Britain and Japan.
Frequencies
Repeaters are found mainly in the VHF 6-meter (50–54 MHz), 2-meter (144–148 MHz) and 1.25-meter (220–225 MHz) bands and the UHF 70-centimeter (420–450 MHz) band, but can be used on almost any frequency pair above 28 MHz. In some areas, 33 centimeters (902–928 MHz) and 23 centimeters (1.24–1.3 GHz) are also used for repeaters. Note that different countries have different rules; for example, in
|
https://en.wikipedia.org/wiki/Radio%20repeater
|
A radio repeater is a combination of a radio receiver and a radio transmitter that receives a signal and retransmits it, so that two-way radio signals can cover longer distances. A repeater sited at a high elevation can allow two mobile stations, otherwise out of line-of-sight propagation range of each other, to communicate. Repeaters are found in professional, commercial, and government mobile radio systems and also in amateur radio.
Repeater systems use two different radio frequencies; the mobiles transmit on one frequency, and the repeater station receives those transmission and transmits on a second frequency. Since the repeater must transmit at the same time as the signal is being received, and may even use the same antenna for both transmitting and receiving, frequency-selective filters are required to prevent the receiver from being overloaded by the transmitted signal. Some repeaters use two different frequency bands to provide isolation between input and output or as a convenience.
In a communications satellite, a transponder serves a similar function, but the transponder does not necessarily demodulate the relayed signals.
Full duplex operation
A repeater is an automatic radio-relay station, usually located on a mountain top, tall building, or radio tower. It allows communication between two or more bases, mobile or portable stations that are unable to communicate directly with each other due to distance or obstructions between them.
The repeater receives on one radio frequency (the "input" frequency), demodulates the signal, and simultaneously re-transmits the information on its "output" frequency. All stations using the repeater transmit on the repeater's input frequency and receive on its output frequency. Since the repeater is usually located at an elevation higher than the other radios using it, their range is greatly extended.
Because the transmitter and receiver are on at the same time, isolation must exist to keep the repeater's own trans
|
https://en.wikipedia.org/wiki/Adapter%20%28genetics%29
|
An adapter or adaptor, or a linker in genetic engineering, is a short, chemically synthesized, single-stranded or double-stranded oligonucleotide that can be ligated to the ends of other DNA or RNA molecules. Double-stranded adapters can be synthesized to have blunt ends at both terminals, or a sticky end at one end and a blunt end at the other. For instance, a double-stranded DNA adapter can be used to link the ends of two other DNA molecules that do not by themselves have "sticky ends" (complementary protruding single strands). It may be used to add sticky ends to cDNA, allowing it to be ligated into a plasmid much more efficiently. Two adapters can base-pair with each other to form dimers.
A conversion adapter is used to join a DNA insert cut with one restriction enzyme, say EcoRI, with a vector opened with another enzyme, such as BamHI. This adapter can be used to convert the cohesive end produced by BamHI to one produced by EcoRI, or vice versa.
One of its applications is ligating cDNA into a plasmid or other vector instead of using the terminal deoxynucleotidyl transferase enzyme to add a poly(A) tail to the cDNA fragment.
References
Genetic engineering
|
https://en.wikipedia.org/wiki/Fully%20Buffered%20DIMM
|
Fully Buffered DIMM (or FB-DIMM) is a memory technology that can be used to increase reliability and density of memory systems. Unlike the parallel bus architecture of traditional DRAMs, an FB-DIMM has a serial interface between the memory controller and the advanced memory buffer (AMB). Conventionally, data lines from the memory controller have to be connected to data lines in every DRAM module, i.e. via multidrop buses. As the memory width increases together with the access speed, the signal degrades at the interface between the bus and the device. This limits the speed and memory density, so FB-DIMMs take a different approach to solve the problem.
240-pin DDR2 FB-DIMMs are neither mechanically nor electrically compatible with conventional 240-pin DDR2 DIMMs. As a result, those two DIMM types are notched differently to prevent using the wrong one.
As with nearly all RAM specifications, the FB-DIMM specification was published by JEDEC.
Technology
Fully buffered DIMM architecture introduces an advanced memory buffer (AMB) between the memory controller and the memory module. Unlike the parallel bus architecture of traditional DRAMs, an FB-DIMM has a serial interface between the memory controller and the AMB. This enables an increase to the width of the memory without increasing the pin count of the memory controller beyond a feasible level. With this architecture, the memory controller does not write to the memory module directly; rather it is done via the AMB. AMB can thus compensate for signal deterioration by buffering and resending the signal.
The AMB can also offer error correction, without imposing any additional overhead on the processor or the system's memory controller. It can also use the Bit Lane Failover Correction feature to identify bad data paths and remove them from operation, which dramatically reduces command/address errors. Also, since reads and writes are buffered, they can be done in parallel by the memory controller. This allows simpler i
|
https://en.wikipedia.org/wiki/Schofield%20equation
|
The Schofield Equation is a method of estimating the basal metabolic rate (BMR) of adult men and women published in 1985.
This is the equation used by the WHO in their technical report series. The equation that is recommended to estimate BMR by the US Academy of Nutrition and Dietetics is the Mifflin-St. Jeor equation.
The equations for estimating BMR in kJ/day (kilojoules per day) from body mass (kg) are:
Men:
Women:
The equations for estimating BMR in kcal/day (kilocalories per day) from body mass (kg) are:
Men:
Women:
Key:
W = Body weight in kilograms
SEE = Standard error of estimation
The raw figure obtained by the equation should be adjusted up or downwards, within the confidence limit suggested by the quoted estimation errors, and according to the following principles:
Subjects leaner and more muscular than usual require more energy than the average.
Obese subjects require less.
Patients at the young end of the age range for a given equation require more energy.
Patients at the high end of the age range for a given equation require less energy.
Effects of age and body mass may cancel out: an obese 30-year-old or an athletic 60-year-old may need no adjustment from the raw figure.
To find actual energy needed per day (Estimated Energy Requirement), the base metabolism must then be multiplied by an activity factor.
These are as follows:
Sedentary people of both genders should multiply by 1.3. Sedentary is very physically inactive, inactive in both work and leisure.
Lightly active men should multiply by 1.6 and women by 1.5. Lightly active means the daily routine includes some walking, or intense exercise once or twice per week. Most students are in this category.
Moderately active men should multiply by 1.7 and women by 1.6. Moderately active means intense exercise lasting 20–45 minutes at least three time per week, or a job with a lot of walking, or a moderate intensity job.
Very Active men should multiply by 2.1 and women by 1.9. Very active mea
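As a small worked illustration of combining a BMR estimate with one of these activity factors (the BMR figure and the chosen category are invented, not computed from the equations above):

```python
# Worked illustration: estimated energy requirement = BMR x activity factor.
# The BMR value and the chosen category below are illustrative only.
ACTIVITY_FACTORS = {
    "sedentary": 1.3,
    "lightly active (men)": 1.6, "lightly active (women)": 1.5,
    "moderately active (men)": 1.7, "moderately active (women)": 1.6,
    "very active (men)": 2.1, "very active (women)": 1.9,
}

bmr_kcal_per_day = 1600                     # e.g. taken from a Schofield estimate
factor = ACTIVITY_FACTORS["lightly active (women)"]
print(bmr_kcal_per_day * factor)            # 2400.0 kcal/day
```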
|
https://en.wikipedia.org/wiki/TERCOM
|
Terrain contour matching, or TERCOM, is a navigation system used primarily by cruise missiles. It uses a contour map of the terrain that is compared with measurements made during flight by an on-board radar altimeter. A TERCOM system considerably increases the accuracy of a missile compared with inertial navigation systems (INS). The increased accuracy allows a TERCOM-equipped missile to fly closer to obstacles and at generally lower altitudes, making it harder to detect by ground radar.
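The core idea can be sketched as correlating a measured altitude profile against a stored terrain map and taking the best match as the position fix. The toy example below slides the profile along a one-dimensional terrain strip (an illustration only, not an actual TERCOM implementation):

```python
# Toy illustration of terrain contour matching: slide a measured altitude
# profile along a stored terrain strip and pick the offset that fits best.
def best_match(terrain, profile):
    """terrain: stored elevations along the planned track (metres).
    profile: radar-altimeter derived elevations measured in flight.
    Returns the offset into terrain with the smallest squared error."""
    best_offset, best_err = None, float("inf")
    for offset in range(len(terrain) - len(profile) + 1):
        err = sum((terrain[offset + i] - profile[i]) ** 2
                  for i in range(len(profile)))
        if err < best_err:
            best_offset, best_err = offset, err
    return best_offset

terrain = [120, 125, 140, 180, 220, 210, 170, 150, 160, 190]  # stored map strip
profile = [178, 221, 212, 168]                                # measured in flight
print(best_match(terrain, profile))  # 3: the profile lines up at index 3
```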
Missiles that employ TERCOM navigation
The cruise missiles that employ a TERCOM system include:
Supersonic Low Altitude Missile project (early version of TERCOM was slated to be used in this never-built missile)
AGM-86B (United States)
AGM-129 ACM (United States)
BGM-109 Tomahawk (some versions, United States)
C-602 anti-ship & land attack cruise missile (China)
Kh-55 Granat NATO reporting name AS-15 Kent (Soviet Union)
Newer Russian cruise missiles, such as Kh-101 and Kh-555 are likely to have TERCOM navigation, but little information is available about these missiles
C-802 or YJ-82 NATO reporting name CSS-N-8 Saccade (China) – it is unclear if this missile employs TERCOM navigation
Hyunmoo III (South Korea)
DH-10 (China)
Babur (Pakistan) land attack cruise missile
Ra'ad (Pakistan) air-launched cruise missile
Naval Strike Missile (anti-ship and land attack missile, Norway)
SOM (missile) (air-launched cruise missile, Turkey)
HongNiao 1/2/3 cruise missiles
9K720 Iskander (short-range ballistic missile and cruise missile variants, Russia)
Storm Shadow cruise missile (UK/France)
See also
Missile guidance
TERPROM
References
External links
"Terrestrial Guidance Methods", Section 16.5.3 of Fundamentals of Naval Weapons Systems
More info at fas.org
Info at aeronautics.ru
Missile guidance
Aircraft instruments
Aerospace engineering
|
https://en.wikipedia.org/wiki/Health%20ecology
|
Health ecology (also known as eco-health) is an emerging field that studies the impact of ecosystems on human health. It examines alterations in the biological, physical, social, and economic environments to understand how these changes affect mental and physical human health. Health ecology focuses on a transdisciplinary approach to understanding all the factors which influence an individual's physiological, social, and emotional well-being.
Eco-health studies often involve environmental pollution. Some examples include an increase in asthma rates due to air pollution, or PCB contamination of game fish in the Great Lakes of the United States. However, health ecology is not necessarily tied to environmental pollution. For example, research has shown that habitat fragmentation is the main factor that contributes to increased rates of Lyme disease in human populations.
History
Ecosystem approaches to public health emerged as a defined field of inquiry and application in the 1990s, primarily through global research supported by the International Development Research Centre (IDRC) in Ottawa, Canada (Lebel, 2003). However, this was a resurrection of an approach to health and ecology traced back to Hippocrates in Western societies. It can also be traced back to earlier eras in Eastern societies. The approach was also popular among scientists in the centuries. However, it fell out of common practice in the twentieth century, when technical professionalism and expertise were assumed sufficient to manage health and disease. In this relatively brief era, evaluating the adverse impacts of environmental change (both the natural and artificial environment) on human health was assigned to medicine and environmental health.
Integrated approaches to health and ecology re-emerged in the 20th century. These revolutionary movements were built on a foundation laid by earlier scholars, including Hippocrates, Rudolf Virchow, and Louis Pasteur. In the 20th century, Calvin Schwabe coi
|
https://en.wikipedia.org/wiki/Great-circle%20navigation
|
Great-circle navigation or orthodromic navigation (related to orthodromic course; ) is the practice of navigating a vessel (a ship or aircraft) along a great circle. Such routes yield the shortest distance between two points on the globe.
Course
The great circle path may be found using spherical trigonometry; this is the spherical version of the inverse geodetic problem.
If a navigator begins at P1 = (φ1,λ1) and plans to travel the great circle to a point P2 = (φ2,λ2) (see Fig. 1; φ is the latitude, positive northward, and λ is the longitude, positive eastward), the initial and final courses α1 and α2 are given by formulas for solving a spherical triangle
where λ12 = λ2 − λ1
and the quadrants of α1,α2 are determined by the signs of the numerator and denominator in the tangent formulas (e.g., using the atan2 function).
The central angle between the two points, σ12, is given by
(The numerator of this formula contains the quantities that were used to determine
tanα1.)
The distance along the great circle will then be s12 = Rσ12, where R is the assumed radius
of the Earth and σ12 is expressed in radians.
Using the mean Earth radius, R = R1 ≈ 6,371 km, yields results for
the distance s12 which are within 1% of the geodesic length for the WGS84 ellipsoid; see Geodesics on an ellipsoid for details.
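A sketch of these computations on a sphere, using atan2 to resolve the quadrants (R is taken as 6371 km; this is an illustration of the spherical formulas only, not a replacement for ellipsoidal geodesics):

```python
# Sketch: initial course alpha1 and great-circle distance s12 on a sphere,
# using the spherical-triangle relations with atan2 for quadrant handling.
from math import radians, degrees, sin, cos, atan2, sqrt

def great_circle(phi1, lam1, phi2, lam2, R=6371.0):
    p1, p2 = radians(phi1), radians(phi2)
    dl = radians(lam2 - lam1)                       # lambda_12
    y = cos(p2) * sin(dl)
    x = cos(p1) * sin(p2) - sin(p1) * cos(p2) * cos(dl)
    alpha1 = atan2(y, x)                            # initial course
    sigma12 = atan2(sqrt(x * x + y * y),
                    sin(p1) * sin(p2) + cos(p1) * cos(p2) * cos(dl))
    return degrees(alpha1), R * sigma12             # course (deg), distance (km)

# Example: from (-33.0, -71.6) to (31.4, 121.8), latitude and longitude in degrees.
print(great_circle(-33.0, -71.6, 31.4, 121.8))
```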
Relation to geocentric coordinate system
Detailed evaluation of the optimum direction is possible if the sea surface is approximated by a sphere surface. The standard computation places the ship at a geodetic latitude and geodetic longitude , where is considered positive if north of the equator, and where is considered positive if east of Greenwich. In the geocentric coordinate system centered at the center of the sphere, the Cartesian components are
and the target position is
The North Pole is at
The minimum distance is the distance along a great circle that runs through and . It is calculated in a plane that contains the sphere center and the great circle,
wher
|
https://en.wikipedia.org/wiki/Human%20mitochondrial%20DNA%20haplogroup
|
In human genetics, a human mitochondrial DNA haplogroup is a haplogroup defined by differences in human mitochondrial DNA. Haplogroups are used to represent the major branch points on the mitochondrial phylogenetic tree. Understanding the evolutionary path of the female lineage has helped population geneticists trace the matrilineal inheritance of modern humans back to human origins in Africa and the subsequent spread around the globe.
The letter names of the haplogroups (not just mitochondrial DNA haplogroups) run from A to Z. As haplogroups were named in the order of their discovery, the alphabetical ordering does not have any meaning in terms of actual genetic relationships.
The hypothetical woman at the root of all these groups (meaning just the mitochondrial DNA haplogroups) is the matrilineal most recent common ancestor (MRCA) for all currently living humans. She is commonly called Mitochondrial Eve.
The rate at which mitochondrial DNA mutates is known as the mitochondrial molecular clock. It is an area of ongoing research with one study reporting one mutation per 8000 years.
Phylogeny
This phylogenetic tree is based on Van Oven (2009). In June 2022, an alternative phylogeny for haplogroup L was suggested.
L (Mitochondrial Eve)
L0
L1-6
L1
L2-6
L5
L2'3'4'6
L2
L3'4'6
L6
L3'4
L4
L3
N
N1: I
N2: W
N9: Y
A
S
X
R
R0 (formerly known as pre-HV)
HV: (H, V)
pre-JT or R2'JT
JT: (J, T)
R9: F
R11'B: B
P
U (formerly UK)
U8: K
O
M
M9: E
M12'G: G
M29'Q: Q
D
M8: CZ (C, Z)
Major mtDNA Haplogroups
Macro-haplogroup L
Macro-haplogroup L is the most basal of human mtDNA haplogroups, from which all other haplogroups descend (specifically, from haplogroup L3). It is found mostly in Africa.
Haplogroup L0
L1-7
Haplogroup L1
L2-7
L3'4'6
Haplogroup L2
L346
L34
Haplogroup L3
Haplogroup L4
Haplogroup L6
L5'7
Haplogroup L5
Haplogroup L7
Macro-haplogroup M
Macro-haplogroup M is found mostly in Asia and the Americas. Its descendants are haplogrou
|
https://en.wikipedia.org/wiki/OpenAP
|
OpenAP was the first open source Linux distribution released to replace the factory firmware on a number of commercially available IEEE 802.11b wireless access points, all based on the Eumitcom WL11000SA-N board. The idea of releasing third party and open source firmware for commercially available wireless access points has been followed by a number of more recent projects, such as OpenWrt and HyperWRT.
OpenAP was released in early 2002 by Instant802 Networks, now known as Devicescape Software, complete with instructions for reprogramming the flash on any of the supported devices, full source code under the GNU General Public License, and a mailing list for discussions.
External links
http://savannah.nongnu.org/projects/openap/
Wi-Fi
Free routing software
Custom firmware
|
https://en.wikipedia.org/wiki/GeForce%208%20series
|
The GeForce 8 series is the eighth generation of Nvidia's GeForce line of graphics processing units. The third major GPU architecture developed by Nvidia, Tesla represents the company's first unified shader architecture.
Overview
All GeForce 8 Series products are based on Tesla. As with many GPUs, the larger numbers these cards carry do not guarantee superior performance over previous generation cards with a lower number. For example, the GeForce 8300 and 8400 entry-level cards cannot be compared to the previous GeForce 7200 and 7300 cards due to their inferior performance. The same goes for the high-end GeForce 8800 GTX card, which cannot be compared to the previous GeForce 7800 GTX card due to differences in performance.
Max resolution
Dual Dual-link DVI Support:
Able to drive two flat-panel displays up to 2560×1600 resolution. Available on select GeForce 8800 and 8600 GPUs.
One Dual-link DVI Support:
Able to drive one flat-panel display up to 2560×1600 resolution. Available on select GeForce 8500 GPUs and GeForce 8400 GS cards based on the G98.
One Single-link DVI Support:
Able to drive one flat-panel display up to 1920×1200 resolution. Available on select GeForce 8400 GPUs. GeForce 8400 GS cards based on the G86 only support single-link DVI.
Display capabilities
The GeForce 8 series supports 10-bit per channel display output, up from 8-bit on previous Nvidia cards. This potentially allows higher fidelity color representation and separation on capable displays. The GeForce 8 series, like its recent predecessors, also supports Scalable Link Interface (SLI) for multiple installed cards to act as one via an SLI Bridge, so long as they are of similar architecture.
NVIDIA's PureVideo HD video rendering technology is an improved version of the original PureVideo introduced with GeForce 6. It now includes GPU-based hardware acceleration for decoding HD movie formats, post-processing of HD video for enhanced images, and optional High-bandwidth Digital Content Pro
|
https://en.wikipedia.org/wiki/Reliability%20theory%20of%20aging%20and%20longevity
|
The reliability theory of aging is an attempt to apply the principles of reliability theory to create a mathematical model of senescence. The theory was published in Russian by Leonid A. Gavrilov and Natalia S. Gavrilova as Biologiia prodolzhitelʹnosti zhizni in 1986, and in English translation as The Biology of Life Span: A Quantitative Approach in 1991.
One of the models suggested in the book is based on an analogy with the reliability theory. The underlying hypothesis is based on the previously suggested premise that humans are born in a highly defective state. This is then made worse by environmental and mutational damage; exceptionally high redundancy due to the extremely high number of low-reliability components (e.g., cells) allows the organism to survive for a while.
The theory suggests an explanation of two aging phenomena for higher organisms: the Gompertz law of exponential increase in mortality rates with age and the "late-life mortality plateau" (mortality deceleration compared to the Gompertz law at higher ages).
The book criticizes a number of hypotheses known at the time, discusses drawbacks of the hypotheses put forth by the authors themselves, and concludes that regardless of the suggested mathematical models, the underlying biological mechanisms remain unknown.
See also
DNA damage theory of aging
References
Systems theory
Reliability engineering
Failure
Survival analysis
Theories of biological ageing
|
https://en.wikipedia.org/wiki/Credibility%20theory
|
Credibility theory is a branch of actuarial mathematics concerned with determining risk premiums. To achieve this, it uses mathematical models in an effort to forecast the (expected) number of insurance claims based on past observations. Technically speaking, the problem is to find the best linear approximation to the mean of the Bayesian predictive density, which is why credibility theory has many results in common with linear filtering as well as Bayesian statistics more broadly.
For example, in group health insurance an insurer is interested in calculating the risk premium, , (i.e. the theoretical expected claims amount) for a particular employer in the coming year. The insurer will likely have an estimate of historical overall claims experience, , as well as a more specific estimate for the employer in question, . Assigning a credibility factor, , to the overall claims experience (and the reciprocal to employer experience) allows the insurer to get a more accurate estimate of the risk premium in the following manner:
The credibility factor is derived by calculating the maximum likelihood estimate which would minimise the error of estimate. Assuming the variance of and are known quantities taking on the values and respectively, it can be shown that should be equal to:
Therefore, the more uncertainty the estimate has, the lower is its credibility.
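A minimal sketch of this weighting, assuming the credibility factor is obtained by inverse-variance weighting of the two estimates (the notation and numbers are illustrative):

```python
# Minimal sketch of credibility weighting (illustrative notation).
# The estimate with the smaller variance receives the larger weight,
# so more uncertainty in an estimate means lower credibility for it.
def credibility_premium(overall_mean, overall_var, employer_mean, employer_var):
    z = employer_var / (overall_var + employer_var)   # weight on overall experience
    return z * overall_mean + (1 - z) * employer_mean

# Example: overall book average 1000 (variance 400),
# employer's own experience 1400 (variance 1600, so less credible).
print(credibility_premium(1000, 400, 1400, 1600))     # 1080.0
```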
Types of Credibility
In Bayesian credibility, we separate each class (B) and assign them a probability (Probability of B). Then we find how likely our experience (A) is within each class (Probability of A given B). Next, we find how likely our experience was over all classes (Probability of A). Finally, we can find the probability of our class given our experience. So going back to each class, we weight each statistic with the probability of the particular class given the experience.
Bühlmann credibility works by looking at the Variance across the population. More specifically, it looks to see how much of t
|
https://en.wikipedia.org/wiki/Domain%20engineering
|
Domain engineering, is the entire process of reusing domain knowledge in the production of new software systems. It is a key concept in systematic software reuse and product line engineering. A key idea in systematic software reuse is the domain. Most organizations work in only a few domains. They repeatedly build similar systems within a given domain with variations to meet different customer needs. Rather than building each new system variant from scratch, significant savings may be achieved by reusing portions of previous systems in the domain to build new ones.
The process of identifying domains, bounding them, and discovering commonalities and variabilities among the systems in the domain is called domain analysis. This information is captured in models that are used in the domain implementation phase to create artifacts such as reusable components, a domain-specific language, or application generators that can be used to build new systems in the domain.
In product line engineering as defined by ISO 26550:2015, domain engineering is complemented by application engineering, which takes care of the life cycle of the individual products derived from the product line.
Purpose
Domain engineering is designed to improve the quality of developed software products through reuse of software artifacts. It rests on the observation that most developed software systems are not new systems but rather variants of systems already built within the same field. As a result, through the use of domain engineering, businesses can maximize profits and reduce time-to-market by applying the concepts and implementations of prior software systems to the target system. The reduction in cost is evident even during the implementation phase. One study showed that the use of domain-specific languages allowed code size, in both number of methods and number of symbols, to be reduced by over 50%, and the total number of lines of code to be reduced by nearly 75%.
Domain engineering foc
|
https://en.wikipedia.org/wiki/Micromagnetics
|
Micromagnetics is a field of physics dealing with the prediction of magnetic behaviors at sub-micrometer length scales. The length scales considered are large enough for the atomic structure of the material to be ignored (the continuum approximation), yet small enough to resolve magnetic structures such as domain walls or vortices.
Micromagnetics can deal with static equilibria, by minimizing the magnetic energy, and with dynamic behavior, by solving the time-dependent dynamical equation.
History
Micromagnetics as a field (i.e., the study of the behaviour of ferromagnetic materials at sub-micrometer length scales) was introduced in 1963, when William Fuller Brown Jr. published a paper on antiparallel domain wall structures. Until comparatively recently, computational micromagnetics was prohibitively expensive in terms of computational power, but smaller problems are now solvable on a modern desktop PC.
Static micromagnetics
The purpose of static micromagnetics is to solve for the spatial distribution of the magnetization M at equilibrium. In most cases, as the temperature is much lower than the Curie temperature of the material considered, the modulus |M| of the magnetization is assumed to be everywhere equal to the saturation magnetization Ms. The problem then consists in finding the spatial orientation of the magnetization, which is given by the magnetization direction vector m = M/Ms, also called reduced magnetization.
The static equilibria are found by minimizing the total magnetic energy E, subject to the constraint |M| = Ms or |m| = 1.
The contributions to this energy are the following:
Exchange energy
The exchange energy is a phenomenological continuum description of the quantum-mechanical exchange interaction. It is written as:

E_exch = A ∫V [(∇mx)² + (∇my)² + (∇mz)²] dV

where A is the exchange constant; mx, my and mz are the components of m; and the integral is performed over the volume of the sample.
The exchange energy tends to favor configurations where the magnetization varies
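As a rough illustration of the exchange-energy integral above, the following finite-difference sketch assumes a uniform cubic grid and a magnetization array of shape (3, nx, ny, nz); it is a minimal example, not tied to any particular micromagnetics package.

```python
import numpy as np

def exchange_energy(m, A, dx):
    """Discretised exchange energy  E = A * integral of |grad m|^2 dV.

    m  : array of shape (3, nx, ny, nz) holding the unit magnetization m = M/Ms
    A  : exchange constant (J/m)
    dx : cell size of the assumed uniform cubic grid (m)
    """
    energy_density = np.zeros(m.shape[1:])
    for comp in range(3):                       # mx, my, mz
        grads = np.gradient(m[comp], dx)        # gradients along x, y, z
        energy_density += sum(g**2 for g in grads)
    return A * energy_density.sum() * dx**3     # integrate over the sample volume

# Uniform magnetization along z: the gradients vanish, so E_exch = 0.
m = np.zeros((3, 8, 8, 8))
m[2] = 1.0
print(exchange_energy(m, A=1.3e-11, dx=1e-9))
```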
|
https://en.wikipedia.org/wiki/Human%20Y-chromosome%20DNA%20haplogroup
|
[Figure: Y-DNA phylogeny and haplogroup distribution. (a) Phylogenetic tree; 'kya' means 'thousand years ago'. (b) Geographical distributions of haplogroups, shown in color. (c) Geographical color legend.]
In genetics, a Y-chromosome DNA haplogroup is a haplogroup defined by mutations in the non-recombining portions of DNA from the male-specific Y chromosome (called Y-DNA). Many people within a haplogroup share similar numbers of short tandem repeats (STRs) and types of mutations called single-nucleotide polymorphisms (SNPs).
The Y-chromosome accumulates roughly two mutations per generation. Y-DNA haplogroups represent major branches of the Y-chromosome phylogenetic tree that share hundreds or even thousands of mutations unique to each haplogroup.
The Y-chromosomal most recent common ancestor (Y-MRCA, informally known as Y-chromosomal Adam) is the most recent common ancestor (MRCA) from whom all currently living humans are descended patrilineally. Y-chromosomal Adam is estimated to have lived roughly 236,000 years ago in Africa. Examination of other population bottlenecks indicates that most Eurasian men (men from populations outside of Africa) are descended from a man who lived in Africa 69,000 years ago (Haplogroup CT). Other major bottlenecks occurred about 50,000 and 5,000 years ago, and the ancestry of most Eurasian men can subsequently be traced back to four ancestors who lived 50,000 years ago, themselves descendants of an African lineage (E-M168).
Naming convention
Y-DNA haplogroups are defined by the presence of a series of Y-DNA SNP markers. Subclades are defined by a terminal SNP, the SNP furthest down in the Y-chromosome phylogenetic tree. The Y Chromosome Consortium (YCC) developed a system of naming major Y-DNA haplogroups with the capital letters A through T, with further subclades named using numbers and lower case letters (YCC longhand nomenclature). YCC shorthand nomenclature names Y-DNA haplogroups and their subclades with the first letter of the major Y-DNA haplogroup followed
|
https://en.wikipedia.org/wiki/List%20of%20defunct%20network%20processor%20companies
|
During the dot-com/internet bubble of the late 1990s and early 2000, the proliferation of many dot-com start-up companies created a secondary bubble in the telecommunications/computer networking infrastructure and telecommunications service provider markets. Venture capital and high tech companies rushed to build next generation infrastructure equipment for the expected explosion of internet traffic. As part of that investment fever, network processors were seen as a method of dealing with the desire for more network services and the ever-increasing data-rates of communication networks.
It has been estimated that dozens of start-up companies were created in the race to build the processors that would be a component of the next generation telecommunications equipment. Once the internet investment bubble burst, the telecom network upgrade cycle was deferred for years (perhaps for a decade). As a result, the majority of these new companies went bankrupt.
As of 2007, the only companies that are shipping network processors in sizeable volumes are Cisco Systems, Marvell, Freescale, Cavium Networks and AMCC.
OC-768/40Gb routing
ClearSpeed: left network processor market, reverted to supercomputing applications
Propulsion Networks: defunct
BOPS: left network processor market, reverted to DSP applications
OC-192/10Gb routing
Terago: defunct
Clearwater Networks: originally named Xstream Logic, defunct
Silicon Access: defunct
Solidum Systems: acquired by Integrated Device Technology
Lexra: defunct
Fast-Chip: defunct
Cognigine Corp.: defunct
Internet Machines: morphed into IMC Semiconductors, a PCI-Express chip vendor
Acorn Networks: defunct
XaQti: acquired by Vitesse Semiconductor, product line discontinued
OC-48/2.5Gb routing
IP Semiconductors: defunct
Entridia: defunct
Stargate Solutions: defunct
Gigabit Ethernet routing
Sibyte: acquired by Broadcom, product line discontinued
PMC-Sierra: product line discontinued
OC-12 routing
C-port: acquired by Mot
|
https://en.wikipedia.org/wiki/Smoothing
|
In statistics and image processing, to smooth a data set is to create an approximating function that attempts to capture important patterns in the data, while leaving out noise or other fine-scale structures and rapid phenomena. In smoothing, the data points of a signal are modified so that individual points higher than the adjacent points (presumably because of noise) are reduced, and points that are lower than the adjacent points are increased, leading to a smoother signal. Smoothing may be used in two important ways that can aid in data analysis: (1) by extracting more information from the data, as long as the assumption of smoothing is reasonable, and (2) by providing analyses that are both flexible and robust. Many different algorithms are used in smoothing.
Smoothing may be distinguished from the related and partially overlapping concept of curve fitting in the following ways:
curve fitting often involves the use of an explicit function form for the result, whereas the immediate results from smoothing are the "smoothed" values with no later use made of a functional form if there is one;
the aim of smoothing is to give a general idea of relatively slow changes of value with little attention paid to the close matching of data values, while curve fitting concentrates on achieving as close a match as possible.
smoothing methods often have an associated tuning parameter which is used to control the extent of smoothing. Curve fitting will adjust any number of parameters of the function to obtain the 'best' fit.
Linear smoothers
In the case that the smoothed values can be written as a linear transformation of the observed values, the smoothing operation is known as a linear smoother; the matrix representing the transformation is known as a smoother matrix or hat matrix.
The operation of applying such a matrix transformation is called convolution. Thus the matrix is also called a convolution matrix or a convolution kernel. In the case of simple series of
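As a concrete illustration of a linear smoother, the sketch below builds the hat (smoother) matrix of a centred moving average and applies it as a matrix product; the window size and the noisy test signal are arbitrary choices made for the example.

```python
import numpy as np

def moving_average_smoother_matrix(n, window=3):
    """Build the hat (smoother) matrix S of a centred moving average,
    so that smoothed = S @ observed is a linear transformation of the data."""
    S = np.zeros((n, n))
    half = window // 2
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        S[i, lo:hi] = 1.0 / (hi - lo)   # equal weights, truncated at the edges
    return S

rng = np.random.default_rng(0)
y = np.sin(np.linspace(0, 3, 30)) + 0.2 * rng.standard_normal(30)  # noisy signal
S = moving_average_smoother_matrix(len(y), window=5)
y_smooth = S @ y                        # smoothing as a single matrix product
print(y_smooth[:5])
```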
|
https://en.wikipedia.org/wiki/Annubar
|
The Annubar primary element is an averaging Pitot tube manufactured by Rosemount Inc. used to measure the flow of fluid in a pipe.
A Pitot tube measures the difference between the static pressure and the flowing pressure of the media in the pipe. The volumetric flow is calculated from that difference using Bernoulli's principle, taking into account the pipe's inside diameter. An Annubar, as an averaging Pitot tube, takes multiple samples across a section of a pipe or duct, averaging the differential pressures encountered and thereby accounting for variations in flow across the section.
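As an illustration of the underlying calculation, the sketch below applies the textbook Bernoulli relation v = sqrt(2·ΔP/ρ) and multiplies by the pipe's cross-sectional area; a real averaging Pitot tube additionally applies an instrument-specific flow coefficient, which is omitted here, and the numbers used are assumptions.

```python
import math

def volumetric_flow(delta_p, density, pipe_diameter):
    """Estimate volumetric flow from an averaged differential pressure.

    Uses Bernoulli's principle: v = sqrt(2 * dP / rho), Q = v * pipe area.
    delta_p       : averaged differential pressure (Pa)
    density       : fluid density (kg/m^3)
    pipe_diameter : pipe inside diameter (m)
    """
    velocity = math.sqrt(2.0 * delta_p / density)
    area = math.pi * (pipe_diameter / 2.0) ** 2
    return velocity * area                      # m^3/s

# Example: water (1000 kg/m^3) in a 0.1 m pipe with a 500 Pa differential pressure.
print(volumetric_flow(delta_p=500.0, density=1000.0, pipe_diameter=0.1))
```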
References
Measuring instruments
|