https://en.wikipedia.org/wiki/NetWare%20Loadable%20Module
A NetWare Loadable Module (NLM) is a loadable kernel module (a binary code module) that can be loaded into Novell's NetWare operating system. NLMs can implement hardware drivers, server functions (e.g. clustering), applications (e.g. GroupWise), system libraries or utilities. NLMs were supported beginning with the Intel 80386-based NetWare version 3.x. Prior versions of NetWare had a monolithic kernel, and significant hardware or functionality changes required re-linking the kernel from object modules. Due to stability issues with early third-party NLMs, they never became popular for server application programming, with a few exceptions such as antivirus programs, backup programs and certain database products. Functionality Upon loading, an NLM requests resources, such as memory and process threads, from the NetWare kernel. The NetWare kernel tracks such requests, and can identify memory and other resources assigned to a specific NLM. NLMs may auto-load other NLMs upon which they themselves depend. NLMs may register commands with the NetWare kernel, extending the command vocabulary available at the NetWare console prompt. When properly coded, NLMs can be re-entrant, allowing multiple instances of the same code to be loaded and run. Programming issues Initially, Novell published a development toolkit for NLM programming including kernel API documentation and a C compiler (Watcom), but third-party support for the NLM executable format was very limited. In early NetWare versions (prior to v4.x), all processes were executed in the kernel address space, without specific memory protection. It was therefore possible for bugs in NLMs to overwrite the kernel's or other NLMs' address space and ultimately crash the server — in the mainframe-derived Novell terminology, this was known as an ABEND or ABnormal END. Moreover, NetWare used a non-preemptive, or cooperative, multitasking model, meaning that an NLM was required to yield to the kernel regularly. An NLM executing a
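A minimal sketch of the cooperative-multitasking constraint described above, using Python generators as a stand-in; the task and scheduler names are illustrative, and nothing here is NetWare's actual kernel API:

```python
# Cooperative multitasking in miniature: each "NLM" must yield on its
# own, since nothing can preempt it. A task that never yields would hang
# the whole loop, which is exactly how a misbehaving NLM hung a server.

def nlm_task(name, work_items):
    for i in range(work_items):
        # ... perform one bounded unit of work here ...
        yield f"{name} finished unit {i}"  # stands in for a kernel yield call

def scheduler(tasks):
    # Round-robin until every task is done; no preemption anywhere.
    while tasks:
        for task in list(tasks):
            try:
                print(next(task))
            except StopIteration:
                tasks.remove(task)

scheduler([nlm_task("BACKUP.NLM", 2), nlm_task("MONITOR.NLM", 3)])
```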
https://en.wikipedia.org/wiki/Diphyodont
A diphyodont is any animal with two sets of teeth, initially the deciduous set and subsequently the permanent set. Most mammals are diphyodonts; to chew their food they need a strong, durable and complete set of teeth. Diphyodonts contrast with polyphyodonts, whose teeth are constantly replaced. Diphyodonts also differ from monophyodonts, which are animals that have only one set of teeth that does not change over a long period of growth. In diphyodonts, the number of teeth that are replaced varies from species to species. In humans, a set of twenty deciduous teeth, or "milk teeth", is replaced by a completely new set of thirty-two adult teeth. In some cases hypodontia or hyperdontia occurs, the latter in cleidocranial dysostosis and Gardner's syndrome. In the hare, the anterior incisors are not replaced but the posterior smaller incisors are replaced. Not much is known about the developmental mechanisms regulating diphyodont replacement. The house shrew, Suncus murinus, and the Chinese miniature pig are currently being used to study the diphyodont replacement of the deciduous dentition by permanent teeth. Manatees, elephants and kangaroos differ from most other mammals because they are polyphyodonts. See also Heterodont Polyphyodont Schultz's rule Thecodont dentition Monophyodont References Zoology Dentition types
https://en.wikipedia.org/wiki/Programmable%20interval%20timer
In computing and in embedded systems, a programmable interval timer (PIT) is a counter that generates an output signal when it reaches a programmed count. The output signal may trigger an interrupt. Common features PITs may be one-shot or periodic. One-shot timers will signal only once and then stop counting. Periodic timers signal every time they reach a specific value and then restart, thus producing a signal at periodic intervals. Periodic timers are typically used to invoke activities that must be performed at regular intervals. Counters are usually programmed with fixed intervals that determine how long the counter will count before it outputs a signal. IBM PC compatible The Intel 8253 PIT was the original timing device used on IBM PC compatibles. It used a 1.193182 MHz clock signal (one third of the NTSC color burst frequency, one twelfth of the system clock crystal oscillator, and therefore one quarter of the 4.77 MHz CPU clock) and contained three timers. Timer 0 is used by Microsoft Windows (uniprocessor) and Linux as a system timer, timer 1 was historically used for dynamic random access memory refreshes and timer 2 for the PC speaker. The LAPIC in newer Intel systems offers a higher-resolution (one microsecond) timer. This is used in preference to the PIT timer in Linux kernels starting with 2.6.18. See also High Precision Event Timer Monostable multivibrator NE555 References External links http://www.luxford.com/high-performance-windows-timers https://stackoverflow.com/questions/10567214/what-are-linux-local-timer-interrupts Timing on the PC family under DOS IBM PC compatibles Digital electronics
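As a worked example of the clock arithmetic above, this sketch computes the 16-bit reload value a driver would program into an 8253/8254 channel for a desired tick rate; the function name is my own:

```python
# The 8253/8254 divides its fixed 1.193182 MHz input clock by a 16-bit
# reload value, so output_hz = 1193182 / reload.

PIT_INPUT_HZ = 1_193_182  # one third of the NTSC color burst frequency

def pit_reload_value(desired_hz: float) -> int:
    divisor = round(PIT_INPUT_HZ / desired_hz)
    if not 1 <= divisor <= 65536:  # programming 0 means 65536 on real hardware
        raise ValueError("frequency out of range for a 16-bit counter")
    return divisor

# A 100 Hz system tick needs a divisor of 11932, giving an actual rate
# of 1193182 / 11932 ≈ 99.998 Hz.
print(pit_reload_value(100))   # 11932
print(PIT_INPUT_HZ / 11932)    # ~99.998
```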
https://en.wikipedia.org/wiki/Polarimeter
A polarimeter is a scientific instrument used to measure the angle of rotation caused by passing polarized light through an optically active substance. Some chemical substances are optically active, and polarized (uni-directional) light will rotate either to the left (counter-clockwise) or right (clockwise) when passed through these substances. The amount by which the light is rotated is known as the angle of rotation. The direction (clockwise or counterclockwise) and magnitude of the rotation reveal information about the sample's chiral properties, such as the relative concentration of enantiomers present in the sample. History Polarization by reflection was discovered in 1808 by Étienne-Louis Malus (1775–1812). Measuring principle The ratio, the purity, and the concentration of two enantiomers can be measured via polarimetry. Enantiomers are characterized by their property of rotating the plane of linearly polarized light. Therefore, those compounds are called optically active and their property is referred to as optical rotation. Light sources such as a light bulb, a tungsten-halogen lamp, or the sun emit electromagnetic waves at the frequency of visible light. Their electric field oscillates in all possible planes relative to their direction of propagation. In contrast, the waves of linearly polarized light oscillate in parallel planes. If light encounters a polarizer, only the part of the light that oscillates in the defined plane of the polarizer may pass through. That plane is called the plane of polarization. The plane of polarization is turned by optically active compounds. Depending on the direction in which the light is rotated, the enantiomer is referred to as dextro-rotatory or levo-rotatory. The optical activity of enantiomers is additive. If different enantiomers exist together in one solution, their optical activity adds up. That is why racemates are optically inactive, as they nullify their clockwise and counterclockwise optical activities. The o
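The proportionality behind these measurements is Biot's law, observed rotation = specific rotation × path length (dm) × concentration (g/mL). A small sketch with illustrative numbers; the sucrose value is a commonly quoted specific rotation, and the function name is my own:

```python
# Biot's law: alpha = [alpha] * l * c

def observed_rotation(specific_rotation, length_dm, conc_g_per_ml):
    return specific_rotation * length_dm * conc_g_per_ml

# Sucrose has a specific rotation of about +66.5 degrees (sodium D line,
# 20 C); a 1 dm tube of 0.10 g/mL solution should read about +6.65 degrees.
print(observed_rotation(66.5, 1.0, 0.10))  # ~6.65

# Optical activity is additive, so a racemate (equal concentrations of
# the +66.5 and -66.5 enantiomers) reads zero:
print(observed_rotation(66.5, 1.0, 0.05) + observed_rotation(-66.5, 1.0, 0.05))
```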
https://en.wikipedia.org/wiki/Fast%20folding%20algorithm
In signal processing, the fast folding algorithm (Staelin, 1969) is an efficient algorithm for the detection of approximately periodic events within time series data. It computes superpositions of the signal modulo various window sizes simultaneously. The FFA is best known for its use in the detection of pulsars, as popularised by SETI@home and Astropulse. It was also used by the Breakthrough Listen Initiative during their 2023 Investigation for Periodic Spectral Signals campaign. See also Pulsar References External links The search for unknown pulsars Signal processing
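A minimal sketch of the folding operation itself, not of Staelin's fast recursive scheme: the FFA's contribution is computing such folds for many trial periods at once while reusing partial sums, but a single brute-force fold shows what "superposing the signal modulo a window size" means:

```python
import numpy as np

# Epoch folding: cut the series into segments one trial period long and
# sum them column-wise, so a pulse repeating at that period piles up in
# one phase bin while noise averages out.

def fold(series: np.ndarray, period_bins: int) -> np.ndarray:
    n = (len(series) // period_bins) * period_bins
    return series[:n].reshape(-1, period_bins).sum(axis=0)

rng = np.random.default_rng(0)
noise = rng.normal(size=10_000)
noise[::50] += 4.0                 # inject a pulse every 50 samples
profile = fold(noise, 50)          # fold at the true period
print(profile.argmax(), profile.max())  # the pulse phase stands out clearly
```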
https://en.wikipedia.org/wiki/Murus%20Dacicus
Murus Dacicus (Latin for Dacian Wall) is a construction method for defensive walls and fortifications developed in ancient Dacia sometime before the Roman conquest. It is a mix between traditional construction methods particular to Dacian builders and methods imported from Greek and Roman architecture and masonry, and – although somewhat similar construction techniques were used before, during and long after the period – it has peculiarities that make it unique. Design Murus Dacicus consisted of two outer walls made of stone blocks carved in the shape of a rectangular parallelepiped; apparently no mortar was used, thus making them examples of ashlar masonry – but typically done with regular-sized, bigger-than-average blocks, due to technological requirements. After each layer of the outer walls was completed, the gap between them would be filled with gravel and rocks cemented together with clay and compacted (cf. also the rammed earth technique). The structure was strengthened and consolidated at the level of each layer by horizontal, singed/scorched wood tie beams connected to the outer walls by means of a dovetail joint at the upper surface of the stone block (hence the need for big stone blocks of the same size). Due to its higher flexibility, this structure had a distinct advantage over the 'classical', solid dry stone wall (as seen, e.g., in the cyclopean and ashlar walls in Mycenae): a higher capability of shock absorption and dissipation of kinetic energy from an incoming projectile thrown by a siege weapon. However, archaeological and historical evidence suggests that the wall might have been topped by a wooden palisade instead of stone battlements, which had the obvious disadvantage of being vulnerable to fire. A properly built Dacian Wall would be both labor-intensive and time-consuming. A typical wall for the late period, hastily built in the years between the two Dacian Wars (when Dacia had to rebuild, repair, enlarge or reinforce the defenses of many
https://en.wikipedia.org/wiki/Alexander%20Bogomolny
Alexander Bogomolny (January 4, 1948 – July 7, 2018) was a Soviet-born Israeli-American mathematician. He was Professor Emeritus of Mathematics at the University of Iowa, and formerly research fellow at the Moscow Institute of Electronics and Mathematics, senior instructor at Hebrew University and software consultant at Ben-Gurion University. He wrote extensively about arithmetic, probability, algebra, geometry, trigonometry and mathematical games. He was known for his contribution to heuristics and mathematics education, creating and maintaining the mathematically themed educational website Cut-the-Knot for the Mathematical Association of America (MAA) Online. He was a pioneer in mathematical education on the internet, having started Cut-the-Knot in October 1996. Education and academic career Bogomolny attended Moscow school No. 444, for gifted children, then entered Moscow State University, where he graduated with a master's degree in mathematics in 1971. From 1971 to 1974 he was a junior research fellow at the Moscow Institute of Electronic Machine Building (MIEM). He emigrated to Israel and became a senior programmer at the Lake Kinneret Research Laboratory in Tiberias, Israel (1974–1977) and a software consultant at Ben-Gurion University of the Negev in Be’er Sheva, Israel (1976–1977). From 1976 to 1983 he was a senior instructor and researcher at Hebrew University in Jerusalem. He received his Ph.D. in mathematics at Hebrew University in 1981. His dissertation is titled A New Numerical Solution for the Stamp Problem, and his thesis advisor was Gregory I. Eskin. From 1981 to 1982 he was also a visiting professor at Ohio State University, where he taught mathematics. From 1982 to 1987 he was professor of mathematics at the University of Iowa. From August 1987 to August 1991 he was vice president of software development at CompuDoc, Inc. Cut-the-Knot Cut-the-Knot (CTK) is a free, advertisement-funded educational website which Bogomolny maintained from 1996 to 2018. It
https://en.wikipedia.org/wiki/Landspout
Landspout is a term created by atmospheric scientist Howard B. Bluestein in 1985 for a tornado not associated with a mesocyclone. The Glossary of Meteorology defines a landspout as a "colloquial expression describing tornadoes occurring with a parent cloud in its growth stage and with its vorticity originating in the boundary layer. The parent cloud does not contain a preexisting mid-level mesocyclone." The landspout was so named because it looks like "a weak Florida Keys waterspout over land." Landspouts are typically weaker than mesocyclone-associated tornadoes spawned within supercell thunderstorms, in which the strongest tornadoes form. Characteristics Landspouts are a type of tornado that forms during the growth stage of a cumulus congestus or occasionally a cumulonimbus cloud when an updraft stretches boundary layer vorticity upward into a vertical axis and tightens it into a strong vortex. These generally are smaller and weaker than supercell tornadoes and do not form from a mesocyclone or pre-existing rotation in the cloud. Because of their shallower depth, smaller size, and weaker intensity, landspouts are rarely detected by Doppler weather radar (NWS). Landspouts share a strong resemblance to waterspouts and a similar development process, usually taking the form of a translucent and highly laminar helical tube. "They are typically narrow, rope-like condensation funnels that form while the thunderstorm cloud is still growing and there is no rotating updraft", according to the National Weather Service. Landspouts are considered tornadoes since a rapidly rotating column of air is in contact with both the surface and a cumuliform cloud. Not all landspouts are visible, and many are first sighted as debris swirling at the surface before eventually filling in with condensation and dust. Orography can influence landspout (and even mesocyclone tornado) formation. A notable example is the propensity for landspout occurrence in the Denver Convergence Vorticity Zone (DC
https://en.wikipedia.org/wiki/Abel%27s%20identity
In mathematics, Abel's identity (also called Abel's formula or Abel's differential equation identity) is an equation that expresses the Wronskian of two solutions of a homogeneous second-order linear ordinary differential equation in terms of a coefficient of the original differential equation. The relation can be generalised to nth-order linear ordinary differential equations. The identity is named after the Norwegian mathematician Niels Henrik Abel. Since Abel's identity relates the different linearly independent solutions of the differential equation, it can be used to find one solution from the other. It provides useful identities relating the solutions, and is also useful as a part of other techniques such as the method of variation of parameters. It is especially useful for equations such as Bessel's equation where the solutions do not have a simple analytical form, because in such cases the Wronskian is difficult to compute directly. A generalisation to first-order systems of homogeneous linear differential equations is given by Liouville's formula. Statement Consider a homogeneous linear second-order ordinary differential equation y″ + p(x)y′ + q(x)y = 0 on an interval I of the real line with real- or complex-valued continuous functions p and q. Abel's identity states that the Wronskian W of two real- or complex-valued solutions y1 and y2 of this differential equation, that is the function defined by the determinant W(y1, y2)(x) = y1(x)y2′(x) − y2(x)y1′(x), satisfies the relation W(x) = W(x0) exp(−∫_{x0}^{x} p(t) dt) for each point x in I, where x0 is any fixed point of I. Remarks In particular, when the differential equation is real-valued, the Wronskian W is always either identically zero, always positive, or always negative at every point x in I (see proof below). The latter cases imply the two solutions y1 and y2 are linearly independent (see Wronskian for a proof). It is not necessary to assume that the second derivatives of the solutions y1 and y2 are continuous. Abel's theorem is particularly useful if p ≡ 0, because it implies that W is constant. Proof Differentiating the Wronskian using th
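Since the proof above is cut off, here is the standard derivation in LaTeX, a sketch using only the reconstructed statement and the differential equation itself:

```latex
% Differentiating W = y_1 y_2' - y_2 y_1' (the y_1' y_2' cross terms
% cancel) and substituting y_i'' = -p y_i' - q y_i gives
\[
W' = y_1 y_2'' - y_2 y_1''
   = y_1(-p\,y_2' - q\,y_2) - y_2(-p\,y_1' - q\,y_1)
   = -p\,(y_1 y_2' - y_2 y_1') = -p\,W .
\]
% Solving the first-order linear equation W' = -pW yields Abel's identity:
\[
W(x) = W(x_0)\exp\!\left(-\int_{x_0}^{x} p(t)\,dt\right).
\]
% In particular W is identically zero or nowhere zero, and it is
% constant when p \equiv 0.
```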
https://en.wikipedia.org/wiki/Vehicular%20communication%20systems
Vehicular communication systems are computer networks in which vehicles and roadside units are the communicating nodes, providing each other with information, such as safety warnings and traffic information. They can be effective in avoiding accidents and traffic congestion. Both types of nodes are dedicated short-range communications (DSRC) devices. DSRC works in the 5.9 GHz band with a bandwidth of 75 MHz and an approximate range of . Vehicular communications is usually developed as a part of intelligent transportation systems (ITS). History The beginnings of vehicular communications go back to the 1970s. Work began on projects such as the Electronic Route Guidance System (ERGS) and CACS in the United States and Japan, respectively. The term Inter-Vehicle Communications (IVC) began to circulate in the early 1980s. Various media were used before the standardization activities began, such as lasers, infrared, and radio waves. The PATH project in the United States between 1986 and 1997 was an important breakthrough in vehicular communications projects. Projects related to vehicular communications in Europe were launched with the PROMETHEUS project between 1986 and 1995. Numerous subsequent projects have been implemented all over the world, such as the Advanced Safety Vehicle (ASV) program, CHAUFFEUR I and II, FleetNet, CarTALK 2000, etc. In the early 2000s, the term Vehicular Ad Hoc Network (VANET) was introduced as an application of the principles of Mobile Ad Hoc Networks (MANETs) to the vehicular field. The terms VANET and IVC do not differ and are used interchangeably to refer to communications between vehicles with or without reliance on roadside infrastructure, although some have argued that IVC refers to direct V2V connections only. Many projects have appeared in the EU, Japan, the USA and other parts of the world, for example ETC, SAFESPOT, PReVENT, COMeSafety, NoW, IVI. Several terms have been used to refer to vehicular communications. These acronyms differ from each ot
https://en.wikipedia.org/wiki/Omega-regular%20language
The ω-regular languages are a class of ω-languages that generalize the definition of regular languages to infinite words. Formal definition An ω-language L is ω-regular if it has one of the following forms: Aω, where A is a regular language not containing the empty string; AB, the concatenation of a regular language A and an ω-regular language B (note that BA is not well-defined); or A ∪ B, where A and B are ω-regular languages (this rule can only be applied finitely many times). The elements of Aω are obtained by concatenating words from A infinitely many times. Note that if A is regular, Aω is not necessarily ω-regular, since A could be for example {ε}, the set containing only the empty string, in which case Aω = A, which is not an ω-language and therefore not an ω-regular language. It is a straightforward consequence of the definition that the ω-regular languages are precisely the ω-languages of the form A1B1ω ∪ ... ∪ AnBnω for some n, where the Ai and Bi are regular languages and the Bi do not contain the empty string. Equivalence to Büchi automaton Theorem: An ω-language is recognized by a Büchi automaton if and only if it is an ω-regular language. Proof: Every ω-regular language is recognized by a nondeterministic Büchi automaton; the translation is constructive. Using the closure properties of Büchi automata and structural induction over the definition of ω-regular language, it can be easily shown that a Büchi automaton can be constructed for any given ω-regular language. Conversely, for a given Büchi automaton A = (Q, Σ, δ, I, F), we construct an ω-regular language and then we will show that this language is recognized by A. For an ω-word w = a1a2... let w(i,j) be the finite segment ai+1...aj-1aj of w. For every pair of states q, q′ in Q, we define a regular language Lq,q′ that is accepted by the finite automaton (Q, Σ, δ, q, {q′}), i.e., the set of finite words that can take A from state q to state q′. Lemma: We claim that the Büchi automaton A recognizes the language ⋃q∈I, q′∈F Lq,q′(Lq′,q′)ω. Proof: Let's suppose w ∈ L(A) and q0,q1,q2,... is an accepting run of A on w. Therefore, q0 is in I and there must be a state in F such
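Two small worked instances of the definition over the alphabet {a, b}, written in LaTeX:

```latex
% Infinite words containing only finitely many a's: a regular prefix
% followed by an infinite tail of b's.
\[ L_1 = (a \cup b)^{*}\, b^{\omega} \]
% Infinite words containing infinitely many a's: each block b^{*}a is a
% nonempty regular word, repeated forever.
\[ L_2 = (b^{*} a)^{\omega} \]
% Both fit the canonical form A_1 B_1^{\omega} \cup \dots \cup A_n B_n^{\omega}
% with each B_i free of the empty string (here B = \{b\} and B = b^{*}a,
% respectively).
```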
https://en.wikipedia.org/wiki/Metallome
In biochemistry, the metallome is the distribution of metal ions in a cellular compartment. The term was coined in analogy with proteome, and metallomics is the study of the metallome: the "comprehensive analysis of the entirety of metal and metalloid species within a cell or tissue type". Therefore, metallomics can be considered a branch of metabolomics, even though metals are not typically considered metabolites. An alternative definition treats "metallomes" as metalloproteins or any other metal-containing biomolecules, and "metallomics" as the study of such biomolecules. Metallointeractome In the study of metallomes, the transcriptome, the proteome and the metabolome together constitute the whole metallome. A study of the metallome is done to arrive at the metallointeractome. Metallotranscriptome The metallotranscriptome can be defined as the map of the entire transcriptome in the presence of biologically or environmentally relevant concentrations of an essential or toxic metal, respectively. The metallometabolome constitutes the complete pool of small metabolites in a cell at any given time. This gives rise to the whole metallointeractome, and knowledge of this is important in comparative metallomics dealing with toxicity and drug discovery. See also Bioinorganic chemistry -omics References Systems biology Metabolism Bioinformatics Biochemistry methods
https://en.wikipedia.org/wiki/Resistant%20starch
Resistant starch (RS) is starch, including its degradation products, that escapes from digestion in the small intestine of healthy individuals. Resistant starch occurs naturally in foods, but it can also be added as part of dried raw foods, or used as an additive in manufactured foods. Some types of resistant starch (RS1, RS2 and RS3) are fermented by the large intestinal microbiota, conferring benefits to human health through the production of short-chain fatty acids, increased bacterial mass, and promotion of butyrate-producing bacteria. Resistant starch has physiological effects similar to those of dietary fiber, behaving as a mild laxative and possibly causing flatulence. Origin and history The concept of resistant starch arose from research in the 1970s, and resistant starch is currently considered to be one of three starch types: rapidly digested starch, slowly digested starch and resistant starch, each of which may affect levels of blood glucose. European Commission-supported research eventually led to a definition of resistant starch. Health effects Resistant starch does not release glucose within the small intestine, but rather reaches the large intestine where it is consumed or fermented by colonic bacteria (gut microbiota). On a daily basis, human intestinal microbiota encounter more carbohydrates than any other dietary component. This includes resistant starch, non-starch polysaccharide fibers, oligosaccharides, and simple sugars which have significance in colon health. The fermentation of resistant starch produces short-chain fatty acids, including acetate, propionate, and butyrate, and increased bacterial cell mass. The short-chain fatty acids are produced in the large intestine where they are rapidly absorbed from the colon, then metabolized in colonic epithelial cells, liver or other tissues. The fermentation of resistant starch produces more butyrate than other types of dietary fibers. Studies have shown that resistant starch supplementation was well tolerated
https://en.wikipedia.org/wiki/CPK%20coloring
In chemistry, the CPK coloring (for Corey–Pauling–Koltun) is a popular color convention for distinguishing atoms of different chemical elements in molecular models. History August Wilhelm von Hofmann was apparently the first to introduce molecular models into organic chemistry, following August Kekulé's introduction of the theory of chemical structure in 1858, and Alexander Crum Brown's introduction of printed structural formulas in 1861. At a Friday Evening Discourse at London's Royal Institution on April 7, 1865, he displayed molecular models of simple organic substances such as methane, ethane, and methyl chloride, which he had had constructed from differently colored table croquet balls connected together with thin brass tubes. Hofmann's original colour scheme (carbon = black, hydrogen = white, nitrogen = blue, oxygen = red, chlorine = green, and sulphur = yellow) has evolved into the later color schemes. In 1952, Corey and Pauling published a description of space-filling models of proteins and other biomolecules that they had been building at Caltech. Their models represented atoms by faceted hardwood balls, painted in different bright colors to indicate the respective chemical elements. Their color schema included white for hydrogen, black for carbon, sky blue for nitrogen, and red for oxygen. They also built smaller models using plastic balls with the same color schema. In 1965 Koltun patented an improved version of the Corey and Pauling modeling technique. In his patent he mentions the following colors: white for hydrogen; black for carbon; blue for nitrogen; red for oxygen; deep yellow for sulfur; purple for phosphorus; light, medium, medium dark, and dark green for the halogens (F, Cl, Br, I); and silver for metals (Co, Fe, Ni, Cu). Typical assignments Typical CPK color assignments include: Several of the CPK colors refer mnemonically to colors of the pure elements or notable compounds. For example, hydrogen is a colorless gas, carbon as charcoal, graphit
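The Koltun assignments quoted above lend themselves to a lookup table; a sketch, where the helper and the fallback color are my own choices rather than part of the convention:

```python
# CPK colors for the elements named in Koltun's 1965 patent.
CPK_COLORS = {
    "H": "white",
    "C": "black",
    "N": "blue",
    "O": "red",
    "S": "deep yellow",
    "P": "purple",
    "F": "light green", "Cl": "medium green",
    "Br": "medium dark green", "I": "dark green",
    "Co": "silver", "Fe": "silver", "Ni": "silver", "Cu": "silver",
}

def cpk_color(element_symbol: str, default: str = "pink") -> str:
    # Many molecular viewers fall back to some default for unlisted
    # elements; the choice of "pink" here is an assumption.
    return CPK_COLORS.get(element_symbol, default)

print(cpk_color("O"))   # red
print(cpk_color("Xe"))  # pink (fallback)
```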
https://en.wikipedia.org/wiki/Diameter%20Credit-Control%20Application
Diameter Credit-Control Application is a networking protocol for Diameter application used to implement real-time credit-control for a variety of end user services. It is an IETF standard first defined in RFC 4006, and updated in RFC 8506. Purpose The purpose of the diameter credit control application is to provide a framework for real-time charging, primarily meant for the communication between gateways/control-points and the back-end account/balance systems (typically an Online Charging System). The application specifies methods for: quota management (reserve, reauthorize, abandon); simple debit/credit; balance checks; and price inquiries. The diameter credit control application does not specify which type of units are bought/used and which items are charged. This is left to the service context that has to be specified separately, as is some of the semantics. Examples of units used/bought: time, upload/download bytes, SMS (text messages). Examples of items charged: money, points, units (e.g. if the balance is kept in the same units as what is being used). Diameter credit control also specifies how to handle the fairly complex issue of multiple unit types used/charged against a single user balance. For instance, a user may pay for both online time and download bytes but has only a single account balance. Session-based charging A session-based credit control process uses several interrogations, which may include first, intermediate and last interrogation. During each interrogation money is reserved from the user account. Session-based charging is typically used for scenarios where the charged units are continuously consumed, e.g. charging for bytes uploaded/downloaded. Event-based charging An event-based credit control process uses events as the charging mechanism. Event-based charging is typically used when units are not continuously consumed, e.g. a user sending an MMS. Command Codes In order to support Credit Control via Diameter, there are two Diameter messages, the CCR
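A sketch of the session-based flow described above. The CC-Request-Type values come from RFC 4006; the client function and the stand-in charging server are illustrative, not a real Diameter stack:

```python
from enum import IntEnum

class CCRequestType(IntEnum):
    INITIAL_REQUEST = 1      # first interrogation: reserve initial quota
    UPDATE_REQUEST = 2       # intermediate: report usage, reserve more
    TERMINATION_REQUEST = 3  # last: report remaining usage, release funds
    EVENT_REQUEST = 4        # one-shot event-based charging (e.g. an MMS)

def session_based_charging(send_ccr, total_units, quota=100):
    # Reserve quota, top it up with intermediate interrogations as it is
    # consumed, then report final usage in the last interrogation.
    granted = send_ccr(CCRequestType.INITIAL_REQUEST, requested=quota)
    used = 0
    while used + granted < total_units:
        used += granted  # the granted quota was fully consumed
        granted = send_ccr(CCRequestType.UPDATE_REQUEST, requested=quota)
    send_ccr(CCRequestType.TERMINATION_REQUEST, used=total_units - used)

def fake_ocs(request_type, **avps):
    # Stand-in for an Online Charging System that grants whatever is asked.
    return avps.get("requested", 0)

session_based_charging(fake_ocs, total_units=250)
```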
https://en.wikipedia.org/wiki/AES11
The AES11 standard published by the Audio Engineering Society provides a systematic approach to the synchronization of digital audio signals. AES11 recommends using an AES3 signal to distribute audio clocks within a facility. In this application, the connection is referred to as a Digital Audio Reference Signal (DARS). Further recommendations are made concerning the accuracy of sample clocks as embodied in the interface signal and the use of this format as a convenient synchronization reference where signals must be rendered co-timed for digital processing. Synchronism is defined, and limits are given which take account of relevant timing uncertainties encountered in an audio studio. Related developments AES11 Annex D (in the November 2005 or later printing or version) shows an example method to provide isochronous timing relationships for distributed AES3 structures over asynchronous networks such as AES47 where reference signals may be locked to common timing sources such as GPS. In addition, the Audio Engineering Society has now published a related standard called AES53, that specifies how the timing markers already specified in AES47 may be used to associate an absolute time-stamp with individual audio samples. This may be closely associated with AES11 and used to provide a way of aligning streams from disparate sources, including synchronizing audio to video in networked structures. The media profile defined in annex A of AES67 provides a means of using AES11 synchronization via the Precision Time Protocol. References Audio engineering Sound Broadcast engineering Audio Engineering Society standards
https://en.wikipedia.org/wiki/List%20of%20Nvidia%20graphics%20processing%20units
This list contains general information about graphics processing units (GPUs) and video cards from Nvidia, based on official specifications. In addition, some Nvidia motherboards come with integrated onboard GPUs. Limited/Special/Collectors' Editions or AIB versions are not included. Field explanations The fields in the table listed below describe the following: Model – The marketing name for the processor, assigned by Nvidia. Launch – Date of release for the processor. Code name – The internal engineering codename for the processor (typically designated by an NVXY name and later GXY where X is the series number and Y is the schedule of the project for that generation). Fab – Fabrication process. Average feature size of components of the processor. Bus interface – Bus by which the graphics processor is attached to the system (typically an expansion slot, such as PCI, AGP, or PCI-Express). Memory – The amount of graphics memory available to the processor. SM Count – Number of streaming multiprocessors. Core clock – The factory core clock frequency; while some manufacturers adjust clocks lower and higher, this number will always be the reference clock used by Nvidia. Memory clock – The factory effective memory clock frequency (while some manufacturers adjust clocks lower and higher, this number will always be the reference clock used by Nvidia). All DDR/GDDR memories operate at half this frequency, except for GDDR5, which operates at one quarter of this frequency. Core config – The layout of the graphics pipeline, in terms of functional units. Over time the number, type, and variety of functional units in the GPU core has changed significantly; before each section in the list there is an explanation as to what functional units are present in each generation of processors. In later models, shaders are integrated into a unified shader architecture, where any one shader can perform any of the functions listed. Fillrate – Maximum theoretical fill rate in
https://en.wikipedia.org/wiki/Lattice%20constant
A lattice constant or lattice parameter is one of the physical dimensions and angles that determine the geometry of the unit cells in a crystal lattice, and is proportional to the distance between atoms in the crystal. A simple cubic crystal has only one lattice constant, the distance between atoms, but in general lattices in three dimensions have six lattice constants: the lengths a, b, and c of the three cell edges meeting at a vertex, and the angles α, β, and γ between those edges. The crystal lattice parameters a, b, and c have the dimension of length. The three numbers represent the size of the unit cell, that is, the distance from a given atom to an identical atom in the same position and orientation in a neighboring cell (except for very simple crystal structures, this will not necessarily be the distance to the nearest neighbor). Their SI unit is the meter, and they are traditionally specified in angstroms (Å); an angstrom being 0.1 nanometer (nm), or 100 picometres (pm). Typical values start at a few angstroms. The angles α, β, and γ are usually specified in degrees. Introduction A chemical substance in the solid state may form crystals in which the atoms, molecules, or ions are arranged in space according to one of a small finite number of possible crystal systems (lattice types), each with a fairly well-defined set of lattice parameters that are characteristic of the substance. These parameters typically depend on the temperature, pressure (or, more generally, the local state of mechanical stress within the crystal), electric and magnetic fields, and the isotopic composition. The lattice is usually distorted near impurities, crystal defects, and the crystal's surface. Parameter values quoted in manuals should specify those environment variables, and are usually averages affected by measurement errors. Depending on the crystal system, some or all of the lengths may be equal, and some of the angles may have fixed values. In those systems, only some of t
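A worked example connecting the cubic lattice constant a to the nearest-neighbor distance mentioned above; the geometric factors are standard for the cubic systems, and the copper value is a typical handbook figure rather than a number from the text:

```python
from math import sqrt

# Nearest-neighbor distance from the cubic lattice constant a:
# a for simple cubic, a*sqrt(2)/2 for face-centered cubic (FCC),
# a*sqrt(3)/2 for body-centered cubic (BCC).

def nearest_neighbor(a_angstrom: float, lattice: str) -> float:
    factors = {"sc": 1.0, "fcc": sqrt(2) / 2, "bcc": sqrt(3) / 2}
    return a_angstrom * factors[lattice]

print(nearest_neighbor(3.615, "fcc"))  # copper, a = 3.615 Å -> ~2.556 Å
```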
https://en.wikipedia.org/wiki/List%20of%20EC%20numbers%20%28EC%201%29
This list contains the EC numbers for the first group, EC 1, oxidoreductases, placed in numerical order as determined by the Nomenclature Committee of the International Union of Biochemistry and Molecular Biology. All official information is tabulated at the website of the committee. The database is developed and maintained by Andrew McDonald. EC 1.1 Acting on the CH-OH group of donors EC 1.1.1 With NAD+ or NADP+ as acceptor : alcohol dehydrogenase : alcohol dehydrogenase (NADP+) : homoserine dehydrogenase : (R,R)-butanediol dehydrogenase EC 1.1.1.5: acetoin dehydrogenase. Now EC 1.1.1.303, diacetyl reductase [(R)-acetoin forming] and EC 1.1.1.304, diacetyl reductase [(S)-acetoin forming] : glycerol dehydrogenase : propanediol-phosphate dehydrogenase : glycerol-3-phosphate dehydrogenase (NAD+) : D-xylulose reductase : L-xylulose reductase : D-arabinitol 4-dehydrogenase : L-arabinitol 4-dehydrogenase : L-arabinitol 2-dehydrogenase : L-iditol 2-dehydrogenase : D-iditol 2-dehydrogenase : galactitol 2-dehydrogenase : mannitol-1-phosphate 5-dehydrogenase : inositol 2-dehydrogenase : glucuronate reductase : glucuronolactone reductase : (-)-menthol dehydrogenase : (+)-neomenthol dehydrogenase : aldose reductase : UDP-glucose 6-dehydrogenase : (R)-4-hydroxyphenyllactate dehydrogenase : histidinol dehydrogenase : quinate/shikimate dehydrogenase (NAD+) : shikimate dehydrogenase (NADP+) : glyoxylate reductase : L-lactate dehydrogenase : D-lactate dehydrogenase : glycerate dehydrogenase : 3-hydroxybutyrate dehydrogenase : 3-hydroxyisobutyrate dehydrogenase : mevaldate reductase : mevaldate reductase (NADPH) : hydroxymethylglutaryl-CoA reductase (NADPH) : 3-hydroxyacyl-CoA dehydrogenase : acetoacetyl-CoA reductase : malate dehydrogenase : malate dehydrogenase (oxaloacetate-decarboxylating) : malate dehydrogenase (decarboxylating) : malate dehydrogenase (oxaloacetate-decarboxylating) (NADP+) : isocitrate dehydrogenase (NAD+) : isocitrate
https://en.wikipedia.org/wiki/British%20Mathematical%20Olympiad
The British Mathematical Olympiad (BMO) forms part of the selection process for the UK International Mathematical Olympiad team and for other international maths competitions, including the European Girls' Mathematical Olympiad, the Romanian Master of Mathematics and Sciences, and the Balkan Mathematical Olympiad. It is organised by the British Mathematical Olympiad Subtrust, which is part of the United Kingdom Mathematics Trust. There are two rounds, the BMO1 and the BMO2. BMO Round 1 The first round of the BMO is held in November each year, and since 2006 has been an open entry competition. Qualification for BMO Round 1 is through the Senior Mathematical Challenge. Students who do not qualify through the Senior Mathematical Challenge may be entered at the discretion of their school for a fee of £40. The paper lasts 3½ hours, and consists of six questions (from 2005), each worth 10 marks. The exam in the 2020–2021 cycle was adjusted to consist of two sections, the first section with 4 questions each worth 5 marks (only answers required), and the second section with 3 questions each worth 10 marks (full solutions required). The duration of the exam was reduced to 2½ hours, due to the difficulties of holding a 3½-hour exam under COVID-19. Candidates are required to write full proofs to the questions. An answer is marked on either a "0+" or a "10-" mark scheme, depending on whether the answer looks generally complete or not. An answer judged incomplete or unfinished is usually capped at 3 or 4, whereas for an answer judged as complete, marks may be deducted for minor errors or poor reasoning but it is likely to get a score of 7 or more. As a result, it is uncommon for an answer to score a middling mark between 4 and 6. While around 1000 students gain automatic qualification to sit the BMO1 paper each year, the additional discretionary and international students mean that since 2016, on average, around 1600 candidates have been entered for BMO1 each year. Although
https://en.wikipedia.org/wiki/Bacterial%20translation
Bacterial translation is the process by which messenger RNA is translated into proteins in bacteria. Initiation Initiation of translation in bacteria involves the assembly of the components of the translation system, which are: the two ribosomal subunits (50S and 30S subunits); the mature mRNA to be translated; the tRNA charged with N-formylmethionine (the first amino acid in the nascent peptide); guanosine triphosphate (GTP) as a source of energy; and the three prokaryotic initiation factors IF1, IF2, and IF3, which help the assembly of the initiation complex. Variations in the mechanism can be anticipated. The ribosome has three active sites: the A site, the P site, and the E site. The A site is the point of entry for the aminoacyl tRNA (except for the first aminoacyl tRNA, which enters at the P site). The P site is where the peptidyl tRNA is formed in the ribosome. The E site is the exit site of the now-uncharged tRNA after it gives its amino acid to the growing peptide chain. The selection of an initiation site (usually an AUG codon) depends on the interaction between the 30S subunit and the mRNA template. The 30S subunit binds to the mRNA template at a purine-rich region (the Shine-Dalgarno sequence) upstream of the AUG initiation codon. The Shine-Dalgarno sequence is complementary to a pyrimidine-rich region on the 16S rRNA component of the 30S subunit. This sequence is evolutionarily conserved and plays a major role in the microbial world. During the formation of the initiation complex, these complementary nucleotide sequences pair to form a double-stranded RNA structure that binds the mRNA to the ribosome in such a way that the initiation codon is placed at the P site. Well-known coding regions that do not have AUG initiation codons are those of lacI (GUG) and lacA (UUG) in the E. coli lac operon. Two studies have independently shown that 17 or more non-AUG start codons may initiate translation in E. coli. There are three m
https://en.wikipedia.org/wiki/Eukaryotic%20translation
Eukaryotic translation is the biological process by which messenger RNA is translated into proteins in eukaryotes. It consists of four phases: initiation, elongation, termination, and recycling. Initiation Translation initiation is the process by which the ribosome and its associated factors bind to an mRNA and are assembled at the start codon. This process is defined as either cap-dependent, in which the ribosome binds initially at the 5' cap and then travels to the start codon, or as cap-independent, where the ribosome does not initially bind the 5' cap. Cap-dependent initiation Initiation of translation usually involves the interaction of certain key proteins, the initiation factors, with a special tag bound to the 5'-end of an mRNA molecule, the 5' cap, as well as with the 5' UTR. These proteins bind the small (40S) ribosomal subunit and hold the mRNA in place. eIF3 is associated with the 40S ribosomal subunit and plays a role in keeping the large (60S) ribosomal subunit from prematurely binding. eIF3 also interacts with the eIF4F complex, which consists of three other initiation factors: eIF4A, eIF4E, and eIF4G. eIF4G is a scaffolding protein that directly associates with both eIF3 and the other two components. eIF4E is the cap-binding protein. Binding of the cap by eIF4E is often considered the rate-limiting step of cap-dependent initiation, and the concentration of eIF4E is a regulatory nexus of translational control. Certain viruses cleave a portion of eIF4G that binds eIF4E, thus preventing cap-dependent translation and hijacking the host machinery in favor of the viral (cap-independent) messages. eIF4A is an ATP-dependent RNA helicase that aids the ribosome by resolving certain secondary structures formed along the mRNA transcript. The poly(A)-binding protein (PABP) also associates with the eIF4F complex via eIF4G, and binds the poly-A tail of most eukaryotic mRNA molecules. This protein has been implicated in playing a role in circularization of the mRNA
https://en.wikipedia.org/wiki/AV%20input
AV input stands for Audio/Visual input, which is a common label on a connector to receive (AV) audio/visual signals from electronic equipment that generates AV signals (AV output). These terminals are commonly found on such equipment as a television, DVD recorder or VHS recorder, and typically take input from a DVD player, a TV tuner, VHS recorder or camcorder. Types of plugs used for video input Composite video: RCA connector, BNC connector, UHF connector, 1/8-inch minijack phone connector. S-Video: DIN plug (also used for Apple Desktop Bus). Component video: RCA connector, RGBHV. Digital video: HDMI (High-Definition Multimedia Interface), DVI (Digital Visual Interface), IEEE 1394 (FireWire), S/PDIF (Sony/Philips Digital Interface). References Audiovisual connectors
https://en.wikipedia.org/wiki/Directional%20symmetry%20%28time%20series%29
In statistical analysis of time series and in signal processing, directional symmetry is a statistical measure of a model's performance in predicting the direction of change, positive or negative, of a time series from one time period to the next. Definition Given a time series with values t1, ..., tn and a model that makes predictions p1, ..., pn for those values, the directional symmetry (DS) statistic is defined as DS = (100/(n − 1)) Σ_{i=2}^{n} d_i, where d_i = 1 if (t_i − t_{i−1})(p_i − p_{i−1}) > 0 and d_i = 0 otherwise. Interpretation The DS statistic gives the percentage of occurrences in which the sign of the change in value from one time period to the next is the same for both the actual and predicted time series. The DS statistic is a measure of the performance of a model in predicting the direction of value changes. The case DS = 100% would indicate that a model perfectly predicts the direction of change of a time series from one time period to the next. See also Statistical finance Notes and references Symmetry Signal processing
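A direct implementation of the statistic as reconstructed above; the function name is my own:

```python
# Percentage of steps where the actual and predicted series move in the
# same direction (same sign of change), per the definition above.

def directional_symmetry(actual, predicted):
    n = len(actual)
    hits = sum(
        1
        for i in range(1, n)
        if (actual[i] - actual[i - 1]) * (predicted[i] - predicted[i - 1]) > 0
    )
    return 100.0 * hits / (n - 1)

actual    = [1.0, 1.5, 1.2, 1.8, 1.7]
predicted = [1.1, 1.4, 1.5, 1.9, 1.5]
print(directional_symmetry(actual, predicted))  # 75.0: 3 of 4 moves match
```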
https://en.wikipedia.org/wiki/Noise-equivalent%20flux%20density
In optics the noise-equivalent flux density (NEFD) or noise-equivalent irradiance (NEI) of a system is the level of flux density required to be equivalent to the noise present in the system. It is a measure used by astronomers in determining the accuracy of observations. The NEFD can be related to a light detector's noise-equivalent power (NEP) for a collection area A and a photon bandwidth Δν by NEFD = k·NEP/(A·Δν), where the factor k (often 2, in the case of switching between measuring a source and measuring off-source) accounts for the photon statistics for the mode of operation. References Physical quantities Vector calculus
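A small sketch of the reconstructed relation with purely illustrative numbers; the detector, area, and bandwidth values are assumptions, not figures from the text:

```python
# NEFD = k * NEP / (A * d_nu), per the relation above.

def nefd(nep_watts, area_m2, bandwidth_hz, k=2.0):
    # k = 2 corresponds to switching between on-source and off-source.
    return k * nep_watts / (area_m2 * bandwidth_hz)

print(nefd(nep_watts=1e-16, area_m2=10.0, bandwidth_hz=50e9))
# ~4e-28: the flux density (W m^-2 Hz^-1) equivalent to the detector noise
```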
https://en.wikipedia.org/wiki/PS/2%20port
The PS/2 port is a 6-pin mini-DIN connector used for connecting keyboards and mice to a PC compatible computer system. Its name comes from the IBM Personal System/2 series of personal computers, with which it was introduced in 1987. The PS/2 mouse connector generally replaced the older DE-9 RS-232 "serial mouse" connector, while the PS/2 keyboard connector replaced the larger 5-pin/180° DIN connector used in the IBM PC/AT design. The PS/2 keyboard port is electrically and logically identical to the IBM AT keyboard port, differing only in the type of electrical connector used. The PS/2 platform introduced a second port with the same design as the keyboard port for use to connect a mouse; thus the PS/2-style keyboard and mouse interfaces are electrically similar and employ the same communication protocol. However, unlike the otherwise similar Apple Desktop Bus connector used by Apple, a given system's keyboard and mouse port may not be interchangeable since the two devices use different sets of commands and the device drivers generally are hard-coded to communicate with each device at the address of the port that is conventionally assigned to that device. (That is, keyboard drivers are written to use the first port, and mouse drivers are written to use the second port.) Communication protocol Each port implements a bidirectional synchronous serial channel. The channel is slightly asymmetrical: it favors transmission from the input device to the computer, which is the majority case. The bidirectional IBM AT and PS/2 keyboard interface is a development of the unidirectional IBM PC keyboard interface, using the same signal lines but adding capability to send data back to the keyboard from the computer; this explains the asymmetry. The interface has two main signal lines, Data and Clock. These are single-ended signals driven by open-collector drivers at each end. Normally, the transmission is from the device to the host. To transmit a byte, the device simply outputs a
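A sketch of the bit-level framing this protocol uses for each byte: a start bit (0), eight data bits least-significant-bit first, an odd parity bit, and a stop bit (1), with the device driving the Clock line for every bit. The helper below only builds the bit sequence; 0xF4 is shown because it is a well-known host-to-mouse command byte ("enable data reporting"):

```python
# Build the 11-bit PS/2 frame for one byte. On the wire the byte appears
# on the Data line one bit per clock pulse; this sketch only constructs
# the bit sequence itself.

def ps2_frame(byte: int) -> list:
    data = [(byte >> i) & 1 for i in range(8)]  # LSB first
    parity = 1 ^ (sum(data) & 1)                # makes the total 1-count odd
    return [0] + data + [parity, 1]             # start, data, parity, stop

print(ps2_frame(0xF4))  # [0, 0, 0, 1, 0, 1, 1, 1, 1, 0, 1]
```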
https://en.wikipedia.org/wiki/PVCS
PVCS Version Manager (originally named Polytron Version Control System) is a software package by Serena Software Inc., for version control of source code files. PVCS follows the "locking" approach to concurrency control; it has no merge operator built in (but does, nonetheless, have a separate merge command). However, PVCS can also be configured to support several users simultaneously attempting to edit the same file; in this case the second chronological committer will have a branch created for them, so that both modifications, instead of conflicting, will appear as parallel histories for the same file. This is unlike Concurrent Versions System (CVS) and Subversion, where the second committer needs to first merge the changes via the update command and then resolve conflicts (when they exist) before actually committing. Originally developed by Don Kinzer and published by Polytron in 1985, through a history of acquisitions and mergers the product was at times owned by Sage Software of Rockville, Maryland (1989) (unrelated to Sage Software of the UK), Intersolv (1992), Micro Focus International (1998) and Merant PLC (2001). The latter was acquired by Serena Software in 2004, which was then acquired by Silver Lake Partners in 2006. Synergex ported both the PVCS Version Manager and the PVCS Configuration Builder (an extended make utility, including a variant of the command line tool make) to various Unix platforms and OpenVMS. In 2009, Serena Software clarified that it would continue to invest in PVCS and provide support to PVCS customers for the foreseeable future. The PVCS Version Manager 8.5 release (2014) introduced both new features and new platform support. In 2016, Micro Focus International announced the acquisition of Serena Software to again become the custodians of PVCS. See also List of version control software References External links Version control systems
https://en.wikipedia.org/wiki/National%20Audio-Visual%20Conservation%20Center
The National Audiovisual Conservation Center, also known as the Packard Campus for Audio-Visual Conservation, is the Library of Congress's audiovisual archive located inside Mount Pony in Culpeper, Virginia. Establishment From 1969 to 1988, the campus was a high-security storage facility operated by the Federal Reserve Board. With the approval of the United States Congress in 1997, it was purchased by the David and Lucile Packard Foundation from the Federal Reserve Bank of Richmond via a $5.5 million grant, made on behalf of the Library of Congress. With a further $150 million from the Packard Humanities Institute and $82.1 million from Congress, the facility was transformed into the National Audio-Visual Conservation Center, which completed construction in mid-2007 and, after transfer of the bulk of the archives, opened for free public movie screenings on most weekends in the fall of 2008. The campus offered, for the first time, a single site to store all 6.3 million pieces of the library's movie, television, and sound collection. Technically, the Packard Campus (PCAVC) is just the largest part of the whole National Audio-Visual Conservation Center (NAVCC), which also consists of the Library of Congress's Motion Picture and Television Division and Recorded Sound Division reference centers on Capitol Hill, the Mary Pickford Theater, and any other Library of Congress audio-visual storage facilities that remain outside the Packard Campus. The PCAVC design, named Best of 2007 by Mid-Atlantic Construction Magazine, involved upgrading the existing bunker and creating an entirely new, below-ground entry building that also includes a large screening room, office space and research facilities. Designers BAR Architects, project-architect SmithGroup and landscape designers SWA Group, along with DPR Construction, Inc., collaborated in what is now the largest green-roofed commercial facility in the eastern United States, blending into the surrounding environment and ecosystem. Fed
https://en.wikipedia.org/wiki/Transporter%20Classification%20Database
The Transporter Classification Database (or TCDB) is an International Union of Biochemistry and Molecular Biology (IUBMB)-approved classification system for membrane transport proteins, including ion channels. Classification The upper level of classification and a few examples of proteins with known 3D structure: 1. Channels and pores 1.A α-type channels 1.A.1 Voltage-gated ion channel superfamily 1.A.2 Inward-rectifier K+ channel family 1.A.3 Ryanodine-inositol-1,4,5-trisphosphate receptor Ca2+ channel family 1.A.4 Transient receptor potential Ca2+ channel family 1.A.5 Polycystin cation channel family 1.A.6 Epithelial Na+ channel family 1.A.7 ATP-gated P2X receptor cation channel family 1.A.8 Major intrinsic protein superfamily 1.A.9 Neurotransmitter receptor, Cys loop, ligand-gated ion channel family 1.A.10 Glutamate-gated ion channel family of neurotransmitter receptors 1.A.11 Ammonium channel transporter family 1.A.12 Intracellular chloride channel family 1.A.13 Epithelial chloride channel family 1.A.14 Testis-enhanced gene transfer family 1.A.15 Nonselective cation channel-2 family 1.A.16 Formate-nitrite transporter family 1.A.17 Calcium-dependent chloride channel family 1.A.18 Chloroplast envelope anion-channel-forming Tic110 family 1.A.19 Type A influenza virus matrix-2 channel family 1.A.20 BCL2/Adenovirus E1B-interacting protein 3 family 1.A.21 Bcl-2 family 1.A.22 Large-conductance mechanosensitive ion channel 1.A.23 Small-conductance mechanosensitive ion channel 1.A.24 Gap-junction-forming connexin family 1.A.25 Gap-junction-forming innexin family 1.A.26 Mg2+ transporter-E family 1.A.27 Phospholemman family 1.A.28 Urea transporter family 1.A.29 Urea/amide channel family 1.A.30 H+- or Na+-translocating bacterial MotAB flagellar motor/ExbBD outer-membrane transport energizer superfamily 1.A.31 Annexin family 1.A.32 Type B influenza virus NB channel family 1.A.33 Cation-channel-forming heat shock protein 70 family 1.A.34 B
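A sketch of how a TC accession decomposes. The full five-component form (class.subclass.family.subfamily.transporter) is standard TCDB practice, though the listing above shows only the first three levels; the parser and the five-part example accession are illustrative, not an official TCDB tool:

```python
# Split a TC accession such as "1.A.1" or "1.A.1.5.3" into its
# hierarchical components; zip() simply stops at the depth given.

def parse_tc(accession: str) -> dict:
    fields = ["class", "subclass", "family", "subfamily", "transporter"]
    return dict(zip(fields, accession.split(".")))

print(parse_tc("1.A.1"))
# {'class': '1', 'subclass': 'A', 'family': '1'}  (voltage-gated ion
# channel superfamily in the listing above)
print(parse_tc("1.A.1.5.3"))
# {'class': '1', 'subclass': 'A', 'family': '1', 'subfamily': '5',
#  'transporter': '3'}  (hypothetical full accession)
```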
https://en.wikipedia.org/wiki/Specific%20modulus
Specific modulus is a materials property consisting of the elastic modulus per mass density of a material. It is also known as the stiffness-to-weight ratio or specific stiffness. High specific modulus materials find wide application in aerospace, where minimum structural weight is required. Dimensional analysis yields units of distance squared per time squared. The equation can be written as specific modulus = E/ρ, where E is the elastic modulus and ρ is the density. The utility of specific modulus is to find materials which will produce structures with minimum weight, when the primary design limitation is deflection or physical deformation, rather than load at breaking—this is also known as a "stiffness-driven" structure. Many common structures are stiffness-driven over much of their use, such as airplane wings, bridges, masts, and bicycle frames. To emphasize the point, consider the issue of choosing a material for building an airplane. Aluminum seems obvious because it is "lighter" than steel, but steel is stronger than aluminum, so one could imagine using thinner steel components to save weight without sacrificing (tensile) strength. The problem with this idea is that there would be a significant sacrifice of stiffness, allowing, e.g., wings to flex unacceptably. Because it is stiffness, not tensile strength, that drives this kind of decision for airplanes, we say that they are stiffness-driven. The connection details of such structures may be more sensitive to strength (rather than stiffness) issues due to effects of stress risers. Specific modulus is not to be confused with specific strength, a term that compares strength to density. Applications Specific stiffness in tension The use of specific stiffness in tension applications is straightforward. Both stiffness in tension and total mass for a given length are directly proportional to cross-sectional area. Thus performance of a beam in tension will depend on Young's modulus divided by density. Speci
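A worked comparison for the airplane example above, using typical handbook values (my assumptions, not figures from the text):

```python
# Typical handbook values: steel E ~ 200 GPa, rho ~ 7850 kg/m^3;
# aluminum E ~ 69 GPa, rho ~ 2700 kg/m^3.

def specific_modulus(e_pa: float, rho_kg_m3: float) -> float:
    return e_pa / rho_kg_m3  # units of m^2/s^2, as noted above

steel = specific_modulus(200e9, 7850)    # ~2.55e7 m^2/s^2
aluminum = specific_modulus(69e9, 2700)  # ~2.56e7 m^2/s^2
print(steel, aluminum)
# The two are nearly identical, which is why "thinner steel" does not
# rescue stiffness: at equal stiffness and length, a steel member and
# an aluminum member weigh about the same in tension.
```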
https://en.wikipedia.org/wiki/List%20of%20Adobe%20software
The following is a list of software products by Adobe Inc. Active products Software suites Experience Cloud Adobe Experience Cloud (AEC) is a collection of integrated online marketing and Web analytics solutions by Adobe Inc. It includes a set of analytics, social, advertising, media optimization, targeting, Web experience management and content management solutions. It includes: Advertising Cloud, Analytics, Audience Manager, Campaign, Commerce Cloud, Experience Manager, Experience Manager Assets, Experience Manager Sites, Experience Manager Forms, Marketo Engage, Primetime and Target. Creative Suite Adobe Creative Suite (CS) was a series of software suites of graphic design, video editing, and web development applications made or acquired by Adobe Systems. It included: Acrobat, After Effects, Audition, Bridge, Contribute, Device Central, Dreamweaver, Dynamic Link, Encore, Fireworks, Flash Professional, Illustrator, InDesign, OnLocation, Photoshop and Premiere Pro. Creative Cloud Adobe Creative Cloud is the successor to Creative Suite. It is based on a software as a service model. It includes everything in Creative Suite 6 with the exclusion of Fireworks and Encore, as both applications were discontinued. It also introduced a few new programs, including Muse, Animate, InCopy and Story CC Plus. Technical Communication Suite Adobe Technical Communication Suite is a collection of applications made by Adobe Systems for technical communicators, help authors, instructional designers, and eLearning and training design professionals. It includes: Acrobat, Captivate, FrameMaker, Presenter and RoboHelp. eLearning Suite Adobe eLearning Suite was a collection of applications made by Adobe Systems for learning professionals, instructional designers, training managers, content developers, and educators. It included: Acrobat, Captivate, Device Central, Dreamweaver, Flash Professional and Photoshop. Discontinued products Acrobat Approval allows users to deploy electronic forms based on the Acrobat P
https://en.wikipedia.org/wiki/Great%20Oxidation%20Event
The Great Oxidation Event (GOE) or Great Oxygenation Event, also called the Oxygen Catastrophe, Oxygen Revolution, Oxygen Crisis or Oxygen Holocaust, was a time interval during the Early Earth's Paleoproterozoic Era when the Earth's atmosphere and the shallow ocean first experienced a rise in the concentration of oxygen. This began approximately 2.460–2.426 Ga (billion years) ago, during the Siderian period, and ended approximately 2.060 Ga, during the Rhyacian. Geological, isotopic, and chemical evidence suggests that biologically-produced molecular oxygen (dioxygen or O2) started to accumulate in Earth's atmosphere and changed it from a weakly reducing atmosphere practically devoid of oxygen into an oxidizing one containing abundant free oxygen, with oxygen levels being as high as 10% of their present atmospheric level by the end of the GOE. The sudden injection of highly reactive free oxygen, toxic to the then-mostly anaerobic biosphere, may have caused the extinction of many existing organisms on Earth — then mostly archaeal colonies that used retinal to utilize green-spectrum light energy and power a form of anoxygenic photosynthesis (see Purple Earth hypothesis). Although the event is inferred to have constituted a mass extinction, due in part to the great difficulty in surveying microscopic organisms' abundances, and in part to the extreme age of fossil remains from that time, the Great Oxidation Event is typically not counted among conventional lists of "great extinctions", which are implicitly limited to the Phanerozoic eon. In any case, isotope geochemistry data from sulfate minerals have been interpreted to indicate a decrease in the size of the biosphere of >80% associated with changes in nutrient supplies at the end of the GOE. The GOE is inferred to have been caused by cyanobacteria, which evolved porphyrin-based photosynthesis that produces dioxygen as a byproduct. The increasing oxygen level eventually depleted the reducing capacity of ferrous compo
https://en.wikipedia.org/wiki/Carpenter%27s%20rule%20problem
The carpenter's rule problem is a discrete geometry problem, which can be stated in the following manner: Can a simple planar polygon be moved continuously to a position where all its vertices are in convex position, so that the edge lengths and simplicity are preserved along the way? A closely related problem is to show that any non-self-crossing polygonal chain can be straightened, again by a continuous transformation that preserves edge distances and avoids crossings. Both problems were successfully solved by . The problem is named after the multiple-jointed wooden rulers popular among carpenters in the 19th and early 20th centuries before improvements to metal tape measures made them obsolete. Combinatorial proof Subsequently to their work, Ileana Streinu provided a simplified combinatorial proof formulated in the terminology of robot arm motion planning. Both the original proof and Streinu's proof work by finding non-expansive motions of the input, continuous transformations such that no two points ever move towards each other. Streinu's version of the proof adds edges to the input to form a pointed pseudotriangulation, removes one added convex hull edge from this graph, and shows that the remaining graph has a one-parameter family of motions in which all distances are nondecreasing. By repeatedly applying such motions, one eventually reaches a state in which no further expansive motions are possible, which can only happen when the input has been straightened or convexified. provide an application of this result to the mathematics of paper folding: they describe how to fold any single-vertex origami shape using only simple non-self-intersecting motions of the paper. Essentially, this folding process is a time-reversed version of the problem of convexifying a polygon of length smaller than π, but on the surface of a sphere rather than in the Euclidean plane. This result was extended by for spherical polygons of edge length smaller than 2π. Generalization
https://en.wikipedia.org/wiki/List%20of%20SIP%20software
This list of SIP software documents notable software applications which use Session Initiation Protocol (SIP) as a voice over IP (VoIP) protocol. Servers Free and open-source license A SIP server, also known as a SIP proxy, manages all SIP calls within a network and takes responsibility for receiving requests from user agents for the purpose of placing and terminating calls. Asterisk ejabberd FreeSWITCH FreePBX GNU SIP Witch Issabel, fork of Elastix Kamailio, formerly OpenSER Mobicents Platform (JSLEE[2] 1.0 compliant and SIP Servlets 1.1 compliant application server) OpenSIPS, fork of OpenSER SailFin SIP Express Router (SER) Enterprise Communications System sipXecs Yate Proprietary license 3Com VCX IP telephony module: back-to-back user agent SIP PBX 3CX Phone System, for Windows, Debian 8 GNU/Linux Aastra 5000, 800, MX-ONE Alcatel-Lucent 5060 IP Call server Aricent SIP UA stack, B2BUA, proxy, VoLTE/RCS Client AskoziaPBX Avaya Application Server 5300 (AS5300), JITC certified ASSIP VoIP Bicom Systems IP PBX for telecoms Brekeke PBX, SIP PBX for service providers and enterprises Cisco SIP Proxy Server, Cisco unified border element (CUBE), Cisco Unified Communication Manager (CUCM) CommuniGate Pro, virtualized PBX for IP Centrex hosting, voicemail services, self-care, ... Comverse Technology softswitch, media applications, SIP registrars Creacode SIP Application Server Real-time SIP call controller and IVR product for carrier-class VoIP networks Dialogic Corporation Powermedia Media Servers, audio and video SIP IVR, media and conferencing servers for Enterprise and Carriers. Dialexia VoIP Softswitches, IP PBX for medium and enterprise organizations, billing servers. IBM WebSphere Application Server - Converged HTTP and SIP container JEE Application Server Interactive Intelligence Windows-based IP PBX for small, medium and enterprise organizations Kerio Operator, IP PBX for small and medium enterprises Microsoft Lync Server 2010
https://en.wikipedia.org/wiki/Dynamic%20modulus
Dynamic modulus (sometimes complex modulus) is the ratio of stress to strain under vibratory conditions (calculated from data obtained from either free or forced vibration tests, in shear, compression, or elongation). It is a property of viscoelastic materials. Viscoelastic stress–strain phase-lag Viscoelasticity is studied using dynamic mechanical analysis where an oscillatory force (stress) is applied to a material and the resulting displacement (strain) is measured. In purely elastic materials the stress and strain occur in phase, so that the response of one occurs simultaneously with the other. In purely viscous materials, there is a phase difference between stress and strain, where strain lags stress by a 90 degree (π/2 radian) phase lag. Viscoelastic materials exhibit behavior somewhere in between that of purely viscous and purely elastic materials, exhibiting some phase lag in strain. Stress and strain in a viscoelastic material can be represented using the following expressions: Strain: ε = ε0 sin(ωt) Stress: σ = σ0 sin(ωt + δ) where ω is the frequency of strain oscillation, t is time, and δ is the phase lag between stress and strain. The stress relaxation modulus G(t) is the ratio of the stress remaining at time t after a step strain ε was applied at time t = 0: G(t) = σ(t)/ε, which is the time-dependent generalization of Hooke's law. For visco-elastic solids, G(t) converges to the equilibrium shear modulus G∞ = lim(t→∞) G(t). The Fourier transform of the shear relaxation modulus G(t) is G*(ω) (see below). Storage and loss modulus The storage and loss modulus in viscoelastic materials measure the stored energy, representing the elastic portion, and the energy dissipated as heat, representing the viscous portion. The tensile storage and loss moduli are defined as follows: Storage: E′ = (σ0/ε0) cos δ Loss: E″ = (σ0/ε0) sin δ Similarly we also define shear storage and shear loss moduli, G′ and G″. Complex variables can be used to express the moduli as E* = E′ + iE″ and G* = G′ + iG″, where i is the imaginary unit. Ratio between loss and storage modulus The ratio of the loss modulus to storag
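As a sketch of how the storage and loss moduli follow from an amplitude ratio and a phase lag, consider the following Python fragment; the stress amplitude, strain amplitude, and phase angle are assumed sample values, not measurements from the source:

```python
import math

# Assumed sample values from a hypothetical tensile DMA test.
sigma_0 = 2.0e6            # stress amplitude, Pa
epsilon_0 = 1.0e-3         # strain amplitude, dimensionless
delta = math.radians(15)   # phase lag of strain behind stress, radians

E_magnitude = sigma_0 / epsilon_0           # |E*|, magnitude of the complex modulus
E_storage = E_magnitude * math.cos(delta)   # E', elastic (stored-energy) part
E_loss = E_magnitude * math.sin(delta)      # E'', viscous (dissipated) part
tan_delta = E_loss / E_storage              # loss tangent

print(E_storage, E_loss, tan_delta)
```

A purely elastic sample would have delta = 0 (all storage, no loss); a purely viscous one would have delta = π/2.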
https://en.wikipedia.org/wiki/Speech%20technology
Speech technology relates to the technologies designed to duplicate and respond to the human voice. They have many uses. These include aid to the voice-disabled, the hearing-disabled, and the blind, along with communication with computers without a keyboard. They enhance game software and aid in marketing goods or services by telephone. The subject includes several subfields: Speech synthesis Speech recognition Speaker recognition Speaker verification Speech encoding Multimodal interaction See also Communication aids Language technology Speech interface guideline Speech processing Speech Technology (magazine) External links Speech processing
https://en.wikipedia.org/wiki/BioLinux
BioLinux is a term used in a variety of projects involved in making access to bioinformatics software on a Linux platform easier using one or more of the following methods: Provision of complete systems Provision of bioinformatics software repositories Addition of bioinformatics packages to standard distributions Live DVD/CDs with bioinformatics software added Community building and support systems There are now various projects with similar aims, on both Linux systems and other Unices, and a selection of these are given below. There is also an overview in the Canadian Bioinformatics Helpdesk Newsletter that details some of the Linux-based projects. Package repositories Apple/Mac Many Linux packages are compatible with Mac OS X and there are several projects which attempt to make it easy to install selected Linux packages (including bioinformatics software) on a computer running Mac OS X. BioArchLinux The BioArchLinux repository contains more than 3,770 packages for Arch Linux and Arch Linux-based distributions. Debian Debian is another very popular Linux distribution in use in many academic institutions, and some bioinformaticians have made their own software packages available for this distribution in the deb format. Red Hat Package repositories are generally specific to the distribution of Linux the bioinformatician is using. A number of Linux variants are prevalent in bioinformatics work. Fedora is a freely-distributed version of the commercial Red Hat system. Red Hat is widely used in the corporate world as they offer commercial support and training packages. Fedora Core is a community supported derivative of Red Hat and is popular amongst those who like Red Hat's system but don't require commercial support. Many users of bioinformatics applications have produced RPMs (Red Hat's package format) designed to work with Fedora, which you can potentially also install on Red Hat Enterprise Linux systems. Other distributions such as Mandriv
https://en.wikipedia.org/wiki/Complement%20component%205a
C5a is a protein fragment released from cleavage of complement component C5 by protease C5-convertase into C5a and C5b fragments. C5b is important in late events of the complement cascade, an orderly series of reactions which coordinates several basic defense mechanisms, including formation of the membrane attack complex (MAC), one of the most basic weapons of the innate immune system, formed as an automatic response to intrusions from foreign particles and microbial invaders. It essentially pokes microscopic pinholes in these foreign objects, causing loss of water and sometimes death. C5a, the other cleavage product of C5, acts as a highly inflammatory peptide, encouraging complement activation, formation of the MAC, attraction of innate immune cells, and histamine release involved in allergic responses. The origin of C5 is in the hepatocyte, but its synthesis can also be found in macrophages, where it may cause a local increase of C5a. C5a is a chemotactic agent and an anaphylatoxin; it is essential in innate immunity but is also linked with adaptive immunity. The increased production of C5a is connected with a number of inflammatory diseases. Structure Human polypeptide C5a contains 74 amino acids and has a mass of about 11 kDa. NMR spectroscopy showed that the molecule is composed of four helices connected by peptide loops, with three disulphide bonds between helix IV and helices II and III. There is a short 1.5-turn helix at the N-terminus, but all agonist activity takes place at the C-terminus. C5a is rapidly metabolised by the serum enzyme carboxypeptidase B to a 72-amino-acid form, C5a des-Arg, lacking the C-terminal arginine. Functions C5a is an anaphylatoxin, causing increased expression of adhesion molecules on endothelium, contraction of smooth muscle, and increased vascular permeability. C5a des-Arg is a much less potent anaphylatoxin. Both C5a and C5a des-Arg can trigger mast cell degranulation, releasing proinflammatory molecules histamine and TNF-α. C5a is also an effective c
https://en.wikipedia.org/wiki/Open-access%20network
An open-access network (OAN) refers to a horizontally layered network architecture in telecommunications, and the business model that separates the physical access to the network from the delivery of services. In an OAN, the owner or manager of the network does not supply services for the network; these services must be supplied by separate retail service providers. There are two different open-access network models: the two- and three-layer models. "Open Access" refers to a specialised and focused business model, in which a network infrastructure provider limits its activities to a fixed set of value layers in order to avoid conflicts of interest. The network infrastructure provider creates an open market and a platform for internet service providers (ISPs) to add value. The Open Access provider remains neutral and independent and offers standard and transparent pricing to ISPs on its network. It never competes with the ISPs. History In the 20th century, analog telephone and cable television networks were designed around the limitations of the prevailing technology. The copper-wired twisted pair telephone networks were not able to carry television programming, and copper-wired coaxial cable television networks were not able to carry voice telephony. Towards the end of the twentieth century, with the rise of packet switching—as used on the Internet—and IP-based and wireless technologies, it became possible to design, build, and operate a single high performance network capable of delivering hundreds of services from multiple, competing providers. Two models An OAN uses a different business model than traditional telecommunications networks. Regardless of whether the two- or three-layer model is used, an open-access network fundamentally means that there is an "organisational separation" of each of the layers. In other words, the network owner/operator cannot also be a retailer on that network. Two-layer model In the two-layer OAN model, there is a network owner
https://en.wikipedia.org/wiki/History%20of%20chemical%20engineering
Chemical engineering is a discipline that was developed out of those practicing "industrial chemistry" in the late 19th century. Before the Industrial Revolution (18th century), industrial chemicals and other consumer products such as soap were mainly produced through batch processing. Batch processing is labour-intensive and individuals mix predetermined amounts of ingredients in a vessel, heat, cool or pressurize the mixture for a predetermined length of time. The product may then be isolated, purified and tested to achieve a saleable product. Batch processes are still performed today on higher value products, such as pharmaceutical intermediates, speciality and formulated products such as perfumes and paints, or in food manufacture such as pure maple syrups, where a profit can still be made despite batch methods being slower and inefficient in terms of labour and equipment usage. Due to the application of Chemical Engineering techniques during manufacturing process development, larger volume chemicals are now produced through continuous "assembly line" chemical processes. The Industrial Revolution was when a shift from batch to more continuous processing began to occur. Today commodity chemicals and petrochemicals are predominantly made using continuous manufacturing processes whereas speciality chemicals, fine chemicals and pharmaceuticals are made using batch processes. Origin The Industrial Revolution led to an unprecedented escalation in demand, both with regard to quantity and quality, for bulk chemicals such as soda ash. This meant two things: one, the size of the activity and the efficiency of operation had to be enlarged, and two, serious alternatives to batch processing, such as continuous operation, had to be examined. The first chemical engineer Industrial chemistry was being practiced in the 1800s, and its study at British universities began with the publication by Friedrich Ludwig Knapp, Edmund Ronalds and Thomas Richardson of the important book
https://en.wikipedia.org/wiki/OpenCable%20Application%20Platform
The OpenCable Application Platform, or OCAP, is an operating system layer designed for consumer electronics that connect to a cable television system; it is the Java-based middleware portion of the OpenCable platform. Unlike operating systems on a personal computer, the cable company controls what OCAP programs run on the consumer's machine. Designed by CableLabs for the cable networks of North America, OCAP programs are intended for interactive services such as eCommerce, online banking, electronic program guides, and digital video recording. Cable companies have required OCAP as part of the Cablecard 2.0 specification, a proposal that is controversial and has not been approved by the Federal Communications Commission. Cable companies have stated that two-way communications by third party devices on their networks will require them to support OCAP. The Consumer Electronics Association and other groups argue OCAP is intended to block features that compete with cable company provided services and that consumers should be entitled to add, delete and otherwise control programs as on their personal computers. On January 8, 2008 CableLabs announced the Tru2Way brand for the OpenCable platform, including OCAP as the application platform. Technical overview OCAP is the Java based software/middleware portion of the OpenCable initiative. OCAP is based on the Globally Executable MHP (GEM) standard, and was defined by CableLabs. Because OCAP is based on GEM, it has a lot in common with the Multimedia Home Platform (MHP) standard defined by the DVB project. At present two versions of the OCAP standard exist: OCAP v1.0 OCAP v2.0 See also Downloadable Conditional Access System (DCAS) Embedded Java Java Platform, Micro Edition ARIB Interactive digital cable ready OEDN References External links Sun Microsystems' Java TV MHP official standards for interactive television and related interactive home entertainment. MHP tutorials MHP Knowledge Database The OCAP/EBIF D
https://en.wikipedia.org/wiki/IBM%20Advanced%20Program-to-Program%20Communication
In computing, Advanced Program to Program Communication or APPC is a protocol which computer programs can use to communicate over a network. APPC operates at the application layer in the OSI model and enables communications between programs on different computers, from portables and workstations to midrange and host computers. APPC is defined as VTAM LU 6.2 (Logical Unit type 6.2). APPC was developed in 1982 as a component of IBM's Systems Network Architecture (SNA). Several APIs were developed for programming languages such as COBOL, PL/I, C or REXX. APPC software is available for many different IBM and non-IBM operating systems, either as part of the operating system or as a separate software package. APPC serves as a translator between application programs and the network. When an application on one computer passes information to the APPC software, APPC translates the information and passes it to a network interface, such as a LAN adapter card. The information travels across the network to another computer, where the APPC software receives the information from the network interface. APPC translates the information back into its original format and passes it to the corresponding partner application. APPC is mainly used by IBM installations running operating systems such as z/OS (formerly MVS then OS/390), z/VM (formerly VM/CMS), z/TPF, IBM i (formerly OS/400), OS/2, AIX and z/VSE (formerly DOS/VSE). Microsoft also includes SNA support in Microsoft's Host Integration Server. Major IBM software products also include support for APPC, including CICS, Db2, CIM and WebSphere MQ. Unlike TCP/IP, in which both communication partners always possess a clear role (one is always the server, and the other always the client), APPC is a peer-to-peer protocol. The communication partners in APPC are equal; every application can be both server and client equally. The role, and the number of the parallel sessions between the partners, is negotiated over CNOS sessions (Change Number Of Sess
https://en.wikipedia.org/wiki/Muscone
Muscone is a macrocyclic ketone, an organic compound that is the primary contributor to the odor of musk. The chemical structure of muscone was first elucidated by Leopold Ružička. It is a 15-membered ring ketone with one methyl substituent in the 3-position. It is an oily liquid that is found naturally as the (−)-enantiomer, (R)-3-methylcyclopentadecanone. Muscone has been synthesized as the pure (−)-enantiomer as well as the racemate. It is very slightly soluble in water and miscible with alcohol. Natural muscone is obtained from musk, a glandular secretion of the musk deer, which has been used in perfumery and medicine for thousands of years. Since obtaining natural musk requires killing the endangered animal, nearly all muscone used in perfumery today is synthetic. It has the characteristic smell of being "musky". One asymmetric synthesis of (−)-muscone begins with commercially available (+)-citronellal, and forms the 15-membered ring via ring-closing metathesis: A more recent enantioselective synthesis involves an intramolecular aldol addition/dehydration reaction of a macrocyclic diketone. Muscone is now produced synthetically for use in perfumes and for scenting consumer products. Isotopologues of muscone have been used in a study of the mechanism of olfaction. Global replacement of all hydrogens in muscone was achieved by heating muscone with Rh/C in D2O at 150 °C. It was found that the human musk-recognizing receptor, OR5AN1, identified using a heterologous olfactory receptor expression system and robustly responding to muscone, fails to distinguish between muscone and the so-prepared isotopologue in vitro. OR5AN1 is reported to bind to muscone and related musks such as civetone through hydrogen-bond formation from tyrosine-258 along with hydrophobic interactions with surrounding aromatic residues in the receptor. References Flavors Perfume ingredients Macrocycles Ketones
https://en.wikipedia.org/wiki/Learning%20automaton
A learning automaton is one type of machine learning algorithm studied since the 1970s. A learning automaton selects its current action based on past experiences from the environment. It falls within the scope of reinforcement learning if the environment is stochastic and a Markov decision process (MDP) is used. History Research in learning automata can be traced back to the work of Michael Lvovitch Tsetlin in the early 1960s in the Soviet Union. Together with some colleagues, he published a collection of papers on how to use matrices to describe automata functions. Additionally, Tsetlin worked on reasonable and collective automata behaviour, and on automata games. Learning automata were also investigated by researchers in the United States in the 1960s. However, the term learning automaton was not used until Narendra and Thathachar introduced it in a survey paper in 1974. Definition A learning automaton is an adaptive decision-making unit situated in a random environment that learns the optimal action through repeated interactions with its environment. The actions are chosen according to a specific probability distribution which is updated based on the environment response the automaton obtains by performing a particular action. With respect to the field of reinforcement learning, learning automata are characterized as policy iterators. In contrast to other reinforcement learners, policy iterators directly manipulate the policy π. Another example for policy iterators are evolutionary algorithms. Formally, Narendra and Thathachar define a stochastic automaton to consist of: a set X of possible inputs, a set Φ = { Φ1, ..., Φs } of possible internal states, a set α = { α1, ..., αr } of possible outputs, or actions, with r ≤ s, an initial state probability vector p(0) = ≪ p1(0), ..., ps(0) ≫, a computable function A which after each time step t generates p(t+1) from p(t), the current input, and the current state, and a function G: Φ → α which generates the outpu
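The probability-vector update can be sketched in code. The fragment below implements the classic linear reward-inaction (L_R-I) scheme as one concrete instance of the update function A described above; the environment's reward probabilities and the learning rate are assumed values for illustration:

```python
import random

def linear_reward_inaction(reward_prob, steps=10000, a=0.05):
    """L_R-I scheme: on reward, move probability mass toward the chosen
    action; on penalty, leave the probability vector unchanged."""
    r = len(reward_prob)
    p = [1.0 / r] * r                      # uniform initial probability vector
    for _ in range(steps):
        action = random.choices(range(r), weights=p)[0]
        if random.random() < reward_prob[action]:   # environment rewards the action
            p = [pi + a * (1.0 - pi) if i == action else pi * (1.0 - a)
                 for i, pi in enumerate(p)]
    return p

# Hypothetical stationary environment: action 1 is rewarded most often,
# so its probability should approach 1.
print(linear_reward_inaction([0.3, 0.8, 0.5]))
```

Note that the update preserves the sum of the probabilities, so p remains a valid distribution at every step.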
https://en.wikipedia.org/wiki/Byte%20Code%20Engineering%20Library
The Byte Code Engineering Library (BCEL) is a project sponsored by the Apache Foundation previously under their Jakarta charter to provide a simple API for decomposing, modifying, and recomposing binary Java classes (I.e. bytecode). The project was conceived and developed by Markus Dahm prior to officially being donated to the Apache Jakarta foundation on 27 October 2001. Uses BCEL provides a simple library that exposes the internal aggregate components of a given Java class through its API as object constructs (as opposed to the disassembly of the lower-level opcodes). These objects also expose operations for modifying the binary bytecode, as well as generating new bytecode (via injection of new code into the existing code, or through generation of new classes altogether.) The BCEL library has been used in several diverse applications, such as: Java Bytecode Decompiling, Obfuscation, and Refactoring Performance and Profiling Instrumentation calls that capture performance metrics can be injected into Java class binaries to examine memory/coverage data. (For example, injecting instrumentation at entry/exit points.) Implementation of New Language Semantics For example, Aspect-Oriented additions to the Java language have been implemented by using BCEL to decompose class structures for point-cut identification, and then again when reconstituting the class by injecting aspect-related code back into the binary. (See: AspectJ) Static code analysis FindBugs uses BCEL to analyze Java bytecode for code idioms which indicate bugs. See also ObjectWeb ASM Javassist External links Apache Commons BCEL - The BCEL Project Home Page. BCEL-Based Project Listing - A listing of projects that make use of the BCEL Library. Apache Jakarta Home - The Apache Jakarta Home Page. AspectJ - The AspectJ Project Home Page. (One of the high-visibility projects that makes use of BCEL.) Virtualization software
https://en.wikipedia.org/wiki/PLECS
PLECS (Piecewise Linear Electrical Circuit Simulation) is a software tool for system-level simulations of electrical circuits developed by Plexim. It is especially designed for power electronics but can be used for any electrical network. PLECS includes the possibility to model controls and different physical domains (thermal, magnetic and mechanical) besides the electrical system. Most circuit simulation programs model switches as highly nonlinear elements. Due to steep voltage and current transients, the simulation becomes slow when switches are commutated. In the most simplistic applications, switches are modelled as variable resistors that alternate between a very small and a very large resistance. In other cases, they are represented by a sophisticated semiconductor model. When simulating complex power electronic systems, however, the processes during switching are of little interest. In these situations it is more appropriate to use ideal switches that toggle instantaneously between a closed and an open circuit. This approach, which is implemented in PLECS, has two major advantages: Firstly, it yields systems that are piecewise-linear across switching instants, thus resolving the otherwise difficult problem of simulating the non-linear discontinuity that occurs in the equivalent circuit at the switching instant. Secondly, to handle discontinuities at the switching instants, only two integration steps are required (one for before the instant, and one after). Both of these advantages speed up the simulation considerably, without sacrificing accuracy. Thus the software is ideally suited for modelling and simulation of complex drive systems and modular multilevel converters, for example. In recent years, PLECS has been extended to also support model-based development of controls with automatic code generation. In addition to software, the PLECS product family includes real-time simulation hardware for both hardware-in-the-loop (HIL) testing and rapid control prototy
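The ideal-switch idea can be sketched outside PLECS itself: within each switch state the circuit is linear, so each interval can be solved exactly rather than integrated through a stiff switching transient. The following Python fragment does this for a hypothetical switched RL branch (all component values and timings are assumed for illustration; this is not PLECS code):

```python
import math

# Switched RL branch: source voltage V applied while the ideal switch is
# closed, 0 V while it is open. Within each state the response is the exact
# first-order exponential, so no small integration steps are needed.
R, L, V = 1.0, 1e-3, 10.0          # ohms, henries, volts (assumed)
T_on, T_off, cycles = 0.4e-3, 0.6e-3, 5

i = 0.0                            # inductor current, starting at rest
for _ in range(cycles):
    for V_seg, dt in ((V, T_on), (0.0, T_off)):    # closed, then open
        i_inf = V_seg / R                           # steady state of this segment
        i = i_inf + (i - i_inf) * math.exp(-R * dt / L)
    print(f"inductor current at end of cycle: {i:.3f} A")
```

Each pass through the inner loop is one "integration segment" in the sense described above: the nonlinear switching event is reduced to swapping one linear circuit for another.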
https://en.wikipedia.org/wiki/Line%20of%20action
In physics, the line of action (also called line of application) of a force F is a geometric representation of how the force is applied. It is the straight line through the point at which the force is applied in the same direction as the vector F. The concept is essential, for instance, for understanding the net effect of multiple forces applied to a body. For example, if two forces of equal magnitude act upon a rigid body along the same line of action but in opposite directions, they cancel and have no net effect. But if, instead, their lines of action are not identical, but merely parallel, then their effect is to create a moment on the body, which tends to rotate it. Calculation of torque For the simple geometry associated with the figure, there are three equivalent equations for the magnitude of the torque associated with a force F directed at displacement r from the axis whenever the force is perpendicular to the axis: τ = |r × F| = r F⊥ = F d, where × is the cross-product, F⊥ = F sin θ is the component of F perpendicular to r, d = r sin θ is the moment arm, and θ is the angle between r and F. References Force A
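A minimal numerical check of the torque relation, using assumed vectors in which the force is perpendicular to the displacement:

```python
import numpy as np

r = np.array([0.5, 0.0, 0.0])    # displacement from the axis, m (assumed)
F = np.array([0.0, 10.0, 0.0])   # force, N, perpendicular to r (assumed)

tau = np.cross(r, F)             # torque vector, here along the z-axis
print(np.linalg.norm(tau))       # 5.0 N*m; equals F * d since sin(theta) = 1
```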
https://en.wikipedia.org/wiki/Viglen
Viglen Ltd provides IT products and services, including storage systems, servers, workstations and data/voice communications equipment and services. History The company was formed in 1975, by Vigen Boyadjian. During the 1980s, the company specialised in direct sales through multi page advertisements in leading computer magazines, catering particularly, but not exclusively, to owners of Acorn computers. Viglen was acquired by Alan Sugar (later Lord Sugar)'s company Amstrad in June 1994. It was listed as a public limited company in 1997, and Amstrad plc shares were split into Viglen and Betacom shares, Betacom being renamed to Amstrad PLC. Following the sale in July 2007 of Amstrad PLC to Rupert Murdoch's BSkyB, Viglen became Sugar's sole IT establishment. Viglen used to be run by CEO Bordan Tkachuk, a longtime associate of Lord Sugar, who can be seen making special guest appearances on The Apprentice. From 1994 to 1998, the company sponsored Charlton Athletic F.C., expiring when they won promotion to the FA Premier League. In December 2005, Viglen relocated from its London headquarters in Wembley to Colney Street near St Albans, into a building which also houses its fabrication plant. , Viglen focused particularly on the education and public sectors, selling both desktop and server systems, and also had interests in other IT markets such as managed services, high performance clusters, and network attached storage. In July 2009, Lord Sugar resigned as the chairman of Viglen (and most of his other companies), handing over the reins of the company to longtime associate, Claude Littner. In January 2014, Sugar sold his interest in Viglen to the Westcoast Group, which merged it with another of its subsidiaries, XMA. The Apprentice Under its former ownership by Lord Sugar, the Viglen headquarters doubled up as one of the filming locations for the BBC programme The Apprentice, with various scenes including the infamous "job interviews" being set there. The "walk of sh
https://en.wikipedia.org/wiki/Spectrum%20auction
A spectrum auction is a process whereby a government uses an auction system to sell the rights to transmit signals over specific bands of the electromagnetic spectrum and to assign scarce spectrum resources. Depending on the specific auction format used, a spectrum auction can last from a single day to several months from the opening bid to the final winning bid. With a well-designed auction, resources are allocated efficiently to the parties that value them the most, with the government securing revenue in the process. Spectrum auctions are a step toward market-based spectrum management and privatization of public airwaves, and are a way for governments to allocate scarce resources. Alternatives to auctions include administrative licensing, such as the comparative hearings conducted historically (sometimes referred to as "beauty contests"), or lotteries. Innovation In the past decade, telecommunications has turned into a highly competitive industry where companies are competing to buy valuable spectrum. This competition has been triggered by technological advancements, privatization, and liberalization. Mobile communication in particular has made many transitions since 2000: mobile technology has moved from second generation (2G) to third generation (3G) to fourth generation (4G) and is now in transition to fifth generation (5G) technology. With more providers in the mobile industry, the competition during spectrum auctions has increased due to more demand from consumers. When the United States made the transition in June 2009 from analog to digital broadcast television signals, the valuable 700 MHz spectrum became available because it was no longer being used by analog TV signals. In 2007, search giant Google announced that they would be entering the mobile business with their highly popular Android operating system and plans for a mobile broadband system. Google said that they planned to bid for the "C" block of the spectrum auction which correspond to channels 5
https://en.wikipedia.org/wiki/Internet%20Gateway%20Device%20Protocol
Internet Gateway Device (IGD) Protocol is a protocol based on Universal Plug and Play (UPnP) for mapping ports in network address translation (NAT) setups, supported by some NAT-enabled routers. It is a common communications protocol for automatically configuring port forwarding, and is part of an ISO/IEC Standard rather than an Internet Engineering Task Force standard. Usage Applications using peer-to-peer networks, multiplayer gaming, and remote assistance programs need a way to communicate through home and business gateways. Without IGD one has to manually configure the gateway to allow traffic through, a process which is error-prone and time-consuming. Universal Plug and Play (UPnP) comes with a solution for network address translation traversal (NAT traversal) that implements IGD. IGD makes it easy to do the following: Add and remove port mappings Assign lease times to mappings Enumerate existing port mappings Learn the public (external) IP address A host can search for available IGDv1/IGDv2 devices on the network with a single SSDP M-SEARCH for IGDv1; the devices found can then be controlled with the help of a network protocol such as SOAP. A discover request is sent via HTTP and port 1900 to the IPv4 multicast address 239.255.255.250 (for the IPv6 addresses see the Simple Service Discovery Protocol (SSDP)): M-SEARCH * HTTP/1.1 HOST: 239.255.255.250:1900 MAN: "ssdp:discover" MX: 2 ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1 Security risks Malware can exploit the IGD protocol to bring connected devices under the control of a foreign user. The Conficker worm is an example of a botnet created using this vector. Compatibility issues There are numerous compatibility issues due to the different interpretations of the very large, nominally backward-compatible IGDv1 and IGDv2 specifications. One of them is the UPnP IGD client integrated with current Microsoft Windows and Xbox systems with certified IGDv2 rou
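The discovery request above can be reproduced with a short script. This is a minimal sketch using Python's standard socket module; the 3-second timeout is an arbitrary choice, and any responses arrive as HTTP-style messages over UDP:

```python
import socket

MSEARCH = (
    "M-SEARCH * HTTP/1.1\r\n"
    "HOST: 239.255.255.250:1900\r\n"
    'MAN: "ssdp:discover"\r\n'
    "MX: 2\r\n"
    "ST: urn:schemas-upnp-org:device:InternetGatewayDevice:1\r\n"
    "\r\n"
)

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(3.0)
sock.sendto(MSEARCH.encode("ascii"), ("239.255.255.250", 1900))
try:
    while True:
        data, addr = sock.recvfrom(4096)
        # Print the responder's address and the HTTP status line of its reply.
        print(addr, data.decode(errors="replace").splitlines()[0])
except socket.timeout:
    pass  # no more responses within the timeout
```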
https://en.wikipedia.org/wiki/Dicalcium%20phosphate
Dicalcium phosphate is the calcium phosphate with the formula CaHPO4 and its dihydrate. The "di" prefix in the common name arises because the formation of the HPO4^2− anion involves the removal of two protons from phosphoric acid, H3PO4. It is also known as dibasic calcium phosphate or calcium monohydrogen phosphate. Dicalcium phosphate is used as a food additive; it is found in some toothpastes as a polishing agent and is a biomaterial. Preparation Dibasic calcium phosphate is produced by the neutralization of calcium hydroxide with phosphoric acid, which precipitates the dihydrate as a solid: Ca(OH)2 + H3PO4 → CaHPO4·2H2O. At 60 °C the anhydrous form is precipitated: Ca(OH)2 + H3PO4 → CaHPO4 + 2H2O. To prevent degradation that would form hydroxyapatite, sodium pyrophosphate or trimagnesium phosphate octahydrate are added when, for example, dibasic calcium phosphate dihydrate is to be used as a polishing agent in toothpaste. In a continuous process CaCl2 can be treated with (NH4)2HPO4 to form the dihydrate: CaCl2 + (NH4)2HPO4 + 2H2O → CaHPO4·2H2O + 2NH4Cl. A slurry of the dihydrate is then heated to around 65–70 °C to form anhydrous CaHPO4 as a crystalline precipitate, typically as flat diamondoid crystals, which are suitable for further processing. Dibasic calcium phosphate dihydrate is formed in "brushite" calcium phosphate cements (CPCs), which have medical applications. An example of the overall setting reaction in the formation of "β-TCP/MCPM" (β-tricalcium phosphate/monocalcium phosphate) calcium phosphate cements is: Ca3(PO4)2 + Ca(H2PO4)2·H2O + 7H2O → 4CaHPO4·2H2O. Structure Three forms of dicalcium phosphate are known: dihydrate, CaHPO4·2H2O ('DCPD'), the mineral brushite monohydrate, CaHPO4·H2O ('DCPM') anhydrous CaHPO4, ('DCPA'), the mineral monetite. Below pH 4.8 the dihydrate and anhydrous forms of dicalcium phosphate are the most stable (insoluble) of the calcium phosphates. The structures of the anhydrous and dihydrated forms have been determined by X-ray crystallography and the structure of the monohydrate was determined by electron crystallography. The dihydrate (shown in table above) as well as the
https://en.wikipedia.org/wiki/Match%20Day%20II
Match Day II is a football sports game, part of the Match Day series, released for the Amstrad CPC, Amstrad PCW, ZX Spectrum, MSX and Commodore 64 platforms. It was created in 1987 by Jon Ritman with graphics by Bernie Drummond and music and sound by Guy Stevens (except for the Commodore version, which was a line-by-line conversion by John Darnell). It is the sequel to 1984's Match Day. Gameplay The controls consist of four directions (allowing eight directions including diagonals) and a shot button. Each team has seven players, including goalkeeper, and there are league and cup options available. The game is considered highly addictive due to its difficulty level, the complete control over ball direction, power and elevation (using a Diamond Deflection System), and the importance of tactics and player positioning over the field (barging if necessary), which makes it challenging to break strong defences. It was the first game to use a kickometer. Some versions of the game play the song When the Saints Go Marching In while the players are walking to their initial positions on the field at the beginning of each half. The ZX Spectrum version of the game went to number 2 in the UK sales charts, behind Out Run, and was voted the 10th best game of all time in a special issue of Your Sinclair magazine in 2004. Related games The game is similar to a previous unpublished game by Jon Ritman, Soccerama. Later, in 1995, Jon Ritman tried to release Match Day III, but the name of the game was changed to Super Match Soccer to avoid any potential legal issues. References External links Comment about Ritman, that allows his games to be distributed over the Internet Match Day II playing video at youtube.com Match Day challenge Crash - Issue 37 Martin Galway interview at c64.com Match Day II at thelegacy.de 1987 video games Association football video games Video game sequels Amstrad CPC games Commodore 64 games Amstrad PCW games MSX games Ocean Software games ZX Spectrum games V
https://en.wikipedia.org/wiki/Grading%20%28tumors%29
In pathology, grading is a measure of the cell appearance in tumors and other neoplasms. Some pathology grading systems apply only to malignant neoplasms (cancer); others apply also to benign neoplasms. The neoplastic grading is a measure of cell anaplasia (reversion of differentiation) in the sampled tumor and is based on the resemblance of the tumor to the tissue of origin. Grading in cancer is distinguished from staging, which is a measure of the extent to which the cancer has spread. Pathology grading systems classify the microscopic cell appearance abnormality and deviations in their rate of growth with the goal of predicting developments at tissue level (see also the 4 major histological changes in dysplasia). Cancer is a disorder of cell life cycle alteration that leads (non-trivially) to excessive cell proliferation rates, typically longer cell lifespans and poor differentiation. The grade score (numerical: G1 up to G4) increases with the lack of cellular differentiation: it reflects how much the tumor cells differ from the cells of the normal tissue they have originated from (see 'Categories' below). Tumors may be graded on four-tier, three-tier, or two-tier scales, depending on the institution and the tumor type. The histologic tumor grade score along with the metastatic (whole-body-level cancer-spread) staging are used to evaluate each specific cancer patient, develop their individual treatment strategy and to predict their prognosis. A cancer that is very poorly differentiated is called anaplastic. Categories Grading systems are also different for many common types of cancer, though following a similar pattern with grades being increasingly malignant over a range of 1 to 4. If no specific system is used, the following general grades are most commonly used, and recommended by the American Joint Committee on Cancer and other bodies: GX Grade cannot be assessed G1 Well differentiated (Low grade) G2 Mode
https://en.wikipedia.org/wiki/Viktor%20Bunyakovsky
Viktor Yakovlevich Bunyakovsky (, ; , Bar, Podolia Governorate, Russian Empire – , St. Petersburg, Russian Empire) was a Russian mathematician, member and later vice president of the Petersburg Academy of Sciences. Bunyakovsky was noted for his work in theoretical mechanics and number theory (see: Bunyakovsky conjecture), and is credited with an early discovery of the Cauchy–Schwarz inequality, proving it for the infinite dimensional case as well as for definite integrals of real-valued functions in 1859, many years prior to Hermann Schwarz's works on the subject. Biography Viktor Yakovlevich Bunyakovsky was born in Bar, Podolia Governorate, Russian Empire (now Ukraine) in 1804. Bunyakovsky was a son of Colonel Yakov Vasilievich Bunyakovsky of a cavalry regiment, who was killed in Finland in 1809. Education Bunyakovsky obtained his initial mathematical education at the home of his father's friend, Count Alexander Tormasov, in St. Petersburg. In 1820, he traveled with the count's son to a university in Coburg and subsequently to the Sorbonne in Paris to study mathematics. At the Sorbonne, Bunyakovsky had the opportunity to attend lectures by Laplace and Poisson. He focused his study and research on mathematics and physics. In 1824, Bunyakovsky received his bachelor's degree from the Sorbonne. Continuing his research, he wrote three doctoral dissertations under Cauchy's supervision by the spring of 1825: The rotary motion in a resistant medium of a set of plates of constant thickness and defined contour around an axis inclined with respect to the horizon; The determination of the radius vector in elliptical motion of planets; and The propagation of heat in solids. He successfully completed his dissertation on theoretical physics, theoretical mechanics and mathematical physics, and obtained his doctorate under Cauchy's supervision. Scientific and pedagogical work After seven years abroad, Bunyakovsky returned to St. Petersburg in 1826 and took
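As a quick numerical illustration of the integral form of the inequality credited to Bunyakovsky, (∫fg)² ≤ (∫f²)(∫g²), one can check it for arbitrary sample functions; the functions and interval below are assumptions chosen only for illustration:

```python
import numpy as np

x = np.linspace(0.0, 1.0, 10001)   # assumed interval [0, 1]
f = np.sin(3.0 * x)                # arbitrary sample functions
g = np.exp(-x)

lhs = np.trapz(f * g, x) ** 2
rhs = np.trapz(f ** 2, x) * np.trapz(g ** 2, x)
print(lhs <= rhs, lhs, rhs)        # the inequality holds for any f and g
```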
https://en.wikipedia.org/wiki/Line%E2%80%93plane%20intersection
In analytic geometry, the intersection of a line and a plane in three-dimensional space can be the empty set, a point, or a line. It is the entire line if that line is embedded in the plane, and is the empty set if the line is parallel to the plane but outside it. Otherwise, the line cuts through the plane at a single point. Distinguishing these cases, and determining equations for the point and line in the latter cases, have use in computer graphics, motion planning, and collision detection. Algebraic form In vector notation, a plane can be expressed as the set of points p for which (p − p0) · n = 0, where n is a normal vector to the plane and p0 is a point on the plane. (The notation a · b denotes the dot product of the vectors a and b.) The vector equation for a line is p = l0 + d l, where l is a unit vector in the direction of the line, l0 is a point on the line, and d is a scalar in the real number domain. Substituting the equation for the line into the equation for the plane gives ((l0 + d l) − p0) · n = 0. Expanding gives d (l · n) + (l0 − p0) · n = 0. And solving for d gives d = ((p0 − l0) · n) / (l · n). If l · n = 0 then the line and plane are parallel. There will be two cases: if (p0 − l0) · n = 0 then the line is contained in the plane, that is, the line intersects the plane at each point of the line. Otherwise, the line and plane have no intersection. If l · n ≠ 0 there is a single point of intersection. The value of d can be calculated and the point of intersection, p, is given by p = l0 + d l. Parametric form A line is described by all points that are a given direction from a point. A general point on a line passing through points la and lb can be represented as p = la + t (lb − la), where t is a scalar and (lb − la) is the vector pointing from la to lb. Similarly a general point on a plane determined by the triangle defined by the points p0, p1 and p2 can be represented as p = p0 + u (p1 − p0) + v (p2 − p0), where (p1 − p0) is the vector pointing from p0 to p1, and (p2 − p0) is the vector pointing from p0 to p2. The point at which the line intersects the plane is therefore described by setting the point on the line equal to the point on the plane, giving the parametric equation la + t (lb − la) = p0 + u (p1 − p0) + v (p2 − p0). This can be rewritten as la − p0 = (la − lb) t + (p1 − p0) u + (p2 − p0) v, which can be expressed in matrix
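The algebraic form translates directly into code. A minimal sketch, with variable names following the notation above and an arbitrary example plane and line:

```python
import numpy as np

def line_plane_intersection(p0, n, l0, l, eps=1e-9):
    """Return the intersection point of the line l0 + d*l with the plane
    through p0 with normal n, or None if the line is parallel to the plane
    (in which case it either misses the plane or lies entirely within it)."""
    denom = np.dot(l, n)
    if abs(denom) < eps:
        return None
    d = np.dot(p0 - l0, n) / denom
    return l0 + d * l

point = line_plane_intersection(
    p0=np.array([0.0, 0.0, 5.0]), n=np.array([0.0, 0.0, 1.0]),  # plane z = 5
    l0=np.array([0.0, 0.0, 0.0]), l=np.array([0.0, 0.0, 1.0]),  # the z-axis
)
print(point)  # [0. 0. 5.]
```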
https://en.wikipedia.org/wiki/Inter-processor%20interrupt
In computing, an inter-processor interrupt (IPI), also known as a shoulder tap, is a special type of interrupt by which one processor may interrupt another processor in a multiprocessor system if the interrupting processor requires action from the other processor. Actions that might be requested include: flushes of memory management unit caches, such as translation lookaside buffers, on other processors when memory mappings are changed by one processor; stopping when the system is being shut down by one processor; notifying a processor that higher priority work is available; notifying a processor of work that cannot be done on all processors due to, e.g., asymmetric access to I/O channels or special features on some processors. Mechanism The M65MP option of OS/360 used the Direct Control feature of the S/360 to generate an interrupt on another processor; on S/370 and its successors, including z/Architecture, the SIGNAL PROCESSOR instruction provides a more formalized interface. The documentation for some IBM operating systems refers to this as a shoulder tap. On IBM PC compatible computers that use the Advanced Programmable Interrupt Controller (APIC), IPI signaling is often performed using the APIC. When a CPU wishes to send an interrupt to another CPU, it stores the interrupt vector and the identifier of the target's local APIC in the Interrupt Command Register (ICR) of its own local APIC. A message is then sent via the APIC bus to the target's local APIC, which then issues a corresponding interrupt to its own CPU. Examples In a multiprocessor system running Microsoft Windows, a processor may interrupt another processor for the following reasons, in addition to the ones listed above: queue a DISPATCH_LEVEL interrupt to schedule a particular thread for execution; kernel debugger breakpoint. IPIs are given an IRQL of 29. See also Interrupt Interrupt handler Non-maskable interrupt (NMI) References External links Interrupts and Exceptions Interrupts
https://en.wikipedia.org/wiki/Thermodynamic%20instruments
A thermodynamic instrument is any device which facilitates the quantitative measurement of thermodynamic systems. In order for a thermodynamic parameter to be truly defined, a technique for its measurement must be specified. For example, the ultimate definition of temperature is "what a thermometer reads". The question follows – what is a thermometer? There are two types of thermodynamic instruments, the meter and the reservoir. A thermodynamic meter is any device which measures any parameter of a thermodynamic system. A thermodynamic reservoir is a system which is so large that it does not appreciably alter its state parameters when brought into contact with the test system. Overview Two general complementary tools are the meter and the reservoir. It is important that these two types of instruments are distinct. A meter does not perform its task accurately if it behaves like a reservoir of the state variable it is trying to measure. If, for example, a thermometer were to act as a temperature reservoir, it would alter the temperature of the system being measured, and the reading would be incorrect. Ideal meters have no effect on the state variables of the system they are measuring. Thermodynamic meters A meter is a thermodynamic system which displays some aspect of its thermodynamic state to the observer. The nature of its contact with the system it is measuring can be controlled, and it is sufficiently small that it does not appreciably affect the state of the system being measured. The theoretical thermometer described below is just such a meter. In some cases, the thermodynamic parameter is actually defined in terms of an idealized measuring instrument. For example, the zeroth law of thermodynamics states that if two bodies are in thermal equilibrium with a third body, they are also in thermal equilibrium with each other. This principle, as noted by James Maxwell in 1872, asserts that it is possible to measure temperature. An idealized thermometer is a sample
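The idea of a meter that reads a state variable off a known equation of state can be sketched with an idealized gas thermometer, where temperature is inferred from T = PV/(nR); the bulb volume, gas amount, and measured pressure below are assumed sample values:

```python
# Idealized constant-volume gas thermometer: infer T from the ideal-gas law,
# T = P V / (n R). All numbers below are assumed sample values.
R_GAS = 8.314       # gas constant, J/(mol*K)
n = 0.040           # amount of gas in the bulb, mol
V = 1.0e-3          # bulb volume, m^3
P = 101325.0        # measured pressure, Pa

T = P * V / (n * R_GAS)
print(f"inferred temperature: {T:.1f} K")   # about 305 K
```

The bulb is kept small relative to the test system precisely so that, as discussed above, the meter does not act as a reservoir and perturb the quantity it reports.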
https://en.wikipedia.org/wiki/Thermodynamic%20process
Classical thermodynamics considers three main kinds of thermodynamic process: (1) changes in a system, (2) cycles in a system, and (3) flow processes. (1) A thermodynamic process is a process in which the thermodynamic state of a system is changed. A change in a system is defined by a passage from an initial to a final state of thermodynamic equilibrium. In classical thermodynamics, the actual course of the process is not the primary concern, and often is ignored. A state of thermodynamic equilibrium endures unchangingly unless it is interrupted by a thermodynamic operation that initiates a thermodynamic process. The equilibrium states are each respectively fully specified by a suitable set of thermodynamic state variables, which depend only on the current state of the system, not on the path taken by the processes that produce the state. In general, during the actual course of a thermodynamic process, the system may pass through physical states which are not describable as thermodynamic states, because they are far from internal thermodynamic equilibrium. Non-equilibrium thermodynamics, however, considers processes in which the states of the system are close to thermodynamic equilibrium, and aims to describe the continuous passage along the path, at definite rates of progress. As a useful theoretical but not actually physically realizable limiting case, a process may be imagined to take place practically infinitely slowly or smoothly enough to allow it to be described by a continuous path of equilibrium thermodynamic states, when it is called a "quasi-static" process. This is a theoretical exercise in differential geometry, as opposed to a description of an actually possible physical process; in this idealized case, the calculation may be exact. A really possible or actual thermodynamic process, considered closely, involves friction. This contrasts with theoretically idealized, imagined, or limiting, but not actually possible, quasi-static processes which may oc
https://en.wikipedia.org/wiki/Network%20Data%20Representation
Network Data Representation (NDR) is an implementation of the presentation layer in the OSI model. It is used for DCE/RPC and Microsoft RPC (MSRPC). See also DCE/RPC Microsoft RPC External links NDR Specification Internet Standards Internet protocols Presentation layer protocols
https://en.wikipedia.org/wiki/Mesh%20analysis
Mesh analysis (or the mesh current method) is a method that is used to solve planar circuits for the currents (and indirectly the voltages) at any place in the electrical circuit. Planar circuits are circuits that can be drawn on a plane surface with no wires crossing each other. A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be applied to any circuit, planar or not. Mesh analysis and loop analysis both make use of Kirchhoff’s voltage law to arrive at a set of equations guaranteed to be solvable if the circuit has a solution. Mesh analysis is usually easier to use when the circuit is planar, compared to loop analysis. Mesh currents and essential meshes Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes (also referred to as independent meshes). An essential mesh is a loop in the circuit that does not contain any other loop. Figure 1 labels the essential meshes with one, two, and three. A mesh current is a current that loops around the essential mesh and the equations are solved in terms of them. A mesh current may not correspond to any physically flowing current, but the physical currents are easily found from them. It is usual practice to have all the mesh currents loop in the same direction. This helps prevent errors when writing out the equations. The convention is to have all the mesh currents looping in a clockwise direction. Figure 2 shows the same circuit from Figure 1 with the mesh currents labeled. Solving for mesh currents instead of directly applying Kirchhoff's current law and Kirchhoff's voltage law can greatly reduce the amount of calculation required. This is because there are fewer mesh currents than there are physical branch currents. In figure 2 for example, there are six branch currents but only three mesh currents. Setting up the equations Each mesh produces one equation. These equations are the sum of the voltage drops in a comple
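Setting up and solving the mesh equations is a small linear-algebra problem. The following sketch solves a hypothetical two-mesh circuit (a voltage source and R1 in mesh 1, R3 in mesh 2, and R2 shared between the meshes; all component values are assumed for illustration):

```python
import numpy as np

# Assumed component values: source in volts, resistors in ohms.
Vs, R1, R2, R3 = 10.0, 100.0, 220.0, 330.0

# KVL around each mesh with clockwise mesh currents i1, i2:
#   (R1 + R2) i1 - R2 i2 = Vs
#   -R2 i1 + (R2 + R3) i2 = 0
A = np.array([[R1 + R2, -R2],
              [-R2, R2 + R3]])
b = np.array([Vs, 0.0])

i1, i2 = np.linalg.solve(A, b)
print(i1, i2)      # the two mesh currents
print(i1 - i2)     # physical current through the shared branch R2
```

The last line illustrates the point made above: mesh currents need not correspond to physical branch currents, but every branch current is a simple combination of them.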
https://en.wikipedia.org/wiki/Robust%20control
In control theory, robust control is an approach to controller design that explicitly deals with uncertainty. Robust control methods are designed to function properly provided that uncertain parameters or disturbances are found within some (typically compact) set. Robust methods aim to achieve robust performance and/or stability in the presence of bounded modelling errors. The early methods of Bode and others were fairly robust; the state-space methods invented in the 1960s and 1970s were sometimes found to lack robustness, prompting research to improve them. This was the start of the theory of robust control, which took shape in the 1980s and 1990s and is still active today. In contrast with an adaptive control policy, a robust control policy is static; rather than adapting to measurements of variations, the controller is designed to work assuming that certain variables will be unknown but bounded. Criteria for robustness Informally, a controller designed for a particular set of parameters is said to be robust if it also works well under a different set of assumptions. High-gain feedback is a simple example of a robust control method; with sufficiently high gain, the effect of any parameter variations will be negligible. From the closed-loop transfer function perspective, high open-loop gain leads to substantial disturbance rejection in the face of system parameter uncertainty. Other examples of robust control include sliding mode and terminal sliding mode control. The major obstacle to achieving high loop gains is the need to maintain system closed-loop stability. Loop shaping which allows stable closed-loop operation can be a technical challenge. Robust control systems often incorporate advanced topologies which include multiple feedback loops and feed-forward paths. The control laws may be represented by high order transfer functions required to simultaneously accomplish desired disturbance rejection performance with the robust closed-loop operation. High
https://en.wikipedia.org/wiki/Binary-to-text%20encoding
A binary-to-text encoding is an encoding of data in plain text. More precisely, it is an encoding of binary data in a sequence of printable characters. These encodings are necessary for transmission of data when the communication channel does not allow binary data (such as email or NNTP) or is not 8-bit clean. PGP documentation uses the term "ASCII armor" for binary-to-text encoding when referring to Base64. Overview The basic need for a binary-to-text encoding comes from a need to communicate arbitrary binary data over preexisting communications protocols that were designed to carry only English-language human-readable text. Those communication protocols may only be 7-bit safe (and within that avoid certain ASCII control codes), may require line breaks at certain maximum intervals, and may not maintain whitespace. Thus, only the 94 printable ASCII characters are "safe" to use to convey data. Description The ASCII text-encoding standard uses 7 bits to encode characters. With this it is possible to encode 128 (i.e. 2⁷) unique values (0–127) to represent the alphabetic, numeric, and punctuation characters commonly used in English, plus a selection of control characters which do not represent printable characters. For example, the capital letter A is represented in 7 bits as 100 0001₂, 0x41 (101₈), the numeral 2 is 011 0010₂, 0x32 (62₈), the character } is 111 1101₂, 0x7D (175₈), and the control character RETURN is 000 1101₂, 0x0D (15₈). In contrast, most computers store data in memory organized in eight-bit bytes. Files that contain machine-executable code and non-textual data typically contain all 256 possible eight-bit byte values. Many computer programs came to rely on this distinction between seven-bit text and eight-bit binary data, and would not function properly if non-ASCII characters appeared in data that was expected to include only ASCII text. For example, if the value of the eighth bit is not preserved, the program might interpret a byte value above 1
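Base64, mentioned above in connection with ASCII armor, is a convenient concrete example; Python's standard library exposes it directly:

```python
# Encode arbitrary bytes (including values above 127) as printable ASCII.
import base64

binary_data = bytes([0x00, 0x41, 0xFF, 0x0D, 0x80])  # not 7-bit-safe as-is
encoded = base64.b64encode(binary_data)
print(encoded)                    # b'AEH/DYA=' : printable, 7-bit-safe text
print(base64.b64decode(encoded))  # round-trips to the original bytes
```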
https://en.wikipedia.org/wiki/Puzzle%20jug
A puzzle jug is a puzzle in the form of a jug, popular in the 18th and 19th centuries. Puzzle jugs of varying quality were popular in homes and taverns. An inscription typically challenges the drinker to consume the contents without spilling them, which, because the neck of the jug is perforated, is impossible to do conventionally. The solution to the puzzle is that the jug has a hidden tube, one end of which is the spout. The tube usually runs around the rim and then down the handle, with its other opening inside the jug and near the bottom. To solve the puzzle, the drinker must suck from the spout end of the tube. To make the puzzle more interesting, it was common to provide a number of additional holes along the tube, which must be closed off before the contents could be sucked. Some jugs even have a hidden hole to make the challenge still more confounding. History The earliest example in England is the Exeter puzzle jug—an example of medieval pottery in Britain. The Exeter puzzle jug dates from about AD 1300 and was originally made in Saintonge, Western France. The puzzle jug is a descendant of earlier drinking puzzles, such as the fuddling cup and the pot crown, each of which has a different solution. Known inscriptions include: Come drink of me and merry be. Come drink your fill, but do not spill. Fill me up with licker sweet / For it is good when fun us do meet. Gentlemen, now try your Skill / I'll hold your Sixpence if you Will / That you don't drink unless you spill. Here, Gentlemen, come try your skill / I'll hold a wager if you will / That you don't drink this liquor all / Without you spill and let some fall. Within this jug there is good liquor / 'tis fit for Parson or for Vicar / but how to drink and not to spill / will test the utmost of your skill See also Bridge-spouted vessel Dribble glass Fuddling cup Pythagorean cup References External links A puzzle jug Martin Homer's Puzzle Jugs Mechanical puzzles Drinkware Pottery shapes
https://en.wikipedia.org/wiki/Fuddling%20cup
A fuddling cup is a three-dimensional puzzle in the form of a drinking vessel, made of three or more cups or jugs all linked together by holes and tubes. The challenge of the puzzle is to drink from the vessel in such a way that the beverage does not spill. To do this successfully, one must drink from the cups in a specific order. Fuddling cups were especially popular in 17th- and 18th-century England. See also Dribble glass Puzzle jug Pythagorean cup References External links Mechanical puzzles Drinkware
https://en.wikipedia.org/wiki/INTERBUS
INTERBUS is a serial bus system which transmits data between control systems (e.g., PCs, PLCs, VMEbus computers, robot controllers etc.) and spatially distributed I/O modules that are connected to sensors and actuators (e.g., temperature sensors, position switches). The INTERBUS system was developed by Phoenix Contact and has been available since 1987. It is one of the leading fieldbus systems in the automation industry and is fully standardized according to European Standard EN 50254 and IEC 61158. Currently, more than 600 manufacturers are involved in the implementation of INTERBUS technology in control systems and field devices. Since 2011, the INTERBUS technology has been hosted by the industry association Profibus and Profinet International. See also BiSS interface External links www.interbusclub.com www.phoenixcontact.com Explanation of Bit-based Sensor networks including SeriPlex Serial buses Industrial computing Industrial automation
https://en.wikipedia.org/wiki/Golvellius
Golvellius is an action role-playing video game developed by Compile and originally released for the Japanese MSX home computer system in 1987. Sega licensed the franchise in 1988 and released the game for the Master System (the Mark III in Japan), featuring enhanced graphics and entirely different overworld and dungeon layouts. This version was released worldwide under the name Golvellius: Valley of Doom. Later that year (1988), Compile released yet another remake for the MSX2 system. This game featured mostly the same graphics as the Sega Master System version, but the overworld and dungeon layouts are again entirely different. In 2009, DotEmu/D4 Entreprise announced that Golvellius would be re-released for the iPhone OS platform, as a port of the Master System version. The scenario is the same in all three versions of Golvellius. The ending promised a sequel, which was never developed or released. However, there is a spin-off game titled Super Cooks that came included in the 1989 release of the Disc Station Special Shoka Gou. Reception Computer and Video Games rated the Sega Master System version 87% in 1989. Console XS rated it 82% in 1992. References External links 1987 video games Action role-playing video games Compile (company) games Dotemu games IOS games Master System games MSX games MSX2 games Single-player video games Video games developed in Japan
https://en.wikipedia.org/wiki/Cyclostationary%20process
A cyclostationary process is a signal having statistical properties that vary cyclically with time. A cyclostationary process can be viewed as multiple interleaved stationary processes. For example, the maximum daily temperature in New York City can be modeled as a cyclostationary process: the maximum temperature on July 21 is statistically different from the temperature on December 20; however, it is a reasonable approximation that the temperature on December 20 of different years has identical statistics. Thus, we can view the random process composed of daily maximum temperatures as 365 interleaved stationary processes, each of which takes on a new value once per year. Definition There are two differing approaches to the treatment of cyclostationary processes. The stochastic approach is to view measurements as an instance of an abstract stochastic process model. As an alternative, the more empirical approach is to view the measurements as a single time series of data: that which has actually been measured in practice and, for some parts of the theory, conceptually extended from an observed finite time interval to an infinite interval. Both mathematical models lead to probabilistic theories: abstract stochastic probability for the stochastic process model and the more empirical Fraction Of Time (FOT) probability for the alternative model. The FOT probability of some event associated with the time series is defined to be the fraction of time that event occurs over the lifetime of the time series. In both approaches, the process or time series is said to be cyclostationary if and only if its associated probability distributions vary periodically with time. However, in the non-stochastic time-series approach, there is an alternative but equivalent definition: A time series that contains no finite-strength additive sine-wave components is said to exhibit cyclostationarity if and only if there exists some nonlinear time-invariant transformation of the time series that pro
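The interleaving view lends itself to a simple numerical check. In the sketch below (all parameters are made up), a zero-mean noise signal has a standard deviation that varies with period P; collecting every P-th sample yields stationary subsequences whose statistics can be estimated by averaging over cycles:

```python
import numpy as np

rng = np.random.default_rng(0)
P, cycles = 8, 2000                          # period and observed cycles
sigma = 1.0 + 0.5 * np.sin(2 * np.pi * np.arange(P) / P)  # periodic std dev
x = rng.normal(0.0, np.tile(sigma, cycles))  # a cyclostationary noise signal

# Phase k collects samples k, k+P, k+2P, ...: one stationary subsequence each.
phases = x.reshape(cycles, P)
print("true std dev:     ", np.round(sigma, 3))
print("estimated std dev:", np.round(phases.std(axis=0), 3))
```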
https://en.wikipedia.org/wiki/SAP%20Graphical%20User%20Interface
SAP GUI is the graphical user interface client in SAP ERP's 3-tier architecture of database, application server and client. It is software that runs on a Microsoft Windows, Apple Macintosh or Unix desktop, and allows a user to access SAP functionality in SAP applications such as SAP ERP and SAP Business Information Warehouse (BW). It is used for remote access to the SAP central server in a company network. Family The SAP GUI family comprises SAP GUI for the Windows environment and Apple Macintosh; SAP GUI for the Java(TM) environment; and SAP GUI for HTML / Internet Transaction Server (ITS), which requires Internet Explorer or Firefox as a browser (other browsers are not officially supported by SAP). Releases exist for Microsoft Windows and, via the Java version, for other operating systems. Single sign-on SAP GUI on Microsoft Windows or Internet Explorer can also be used for single sign-on. There are several portal-based authentication applications for single sign-on. SAP GUI can have single sign-on with SAP Logon Ticket as well. Single sign-on also works in the Java GUI. Criticism of using SAP GUI for authentication to SAP server access SAP is a distributed application, where client software (SAP GUI) installed on a user's workstation is used to access the central SAP server remotely over the company's network. Users need to authenticate themselves when accessing SAP. By default, however, SAP uses unencrypted communication, which allows potential company-internal attackers to get access to usernames and passwords by listening on the network. This can expose the complete SAP system if a person is able to get access to this information for a user with extended authorization in the SAP system. Information about this feature is publicly accessible on the Internet. SAP Secure Network Communications SAP offers an option to strongly protect communication between clients and servers, called Secure Network Communications (SNC). Security In total, the vendor has released 25 security patches (aka SAP Security Notes). One of
https://en.wikipedia.org/wiki/Combined%20Cipher%20Machine
The Combined Cipher Machine (CCM) (or Combined Cypher Machine) was a common cipher machine system for securing Allied communications during World War II and, for a few years after, by NATO. The British Typex machine and the US ECM Mark II were both modified so that they were interoperable. History The British had shown their main cipher machine — Typex — to the US on their entry into the war, but the Americans were reluctant to share their machine, the ECM Mark II. There was a need for secure inter-Allied communications, and so a joint cipher machine adapted from both countries' systems was developed by the US Navy. Use The "Combined Cipher Machine" was approved in October 1942, and production began two months later. The requisite adapters, designed by Don Seiler, were all manufactured in the US, as Britain did not have sufficient manufacturing resources at the time. The CCM was initially used on a small scale for naval use from 1 November 1943, becoming operational on all US and UK armed services in April 1944. The adapter to convert the ECM into the CCM was denoted the ASAM 5 by the US Army (in 1949) and CSP 1600 by the US Navy (the Navy referred to the entire ECM machine with CCM adapter as the CSP 1700). The adapter was a replacement rotor basket, so the ECM could be easily converted for CCM use in the field. A specially converted ECM, termed the CCM Mark II, was also made available to Britain and Canada. The CCM programme cost US$6 million. SIGROD was an implementation of the CCM which, at one point, was proposed as a replacement for the ECM Mark II (Savard and Pekelney, 1999). TypeX Mark 23 was a later model of the Typex cipher machine family that was adapted for use with the Combined Cipher Machine. Security While Allied codebreakers had much success reading the equivalent German machine, the Lorenz cipher, their German counterparts, although performing some initial analysis, had no success with the CCM. However, there were security problems with th
https://en.wikipedia.org/wiki/P%20element
P elements are transposable elements that were discovered in Drosophila as the causative agents of genetic traits called hybrid dysgenesis. The transposon is responsible for the P trait of the P element and it is found only in wild flies. They are also found in many other eukaryotes. The name comes from evolutionary biologist Margaret Kidwell, who, together with James Kidwell and John Sved, researched hybrid dysgenesis in Drosophila. They referred to strains as P (paternal) and M (maternal) according to whether they contributed to hybrid dysgenesis in that reproductive role. The P element encodes an enzyme known as P transposase. Unlike laboratory-bred females, wild-type females are thought also to express an inhibitor of P transposase function, produced by the very same element. This inhibitor reduces the disruption to the genome caused by the movement of P elements, allowing fertile progeny. Evidence for this comes from crosses of laboratory females (which lack the P transposase inhibitor) with wild-type males (which have P elements). In the absence of the inhibitor, the P elements can proliferate throughout the genome, disrupting many genes and often proving lethal to progeny or rendering them sterile. P elements are commonly used as mutagenic agents in genetic experiments with Drosophila. One advantage of this approach is that the mutations are easy to locate. In hybrid dysgenesis, one strain of Drosophila mates with another strain of Drosophila, producing hybrid offspring and causing chromosomal damage known to be dysgenic. Hybrid dysgenesis requires a contribution from both parents. For example, in the P-M system, where the P strain contributes paternally and the M strain contributes maternally, dysgenesis can occur. The reverse cross, with an M cytotype father and a P mother, produces normal offspring, as does crossing in a P x P or M x M manner. P male chromosomes can cause dysgenesis when crossed with an M female. Characteristics The P element is a class II transposon, an
https://en.wikipedia.org/wiki/Schick%20test
The Schick test, developed in 1913, is a skin test used to determine whether or not a person is susceptible to diphtheria. It was named after its inventor, Béla Schick (1877–1967), a Hungarian-born American pediatrician. Procedure The test is a simple procedure. A small amount (0.1 ml) of diluted (1/50 MLD) diphtheria toxin is injected intradermally into one arm of the person and a heat inactivated toxin on the other as a control. If a person does not have enough antibodies to fight it off, the skin around the injection will become red and swollen, indicating a positive result. This swelling disappears after a few days. If the person has an immunity, then little or no swelling and redness will occur, indicating a negative result. Results can be interpreted as: Positive: when the test results in a wheal of 5–10 mm diameter, reaching its peak in 4–7 days. The control arm shows no reaction. This indicates that the subject lacks antibodies against the toxin and hence is susceptible to the disease. Pseudo-positive: when there is only a red-colored inflammation (erythema) and it disappears within 4 days. This happens on both the arms since the subject is immune but hypersensitive to the toxin. Negative reaction: Indicates that the person is immune. Combined reaction: Initial picture is like that of the pseudo-reaction but the erythema fades off after 4 days only in the control arm. It progresses on the test arm to a typical positive. The subject is interpreted to be both susceptible and hypersensitive. The test was created when immunizing agents were scarce and not very safe; however, as newer and safer toxoids became available, susceptibility tests were no longer required. References Taber's Cyclopedic Medical Dictionary, 20th Ed. (2005). Skin tests Pediatrics Diphtheria Immunologic tests
https://en.wikipedia.org/wiki/Thermodynamic%20cycle
A thermodynamic cycle consists of linked sequences of thermodynamic processes that involve transfer of heat and work into and out of the system, while varying pressure, temperature, and other state variables within the system, and that eventually returns the system to its initial state. In the process of passing through a cycle, the working fluid (system) may convert heat from a warm source into useful work, and dispose of the remaining heat to a cold sink, thereby acting as a heat engine. Conversely, the cycle may be reversed and use work to move heat from a cold source and transfer it to a warm sink, thereby acting as a heat pump. If at every point in the cycle the system is in thermodynamic equilibrium, the cycle is reversible. Whether carried out reversibly or irreversibly, the net entropy change of the system is zero, as entropy is a state function. During a closed cycle, the system returns to its original thermodynamic state of temperature and pressure. Process quantities (or path quantities), such as heat and work, are process dependent. For a cycle for which the system returns to its initial state the first law of thermodynamics applies: ΔE = E_in − E_out = 0. The above states that there is no change of the internal energy (ΔE) of the system over the cycle. E_in represents the total work and heat input during the cycle and E_out would be the total work and heat output during the cycle. The repeating nature of the process path allows for continuous operation, making the cycle an important concept in thermodynamics. Thermodynamic cycles are often represented mathematically as quasistatic processes in the modeling of the workings of an actual device. Heat and work Two primary classes of thermodynamic cycles are power cycles and heat pump cycles. Power cycles are cycles which convert some heat input into a mechanical work output, while heat pump cycles transfer heat from low to high temperatures by using mechanical work as the input. Cycles composed entirely of quasistatic processes can operate
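A small worked example of the first law over a cycle, with hypothetical heat transfers for a power cycle:

```python
# First law over a closed cycle: the internal energy change is zero, so the
# net work out equals the net heat in. All numbers below are illustrative.
q_in, q_out = 1000.0, 600.0   # heat absorbed from the warm source / rejected (J)
w_net = q_in - q_out          # net work delivered per cycle
efficiency = w_net / q_in
print(f"net work per cycle: {w_net} J, thermal efficiency: {efficiency:.0%}")
```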
https://en.wikipedia.org/wiki/Ancient%20DNA
Ancient DNA (aDNA) is DNA isolated from ancient specimens. Due to degradation processes (including cross-linking, deamination and fragmentation), ancient DNA is more degraded in comparison with contemporary genetic material. Even under the best preservation conditions, there is an upper boundary of 0.4–1.5 million years for a sample to contain sufficient DNA for sequencing technologies. The oldest sample ever sequenced is estimated to be 1.65 million years old. Genetic material has been recovered from paleo/archaeological and historical skeletal material, mummified tissues, archival collections of non-frozen medical specimens, preserved plant remains, ice and permafrost cores, marine and lake sediments and excavation dirt. On 7 December 2022, The New York Times reported that two-million-year-old genetic material had been found in Greenland; it is currently considered the oldest DNA discovered. History of ancient DNA studies 1980s The first study of what would come to be called aDNA was conducted in 1984, when Russ Higuchi and colleagues at the University of California, Berkeley reported that traces of DNA from a museum specimen of the quagga not only remained in the specimen over 150 years after the death of the individual, but could be extracted and sequenced. Over the next two years, through investigations into natural and artificially mummified specimens, Svante Pääbo confirmed that this phenomenon was not limited to relatively recent museum specimens but could apparently be replicated in a range of mummified human samples that dated as far back as several thousand years. The laborious processes that were required at that time to sequence such DNA (through bacterial cloning) were an effective brake on the study of ancient DNA (aDNA) and the field of museomics. However, with the development of the Polymerase Chain Reaction (PCR) in the late 1980s, the field began to progress rapidly. Double primer PCR amplification of aDNA (jumping-PCR) can produce highl
https://en.wikipedia.org/wiki/Tweak%20programming%20environment
Tweak is a graphical user interface (GUI) layer written by Andreas Raab for the Squeak development environment, which in turn is an integrated development environment based on the Smalltalk-80 computer programming language. Tweak is an alternative to an earlier graphic user interface layer called Morphic. Development began in 2001. Applications that use the Tweak software include Sophie (version 1), a multimedia and e-book authoring system, and a family of virtual world systems: Open Cobalt, Teleplace, OpenQwaq, 3d ICC's Immersive Terf and the Croquet Project. Influences An experimental version of Etoys, a programming environment for children, used Tweak instead of Morphic. Etoys was a major influence on a similar Squeak-based programming environment known as Scratch. References External links Tweak Programming tools Smalltalk programming language family
https://en.wikipedia.org/wiki/Victor%20Isakov
Victor Isakov (1947 – May 14, 2021) was a mathematician working in the field of inverse problems for partial differential equations and related topics (potential theory, uniqueness of continuation and Carleman estimates, nonlinear functional analysis and calculus of variation). He was a distinguished professor in the Department of Mathematics and Statistics at Wichita State University. His areas of professional interest included: Inverse problems of gravimetry (general uniqueness conditions and local solvability theorems) and related problems of imaging including prospecting active part of the brain and the source of noise of the aircraft from exterior measurements of electromagnetic and acoustical fields. Inverse problems of conductivity (uniqueness of discontinuous conductivity and numerical methods) and their applications to medical imaging and nondestructive testing of materials for cracks and inclusions. Inverse scattering problems (uniqueness and stability of penetrable and soft scatterers). Finding constitutional laws from experimental data (reconstructing nonlinear partial differential equation from all or some boundary data). Uniqueness of the continuation for hyperbolic equations and systems of mathematical physics. The inverse option pricing problem. Publications Isakov has over 90 publications in print or in preparation as of late 2005, which include: Increased stability in the continuation of solutions to the Helmholtz equation (with Tomasz Hrycak), Inverse Problems, 20(2004), 697-712. Inverse Problems for Partial Differential Equations, Applied Mathematical Sciences (Springer-Verlag), Vol 127, 2nd ed., 2006. Presentations: During the last 15 years, he delivered approximately 90 invited talks at international and national conferences and universities in Austria, Canada, China, Finland, France, Germany, Italy, Japan, Poland, Russia, Sweden, Switzerland, South Korea, Tunisia, and United Kingdom. He was a principal speaker at the summer AMS-SIAM r
https://en.wikipedia.org/wiki/BBC%20Research%20%26%20Development
BBC Research & Development is the technical research department of the BBC. Function It has responsibility for researching and developing advanced and emerging media technologies for the benefit of the corporation and the wider UK and European media industries, and is also the technical design authority for a number of major technical infrastructure transformation projects for the UK broadcasting industry. Structure BBC R&D is part of the wider BBC Design & Engineering, and is led by Jatin Aythora, Director, Research & Development. In 2011, the North Lab moved into MediaCityUK in Salford along with several other departments of the BBC, whilst the South Lab remained at White City in London. History In April 1930 the Development section of the BBC became the Research Department. The department as it stands today was formed in 1993 from the merger of the BBC Designs Department and the BBC Research Department. From 2006 to 2008 it was known as Research and Innovation but has since reverted to its original name. BBC Research & Development has made major contributions to broadcast technology, carrying out original research in many areas, and developing items like the peak programme meter (PPM) which became the basis for many world standards. Innovations It has also been involved in many well-known consumer technologies such as teletext, DAB, NICAM and Freeview. It was at the forefront of the development of FM radio, stereo FM, and RDS. These innovations have led to Queen's Awards for Innovation in 1969, 1974, 1983, 1987, 1992, 1998, 2001 and 2011. In the 1970s, its engineers designed the famous LS3/5A studio monitor for use in outside broadcasting units. Licensed to manufacturers, the loudspeaker sold 100,000 pairs in its 20+ years' life. Closure of Kingswood Warren and move to London and Salford In early 2010 the department had approximately 135 staff based at three locations: White City in London, Kingswood Warren in Kingswood, Surrey, and the R&D (North Lab) at the
https://en.wikipedia.org/wiki/Contrast-enhanced%20ultrasound
Contrast-enhanced ultrasound (CEUS) is the application of ultrasound contrast medium to traditional medical sonography. Ultrasound contrast agents rely on the different ways in which sound waves are reflected from interfaces between substances. This may be the surface of a small air bubble or a more complex structure. Commercially available contrast media are gas-filled microbubbles that are administered intravenously to the systemic circulation. Microbubbles have a high degree of echogenicity (the ability of an object to reflect ultrasound waves). There is a great difference in echogenicity between the gas in the microbubbles and the soft tissue surroundings of the body. Thus, ultrasonic imaging using microbubble contrast agents enhances the ultrasound backscatter (reflection of the ultrasound waves) to produce a sonogram with increased contrast due to the high echogenicity difference. Contrast-enhanced ultrasound can be used to image blood perfusion in organs, measure blood flow rate in the heart and other organs, and for other applications. Targeting ligands that bind to receptors characteristic of intravascular diseases can be conjugated to microbubbles, enabling the microbubble complex to accumulate selectively in areas of interest, such as diseased or abnormal tissues. This form of molecular imaging, known as targeted contrast-enhanced ultrasound, will only generate a strong ultrasound signal if targeted microbubbles bind in the area of interest. Targeted contrast-enhanced ultrasound may have many applications in both medical diagnostics and medical therapeutics. However, the targeted technique has not yet been approved by the FDA for clinical use in the United States. Contrast-enhanced ultrasound is regarded as safe in adults, comparable to the safety of MRI contrast agents, and better than radiocontrast agents used in contrast CT scans. The more limited safety data in children suggest that such use is as safe as in the adult population. Bubble echocard
https://en.wikipedia.org/wiki/Biotic%20potential
Biotic potential describes the unrestricted growth of a population, resulting in the maximum possible growth of that population. Biotic potential is the highest possible vital index of a species; that is, when the species has its highest birthrate and lowest mortality rate. Quantitative Expression The biotic potential is the quantitative expression of the ability of a species to face natural selection in any environment. The main equilibrium of a particular population is described by the equation: Number of Individuals = Biotic Potential / Resistance of the Environment (Biotic and Abiotic). Chapman also refers to a "vital index", a ratio expressing the rate of surviving members of a species, where: Vital Index = (number of births / number of deaths) × 100. Components According to the ecologist R.N. Chapman (1928), the biotic potential could be divided into a reproductive and a survival potential. The survival potential could in turn be divided into nutritive and protective potentials. Reproductive potential (potential natality) is the upper limit to biotic potential in the absence of mortality. Survival potential is the reciprocal of mortality. Because reproductive potential does not account for the number of gametes surviving, survival potential is a necessary component of biotic potential. In the absence of mortality, biotic potential = reproductive potential. Chapman also identified two additional components of nutritive and protective potentials as divisions of the survival potential. Nutritive potential is the ability to acquire and use food for growth and energy. Protective potential is the ability of the organism to protect itself against the dynamic forces of the environment in order to ensure successful reproduction and offspring. Full expression of the biotic potential of an organism is restricted by environmental resistance, any condition that inhibits the increase in number of the population. It is generally only reached when environmen
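A small sketch with made-up numbers shows how the two expressions above are applied:

```python
# Chapman's vital index and the equilibrium relation (hypothetical values).
births, deaths = 480, 120
vital_index = births / deaths * 100
print(f"vital index: {vital_index:.0f}")  # 400: four births per death, times 100

biotic_potential = 10_000          # illustrative maximum reproductive output
environmental_resistance = 25.0    # illustrative combined biotic/abiotic limit
individuals = biotic_potential / environmental_resistance
print(f"supported individuals: {individuals:.0f}")
```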
https://en.wikipedia.org/wiki/Celestial%20Emporium%20of%20Benevolent%20Knowledge
Celestial Emporium of Benevolent Knowledge is a fictitious taxonomy of animals described by the writer Jorge Luis Borges in his 1942 essay "The Analytical Language of John Wilkins". Overview Wilkins, a 17th-century philosopher, had proposed a universal language based on a classification system that would encode a description of the thing a word describes into the word itself—for example, Zi identifies the genus beasts; Zit denotes the "difference" rapacious beasts of the dog kind; and finally Zitα specifies dog. In response to this proposal and in order to illustrate the arbitrariness and cultural specificity of any attempt to categorize the world, Borges describes this example of an alternate taxonomy, supposedly taken from an ancient Chinese encyclopaedia entitled Celestial Emporium of Benevolent Knowledge. The list divides all animals into 14 categories. Borges claims that the list was discovered in its Chinese source by the translator Franz Kuhn. In his essay, Borges compares this classification with one allegedly used at the time by the Institute of Bibliography in Brussels, which he considers similarly chaotic. Borges says the Institute divides the universe into 1000 sections, of which number 262 is about the Pope, ironically classified apart from section 264, that on the Roman Catholic Church. Meanwhile section 294 encompasses all four of Hinduism, Shinto, Buddhism and Taoism. He also finds excessive heterogeneity in section 179, which includes animal cruelty, suicide, mourning, and an assorted group of vices and virtues. Borges concludes: "there is no description of the universe that isn't arbitrary and conjectural for a simple reason: we don't know what the universe is". Nevertheless, he finds Wilkins' language to be clever (ingenioso) in its design, as arbitrary as it may be. He points out that in a language with a divine scheme of the universe, beyond human capabilities, the name of an object would include the details of its entire past and futur
https://en.wikipedia.org/wiki/Future%20Evolution
Future Evolution is a book written by paleontologist Peter Ward and illustrated by Alexis Rockman. In it, Ward presents his own view of future evolution and compares it with Dougal Dixon's After Man: A Zoology of the Future and H. G. Wells's The Time Machine. According to Ward, humanity may exist for a long time. Nevertheless, we are impacting our planet. He splits his book into different chronologies, starting with the near future (the next 1,000 years). Humanity would be struggling to support a massive population of 11 billion. Global warming raises sea levels. The ozone layer weakens. Most of the available land is devoted to agriculture due to the demand for food. Despite all this, oceanic wildlife remains largely untouched by most of these impacts, specifically the commercially farmed fish. This is, according to Ward, an era of extinction that would last about 10 million years (note that many human-caused extinctions have already occurred). After that, Earth gets stranger. Ward labels the species that have the potential to survive in a human-infested world. These include dandelions, raccoons, owls, pigs, cattle, rats, snakes, and crows, to name but a few. In the human-infested ecosystem, those preadapted to live amongst man survived and prospered. Ward describes garbage dumps 10 million years in the future infested with multiple species of rats, a snake with a sticky frog-like tongue to snap up rodents, and pigs with snouts specialized for rooting through garbage. The story's time traveller who views this new refuse-covered habitat is gruesomely attacked by ravenous flesh-eating crows. Ward then questions the potential for humanity to evolve into a new species. According to him, this is incredibly unlikely: for this to happen, a human population must isolate itself and interbreed until it becomes a new species. He then questions whether humanity will survive or extinguish itself through climate change, nuclear war, disease, or the threat posed by nanotechnology as a terrorist weapon
https://en.wikipedia.org/wiki/Rip%20van%20Winkle%20cipher
In cryptography, the Rip van Winkle cipher is a provably secure cipher with a finite key, assuming the attacker has only finite storage. The cipher requires a broadcaster (perhaps a numbers station) publicly transmitting a series of random numbers. The sender encrypts a plaintext message by XORing it with the random numbers, then holding it for some length of time T. At the end of that time, the sender finally transmits the encrypted message. The receiver holds the random numbers for the same length of time T. As soon as the receiver gets the encrypted message, he XORs it with the random numbers he remembers were transmitted a time T earlier, to recover the original plaintext message. The delay T represents the "key" and must be securely communicated only once. Ueli Maurer says the original Rip van Winkle cipher is completely impractical, but it motivated a new approach to provable security. Sources J.L. Massey and I. Ingemarsson. The Rip van Winkle cipher - a simple and provably computationally secure cipher with a finite key. In Proc. IEEE Int. Symp. Information Theory (Abstracts), page 146, 1985. Cryptographic algorithms
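A toy sketch of the scheme in Python; the broadcast randomness, message, and delay handling are all simplified to a single buffer, and the names are illustrative only:

```python
# Toy Rip van Winkle cipher: XOR with publicly broadcast randomness, then
# delay transmission by time T.
import secrets

plaintext = b"MEET AT DAWN"
pad = secrets.token_bytes(len(plaintext))  # the publicly broadcast random numbers

ciphertext = bytes(p ^ r for p, r in zip(plaintext, pad))
# ... the sender now waits time T before transmitting `ciphertext` ...

# The receiver, who stored the broadcast from time T ago, recovers the message:
recovered = bytes(c ^ r for c, r in zip(ciphertext, pad))
assert recovered == plaintext
print(recovered)
```

The security argument rests on the attacker being unable to store the entire public random stream for the duration T, not on the XOR itself.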
https://en.wikipedia.org/wiki/Copper%20loss
Copper loss is the term often given to heat produced by electrical currents in the conductors of transformer windings, or other electrical devices. Copper losses are an undesirable transfer of energy, as are core losses, which result from induced currents in adjacent components. The term is applied regardless of whether the windings are made of copper or another conductor, such as aluminium. Hence the term winding loss is often preferred. The term load loss is used in electricity delivery to describe the portion of the electricity lost between the generator and the consumer that is related to the load power (it is proportional to the square thereof), as opposed to the no-load loss. Calculations Copper losses result from Joule heating and so are also referred to as "I squared R losses", in reference to Joule's First Law. This states that the energy lost each second, or power, increases as the square of the current through the windings and in proportion to the electrical resistance of the conductors: P = I²R, where I is the current flowing in the conductor and R is the resistance of the conductor. With I in amperes and R in ohms, the calculated power loss is given in watts. Joule heating has a coefficient of performance of 1.0, meaning that every watt of electrical power is converted into one joule of heat per second. Therefore, the energy lost due to copper loss is: E = I²Rt, where t is the time in seconds the current is maintained. Effect of frequency For low-frequency applications, the power loss can be minimized by employing conductors with a large cross-sectional area, made from low-resistivity metals. With high-frequency currents, the proximity effect and skin effect cause the current to be unevenly distributed across the conductor, increasing its effective resistance, and making loss calculations more difficult. Litz wire is a type of wire constructed to force the current to be distributed uniformly, thereby reducing Joule heating. Reducing copper loss Among other measures, the electric
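Applying the two formulas with hypothetical winding values:

```python
# Copper (winding) loss from P = I^2 * R, and energy over a time t.
I = 12.0      # current in amperes (hypothetical)
R = 0.35      # winding resistance in ohms (hypothetical)
t = 3600.0    # duration in seconds

power_loss = I**2 * R         # watts dissipated as heat
energy_loss = power_loss * t  # joules over time t
print(f"P = {power_loss:.1f} W, E = {energy_loss/1000:.1f} kJ per hour")
```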
https://en.wikipedia.org/wiki/Twelvefold%20way
In combinatorics, the twelvefold way is a systematic classification of 12 related enumerative problems concerning two finite sets, which include the classical problems of counting permutations, combinations, multisets, and partitions either of a set or of a number. The idea of the classification is credited to Gian-Carlo Rota, and the name was suggested by Joel Spencer. Overview Let N and X be finite sets. Let n = |N| and x = |X| be the cardinality of the sets. Thus N is an n-set, and X is an x-set. The general problem we consider is the enumeration of equivalence classes of functions f : N → X. The functions are subject to one of the three following restrictions: No condition: each a in N may be sent by f to any b in X, and each b may occur multiple times. f is injective: each value f(a) for a in N must be distinct from every other, and so each b in X may occur at most once in the image of f. f is surjective: for each b in X there must be at least one a in N such that f(a) = b, thus each b will occur at least once in the image of f. (The condition "f is bijective" is only an option when n = x; but then it is equivalent to both "f is injective" and "f is surjective".) There are four different equivalence relations which may be defined on the set of functions f from N to X: equality; equality up to a permutation of N; equality up to a permutation of X; equality up to permutations of N and X. The three conditions on the functions and the four equivalence relations can be paired in 3 × 4 = 12 ways. The twelve problems of counting equivalence classes of functions do not involve the same difficulties, and there is not one systematic method for solving them. Two of the problems are trivial (the number of equivalence classes is 0 or 1), five problems have an answer in terms of a multiplicative formula of n and x, and the remaining five problems have an answer in terms of combinatorial functions (Stirling numbers and the partition function for a given number of parts). The incorporation of classical enumeration problems into this setting is a
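The three counts for functions considered up to plain equality are easy to compute directly; here is a sketch using the inclusion–exclusion formula for surjections in place of Stirling-number tables:

```python
# Counts of functions from an n-set N to an x-set X, up to equality.
from math import comb, factorial

def all_functions(n, x):   # no condition: x^n
    return x ** n

def injections(n, x):      # falling factorial x(x-1)...(x-n+1)
    return factorial(x) // factorial(x - n) if n <= x else 0

def surjections(n, x):     # inclusion-exclusion: sum (-1)^k C(x,k) (x-k)^n
    return sum((-1) ** k * comb(x, k) * (x - k) ** n for k in range(x + 1))

n, x = 3, 2
print(all_functions(n, x), injections(n, x), surjections(n, x))  # 8 0 6
```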
https://en.wikipedia.org/wiki/Robot%20learning
Robot learning is a research field at the intersection of machine learning and robotics. It studies techniques allowing a robot to acquire novel skills or adapt to its environment through learning algorithms. The embodiment of the robot, situated in a physical embedding, provides at the same time specific difficulties (e.g. high-dimensionality, real-time constraints for collecting data and learning) and opportunities for guiding the learning process (e.g. sensorimotor synergies, motor primitives). Examples of skills that are targeted by learning algorithms include sensorimotor skills such as locomotion, grasping, and active object categorization, as well as interactive skills such as joint manipulation of an object with a human peer, and linguistic skills such as the grounded and situated meaning of human language. Learning can happen either through autonomous self-exploration or through guidance from a human teacher, for example in robot learning by imitation. Robot learning can be closely related to adaptive control, reinforcement learning as well as developmental robotics, which considers the problem of autonomous lifelong acquisition of repertoires of skills. While machine learning is frequently used by computer vision algorithms employed in the context of robotics, these applications are usually not referred to as "robot learning". Projects Maya Cakmak, assistant professor of computer science and engineering at the University of Washington, is trying to create a robot that learns by imitating, a technique called "programming by demonstration". A researcher shows it a cleaning technique for the robot's vision system, and it generalizes the cleaning motion from the human demonstration as well as identifying the "state of dirt" before and after cleaning. Similarly, the Baxter industrial robot can be taught how to do something by grabbing its arm and showing it the desired movements. It can also use deep learning to teach itself to grasp an unknown object. Sha
https://en.wikipedia.org/wiki/Language%2C%20Truth%2C%20and%20Logic
Language, Truth and Logic is a 1936 book about meaning by the philosopher Alfred Jules Ayer, in which the author defines, explains, and argues for the verification principle of logical positivism, sometimes referred to as the criterion of significance or criterion of meaning. Ayer explains how the principle of verifiability may be applied to the problems of philosophy. Language, Truth and Logic brought some of the ideas of the Vienna Circle and the logical empiricists to the attention of the English-speaking world. Historical background According to Ayer's autobiographical book, Part of My Life, it was work he started in the summer and autumn of 1933 that eventually led to Language, Truth and Logic, specifically Demonstration of the Impossibility of Metaphysics—later published in Mind under the editorship of G.E. Moore. The title of the book was taken ("To some extent plagiarized" according to Ayer) from Friedrich Waismann's Logik, Sprache, Philosophie. Criterion of meaning According to Ayer, analytic statements are tautologies. A tautology is a statement that is necessarily true, true by definition, and true under any conditions. A tautology is a repetition of the meaning of a statement, using different words or symbols. According to Ayer, the statements of logic and mathematics are tautologies. Tautologies are true by definition, and thus their validity does not depend on empirical testing. Synthetic statements, or empirical propositions, assert or deny something about the real world. The validity of synthetic statements is not established merely by the definition of the words or symbols they contain. According to Ayer, if a statement expresses an empirical proposition, then the validity of the proposition is established by its empirical verifiability. Propositions are statements that have conditions under which they can be verified. By the verification principle, meaningful statements have conditions under which their validity can be affirmed or denied. Sta
https://en.wikipedia.org/wiki/Diesel%20generator
A diesel generator (DG) (also known as a diesel genset) is the combination of a diesel engine with an electric generator (often an alternator) to generate electrical energy. This is a specific case of engine generator. A diesel compression-ignition engine is usually designed to run on diesel fuel, but some types are adapted for other liquid fuels or natural gas (CNG). Diesel generating sets are used in places without connection to a power grid or as an emergency power supply if the grid fails, as well as for more complex applications such as peak-lopping, grid support, and export to the power grid. Correct sizing of a diesel generator is crucial to avoid low-load operation or power shortages. Sizing is complicated by the characteristics of modern electronics, specifically non-linear loads. In size ranges around 50 MW and above, an open cycle gas turbine is more efficient at full load than an array of diesel engines, and far more compact, with comparable capital costs; but for regular part-loading, even at these power levels, diesel arrays are sometimes preferred to open cycle gas turbines, due to their superior efficiencies. Diesel generator set The packaged combination of a diesel engine, a generator, and various ancillary devices (such as base, canopy, sound attenuation, control systems, circuit breakers, jacket water heaters, and starting system) is referred to as a "generating set" or a "genset" for short. Set sizes range from 8 to 30 kW (also 8 to 30 kVA single phase) for homes, small shops, and offices, with the larger industrial generators from 8 kW (11 kVA) up to 2,000 kW (2,500 kVA three phase) used for office complexes, factories, and other industrial facilities. A 2,000 kW set can be housed in an ISO container with a fuel tank, controls, power distribution equipment and all other equipment needed to operate as a standalone power station or as a standby backup to grid power. These units, referred to as power modules, are gensets on large triple-axle trailers weighing or
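The kW and kVA figures quoted above are related by the load power factor; a short sketch (the power factor value is an assumption, though it matches the 2,000 kW / 2,500 kVA three-phase rating quoted above):

```python
# Real power (kW) = apparent power (kVA) x power factor of the load.
apparent_kva = 2500.0
power_factor = 0.8   # assumed typical rating power factor
real_kw = apparent_kva * power_factor
print(f"{apparent_kva} kVA at power factor {power_factor} supplies {real_kw} kW")
```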
https://en.wikipedia.org/wiki/Category%201%20cable
Category 1 cable, also known as Cat 1, Level 1, or voice-grade copper, is a grade of unshielded twisted pair cabling designed for telephone communications, and at one time was the most common on-premises wiring. The maximum frequency suitable for transmission over Cat 1 cable is 1 MHz, but Cat 1 is not currently considered adequate for data transmission (though it was at one time used for that purpose on the Apple Macintosh, starting in the late 1980s, in the form of Farallon Computing's (later Netopia's) PhoneNet, an implementation of Apple's LocalTalk networking hardware standard). Although not an official category standard established by TIA/EIA, Category 1 has become the de facto name given to Level 1 cables originally defined by Anixter International, the distributor. Cat 1 cable was typically used for networks that carry only voice traffic, for example telephones. Official TIA/EIA-568 standards have only been established for cables of Category 3 ratings or above. See also Category 2 cable Category 3 cable Category 4 cable Category 5 cable References External links CCNA: Network Media Types Signal cables Local loop
https://en.wikipedia.org/wiki/Don%27t%20repeat%20yourself
"Don't repeat yourself" (DRY) is a principle of software development aimed at reducing repetition of information which is likely to change, replacing it with abstractions that are less likely to change, or using data normalization which avoids redundancy in the first place. The DRY principle is stated as "Every piece of knowledge must have a single, unambiguous, authoritative representation within a system". The principle has been formulated by Andy Hunt and Dave Thomas in their book The Pragmatic Programmer. They apply it quite broadly to include database schemas, test plans, the build system, even documentation. When the DRY principle is applied successfully, a modification of any single element of a system does not require a change in other logically unrelated elements. Additionally, elements that are logically related all change predictably and uniformly, and are thus kept in sync. Besides using methods and subroutines in their code, Thomas and Hunt rely on code generators, automatic build systems, and scripting languages to observe the DRY principle across layers. Single choice principle A particular case of DRY is the single choice principle. It was defined by Bertrand Meyer as: "Whenever a software system must support a set of alternatives, one and only one module in the system should know their exhaustive list." It was applied when designing Eiffel. Alternatives WET The opposing view to DRY is called WET, a backronym commonly taken to stand for write everything twice (alternatively write every time, we enjoy typing or waste everyone's time). WET solutions are common in multi-tiered architectures where a developer may be tasked with, for example, adding a comment field on a form in a web application. The text string "comment" might be repeated in the label, the HTML tag, in a read function name, a private variable, database DDL, queries, and so on. A DRY approach eliminates that redundancy by using frameworks that reduce or eliminate all those editing t
https://en.wikipedia.org/wiki/Certified%20software%20development%20professional
Certified Software Development Professional (CSDP) is a vendor-neutral professional certification in software engineering developed by the IEEE Computer Society for experienced software engineering professionals. This certification was offered globally from 2001 through December 2014. The certification program constituted an element of the Computer Society's major efforts in the area of software engineering professionalism, along with the IEEE-CS and ACM Software Engineering 2004 (SE2004) Undergraduate Curricula Recommendations, and The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide 2004), completed two years later. As a further development of these elements, and to facilitate the global portability of software engineering certification, the International Standard ISO/IEC 24773:2008 "Software engineering -- Certification of software engineering professionals -- Comparison framework" was developed between 2005 and 2008. (See an overview of this ISO/IEC JTC 1 and IEEE standardization effort in the article published by Stephen B. Seidman, CSDP.) The standard was formulated in such a way that the CSDP certification scheme could be recognized as basically aligned with it soon after the standard's release date, 2008-09-01. Several later revisions of the CSDP certification were undertaken with the aim of making the alignment more complete. In 2019, ISO/IEC 24773:2008 was withdrawn and revised (by ISO/IEC 24773-1:2019). The certification was initially offered by the IEEE Computer Society to experienced software engineering and software development practitioners globally in 2001 in the course of the certification examination beta-testing. The CSDP certification program was officially approved in 2002. After December 2014 this certification program was discontinued; all certificates that had been issued remain valid indefinitely. A number of new similar certifications were introduced by the IEEE Computer Society, includi
https://en.wikipedia.org/wiki/Verificationism
Verificationism, also known as the verification principle or the verifiability criterion of meaning, is the philosophical doctrine which asserts that a statement is meaningful only if it is either empirically verifiable (i.e. confirmed through the senses) or a truth of logic (e.g., tautologies). Verificationism rejects statements of metaphysics, theology, ethics, and aesthetics, as cognitively meaningless. Such statements may be meaningful in influencing emotions or behavior, but not in terms of conveying truth value, information, or factual content. Verificationism was a central theme of logical positivism, a movement in analytic philosophy that emerged in the 1920s by philosophers who sought to unify philosophy and science under a common naturalistic theory of knowledge. Origins Although earlier philosophical principles which aim to ground scientific theory in some verifiable experience are found within the work of American pragmatist C.S. Peirce and that of French conventionalist Pierre Duhem, who fostered instrumentalism, the project of verificationism was launched by the logical positivists who, emerging from the Berlin Circle and the Vienna Circle in the 1920s, sought an epistemology whereby philosophical discourse would be, in their perception, as authoritative and meaningful as an empirical science. Logical positivists garnered the verifiability criterion of cognitive meaningfulness from Ludwig Wittgenstein's philosophy of language posed in his 1921 book Tractatus, and, led by Bertrand Russell, sought to reformulate the analytic–synthetic distinction in a way that would reduce mathematics and logic to semantical conventions. This would be pivotal to verificationism, in that logic and mathematics would otherwise be classified as synthetic a priori knowledge and defined as "meaningless" under verificationism. Seeking grounding in such empiricism as of David Hume, Auguste Comte, and Ernst Mach—along with the positivism of the latter two—they borrowed some p
https://en.wikipedia.org/wiki/Hazard%20analysis
A hazard analysis is used as the first step in a process used to assess risk. The result of a hazard analysis is the identification of different types of hazards. A hazard is a potential condition: it either exists or it does not (its probability is 1 or 0). It may, singly or in combination with other hazards (sometimes called events) and conditions, become an actual Functional Failure or Accident (Mishap). The exact way this happens in one particular sequence is called a scenario. A scenario has a probability (between 1 and 0) of occurrence. Often a system has many potential failure scenarios. Each scenario is also assigned a classification, based on the worst-case severity of the end condition. Risk is the combination of probability and severity. Preliminary risk levels can be provided in the hazard analysis. The validation, more precise prediction (verification) and acceptance of risk is determined in the risk assessment (analysis). The main goal of both is to provide the best selection of means of controlling or eliminating the risk. The term is used in several engineering specialties, including avionics, food safety, occupational safety and health, process safety, and reliability engineering. Hazards and risk A hazard is defined as a "Condition, event, or circumstance that could lead to or contribute to an unplanned or undesirable event." Seldom does a single hazard cause an accident or a functional failure. More often an accident or operational failure occurs as the result of a sequence of causes. A hazard analysis will consider system state, for example operating environment, as well as failures or malfunctions. While in some cases safety or reliability risk can be eliminated, in most cases a certain degree of risk must be accepted. In order to quantify expected costs before the fact, the potential consequences and the probability of occurrence must be considered. Assessment of risk is made by combining the severity of consequence with the likelihood of occurrence in a m
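A toy risk matrix in the spirit of the combination described above; the category names and thresholds are illustrative, not taken from any particular standard:

```python
# Risk as the combination of severity and likelihood (illustrative scales).
SEVERITY = {"negligible": 1, "marginal": 2, "critical": 3, "catastrophic": 4}
LIKELIHOOD = {"improbable": 1, "remote": 2, "occasional": 3, "frequent": 4}

def risk_level(severity, likelihood):
    score = SEVERITY[severity] * LIKELIHOOD[likelihood]
    if score >= 12:
        return "unacceptable"
    if score >= 6:
        return "undesirable: mitigation required"
    return "acceptable with review"

print(risk_level("catastrophic", "remote"))  # score 8  -> undesirable
print(risk_level("critical", "frequent"))    # score 12 -> unacceptable
```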
https://en.wikipedia.org/wiki/Adam%20Logan
Adam Logan (born 1975 in Kingston, Ontario) is a research mathematician and a top Canadian Scrabble player. He won the World Scrabble Championship in 2005, beating Pakorn Nemitrmansuk of Thailand 3–0 in the final. He is the only player to have won the Canadian Scrabble Championship five times (1996, 2005, 2008, 2013 and 2016). He was also the winner of the 1996 National Scrabble Championship, North America's top-rated player in 1997, and the winner of the Collins division of the 2014 North American Scrabble Championship. Since his competitive career began in 1985, Logan has played nearly 2200 tournament games, compiling a winning percentage of over 68%, and earning just $100,000 in prize money. He was a Putnam Fellow in 1992 and 1993. Logan completed his first degree, in mathematics, at Princeton University in 1995 and received a PhD from Harvard University in 1999. He completed his post-doctoral work at McGill University from 2002 to 2003. From 2008 to 2009 he was employed as a Quantitative Analyst at D. E. Shaw & Co. in New York City. He works for the Tutte Institute for Mathematics and Computing in Ottawa, Ontario, Canada. References External links Adam Logan's NSA player profile Adam Logan's professional home page (legacy; no longer in this position) 1975 births Living people Canadian mathematicians Canadian Scrabble players Harvard University alumni Number theorists Scientists from Ontario Sportspeople from Kingston, Ontario Princeton University alumni World Scrabble Championship winners International Mathematical Olympiad participants Lisgar Collegiate Institute alumni Putnam Fellows
https://en.wikipedia.org/wiki/Ethyl%20heptanoate
Ethyl heptanoate is the ester resulting from the condensation of heptanoic acid and ethanol. It is used in the flavor industry because its odor resembles that of grapes. References Enanthate esters Ethyl esters Flavors
https://en.wikipedia.org/wiki/Manufacture%20of%20cheddar%20cheese
The manufacture of Cheddar cheese includes the process of cheddaring, which makes this cheese unique. Cheddar cheese is named for the village of Cheddar in Somerset, South West England, where it was originally manufactured. The manufacture of this cheese has since spread around the world, and the name has thus become generic. Food ingredients used during manufacture Milk In general, the milk is raw milk (whole, about 3.3% fat). The milk must be "ripened" before the rennet is added. The term ripening means allowing the lactic acid bacteria (LAB) to turn lactose into lactic acid, which lowers the pH of the solution and greatly aids the coagulation of the milk. This is vital for the production of the cheese curds that are later formed into cheddar. Rennet/chymosin/rennin Rennet is an enzyme preparation, originally collected from the stomach of a milk-fed calf (natural rennet). It is responsible for the coagulation of the milk proteins to produce curds. Cheese produced this way is neither vegetarian nor kosher. Coagulation can also be achieved using acids, but this method yields lower-quality cheddar. The two key components of natural rennet are chymosin and bovine pepsin. Extracts from plants such as nettles were found to produce similar effects and have been used in some types of cheese-making (vegetable rennet). When calf rennet grew scarce in the 1960s, scientists developed a synthesized type of chymosin by fermenting certain bacteria or fungi (microbial rennet), but this was not useful for all types of cheese-making either. A solution using recombinant-gene technology (GMO microbial rennet) was developed and approved by the U.S. Food and Drug Administration in 1990: the calf gene for producing chymosin is spliced into the genes of certain bacteria, yeasts, or fungi, producing pure chymosin. Equipment Stainless steel knives are used to uniformly cut the curds at various points during the process. The device is a stainless steel frame with stainless st
https://en.wikipedia.org/wiki/DNA%20machine
A DNA machine is a molecular machine constructed from DNA. Research into DNA machines was pioneered in the late 1980s by Nadrian Seeman and co-workers from New York University. DNA is used because of the numerous biological tools already found in nature that can affect DNA, and because of the immense body of knowledge about how DNA works accumulated by biochemists. DNA machines can be logically designed, since assembly of the DNA double helix is based on strict rules of base pairing that allow portions of the strand to be predictably connected based on their sequence. This "selective stickiness" is a key advantage in the construction of DNA machines. An example of a DNA machine was reported by Bernard Yurke and co-workers at Lucent Technologies in 2000, who constructed molecular tweezers out of DNA. The DNA tweezers contain three strands: A, B and C. Strand A latches onto half of strand B and half of strand C, and so it joins them all together. Strand A acts as a hinge so that the two "arms" (AB and AC) can move. The structure floats with its arms open wide. They can be pulled shut by adding a fourth strand of DNA (D) "programmed" to stick to both of the dangling, unpaired sections of strands B and C. The closing of the tweezers was demonstrated by tagging strand A at either end with light-emitting molecules that do not emit light when they are close together. To re-open the tweezers, a further strand (E) is added with the right sequence to pair up with strand D. Once paired up, D and E have no connection to the machine BAC and so float away. The DNA machine can be opened and closed repeatedly by cycling between strands D and E. These tweezers can be used for removing drugs from inside fullerenes as well as from a self-assembled DNA tetrahedron. The state of the device can be determined by measuring the separation between donor and acceptor fluorophores using FRET. DNA walkers are another type of DNA machine. See also DNA nanotechnology References DNA nanotechnology Genetic
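The "selective stickiness" described above reduces to the Watson-Crick pairing rule: A pairs with T and C pairs with G, with the two strands running in opposite directions. The following minimal Python sketch encodes that rule; the example sequence is made up for illustration and is not taken from the Yurke tweezers.

# Watson-Crick complementarity: A pairs with T, C pairs with G.
PAIR = {"A": "T", "T": "A", "C": "G", "G": "C"}

def reverse_complement(seq: str) -> str:
    """Return the strand that hybridizes to `seq` (read in the opposite direction)."""
    return "".join(PAIR[base] for base in reversed(seq))

def can_hybridize(a: str, b: str) -> bool:
    """True if strand `b` is the exact reverse complement of strand `a`."""
    return b == reverse_complement(a)

toehold = "GCTAGCAT"                       # hypothetical unpaired section
print(reverse_complement(toehold))         # -> ATGCTAGC
print(can_hybridize(toehold, "ATGCTAGC"))  # -> True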
https://en.wikipedia.org/wiki/Decimal%20representation
A decimal representation of a non-negative real number $r$ is its expression as a sequence of symbols consisting of decimal digits traditionally written with a single separator: $r = b_k b_{k-1} \cdots b_0 . a_1 a_2 \ldots$ Here $.$ is the decimal separator, $k$ is a nonnegative integer, and $b_0, \ldots, b_k, a_1, a_2, \ldots$ are digits, which are symbols representing integers in the range 0, ..., 9. Commonly, $b_k \neq 0$ if $k \geq 1$. The sequence of the $a_i$ (the digits after the dot) is generally infinite. If it is finite, the lacking digits are assumed to be 0. If all $a_i$ are 0, the separator is also omitted, resulting in a finite sequence of digits, which represents a natural number. The decimal representation represents the infinite sum: $r = \sum_{i=0}^{k} b_i 10^{i} + \sum_{i=1}^{\infty} \frac{a_i}{10^{i}}.$ Every nonnegative real number has at least one such representation; it has two such representations (with $b_k \neq 0$ if $k \geq 1$) if and only if one has a trailing infinite sequence of 0s, and the other has a trailing infinite sequence of 9s. For having a one-to-one correspondence between nonnegative real numbers and decimal representations, decimal representations with a trailing infinite sequence of 9s are sometimes excluded. Integer and fractional parts The natural number $\sum_{i=0}^{k} b_i 10^{i}$ is called the integer part of $r$, and is denoted by $a_0$ in the remainder of this article. The sequence of the $a_i$ represents the number $\sum_{i=1}^{\infty} \frac{a_i}{10^{i}},$ which belongs to the interval $[0,1]$ and is called the fractional part of $r$ (except when all $a_i$ are equal to 9). Finite decimal approximations Any real number can be approximated to any desired degree of accuracy by rational numbers with finite decimal representations. Assume $x \geq 0$. Then for every integer $n \geq 1$ there is a finite decimal $r_n = a_0.a_1 a_2 \cdots a_n$ such that: $r_n \leq x < r_n + \frac{1}{10^{n}}.$ Proof: Let $r_n = \frac{p}{10^{n}}$, where $p = \lfloor 10^{n} x \rfloor$. Then $p \leq 10^{n} x < p + 1$, and the result follows from dividing all sides by $10^{n}$. (The fact that $r_n$ has a finite decimal representation is easily established.) Non-uniqueness of decimal representation and notational conventions Some real numbers have two infinite decimal representations. For example, the number 1 may be equally represented by 1.000... as by 0.999... (where the infinite sequences of trailing 0's or 9's, respectively, are represented by "...").
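The approximation used in the proof is directly computable. Below is a minimal Python sketch using only the standard library; the choice of x = pi and the range of n are arbitrary example values, and note that for a float input the multiplication by 10**n is itself performed in floating point.

import math
from fractions import Fraction

def finite_decimal_approx(x: float, n: int) -> Fraction:
    """Return r_n = floor(10**n * x) / 10**n, so that r_n <= x < r_n + 10**-n."""
    p = math.floor(10**n * x)
    return Fraction(p, 10**n)

x = math.pi
for n in range(1, 5):
    r = finite_decimal_approx(x, n)
    assert r <= x < r + Fraction(1, 10**n)
    print(n, float(r))  # prints 3.1, then 3.14, 3.141, 3.1415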
https://en.wikipedia.org/wiki/Ethyl%20salicylate
Ethyl salicylate is the ester formed by the condensation of salicylic acid and ethanol. It is a clear liquid that is sparingly soluble in water, but soluble in alcohol and ether. It has a pleasant odor resembling wintergreen and is used in perfumery and artificial flavors. See also Methyl salicylate Isopropyl salicylate References Flavors Perfume ingredients Ethyl esters Salicylate esters 3-Hydroxypropenals
https://en.wikipedia.org/wiki/Vactrain
A vactrain (or vacuum tube train) is a proposed design for very-high-speed rail transportation: a maglev (magnetic levitation) line using partly evacuated tubes or tunnels. Reduced air resistance could permit vactrains to travel at very high (hypersonic) speeds with relatively little power, up to 5–6 times the speed of sound in Earth's atmosphere at sea level. 18th century In 1799, George Medhurst of London conceived of and patented an atmospheric railway that could convey people or cargo through pressurized or evacuated tubes. The early atmospheric railways and pneumatic tube transport systems (such as the Dalkey Atmospheric Railway) relied on steam power for propulsion. 19th century In 1888, Michel Verne, son of Jules Verne, imagined a submarine pneumatic tube transport system that could propel a passenger capsule at great speed under the Atlantic Ocean (a transatlantic tunnel) in a short story called "An Express of the Future". 20th century The vactrain proper was invented by Robert H. Goddard as a freshman at Worcester Polytechnic Institute in the United States in 1904. Goddard subsequently refined the idea in a 1906 short story called "The High-Speed Bet", which was summarized and published in a 1909 Scientific American editorial called "The Limit of Rapid Transit". His wife, Esther, was granted a US patent for the vactrain in 1950, five years after his death. In 1909, a Russian professor built the world's first model of his proposed version of the vactrain at Tomsk Polytechnic University. He later published a vactrain concept in 1914 in the book Motion without friction (airless electric way). In 1955, the Polish science-fiction writer Stanisław Lem, in the novel The Magellan Nebula, wrote about an intercontinental vactrain called "organowiec", which moved in a transparent tube at extremely high speed. Later, in April 1962, the vactrain appeared in the story "Mercenary" by Mack Reynolds, who mentions Vacuum Tube Transport in passing.
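For scale, the speed multiple quoted in the lead can be turned into km/h with a one-line conversion. A quick Python sketch follows; the sea-level speed of sound is taken as a round 340 m/s, which is an assumed value for illustration.

# Convert multiples of the sea-level speed of sound (Mach numbers) to km/h.
SPEED_OF_SOUND_MS = 340.0  # assumed round value at sea level

def mach_to_kmh(mach: float) -> float:
    """Convert a Mach number at sea level to kilometres per hour."""
    return mach * SPEED_OF_SOUND_MS * 3.6  # m/s to km/h

for mach in (5, 6):
    print(f"Mach {mach} ~ {mach_to_kmh(mach):,.0f} km/h")
# Mach 5 ~ 6,120 km/h; Mach 6 ~ 7,344 km/h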
https://en.wikipedia.org/wiki/Phantasie%20III
Phantasie III: The Wrath of Nikademus is the third video game in the Phantasie series. Gameplay The "final" installment of the Phantasie trilogy centers on fighting the evil Nikademus and finishing him off for good. Released in 1987, the game has Nikademus attempting to take over the entire world, and it is up to the party to stop him. Phantasie III maintained the style of the original two games and improved upon their graphics on all platforms except the DOS version. The combat engine also saw a few upgrades, adding specific wound locations: characters could now have their head, torso, or a limb specifically injured, broken, or removed. A more tactical battle line-up was also possible, with the ability to move characters to the front, middle, or rear of the party. The game expanded the spell list, added a larger variety of weapons and equipment, and offered two possible endings, depending on whether the characters chose to fight Nikademus or join him. Reception Phantasie III sold 46,113 copies. Computer Gaming World stated that "there are a few new wrinkles" in the game. The magazine's Scorpia was pleased by Phantasie III improving the trading interface and combat, and by the "grand ending" to the game and the trilogy, but called the game "by far the weakest in the series" and criticized its short length. Phantasie III was reviewed in 1988 in Dragon #130 by Hartley, Patricia, and Kirk Lesser in "The Role of Computers" column. The reviewers gave the game 4 out of 5 stars. Phantasie I, Phantasie III, and Questron II were later re-released together, and the compilation was reviewed in 1994 in Dragon #203 by Sandy Petersen in the "Eye of the Monitor" column. Petersen gave it 2 out of 5 stars. References External links Review in Info 1987 video games Amiga games Apple II games Atari ST games Commodore 64 games DOS games Fantasy video games FM-7 games MSX2 games NEC PC-8801 games NEC PC-9801 games Role-playing video games Sharp X1 games
https://en.wikipedia.org/wiki/Somatic%20effort
Somatic effort refers to the total investment of an organism in its own development, differentiation, and maintenance, which consequently increases its reproductive potential. References Behavioral ecology Reproduction