https://en.wikipedia.org/wiki/SEAC%20%28computer%29
SEAC (Standards Eastern Automatic Computer or Standards Electronic Automatic Computer) was a first-generation electronic computer, built in 1950 by the U.S. National Bureau of Standards (NBS). It was initially called the National Bureau of Standards Interim Computer, because it was a small-scale computer designed to be built quickly and put into operation while the NBS waited for more powerful computers to be completed (the DYSEAC). The team that developed SEAC was organized by Samuel N. Alexander. SEAC was demonstrated in April 1950 and was dedicated in June 1950; it is claimed to be the first fully operational stored-program electronic computer in the US. Description Based on EDVAC, SEAC used only 747 vacuum tubes (a small number for the time), eventually expanded to 1,500 tubes. It had 10,500 germanium diodes which performed all of the logic functions (see the article diode–transistor logic for the working principles of diode logic), later expanded to 16,000 diodes. It was the first computer to do most of its logic with solid-state devices. The tubes were used for amplification, inversion and storing information in dynamic flip-flops. The machine used 64 acoustic delay lines to store 512 words of memory, with each word being 45 bits in size. The clock rate was kept low (1 MHz). The computer's instruction set consisted of only 11 types of instructions: fixed-point addition, subtraction, multiplication, and division; comparison; and input and output. It was eventually expanded to 16 instructions. The addition time was 864 microseconds and the multiplication time was 2,980 microseconds (i.e. close to 3 milliseconds). Applications On some occasions SEAC was operated via a remote teletype, making it one of the first computers to be used remotely. With many modifications, it was used until 1964. Some of the problems run on it dealt with: digital imaging, led by Russell A. Kirsch; computer animation of a city traffic simulation; meteorol
https://en.wikipedia.org/wiki/Hilbert%27s%20twenty-first%20problem
The twenty-first problem of the 23 Hilbert problems, from the celebrated list put forth in 1900 by David Hilbert, concerns the existence of a certain class of linear differential equations with specified singular points and monodromic group. Statement The original problem was stated as follows (English translation from 1902): Proof of the existence of linear differential equations having a prescribed monodromic group In the theory of linear differential equations with one independent variable z, I wish to indicate an important problem one which very likely Riemann himself may have had in mind. This problem is as follows: To show that there always exists a linear differential equation of the Fuchsian class, with given singular points and monodromic group. The problem requires the production of n functions of the variable z, regular throughout the complex z-plane except at the given singular points; at these points the functions may become infinite of only finite order, and when z describes circuits about these points the functions shall undergo the prescribed linear substitutions. The existence of such differential equations has been shown to be probable by counting the constants, but the rigorous proof has been obtained up to this time only in the particular case where the fundamental equations of the given substitutions have roots all of absolute magnitude unity. has given this proof, based upon Poincaré's theory of the Fuchsian zeta-functions. The theory of linear differential equations would evidently have a more finished appearance if the problem here sketched could be disposed of by some perfectly general method. Definitions In fact it is more appropriate to speak not about differential equations but about linear systems of differential equations: in order to realise any monodromy by a differential equation one has to admit, in general, the presence of additional apparent singularities, i.e. singularities with trivial local monodromy. In more modern langua
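In modern terms, the problem is usually phrased for Fuchsian systems rather than single equations. The following formulation is a standard reconstruction supplied here for orientation and is not quoted from the excerpt above:

```latex
% Given singular points a_1, \dots, a_n on the Riemann sphere and a representation
%   \rho : \pi_1\bigl(\mathbb{P}^1 \setminus \{a_1, \dots, a_n\}\bigr) \to \mathrm{GL}(p, \mathbb{C}),
% find constant matrices A_1, \dots, A_n such that the Fuchsian system
\[
  \frac{dY}{dz} \;=\; \left( \sum_{i=1}^{n} \frac{A_i}{z - a_i} \right) Y
\]
% has monodromy \rho, i.e. analytic continuation of a fundamental solution Y around a
% loop \gamma encircling the singular points transforms Y by the prescribed matrix \rho(\gamma).
```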
https://en.wikipedia.org/wiki/SWAC%20%28computer%29
The SWAC (Standards Western Automatic Computer) was an early electronic digital computer built in 1950 by the U.S. National Bureau of Standards (NBS) in Los Angeles, California. It was designed by Harry Huskey. Overview Like the SEAC which was built about the same time, the SWAC was a small-scale interim computer designed to be built quickly and put into operation while the NBS waited for more powerful computers to be completed (in particular, the RAYDAC by Raytheon). The machine used 2,300 vacuum tubes. It had 256 words of memory, using Williams tubes, with each word being 37 bits. It had only seven basic operations: add, subtract, and fixed-point multiply; comparison, data extraction, input and output. Several years later, drum memory was added. When the SWAC was completed in August 1950, it was the fastest computer in the world. It continued to hold that status until the IAS computer was completed a year later. It could add two numbers and store the result in 64 microseconds. A similar multiplication took 384 microseconds. It was used by the NBS until 1954 when the Los Angeles office was closed, and then by UCLA until 1967 (with modifications). It was charged out there for $40 per hour. In January 1952, Raphael M. Robinson used the SWAC to discover five Mersenne primes—the largest prime numbers known at the time, with 157, 183, 386, 664 and 687 digits. Additionally, the SWAC was vital in doing the intense calculation required for the X-ray analysis of the structure of vitamin B12 done by Dorothy Hodgkin. This was fundamental in Hodgkin receiving the Nobel Prize in Chemistry in 1964. See also List of vacuum tube computers References Williams, Michael R. (1997). A History of Computing Technology. IEEE Computer Society. Further reading External links IEEE Transcript: SWAC—Standards Western Automatic Computer: The Pioneer Day Session at NCC July 1978 Oral history interview with Alexandra Forsythe, Charles Babbage Institute, University of Minnesota.
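The five Mersenne primes that Robinson found on the SWAC in January 1952 correspond to the exponents 521, 607, 1279, 2203 and 2281 (these exponents are not given in the excerpt above but are the known values for that discovery). A few lines of Python, purely as a modern illustration, confirm the digit counts quoted in the text:

```python
# Digit counts of the five Mersenne primes M_p = 2**p - 1 found on the SWAC in 1952.
for p in (521, 607, 1279, 2203, 2281):
    m = 2**p - 1
    print(f"M_{p} has {len(str(m))} digits")
# Prints 157, 183, 386, 664 and 687 digits, matching the figures in the text.
```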
https://en.wikipedia.org/wiki/149%20%28number%29
149 (one hundred [and] forty-nine) is the natural number between 148 and 150. In mathematics 149 is a prime number, the first prime whose difference from the previous prime is exactly 10, an emirp, and an irregular prime. After 1 and 127, it is the third smallest de Polignac number, an odd number that cannot be represented as a prime plus a power of two. More strongly, after 1, it is the second smallest number that is not a sum of two prime powers. It is a tribonacci number, being the sum of the three preceding terms, 24, 44, 81. There are exactly 149 integer points in a closed circular disk of radius 7, and exactly 149 ways of placing six queens (the maximum possible) on a 5 × 5 chess board so that each queen attacks exactly one other. The barycentric subdivision of a tetrahedron produces an abstract simplicial complex with exactly 149 simplices. See also The year AD 149 or 149 BC List of highways numbered 149 References External links Integers
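A short, self-contained Python check (illustrative only) verifies several of the properties of 149 listed above:

```python
def is_prime(n: int) -> bool:
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# 149 is prime, and the gap from the previous prime (139) is exactly 10.
assert is_prime(149) and is_prime(139)
assert not any(is_prime(k) for k in range(140, 149))

# Emirp: its digit reversal, 941, is also prime.
assert is_prime(int(str(149)[::-1]))

# Tribonacci step: 24 + 44 + 81 = 149.
assert 24 + 44 + 81 == 149

# de Polignac number: 149 is not a prime plus a power of two (2**8 already exceeds 149).
assert not any(is_prime(149 - 2**k) for k in range(8))
print("all checks pass")
```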
https://en.wikipedia.org/wiki/Hydrolyzed%20protein
Hydrolyzed protein is a solution derived from the hydrolysis of a protein into its component amino acids and peptides. While many means of achieving this exist, the most common is prolonged heating with hydrochloric acid, sometimes with an enzyme such as pancreatic protease to simulate the naturally occurring hydrolytic process. Uses Protein hydrolysis is a useful route to the isolation of individual amino acids. Examples include cystine from hydrolysis of hair, tryptophan from casein, histidine from red blood cells, and arginine from gelatin. Common hydrolyzed products used in food are hydrolyzed vegetable protein and yeast extract, which are used as flavor enhancers because the hydrolysis of the protein produces free glutamic acid. Some hydrolyzed beef protein powders are used for specialized diets. Protein hydrolysis can be used to modify the allergenic properties of infant formula. Reducing the size of cow milk proteins in the formula makes it more suitable for consumption by babies suffering from milk protein intolerance. In 2017 the US FDA approved a label for this use of partially hydrolyzed proteins, but a meta-analysis published the same year found insufficient evidence for it. Hydrolyzed protein is also used in certain specially formulated hypoallergenic pet foods, notably dog foods for dogs and puppies that suffer from allergies caused by certain protein types in standard commercial dog food brands. The protein contents of the foods are split into peptides, which reduces the likelihood of an animal's immune system recognizing them as an allergic threat. Hydrolyzed protein diets for cats are often recommended for felines with food allergies and certain types of digestive issues. See also Acceptable daily intake Acid-hydrolyzed vegetable protein E number Food allergy Food intolerance Food labeling regulations Glutamic acid Monosodium glutamate Protein allergy References Food additives Protein structure Umami enhancers
https://en.wikipedia.org/wiki/Beau%27s%20lines
Beau's lines are deep grooved lines that run from side to side on the fingernail or the toenail. They may look like indentations or ridges in the nail plate. This condition of the nail was named after the French physician Joseph Honoré Simon Beau (1806–1865), who first described it in 1846. Signs and symptoms Beau's lines are horizontal, going across the nail, and should not be confused with vertical ridges going from the bottom (cuticle) of the nail out to the fingertip. These vertical lines are usually a natural consequence of aging and are harmless. Beau's lines should also be distinguished from Muehrcke's lines of the fingernails. While Beau's lines are actual ridges and indentations in the nail plate, Muehrcke lines are areas of hypopigmentation without palpable ridges; they affect the underlying nail bed, and not the nail itself. Beau's lines should also be distinguished from Mees' lines of the fingernails, which are areas of discoloration in the nail plate. As the nail grows out, the ridge in the nail can be seen to move upwards until it reaches the fingertip. When it reaches this point the fingertips can become sore for a few days as the nail bed is exposed by the misshapen nail. Causes There are several causes of Beau's lines. It is believed that there is a temporary cessation of cell division in the nail matrix. This may be caused by an infection or problem in the nail fold, where the nail begins to form, or it may be caused by an injury to that area. Some other reasons for these lines include trauma, coronary occlusion, hypocalcaemia, and skin disease. They may be a sign of systemic disease, and can also be caused by drugs used in chemotherapy or by malnutrition. Beau's lines can also be seen one to two months after the onset of fever in children with Kawasaki disease. Conditions also associated with Beau's lines include uncontrolled diabetes and peripheral vascular disease, as well as illnesses associated
https://en.wikipedia.org/wiki/Statistical%20time-division%20multiplexing
Statistical multiplexing is a type of communication link sharing, very similar to dynamic bandwidth allocation (DBA). In statistical multiplexing, a communication channel is divided into an arbitrary number of variable bitrate digital channels or data streams. The link sharing is adapted to the instantaneous traffic demands of the data streams that are transferred over each channel. This is an alternative to creating a fixed sharing of a link, such as in general time division multiplexing (TDM) and frequency division multiplexing (FDM). When performed correctly, statistical multiplexing can provide a link utilization improvement, called the statistical multiplexing gain. Statistical multiplexing is facilitated through packet mode or packet-oriented communication, which among others is utilized in packet-switched computer networks. Each stream is divided into packets that normally are delivered asynchronously in a first-come first-served fashion. Alternatively, the packets may be delivered according to some scheduling discipline for fair queuing or differentiated and/or guaranteed quality of service. Statistical multiplexing of an analog channel, for example a wireless channel, is also facilitated through the following schemes: Random frequency-hopping orthogonal frequency division multiple access (RFH-OFDMA) Code-division multiple access (CDMA), where different numbers of spreading codes or different spreading factors can be assigned to different users. Statistical multiplexing normally implies "on-demand" service rather than one that preallocates resources for each data stream. Statistical multiplexing schemes do not control user data transmissions. Comparison with static TDM Time domain statistical multiplexing (packet mode communication) is similar to time-division multiplexing (TDM), except that, rather than assigning a data stream to the same recurrent time slot in every TDM frame, each data stream is assigned time slots (of fixed length) or data fra
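As a rough numerical illustration of the statistical multiplexing gain, the following sketch compares the capacity a fixed TDM allocation would need with the capacity that keeps the overflow probability around 1%. The traffic model (50 independent on/off sources, each active 10% of the time at 1 Mbit/s) is an assumption made up for this example, not taken from the text:

```python
import random

N_SOURCES, P_ACTIVE, PEAK_RATE = 50, 0.10, 1.0   # peak rate per source, in Mbit/s (assumed)
TRIALS = 100_000

# Fixed TDM/FDM sharing must reserve every source's peak rate.
tdm_capacity = N_SOURCES * PEAK_RATE

# Statistical multiplexing sizes the link for the aggregate demand instead.
samples = sorted(
    sum(PEAK_RATE for _ in range(N_SOURCES) if random.random() < P_ACTIVE)
    for _ in range(TRIALS)
)
stat_capacity = samples[int(0.99 * TRIALS)]       # 99th-percentile aggregate rate

print(f"TDM capacity needed:           {tdm_capacity:.0f} Mbit/s")
print(f"Statistical capacity (~99%):   {stat_capacity:.0f} Mbit/s")
print(f"Approximate multiplexing gain: {tdm_capacity / stat_capacity:.1f}x")
```

With these assumed parameters the aggregate demand rarely exceeds about 11 Mbit/s, so the statistically multiplexed link needs roughly a quarter of the capacity that per-source peak allocation would require; that ratio is the statistical multiplexing gain.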
https://en.wikipedia.org/wiki/Banks%E2%80%93Zaks%20fixed%20point
In quantum chromodynamics (and also N = 1 super quantum chromodynamics) with massless flavors, if the number of flavors, Nf, is sufficiently small (i.e. small enough to guarantee asymptotic freedom, depending on the number of colors), the theory can flow to an interacting conformal fixed point of the renormalization group. If the value of the coupling at that point is less than one (i.e. one can perform perturbation theory in weak coupling), then the fixed point is called a Banks–Zaks fixed point. The existence of the fixed point was first reported in 1974 by Belavin and Migdal and by Caswell, and later used by Banks and Zaks in their analysis of the phase structure of vector-like gauge theories with massless fermions. The name Caswell–Banks–Zaks fixed point is also used. More specifically, suppose that we find that the beta function of a theory up to two loops has the form β(g) = −b0 g^3 + b1 g^5 + O(g^7), where b0 and b1 are positive constants. Then there exists a value g* such that β(g*) = 0, namely g*^2 = b0/b1. If we can arrange b0 to be smaller than b1, then we have g*^2 < 1. It follows that when the theory flows to the IR it is a conformal, weakly coupled theory with coupling g*. For the case of a non-Abelian gauge theory with gauge group SU(Nc) and Dirac fermions in the fundamental representation of the gauge group for the flavored particles we have b0 = (11Nc − 2Nf)/(48π^2) and b1 = [(13Nc/3 − 1/Nc)Nf − 34Nc^2/3]/(16π^2)^2, where Nc is the number of colors and Nf the number of flavors. Then Nf should lie just below 11Nc/2 in order for the Banks–Zaks fixed point to appear. Note that this fixed point only occurs if, in addition to the previous requirement on Nf (which guarantees asymptotic freedom), 34Nc^3/(13Nc^2 − 3) < Nf < 11Nc/2, where the lower bound comes from requiring b1 > 0. This way b0 remains positive (preserving asymptotic freedom) while the two-loop coefficient of the beta function in the conventional normalization, which is proportional to −b1, is still negative (see the first equation above), and one can solve β(g*) = 0 with a real solution for g*. The coefficient b1 was first correctly computed by Caswell, while the earlier paper by Belavin and Migdal has a wrong answer. See also Beta function References T. J. Hollowood, "Renormalization Group and Fixed Points in Quantum Field Theory", Springer
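For concreteness, a standard worked case (not stated in the excerpt above) is Nc = 3, for which the window quoted above evaluates as:

```latex
% Banks–Zaks (conformal) window for N_c = 3, using the bounds quoted above
\[
  \frac{34 N_c^{3}}{13 N_c^{2} - 3}\bigg|_{N_c = 3}
  = \frac{34 \cdot 27}{13 \cdot 9 - 3}
  = \frac{918}{114} \approx 8.05
  \;<\; N_f \;<\; \frac{11 N_c}{2}\bigg|_{N_c = 3} = 16.5 ,
\]
% so for SU(3) a weakly coupled Banks–Zaks fixed point is expected only for N_f
% just below 16.5, where the perturbative two-loop analysis is most reliable.
```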
https://en.wikipedia.org/wiki/Isotropic%20bands
In physiology, isotropic bands (better known as I bands) are the lighter bands of skeletal muscle cells (a.k.a. muscle fibers). Isotropic bands contain only actin-containing thin filaments. The darker bands are called anisotropic bands (A bands). Together the I bands and A bands contribute to the striated appearance of skeletal muscle. The name "isotropic" refers to the bands' behavior under polarized light: I bands appear uniform (isotropic) as polarized light passes through them, whereas A bands are birefringent. Under a microscope, the alternating I and A bands give the fiber its characteristic banded appearance. References Muscular system
https://en.wikipedia.org/wiki/Passive%20infrared%20sensor
A passive infrared sensor (PIR sensor) is an electronic sensor that measures infrared (IR) light radiating from objects in its field of view. They are most often used in PIR-based motion detectors. PIR sensors are commonly used in security alarms and automatic lighting applications. PIR sensors detect general movement, but do not give information on who or what moved. For that purpose, an imaging IR sensor is required. PIR sensors are commonly called simply "PIR", or sometimes "PID", for "passive infrared detector". The term passive refers to the fact that PIR devices do not radiate energy for detection purposes. They work entirely by detecting infrared radiation (radiant heat) emitted by or reflected from objects. Operating principles All objects with a temperature above absolute zero emit heat energy in the form of electromagnetic radiation. Usually this radiation isn't visible to the human eye because it radiates at infrared wavelengths, but it can be detected by electronic devices designed for such a purpose. PIR-based motion detector A PIR-based motion detector is used to sense movement of people, animals, or other objects. They are commonly used in burglar alarms and automatically activated lighting systems. Operation A PIR sensor can detect changes in the amount of infrared radiation impinging upon it, which varies depending on the temperature and surface characteristics of the objects in front of the sensor. When an object, such as a person, passes in front of the background, such as a wall, the temperature at that point in the sensor's field of view will rise from room temperature to body temperature, and then back again. The sensor converts the resulting change in the incoming infrared radiation into a change in the output voltage, and this triggers the detection. Objects of similar temperature but different surface characteristics may also have a different infrared emission pattern, and thus moving them with respect to the background may trig
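The change-detection principle described above can be sketched in a few lines of Python. This is purely illustrative: a real PIR module implements the idea in analog hardware, typically with two pyroelectric elements wired differentially, and the numbers below are invented for the example:

```python
def detect_motion(readings, threshold=5.0, smoothing=0.95):
    """Flag samples where the incoming IR level departs from a slowly adapting
    baseline - a simplified model of how a PIR motion detector turns a *change*
    in received infrared radiation into a trigger while ignoring steady levels."""
    baseline = readings[0]
    events = []
    for i, level in enumerate(readings):
        if abs(level - baseline) > threshold:
            events.append(i)                                        # change detected
        baseline = smoothing * baseline + (1 - smoothing) * level   # slowly re-adapt
    return events

# Steady background of ~20 "units" of IR; a warm body passing raises it to ~32.
samples = [20.0] * 50 + [32.0] * 10 + [20.0] * 50
print(detect_motion(samples))   # indices where a sufficiently large change is seen
```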
https://en.wikipedia.org/wiki/Tuple%20space
A tuple space is an implementation of the associative memory paradigm for parallel/distributed computing. It provides a repository of tuples that can be accessed concurrently. As an illustrative example, consider a group of processors that produce pieces of data and a group of processors that use the data. Producers post their data as tuples in the space, and the consumers then retrieve data from the space that match a certain pattern. This is also known as the blackboard metaphor. Tuple space may be thought of as a form of distributed shared memory. Tuple spaces were the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University in 1986. Implementations of tuple spaces have also been developed for Java (JavaSpaces), Lisp, Lua, Prolog, Python, Ruby, Smalltalk, Tcl, and the .NET Framework. Object Spaces Object Spaces is a paradigm for development of distributed computing applications. It is characterized by the existence of logical entities, called Object Spaces. All the participants of the distributed application share an Object Space. A provider of a service encapsulates the service as an Object, and puts it in the Object Space. Clients of a service then access the Object Space, find out which object provides the needed service, and have the request serviced by the object. Object Spaces, as a computing paradigm, was put forward in the 1980s by David Gelernter at Yale University. Gelernter developed a language called Linda to support the concept of global object coordination. Object Space can be thought of as a virtual repository, shared amongst providers and accessors of network services, which are themselves abstracted as objects. Processes communicate among each other using these shared objects — by updating the state of the objects as and when needed. An object, when deposited into a space, needs to be registered with an Object Directory in the Object Space. Any processes can then i
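A minimal in-process sketch of the tuple-space idea, with `None` acting as a wildcard in patterns, might look like the following. This is illustrative only and is not the Linda or JavaSpaces API; the `TupleSpace`, `put` and `take` names are invented for the example:

```python
import threading

class TupleSpace:
    """Toy associative store: producers put tuples, consumers take tuples
    matching a pattern, where None in the pattern matches any value."""
    def __init__(self):
        self._tuples = []
        self._cond = threading.Condition()

    def put(self, tup):
        with self._cond:
            self._tuples.append(tup)
            self._cond.notify_all()

    def take(self, pattern):
        """Block until a matching tuple exists, then remove and return it."""
        def matches(tup):
            return len(tup) == len(pattern) and all(
                p is None or p == v for p, v in zip(pattern, tup))
        with self._cond:
            while True:
                for tup in self._tuples:
                    if matches(tup):
                        self._tuples.remove(tup)
                        return tup
                self._cond.wait()

space = TupleSpace()
space.put(("temperature", "lab-1", 21.5))          # a producer posts a tuple
print(space.take(("temperature", "lab-1", None)))  # a consumer retrieves by pattern
```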
https://en.wikipedia.org/wiki/NetStumbler
NetStumbler (also known as Network Stumbler) was a tool for Windows that facilitates detection of Wireless LANs using the 802.11b, 802.11a and 802.11g WLAN standards. It runs on Microsoft Windows operating systems from Windows 2000 to Windows XP. A trimmed-down version called MiniStumbler is available for the handheld Windows CE operating system. Netstumbler has become one of the most popular programs for wardriving and wireless reconnaissance, although it has a disadvantage: it can be detected easily by most intrusion detection systems, because it actively probes a network to collect information. Netstumbler has integrated support for a GPS unit. With this support, Netstumbler displays GPS coordinate information next to the information about each discovered network, which can be useful for finding specific networks again after having sorted out collected data. The program is commonly used for: Verifying network configurations Finding locations with poor coverage in a WLAN Detecting causes of wireless interference Detecting unauthorized ("rogue") access points Aiming directional antennas for long-haul WLAN links No updated version has been developed since 2004. See also InSSIDer was created as an alternative to Network Stumbler for the current generation of Windows operating systems Kismet for Linux, FreeBSD, NetBSD, OpenBSD, and Mac OS X KisMAC for Mac OS X References External links NetStumbler's author's website (stumbler.net) Wireless networking Computer network security
https://en.wikipedia.org/wiki/Red%20edge
Red edge refers to the region of rapid change in reflectance of vegetation in the near infrared range of the electromagnetic spectrum. Chlorophyll contained in vegetation absorbs most of the light in the visible part of the spectrum but becomes almost transparent at wavelengths greater than 700 nm. The cellular structure of the vegetation then causes this infrared light to be reflected because each cell acts something like an elementary corner reflector. The change can be from 5% to 50% reflectance going from 680 nm to 730 nm. This is an advantage to plants in avoiding overheating during photosynthesis. The phenomenon accounts for the brightness of foliage in infrared photography and is extensively utilized in the form of so-called vegetation indices (e.g. the Normalized Difference Vegetation Index). It is used in remote sensing to monitor plant activity, and it has been suggested that it could be useful to detect light-harvesting organisms on distant planets. See also References Photosynthesis Remote sensing Astrobiology
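The red edge is what vegetation indices exploit: the Normalized Difference Vegetation Index (NDVI) mentioned above contrasts reflectance just below and just above the edge. A minimal sketch, with made-up but representative band reflectances:

```python
def ndvi(nir: float, red: float) -> float:
    """Normalized Difference Vegetation Index from near-infrared and red reflectance."""
    return (nir - red) / (nir + red)

# Healthy vegetation: strong chlorophyll absorption in red (~0.05), high NIR reflectance (~0.50)
print(round(ndvi(nir=0.50, red=0.05), 2))   # ~0.82, typical of a dense green canopy
# Bare soil reflects the two bands more similarly, giving a much lower NDVI
print(round(ndvi(nir=0.30, red=0.25), 2))   # ~0.09
```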
https://en.wikipedia.org/wiki/Whitehead%20link
In knot theory, the Whitehead link, named for J. H. C. Whitehead, is one of the most basic links. It can be drawn as an alternating link with five crossings, from the overlay of a circle and a figure-eight shaped loop. Structure A common way of describing this link is formed by overlaying a figure-eight shaped loop with another circular loop surrounding the crossing of the figure-eight. The above-below relation between these two unknots is then set as an alternating link, with the consecutive crossings on each loop alternating between under and over. This drawing has five crossings, one of which is the self-crossing of the figure-eight curve, which does not count towards the linking number. Because the remaining crossings have equal numbers of under and over crossings on each loop, its linking number is 0. It is not isotopic to the unlink, but it is link homotopic to the unlink. Although this construction of the link treats its two loops differently from each other, the two loops are topologically symmetric: it is possible to deform the same link into a drawing of the same type in which the loop that was drawn as a figure eight is circular and vice versa. Alternatively, there exist realizations of this link in three dimensions in which the two loops can be taken to each other by a geometric symmetry of the realization. In braid theory notation, the link is written Its Jones polynomial is This polynomial and are the two factors of the Jones polynomial of the L10a140 link. Notably, is the Jones polynomial for the mirror image of a link having Jones polynomial . Volume The hyperbolic volume of the complement of the Whitehead link is 4 times Catalan's constant, approximately 3.66. The Whitehead link complement is one of two two-cusped hyperbolic manifolds with the minimum possible volume, the other being the complement of the pretzel link with parameters (−2, 3, 8). Dehn filling on one component of the Whitehead link can produce the sibling manifold of the complement of
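Numerically, the volume quoted above works out as follows (Catalan's constant G ≈ 0.9159655942):

```latex
% Hyperbolic volume of the Whitehead link complement
\[
  \operatorname{vol}\bigl(S^{3} \setminus W\bigr) \;=\; 4G
  \;=\; 4 \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n+1)^{2}}
  \;\approx\; 4 \times 0.9159655942 \;\approx\; 3.6638623768 .
\]
```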
https://en.wikipedia.org/wiki/5-Nitro-2-propoxyaniline
5-Nitro-2-propoxyaniline, also known as P-4000 and Ultrasüss, is a sweetener about 4,000 times as sweet as sucrose (hence its alternate name, P-4000). It is an orange solid that is only slightly soluble in water. It is stable in boiling water and dilute acids. 5-Nitro-2-propoxyaniline was once used as an artificial sweetener but has been banned in the United States because of its possible toxicity. In the US, food containing any added or detectable level of 5-nitro-2-propoxyaniline is deemed to be adulterated in violation of the act based upon an order published in the Federal Register of January 19, 1950 (15 FR 321). References External links Aromatic amines Food additives Sugar substitutes Nitro compounds Ethers
https://en.wikipedia.org/wiki/Circular%20sector
A circular sector, also known as circle sector or disk sector (symbol: ⌔), is the portion of a disk (a closed region bounded by a circle) enclosed by two radii and an arc, with the smaller area being known as the minor sector and the larger being the major sector. In the diagram, θ is the central angle, r the radius of the circle, and L is the arc length of the minor sector. The angle formed by connecting the endpoints of the arc to any point on the circumference that is not in the sector is equal to half the central angle. Types A sector with the central angle of 180° is called a half-disk and is bounded by a diameter and a semicircle. Sectors with other central angles are sometimes given special names, such as quadrants (90°), sextants (60°), and octants (45°), which come from the sector being one quarter, one sixth or one eighth of a full circle, respectively. Confusingly, the arc of a quadrant (a circular arc) can also be termed a quadrant. Compass Traditionally wind directions on the compass rose are given as one of the 8 octants (N, NE, E, SE, S, SW, W, NW) because that is more precise than merely giving one of the 4 quadrants, and the wind vane typically does not have enough accuracy to allow more precise indication. The name of the instrument "octant" comes from the fact that it is based on 1/8th of the circle. Most commonly, octants are seen on the compass rose. Area The total area of a circle is πr². The area of the sector can be obtained by multiplying the circle's area by the ratio of the angle θ (expressed in radians) and 2π (because the area of the sector is directly proportional to its angle, and 2π is the angle for the whole circle, in radians): A = πr² · (θ / 2π) = r²θ / 2. The area of a sector in terms of L can be obtained by multiplying the total area πr² by the ratio of L to the total circumference 2πr: A = πr² · (L / 2πr) = rL / 2. Another approach is to consider this area as the result of the following integral: A = ∫₀^θ ∫₀^r ρ dρ dφ = r²θ / 2. Converting the central angle into degrees gives A = πr² · (θ° / 360). Perimeter The length of the perimeter of a sector is the sum
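A short sketch of the formulas above (θ in radians):

```python
import math

def sector_area(r: float, theta: float) -> float:
    """Area of a circular sector: A = r**2 * theta / 2, with theta in radians."""
    return 0.5 * r * r * theta

def arc_length(r: float, theta: float) -> float:
    """Arc length of the sector: L = r * theta."""
    return r * theta

def sector_perimeter(r: float, theta: float) -> float:
    """Perimeter of the sector: the arc length plus the two bounding radii."""
    return arc_length(r, theta) + 2 * r

# A quadrant (central angle 90 degrees) of a unit circle is a quarter of the disk.
r, theta = 1.0, math.pi / 2
print(sector_area(r, theta), math.pi / 4)   # both ~0.7853981633974483
print(sector_perimeter(r, theta))           # ~3.5707963267948966 (= pi/2 + 2)
```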
https://en.wikipedia.org/wiki/Facial%20symmetry
Facial symmetry is one specific measure of bodily symmetry. Along with traits such as averageness and youthfulness it influences judgments of aesthetic traits of physical attractiveness and beauty. For instance, in mate selection, people have been shown to have a preference for symmetry. Facial bilateral symmetry is typically assessed via the fluctuating asymmetry of the face, comparing random differences in facial features between the two sides of the face. The human face also has systematic, directional asymmetry: on average, the face (mouth, nose and eyes) sits systematically to the left with respect to the axis through the ears, the so-called aurofacial asymmetry. Directional asymmetry Directional asymmetry is systematic. The average across the population is not "symmetric", but statistically significantly biased in one direction. That means that individuals of a species can be symmetric, or even asymmetric to the opposite side (see, e.g., handedness), but most individuals are asymmetric to the same side. The relation between directional and fluctuating asymmetry is comparable to the concepts of accuracy and precision in empirical measurements. There are examples from the brain (Yakovlevian torque), the spine, and the inner organs (see axial twist theory), but also from various animals (see Symmetry in biology). Aurofacial asymmetry Aurofacial asymmetry (from Latin auris 'ear' and faciēs 'face') is an example of directional asymmetry of the face. It refers to the left-sided offset of the face (i.e. eyes, nose, and mouth) with respect to the ears. On average, the face's offset is slightly to the left, meaning that the right side of the face appears larger than the left side. The offset is larger in newborns and reduces gradually during growth. Anatomy and definition In contrast to fluctuating asymmetry, directional asymmetry is systematic, i.e. across the population it is systematically more often in one direction than in the other. It means that across the population a devi
https://en.wikipedia.org/wiki/Social%20behavior
Social behavior is behavior among two or more organisms within the same species, and encompasses any behavior in which one member affects the other. This is due to an interaction among those members. Social behavior can be seen as similar to an exchange of goods, with the expectation that when you give, you will receive the same. This behavior can be affected by both the qualities of the individual and the environmental (situational) factors. Therefore, social behavior arises as a result of an interaction between the two—the organism and its environment. This means that, in regards to humans, social behavior can be determined by both the individual characteristics of the person and the situation they are in. A major aspect of social behavior is communication, which is the basis for survival and reproduction. Social behavior is said to be determined by two different processes, that can either work together or oppose one another. The dual-systems model of reflective and impulsive determinants of social behavior came out of the realization that behavior cannot just be determined by one single factor. Instead, behavior can arise by those consciously behaving (where there is an awareness and intent), or by pure impulse. These factors that determine behavior can work in different situations and moments, and can even oppose one another. While at times one can behave with a specific goal in mind, at other times one can behave without rational control, driven by impulse instead. There are also distinctions between different types of social behavior, such as mundane versus defensive social behavior. Mundane social behavior is a result of interactions in day-to-day life, and are behaviors learned as one is exposed to those different situations. On the other hand, defensive behavior arises out of impulse, when one is faced with conflicting desires. The development of social behavior Social behavior constantly changes as one continues to grow and develop, reaching
https://en.wikipedia.org/wiki/Radiosensitivity
Radiosensitivity is the relative susceptibility of cells, tissues, organs or organisms to the harmful effect of ionizing radiation. Cell types affected Cells are least sensitive when in the S phase, then the G1 phase, then the G2 phase, and most sensitive in the M phase of the cell cycle. This is described by the 'law of Bergonié and Tribondeau', formulated in 1906: X-rays are more effective on cells which have a greater reproductive activity. From their observations, they concluded that quickly dividing tumor cells are generally more sensitive than the majority of body cells. This is not always true. Tumor cells can be hypoxic and therefore less sensitive to X-rays, because most of the effects of X-rays are mediated by free radicals produced by the ionization of oxygen. It has meanwhile been shown that the most sensitive cells are those that are undifferentiated, well nourished, dividing quickly and highly active metabolically. Amongst the body cells, the most sensitive are spermatogonia and erythroblasts, epidermal stem cells, and gastrointestinal stem cells. The least sensitive are nerve cells and muscle fibers. Oocytes and lymphocytes are also very sensitive, although they are resting cells and do not meet the criteria described above. The reasons for their sensitivity are not clear. There also appears to be a genetic basis for the varied vulnerability of cells to ionizing radiation. This has been demonstrated across several cancer types and in normal tissues. Cell damage classification The damage to the cell can be lethal (the cell dies) or sublethal (the cell can repair itself). Cell damage can ultimately lead to health effects which can be classified as either tissue reactions or stochastic effects according to the International Commission on Radiological Protection. Tissue reactions Tissue reactions have a threshold of irradiation under which they do not appear and above which they typically appear. Fractionation of dose, dose rate, the application of antioxidan
https://en.wikipedia.org/wiki/Large%20sieve
The large sieve is a method (or family of methods and related ideas) in analytic number theory. It is a type of sieve where up to half of all residue classes of numbers are removed, as opposed to small sieves such as the Selberg sieve wherein only a few residue classes are removed. The method has been further heightened by the larger sieve which removes arbitrarily many residue classes. Name Its name comes from its original application: given a set S of integers such that the elements of S are forbidden to lie in a set Ap ⊂ Z/pZ modulo every prime p, how large can S be? Here Ap is thought of as being large, i.e., at least as large as a constant times p; if this is not the case, we speak of a small sieve. History The early history of the large sieve traces back to work of Yu. B. Linnik, in 1941, working on the problem of the least quadratic non-residue. Subsequently Alfréd Rényi worked on it, using probability methods. It was only two decades later, after quite a number of contributions by others, that the large sieve was formulated in a way that was more definitive. This happened in the early 1960s, in independent work of Klaus Roth and Enrico Bombieri. It is also around that time that the connection with the duality principle became better understood. In the mid-1960s, the Bombieri–Vinogradov theorem was proved as a major application of large sieves using estimations of mean values of Dirichlet characters. In the late 1960s and early 1970s, many of the key ingredients and estimates were simplified by Patrick X. Gallagher. Development Large-sieve methods have been developed enough that they are applicable to small-sieve situations as well. Something is commonly seen as related to the large sieve not necessarily in terms of whether it is related to the kind of situation outlined above, but, rather, if it involves one of the two methods of proof traditionally used to yield a large-sieve result: Approximate Plancherel inequality If a set S is ill-distributed modulo p (by vir
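One standard modern formulation, given here as a hedged reconstruction since the excerpt itself does not state the inequality, is the large sieve inequality for exponential sums: for complex numbers a_n supported on an interval of length N and Farey fractions of denominator at most Q,

```latex
\[
  \sum_{q \le Q} \;\; \sum_{\substack{a = 1 \\ \gcd(a, q) = 1}}^{q}
  \Biggl| \sum_{M < n \le M + N} a_n \, e\!\left(\frac{a n}{q}\right) \Biggr|^{2}
  \;\le\; \bigl(N + Q^{2}\bigr) \sum_{M < n \le M + N} \lvert a_n \rvert^{2},
  \qquad e(x) := e^{2\pi i x}.
\]
```

Inequalities of this shape, combined with the duality principle mentioned above, are what yield the arithmetic large-sieve bounds on how large the sifted set S can be.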
https://en.wikipedia.org/wiki/Department%20of%20Defense%20Architecture%20Framework
The Department of Defense Architecture Framework (DoDAF) is an architecture framework for the United States Department of Defense (DoD) that provides visualization infrastructure for specific stakeholders' concerns through viewpoints organized by various views. These views are artifacts for visualizing, understanding, and assimilating the broad scope and complexities of an architecture description through tabular, structural, behavioral, ontological, pictorial, temporal, graphical, probabilistic, or alternative conceptual means. The current release is DoDAF 2.02. This Architecture Framework is especially suited to large systems with complex integration and interoperability challenges, and it is apparently unique in its employment of "operational views". These views offer overview and details aimed at specific stakeholders within their domain and in interaction with other domains in which the system will operate. Overview The DoDAF provides a foundational framework for developing and representing architecture descriptions that ensure a common denominator for understanding, comparing, and integrating architectures across organizational, joint, and multinational boundaries. It establishes data element definitions, rules, and relationships, and a baseline set of products for consistent development of systems, integrated, or federated architectures. These architecture descriptions may include families of systems (FoS), systems of systems (SoS), and net-centric capabilities for interoperating and interacting in the non-combat environment. DoD Components are expected to conform to DoDAF to the maximum extent possible in development of architectures within the department. Conformance ensures that information, architecture artifacts, models, and viewpoints can be reused and shared with common understanding. All major U.S. DoD weapons and information technology system acquisitions are required to develop and document an enterprise architecture (EA) using the views prescribed i
https://en.wikipedia.org/wiki/Data%20General/One
The Data General/One (DG-1) was a laptop introduced in 1984 by Data General. Description The nine-pound battery-powered 1984 Data General/One ran MS-DOS and had dual 3.5" diskettes, a 79-key full-stroke keyboard, 128 KB to 512 KB of RAM, and a monochrome LCD screen capable of either the standard 80×25 characters or full CGA graphics (640×200). It was a laptop comparable in capabilities to desktops of the era. History The Data General/One offered several advances over contemporary portable computers. For instance, the popular 1983 Radio Shack TRS-80 Model 100, a non-PC-compatible machine, was comparably sized. It was a small battery-operated computer resting in one's lap, but had a 40×8 character (240×64 pixel) screen, a rudimentary ROM-based menu in lieu of a full OS, and no built-in floppy. IBM's 1984 Portable PC was comparable in capability with desktops, but was not battery operable and, being much larger and heavier, was by no means a laptop. Drawbacks The DG-1 was only a modest success. One problem was its use of 3.5" diskettes. Popular software titles were thus not widely available (5.25" still being the standard), a serious issue since then-common diskette copy-protection schemes made it difficult for users to copy software onto that format. The device achieved moderate success in a large OEM deal with Allen-Bradley, where it was private-labelled as a T-45 "programming terminal" and was resold from 1987 to 1991 with thousands of units sold. The CPU was a CMOS version of the 8086, compatible with the IBM PC's 8088 except that it ran slightly slower, at 4.0 MHz instead of the standard 4.77 MHz. Unlike the Portable PC, the DG-1 laptop could not take regular PC/XT expansion cards. RS-232 serial ports were built in, but the CMOS (low battery consumption) serial I/O chip available at design time, a CMOS version of the Intel 8251, was register-incompatible with the 8250 serial IC standard for the IBM PC. As a result, software written for the PC ser
https://en.wikipedia.org/wiki/Cartan%E2%80%93K%C3%A4hler%20theorem
In mathematics, the Cartan–Kähler theorem is a major result on the integrability conditions for differential systems, in the case of analytic functions, for differential ideals I. It is named for Élie Cartan and Erich Kähler. Meaning It is not true that merely having dI contained in I is sufficient for integrability. There is a problem caused by singular solutions. The theorem computes certain constants that must satisfy an inequality in order that there be a solution. Statement Let (M, I) be a real analytic EDS. Assume that P is a connected, k-dimensional, real analytic, regular integral manifold of I with r(P) ≥ 0 (i.e., the tangent spaces T_pP are "extendable" to higher dimensional integral elements). Moreover, assume there is a real analytic submanifold R of codimension r(P) containing P and such that T_pR ∩ H(T_pP) has dimension k + 1 for all p ∈ P. Then there exists a (locally) unique connected, (k + 1)-dimensional, real analytic integral manifold X of I that satisfies P ⊆ X ⊆ R. Proof and assumptions The Cauchy–Kovalevskaya theorem is used in the proof, so the analyticity is necessary. References Jean Dieudonné, Eléments d'analyse, vol. 4, (1977) Chapt. XVIII.13 R. Bryant, S. S. Chern, R. Gardner, H. Goldschmidt, P. Griffiths, Exterior Differential Systems, Springer Verlag, New York, 1991. External links R. Bryant, "Nine Lectures on Exterior Differential Systems", 1999 E. Cartan, "On the integration of systems of total differential equations," transl. by D. H. Delphenich E. Kähler, "Introduction to the theory of systems of differential equations," transl. by D. H. Delphenich Partial differential equations Theorems in analysis
https://en.wikipedia.org/wiki/Extended%20affix%20grammar
In computer science, extended affix grammars (EAGs) are a formal grammar formalism for describing the context free and context sensitive syntax of language, both natural language and programming languages. EAGs are a member of the family of two-level grammars; more specifically, a restriction of Van Wijngaarden grammars with the specific purpose of making parsing feasible. Like Van Wijngaarden grammars, EAGs have hyperrules that form a context-free grammar except in that their nonterminals may have arguments, known as affixes, the possible values of which are supplied by another context-free grammar, the metarules. EAGs were introduced and studied by D.A. Watt in 1974; recognizers were developed at the University of Nijmegen between 1985 and 1995. The EAG compiler developed there will generate either a recogniser, a transducer, a translator, or a syntax directed editor for a language described in the EAG formalism. The formalism is quite similar to Prolog, to the extent that it borrowed its cut operator. EAGs have been used to write grammars of natural languages such as English, Spanish, and Hungarian. The aim was to verify the grammars by making them parse corpora of text (corpus linguistics); hence, parsing had to be sufficiently practical. However, the parse tree explosion problem that ambiguities in natural language tend to produce in this type of approach is worsened for EAGs because each choice of affix value may produce a separate parse, even when several different values are equivalent. The remedy proposed was to switch to the much simpler Affix Grammar over a Finite Lattice (AGFL) instead, in which metagrammars can only produce simple finite languages. See also Affix grammar Corpus linguistics External links Informal introduction to the Extended Affix Grammar formalism and its compiler, by Marc Seutter, University of Nijmegen EAG project website, University of Nijmegen public announcement of the EAG software release, in comp.compilers, by
https://en.wikipedia.org/wiki/Hypervelocity
Hypervelocity is very high velocity, approximately over 3,000 meters per second (6,700 mph, 11,000 km/h, 10,000 ft/s, or Mach 8.8). In particular, hypervelocity is velocity so high that the strength of materials upon impact is very small compared to inertial stresses. Thus, metals and fluids behave alike under hypervelocity impact. Extreme hypervelocity results in vaporization of the impactor and target. For structural metals, hypervelocity is generally considered to be over 2,500 m/s (5,600 mph, 9,000 km/h, 8,200 ft/s, or Mach 7.3). Meteorite craters are also examples of hypervelocity impacts. Overview The term "hypervelocity" refers to velocities in the range from a few kilometers per second to some tens of kilometers per second. This is especially relevant in the field of space exploration and military use of space, where hypervelocity impacts (e.g. by space debris or an attacking projectile) can result in anything from minor component degradation to the complete destruction of a spacecraft or missile. The impactor, as well as the surface it hits, can undergo temporary liquefaction. The impact process can generate plasma discharges, which can interfere with spacecraft electronics. Hypervelocity usually occurs during meteor showers and deep space reentries, as carried out during the Zond, Apollo and Luna programs. Given the intrinsic unpredictability of the timing and trajectories of meteors, space capsules are prime data gathering opportunities for the study of thermal protection materials at hypervelocity (in this context, hypervelocity is defined as greater than escape velocity). Given the rarity of such observation opportunities since the 1970s, the Genesis and Stardust Sample Return Capsule (SRC) reentries as well as the recent Hayabusa SRC reentry have spawned observation campaigns, most notably at NASA's Ames Research Center. Hypervelocity collisions can be studied by examining the results of naturally occurring collisions (between micrometeorites and
https://en.wikipedia.org/wiki/Video%20game%20clone
A video game clone is either a video game or a video game console very similar to, or heavily inspired by, a previous popular game or console. Clones are typically made to take financial advantage of the popularity of the cloned game or system, but clones may also result from earnest attempts to create homages or expand on game mechanics from the original game. An additional motivation, unique to the medium of games as software with limited compatibility, is the desire to port a simulacrum of a game to platforms on which the original is unavailable or unsatisfactorily implemented. The legality of video game clones is governed by copyright and patent law. In the 1970s, Magnavox controlled several patents to the hardware for Pong, and pursued action against unlicensed Pong clones that led to court rulings in their favor, as well as legal settlements for compensation. As game production shifted to software on discs and cartridges, Atari sued Philips under copyright law, allowing them to shut down several clones of Pac-Man. By the end of the 1980s, courts had ruled in favor of a few alleged clones, and the high costs of a lawsuit meant that most disputes with alleged clones were ignored or settled through the mid-2000s. In 2012, courts ruled against alleged clones in both Tetris Holding, LLC v. Xio Interactive, Inc. and Spry Fox, LLC v. Lolapps, Inc., due to explicit similarities between the games' expressive elements. Legal scholars agree that these cases establish that general game ideas, game mechanics, and stock scenes cannot be protected by copyright – only the unique expression of those ideas. However, the high cost of a lawsuit combined with the fact-specific nature of each dispute has made it difficult to predict which game developers can protect their games' look and feel from clones. Other methods like patents, trademarks, and industry regulation have played a role in shaping the prevalence of clones. Overview Cloning a game in digital marketplaces is
https://en.wikipedia.org/wiki/Percentage%20point
A percentage point or percent point is the unit for the arithmetic difference between two percentages. For example, moving up from 40 percent to 44 percent is an increase of 4 percentage points (although it is a 10-percent increase in the quantity being measured, if the total amount remains the same). In written text, the unit (the percentage point) is usually either written out, or abbreviated as pp, p.p., or %pt. to avoid confusion with percentage increase or decrease in the actual quantity. After the first occurrence, some writers abbreviate by using just "point" or "points". Differences between percentages and percentage points Consider the following hypothetical example: In 1980, 50 percent of the population smoked, and in 1990 only 40 percent of the population smoked. One can thus say that from 1980 to 1990, the prevalence of smoking decreased by 10 percentage points (or by 10 percent of the population) or by 20 percent when talking about smokers only – percentages indicate a proportionate part of a total. Percentage-point differences are one way to express a risk or probability. Consider a drug that cures a given disease in 70 percent of all cases, while without the drug, the disease heals spontaneously in only 50 percent of cases. The drug reduces absolute risk by 20 percentage points. Alternatives may be more meaningful to consumers of statistics, such as the reciprocal, also known as the number needed to treat (NNT). In this case, the reciprocal transform of the percentage-point difference would be 1/(20pp) = 1/0.20 = 5. Thus if 5 patients are treated with the drug, one could expect to cure one more patient than would have occurred in the absence of the drug. For measurements involving percentages as a unit, such as growth, yield, or ejection fraction, statistical deviations and related descriptive statistics, including the standard deviation and root-mean-square error, should be expressed in units of percentage points instead of percentage
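The distinctions above are easy to get wrong in code as well; a small sketch using the article's own numbers:

```python
def percentage_point_change(old_pct: float, new_pct: float) -> float:
    """Arithmetic difference between two percentages, in percentage points."""
    return new_pct - old_pct

def relative_change_pct(old_pct: float, new_pct: float) -> float:
    """Relative (percent) change of the quantity itself."""
    return (new_pct - old_pct) / old_pct * 100

# Smoking prevalence: 50 percent in 1980, 40 percent in 1990.
print(percentage_point_change(50, 40))   # -10.0 -> "fell by 10 percentage points"
print(relative_change_pct(50, 40))       # -20.0 -> "fell by 20 percent" (of the smokers)

# Drug example: cures 70 percent of cases versus 50 percent spontaneous recovery.
arr_pp = 70 - 50                         # absolute risk reduction, in percentage points
nnt = 1 / (arr_pp / 100)                 # number needed to treat
print(arr_pp, nnt)                       # 20 5.0
```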
https://en.wikipedia.org/wiki/IP%20traceback
IP traceback is any method for reliably determining the origin of a packet on the Internet. The IP protocol does not provide for the authentication of the source IP address of an IP packet, enabling the source address to be falsified in a strategy called IP address spoofing, and creating potential internet security and stability problems. Use of false source IP addresses allows denial-of-service attacks (DoS) or one-way attacks (where the response from the victim host is so well known that return packets need not be received to continue the attack). IP traceback is critical for identifying sources of attacks and instituting protection measures for the Internet. Most existing approaches to this problem have been tailored toward DoS attack detection. Such solutions require high numbers of packets to converge on the attack path(s). Probabilistic packet marking Savage et al. suggested probabilistically marking packets as they traverse routers through the Internet. They propose that the router mark the packet with either the router’s IP address or the edges of the path that the packet traversed to reach the router. For the first alternative, marking packets with the router's IP address, analysis shows that in order to gain the correct attack path with 95% accuracy as many as 294,000 packets are required. The second approach, edge marking, requires that the two nodes that make up an edge mark the path with their IP addresses along with the distance between them. This approach would require more state information in each packet than simple node marking but would converge much faster. They suggest three ways to reduce the state information of these approaches into something more manageable. The first approach is to XOR each node forming an edge in the path with each other. Node a inserts its IP address into the packet and sends it to b. Upon being detected at b (by detecting a 0 in the distance), b XORs its address with the address of a. This new data entity is ca
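A toy sketch of the XOR edge-encoding idea follows. It is purely illustrative: addresses are small integers, every possible marking position is sampled exactly once, and the `mark_along_path` / `reconstruct_path` helpers are invented for this example rather than taken from any real traceback implementation:

```python
# Routers on the attack path, attacker-side first, victim-side last; addresses as ints.
path = [0xA1, 0xB2, 0xC3, 0xD4]

def mark_along_path(path):
    """Collect one (distance, mark) sample per possible marking router, as a victim
    would eventually see: the marking router writes its address with distance 0, the
    next router XORs its own address into the mark, and later routers only increment."""
    samples = set()
    for i, marker in enumerate(path):
        mark, distance = marker, 0
        for nxt in path[i + 1:]:
            if distance == 0:
                mark ^= nxt          # second endpoint of the edge folds itself in
            distance += 1
        samples.add((distance, mark))
    return samples

def reconstruct_path(samples):
    """Recover router addresses hop by hop, starting next to the victim."""
    by_distance = dict(samples)
    routers = [by_distance[0]]       # distance 0: last router's address, unmasked
    for d in range(1, len(by_distance)):
        routers.append(by_distance[d] ^ routers[-1])   # peel off the known endpoint
    return list(reversed(routers))   # attacker-side first, like `path`

assert reconstruct_path(mark_along_path(path)) == path
print([hex(r) for r in reconstruct_path(mark_along_path(path))])
```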
https://en.wikipedia.org/wiki/Fluctuating%20asymmetry
Fluctuating asymmetry (FA) is a form of biological asymmetry, along with anti-symmetry and directional asymmetry. Fluctuating asymmetry refers to small, random deviations away from perfect bilateral symmetry. This deviation from perfection is thought to reflect the genetic and environmental pressures experienced throughout development, with greater pressures resulting in higher levels of asymmetry. Examples of FA in the human body include unequal sizes (asymmetry) of bilateral features in the face and body, such as left and right eyes, ears, wrists, breasts, testicles, and thighs. Research has exposed multiple factors that are associated with FA. As measuring FA can indicate developmental stability, it can also suggest the genetic fitness of an individual. This can further have an effect on mate attraction and sexual selection, as less asymmetry reflects greater developmental stability and subsequent fitness. Human physical health is also associated with FA. For example, young men with greater FA report more medical conditions than those with lower levels of FA. Multiple other factors can be linked to FA, such as intelligence and personality traits. Measurement Fluctuating asymmetry (FA) can be measured by the equation: mean FA = the mean of |left − right|, the absolute difference between the left-side and right-side measurements of a trait, averaged across individuals. The closer the mean value is to zero, the lower the level of FA, indicating more symmetrical features. By taking many measurements of multiple traits per individual, this increases the accuracy in determining that individual's developmental stability. However, these traits must be chosen carefully, as different traits are affected by different selection pressures. This equation can further be used to study the distribution of asymmetries at population levels, to distinguish between traits that show FA, directional asymmetry, and anti-symmetry. The distribution of FA around a mean point of zero suggests that FA is not an adaptive trait, where symmetry is ideal. Dire
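As a sketch of this measurement, with hypothetical paired measurements in millimetres:

```python
def mean_fa(pairs):
    """Mean fluctuating asymmetry: the average absolute left-right difference."""
    return sum(abs(left - right) for left, right in pairs) / len(pairs)

# Hypothetical (left, right) measurements of one trait, e.g. ear height, across individuals
ear_height_mm = [(60.1, 59.8), (58.4, 58.9), (61.0, 61.0), (59.2, 58.5)]
print(round(mean_fa(ear_height_mm), 3))   # 0.375 -> low FA, i.e. fairly symmetrical
```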
https://en.wikipedia.org/wiki/Professional%20Graphics%20Controller
Professional Graphics Controller (PGC, often called Professional Graphics Adapter and sometimes Professional Graphics Array) is a graphics card manufactured by IBM for PCs. It consists of three interconnected PCBs, and contains its own processor and memory. The PGC was, at the time of its release, the most advanced graphics card for the IBM XT and aimed at tasks such as CAD. Introduced in 1984, the Professional Graphics Controller offered a maximum resolution of 640 × 480 with 256 colors on an analog RGB monitor, at a refresh rate of 60 hertz—a higher resolution and color depth than CGA and EGA supported. This mode is not BIOS-supported. It was intended for the computer-aided design market and included 320 KB of display RAM and an on-board Intel 8088 microprocessor. The 8088 ran software routines such as "draw polygon" and "fill area" from an on-board 64 KB ROM so that the host CPU didn't need to load and run these routines itself. While never widespread in consumer-class personal computers, its list price, plus $1,295 display, compared favorably to US$50,000 dedicated CAD workstations of the time (even when the $4,995 price of a PC XT Model 87 was included). It was discontinued in 1987 with the arrival of VGA and 8514. Software support The board was targeted at the CAD market, therefore limited software support is to be expected. The only software known to support the PGC are IBM's Graphical Kernel System, P-CAD 4.5, Canyon State Systems CompuShow and AutoCAD 2.5. Output capabilities PGC supports: 640 × 480 with 256 colors from a palette of 4,096 (12-bit RGB palette, or 4 bits per color component). Color Graphics Adapter text and graphics modes. Text modes use a font with 8×16-pixel character cells and have 400 rows of pixels. There are six possible color arrangements: Default 256-colour palette - Low 4 bits intensity, high 4 bits colour; 16-colour palette - Makes the PGC behave as two 16-colour planes. If high 4 bits are 0, low 4 bits are colour; otherw
https://en.wikipedia.org/wiki/Upper%20memory%20area
In DOS memory management, the upper memory area (UMA) is the memory between the addresses of 640 KB and 1024 KB (0xA0000–0xFFFFF) in an IBM PC or compatible. IBM reserved the uppermost 384 KB of the 8088 CPU's 1024 KB address space for BIOS ROM, Video BIOS, Option ROMs, video RAM, RAM on peripherals, memory-mapped I/O, and obsoleted ROM BASIC. However, even with video RAM, the ROM BIOS, the Video BIOS, the Option ROMs, and I/O ports for peripherals, much of this 384 KB of address space was unused. As the 640 KB memory restriction became ever more of an obstacle, techniques were found to fill the empty areas with RAM. These areas were referred to as upper memory blocks (UMBs). Usage The next stage in the evolution of DOS was for the operating system to use upper memory blocks (UMBs) and the high memory area (HMA). This occurred with the release of DR DOS 5.0 in 1990. DR DOS' built-in memory manager, EMM386.EXE, could perform most of the basic functionality of QEMM and comparable programs. The advantage of DR DOS 5.0 over the combination of an older DOS plus QEMM was that the DR DOS kernel itself and almost all of its data structures could be loaded into high memory. This left virtually all the base memory free, allowing configurations with up to 620 KB out of 640 KB free. Configuration was not automatic - free UMBs had to be identified by hand, manually included in the line that loaded EMM386 from CONFIG.SYS, and then drivers and so on had to be manually loaded into UMBs from CONFIG.SYS and AUTOEXEC.BAT. This configuration was not a trivial process. As it was largely automated by the installation program of QEMM, this program survived on the market; indeed, it worked well with DR DOS' own HMA and UMB support and went on to be one of the best-selling utilities for the PC. This functionality was copied by Microsoft with the release of MS-DOS 5.0 in June 1991. Eventually, even more DOS data structures were moved out of conventional memory, allowing up to 631 KB ou
https://en.wikipedia.org/wiki/NEC%20SX
NEC SX describes a series of vector supercomputers designed, manufactured, and marketed by NEC. This computer series is notable for providing the first computer to exceed 1 gigaflop, as well as the fastest supercomputer in the world in 1992–1993 and again in 2002–2004. The current model, as of 2018, is the SX-Aurora TSUBASA. History The first models, the SX-1 and SX-2, were announced in April 1983, and released in 1985. The SX-2 was the first computer to exceed 1 gigaflop. The SX-1 and SX-1E were less powerful models offered by NEC. The SX-3 was announced in 1989, and shipped in 1990. The SX-3 allows parallel computing using both SIMD and MIMD. It also switched from the ACOS-4 based SX-OS to the AT&T System V UNIX-based SUPER-UX operating system. In 1992 an improved variant, the SX-3R, was announced. An SX-3/44 variant was the fastest computer in the world in 1992–1993 on the TOP500 list. It had LSI integrated circuits with 20,000 gates per IC and a per-gate delay time of 70 picoseconds, and could house up to four arithmetic processors sharing the same main memory, achieving up to 22 GFLOPS of aggregate performance and 1.37 GFLOPS with a single processor. 100 LSI ICs were housed in a single multi-chip module to achieve 2 million gates per module. The modules were water-cooled. The SX-4 series was announced in 1994, and first shipped in 1995. Since the SX-4, SX series supercomputers have been constructed in a doubly parallel manner. A number of central processing units (CPUs) are arranged into a parallel vector processing node. These nodes are then installed in a regular SMP arrangement. The SX-5 was announced and shipped in 1998, with the SX-6 following in 2001, and the SX-7 in 2002. Starting in 2001, Cray marketed the SX-5 and SX-6 exclusively in the US, and non-exclusively elsewhere for a short time. The Earth Simulator, built from SX-6 nodes, was the fastest supercomputer from June 2002 to June 2004 on the LINPACK benchmark
https://en.wikipedia.org/wiki/List%20of%20backup%20software
This is a list of notable backup software that performs data backups. Archivers, transfer protocols, and version control systems are often used for backups but only software focused on backup is listed here. See Comparison of backup software for features. Free and open-source software Commercial and closed-source software Defunct software See also Comparison of file synchronization software Comparison of online backup services Data recovery File synchronization List of data recovery software Remote backup service Tape management system Notes References Backup software
https://en.wikipedia.org/wiki/Saucisson%20%28pyrotechnics%29
In early military engineering, a saucisson (French for a large, dry-filled sausage) was a primitive type of fuse, consisting of a long tube or hose of cloth or leather, typically about an inch and a half in diameter (37 mm), damp-proofed with pitch and filled with black powder. It was normally laid in a protective wooden trough, and ignited by use of a torch or slow match. Saucissons were used to fire fougasses, petards, mines and camouflets. Very long fascines were also called saucissons. Later, in early 20th-century mining jargon, a saucisson referred to the flexible casings used for explosives in mine operations. Explosives
https://en.wikipedia.org/wiki/Aptamer
Aptamers are short sequences of artificial DNA, RNA, XNA, or peptide that bind a specific target molecule, or family of target molecules. They exhibit a range of affinities (KD in the pM to μM range), with variable levels of off-target binding and are sometimes classified as chemical antibodies. Aptamers and antibodies can be used in many of the same applications, but the nucleic acid-based structure of aptamers, which are mostly oligonucleotides, is very different from the amino acid-based structure of antibodies, which are proteins. This difference can make aptamers a better choice than antibodies for some purposes (see antibody replacement). Aptamers are used in biological lab research and medical tests. If multiple aptamers are combined into a single assay, they can measure large numbers of different proteins in a sample. They can be used to identify molecular markers of disease, or can function as drugs, drug delivery systems and controlled drug release systems. They also find use in other molecular engineering tasks. Most aptamers originate from SELEX, a family of test-tube experiments for finding useful aptamers in a massive pool of different DNA sequences. This process is much like natural selection, directed evolution or artificial selection. In SELEX, the researcher repeatedly selects for the best aptamers from a starting DNA library made of about a quadrillion different randomly generated pieces of DNA or RNA. After SELEX, the researcher might mutate or change the chemistry of the aptamers and do another selection, or might use rational design processes to engineer improvements. Non-SELEX methods for discovering aptamers also exist. Researchers optimize aptamers to achieve a variety of beneficial features. The most important feature is specific and sensitive binding to the chosen target. When aptamers are exposed to bodily fluids, as in serum tests or aptamer therapeutics, it is often important for them to resist digestion by DNA- and RNA-destroying pr
https://en.wikipedia.org/wiki/Weil%E2%80%93Ch%C3%A2telet%20group
In arithmetic geometry, the Weil–Châtelet group or WC-group of an algebraic group such as an abelian variety A defined over a field K is the abelian group of principal homogeneous spaces for A, defined over K. Lang and Tate named it for François Châtelet, who introduced it for elliptic curves, and André Weil, who introduced it for more general groups. It plays a basic role in the arithmetic of abelian varieties, in particular for elliptic curves, because of its connection with infinite descent. It can be defined directly from Galois cohomology, as H1(GK, A), where GK is the absolute Galois group of K. It is of particular interest for local fields and global fields, such as algebraic number fields. For K a finite field, the Weil–Châtelet group was proved to be trivial for elliptic curves, and Lang's theorem shows that it is trivial for any connected algebraic group. See also The Tate–Shafarevich group of an abelian variety A defined over a number field K consists of the elements of the Weil–Châtelet group that become trivial in all of the completions of K. The Selmer group, named after Ernst S. Selmer, of A with respect to an isogeny f : A → B of abelian varieties is a related group which can be defined in terms of Galois cohomology as Sel(f)(A/K) = ⋂v ker(H1(GK, A[f]) → H1(GKv, Av[f])/im(κv)), where Av[f] denotes the f-torsion of Av and κv is the local Kummer map B(Kv)/f(A(Kv)) → H1(GKv, Av[f]). References English translation in his collected mathematical papers. Number theory
https://en.wikipedia.org/wiki/Selmer%20group
In arithmetic geometry, the Selmer group, named in honor of the work of Ernst S. Selmer by John W. S. Cassels, is a group constructed from an isogeny of abelian varieties. The Selmer group of an isogeny The Selmer group of an abelian variety A with respect to an isogeny f : A → B of abelian varieties can be defined in terms of Galois cohomology as Sel(f)(A/K) = ⋂v ker(H1(GK, A[f]) → H1(GKv, Av[f])/im(κv)), where Av[f] denotes the f-torsion of Av and κv is the local Kummer map B(Kv)/f(A(Kv)) → H1(GKv, Av[f]). Note that H1(GKv, Av[f])/im(κv) is isomorphic to H1(GKv, Av)[f]. Geometrically, the principal homogeneous spaces coming from elements of the Selmer group have Kv-rational points for all places v of K. The Selmer group is finite. This implies that the part of the Tate–Shafarevich group killed by f is finite due to the following exact sequence 0 → B(K)/f(A(K)) → Sel(f)(A/K) → Ш(A/K)[f] → 0. The Selmer group in the middle of this exact sequence is finite and effectively computable. This implies the weak Mordell–Weil theorem that its subgroup B(K)/f(A(K)) is finite. There is a notorious problem about whether this subgroup can be effectively computed: there is a procedure for computing it that will terminate with the correct answer if there is some prime p such that the p-component of the Tate–Shafarevich group is finite. It is conjectured that the Tate–Shafarevich group is in fact finite, in which case any prime p would work. However, if (as seems unlikely) the Tate–Shafarevich group has an infinite p-component for every prime p, then the procedure may never terminate. Ralph Greenberg has generalized the notion of Selmer group to more general p-adic Galois representations and to p-adic variations of motives in the context of Iwasawa theory. The Selmer group of a finite Galois module More generally one can define the Selmer group of a finite Galois module M (such as the kernel of an isogeny) as the elements of H1(GK,M) that have images inside certain given subgroups of H1(GKv,M). References See also Wiles's proof of Fermat's Last Theorem Number theory
https://en.wikipedia.org/wiki/Electronic%20organizer
An electronic organizer (or electric organizer) is a small calculator-sized computer, often with a built-in diary application and other functions such as an address book and calendar, replacing paper-based personal organizers. Typically, it has a small alphanumeric keypad and an LCD screen of one, two, or three lines. The electronic diary or organizer was invented in 1975 by Indian businessman Satyan Pitroda, who is regarded as one of the earliest pioneers of hand-held computing because of this invention. Electronic organizers were very popular, especially with businessmen, during the 1990s, but because of the advent of palmtop PCs in the 1990s, personal digital assistants in the 2000s, and smartphones in the 2010s, all of which have a larger set of features, they are today seen mostly in research contexts. One of the leading research topics is the study of how electronics can help people with mental disabilities use this type of equipment to aid their daily life. Electronic organizers have more recently been used to give people with Alzheimer's disease a visual representation of their schedule. Casio digital diary Casio digital diaries were produced by Casio in the early and mid 1990s, but have since been entirely superseded by mobile phones and PDAs. Other electronic organizers While Casio was a major player in the field of electronic organizers, there were many different ideas, patent requests, and manufacturers of electronic organizers. Rolodex, widely known for their index card holders in the 1980s, Sharp Electronics, mostly known for their printers and audio-visual equipment, and lastly Royal Electronics were all large contributors to the electronic organizer in its heyday. Features Telephone directory Schedule keeper: Keep track of appointments. Memo function: Store text data such as price lists, airplane schedules, and more. To do list: Keep track of daily tasks, checking off items as you complete them.
https://en.wikipedia.org/wiki/Bidirectional%20Forwarding%20Detection
Bidirectional Forwarding Detection (BFD) is a network protocol that is used to detect faults between two routers or switches connected by a link. It provides low-overhead detection of faults even on physical media that do not support failure detection of any kind, such as Ethernet, virtual circuits, tunnels and MPLS label-switched paths. BFD establishes a session between two endpoints over a particular link. If more than one link exists between two systems, multiple BFD sessions may be established to monitor each one of them. The session is established with a three-way handshake, and is torn down the same way. Authentication may be enabled on the session. A choice of simple password, MD5 or SHA1 authentication is available. BFD does not have a discovery mechanism; sessions must be explicitly configured between endpoints. BFD may be used on many different underlying transport mechanisms and layers, and operates independently of all of these. Therefore, it needs to be encapsulated by whatever transport it uses. For example, monitoring MPLS LSPs involves piggybacking session establishment on LSP-Ping packets. Protocols that support some form of adjacency setup, such as OSPF, IS-IS, BGP or RIP, may also be used to bootstrap a BFD session. These protocols may then use BFD to receive faster notification of failing links than would normally be possible using the protocol's own keepalive mechanism. A session may operate in one of two modes: asynchronous mode and demand mode. In asynchronous mode, both endpoints periodically send Hello packets to each other. If a number of those packets are not received, the session is considered down. In demand mode, no Hello packets are exchanged after the session is established; it is assumed that the endpoints have another way to verify connectivity to each other, perhaps on the underlying physical layer. However, either host may still send Hello packets if needed. Regardless of which mode is in use, either endpoint may also initiat
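To make the session mechanics concrete, the following is a minimal Python sketch of how a BFD control packet could be serialized, following the mandatory-section layout defined in RFC 5880; the discriminators, state value and timer intervals are illustrative assumptions rather than values taken from the article.

import struct

def build_bfd_control_packet(my_disc, your_disc, state, detect_mult=3,
                             desired_min_tx=1_000_000, required_min_rx=1_000_000):
    """Serialize a mandatory-section-only BFD control packet (no authentication).

    Intervals are in microseconds; state is 0..3 (AdminDown, Down, Init, Up).
    """
    version, diag = 1, 0
    byte0 = (version << 5) | diag          # Vers (3 bits) | Diag (5 bits)
    byte1 = (state << 6)                   # Sta (2 bits) | P F C A D M flags, all clear
    length = 24                            # fixed length without the authentication section
    return struct.pack("!BBBBIIIII",
                       byte0, byte1, detect_mult, length,
                       my_disc, your_disc,
                       desired_min_tx, required_min_rx,
                       0)                  # Required Min Echo RX Interval (echo unused here)

# Example: an asynchronous-mode "Up" packet, sent once per negotiated interval.
pkt = build_bfd_control_packet(my_disc=0x11223344, your_disc=0x55667788, state=3)
assert len(pkt) == 24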
https://en.wikipedia.org/wiki/Secretary%20problem
The secretary problem demonstrates a scenario involving optimal stopping theory that is studied extensively in the fields of applied probability, statistics, and decision theory. It is also known as the marriage problem, the sultan's dowry problem, the fussy suitor problem, the googol game, and the best choice problem. The basic form of the problem is the following: imagine an administrator who wants to hire the best secretary out of n rankable applicants for a position. The applicants are interviewed one by one in random order. A decision about each particular applicant is to be made immediately after the interview. Once rejected, an applicant cannot be recalled. During the interview, the administrator gains information sufficient to rank the applicant among all applicants interviewed so far, but is unaware of the quality of yet unseen applicants. The question is about the optimal strategy (stopping rule) to maximize the probability of selecting the best applicant. If the decision can be deferred to the end, this can be solved by the simple maximum selection algorithm of tracking the running maximum (and who achieved it), and selecting the overall maximum at the end. The difficulty is that the decision must be made immediately. The shortest rigorous proof known so far is provided by the odds algorithm. It implies that the optimal win probability is always at least 1/e (where e is the base of the natural logarithm), and that the latter holds even in a much greater generality. The optimal stopping rule prescribes always rejecting the first ~n/e applicants that are interviewed and then stopping at the first applicant who is better than every applicant interviewed so far (or continuing to the last applicant if this never occurs). Sometimes this strategy is called the 1/e stopping rule, because the probability of stopping at the best applicant with this strategy is about 1/e already for moderate values of n. One reason why the secretary problem has received so much attention is tha
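A short Monte Carlo check of the ~n/e rule can be written in a few lines of Python; the number of applicants and trials below are arbitrary illustrative choices, and the empirical win rate should come out near 1/e ≈ 0.368.

import math
import random

def secretary_trial(n, rng):
    """Run one instance: distinct ranks arrive in random order; higher is better."""
    applicants = list(range(n))
    rng.shuffle(applicants)
    cutoff = int(n / math.e)               # observation phase: reject the first ~n/e
    best_seen = max(applicants[:cutoff], default=-1)
    for rank in applicants[cutoff:]:
        if rank > best_seen:               # first applicant better than all seen so far
            return rank == n - 1           # did we pick the overall best?
    return applicants[-1] == n - 1         # otherwise we are forced to take the last one

rng = random.Random(0)
trials = 100_000
wins = sum(secretary_trial(100, rng) for _ in range(trials))
print(f"empirical win rate: {wins / trials:.3f}  (1/e ≈ {1 / math.e:.3f})")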
https://en.wikipedia.org/wiki/Hierarchical%20storage%20management
Hierarchical storage management (HSM), also known as tiered storage, is a data storage and data management technique that automatically moves data between high-cost and low-cost storage media. HSM systems exist because high-speed storage devices, such as solid-state drive arrays, are more expensive (per byte stored) than slower devices, such as hard disk drives, optical discs and magnetic tape drives. While it would be ideal to have all data available on high-speed devices all the time, this is prohibitively expensive for many organizations. Instead, HSM systems store the bulk of the enterprise's data on slower devices, and then copy data to faster disk drives when needed. The HSM system monitors the way data is used and makes best guesses as to which data can safely be moved to slower devices and which data should stay on the fast devices. HSM may also be used where more robust storage is available for long-term archiving, but this is slow to access. This may be as simple as an off-site backup, for protection against a building fire. HSM is a long-established concept, dating back to the beginnings of commercial data processing. The techniques used, though, have changed significantly as new technology becomes available, for both storage and for long-distance communication of large data sets. The scale of measures such as 'size' and 'access time' has changed dramatically. Despite this, many of the underlying concepts keep returning to favour years later, although at much larger or faster scales. Implementation In a typical HSM scenario, data that is frequently used is stored on a warm storage device, such as a solid-state disk (SSD). Data that is infrequently accessed is, after some time, migrated to a slower, high-capacity cold storage tier. If a user does access data which is on the cold storage tier, it is automatically moved back to warm storage. The advantage is that the total amount of stored data can be much larger than the capacity of the warm storage device,
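As a rough illustration of the migration policy described above, the following Python sketch moves files between a warm and a cold directory based on last-access time and recalls them on access; the directory paths and the 30-day threshold are assumptions for the example, not part of any particular HSM product.

import os
import shutil
import time

WARM_DIR, COLD_DIR = "/srv/warm", "/srv/cold"     # hypothetical tier mount points
AGE_THRESHOLD = 30 * 24 * 3600                    # migrate after 30 days without access

def migrate_stale_files():
    """Move files not accessed recently from the warm tier to the cold tier."""
    now = time.time()
    for name in os.listdir(WARM_DIR):
        path = os.path.join(WARM_DIR, name)
        if os.path.isfile(path) and now - os.stat(path).st_atime > AGE_THRESHOLD:
            shutil.move(path, os.path.join(COLD_DIR, name))

def open_managed(name):
    """Recall a file to warm storage on access, then open it from there."""
    warm_path = os.path.join(WARM_DIR, name)
    cold_path = os.path.join(COLD_DIR, name)
    if not os.path.exists(warm_path) and os.path.exists(cold_path):
        shutil.move(cold_path, warm_path)          # transparent recall to the warm tier
    return open(warm_path, "rb")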
https://en.wikipedia.org/wiki/De%20architectura
De architectura (On architecture, published as Ten Books on Architecture) is a treatise on architecture written by the Roman architect and military engineer Marcus Vitruvius Pollio and dedicated to his patron, the emperor Caesar Augustus, as a guide for building projects. As the only treatise on architecture to survive from antiquity, it has been regarded since the Renaissance as the first known book on architectural theory, as well as a major source on the canon of classical architecture. It contains a variety of information on Greek and Roman buildings, as well as prescriptions for the planning and design of military camps, cities, and structures both large (aqueducts, buildings, baths, harbours) and small (machines, measuring devices, instruments). Since Vitruvius published before the development of cross vaulting, domes, concrete, and other innovations associated with Imperial Roman architecture, his ten books give no information on these distinctive innovations of Roman building design and technology. From references to them in the text, we know that there were at least a few illustrations in original copies (perhaps eight), but none of these survived in medieval manuscript copies. This deficiency was remedied in 16th-century printed editions, which became illustrated with many large plates. Origin and contents Probably written between 30 and 20 BC, it combines the knowledge and views of many antique writers, Greek and Roman, on architecture, the arts, natural history and building technology. Vitruvius cites many authorities throughout the text, often praising Greek architects for their development of temple building and the orders (Doric, Ionic and Corinthian), and providing key accounts of the origins of building in the primitive hut. Though often cited for his famous "triad" of characteristics associated with architecture (utilitas, firmitas and venustas: utility, strength and beauty), the aesthetic principles that influenced later treatise writers were outlined in Book III.
https://en.wikipedia.org/wiki/Scattering%20parameters
Scattering parameters or S-parameters (the elements of a scattering matrix or S-matrix) describe the electrical behavior of linear electrical networks when undergoing various steady state stimuli by electrical signals. The parameters are useful for several branches of electrical engineering, including electronics, communication systems design, and especially for microwave engineering. The S-parameters are members of a family of similar parameters, other examples being: Y-parameters, Z-parameters, H-parameters, T-parameters or ABCD-parameters. They differ from these, in the sense that S-parameters do not use open or short circuit conditions to characterize a linear electrical network; instead, matched loads are used. These terminations are much easier to use at high signal frequencies than open-circuit and short-circuit terminations. Contrary to popular belief, the quantities are not measured in terms of power (except in now-obsolete six-port network analyzers). Modern vector network analyzers measure amplitude and phase of voltage traveling wave phasors using essentially the same circuit as that used for the demodulation of digitally modulated wireless signals. Many electrical properties of networks of components (inductors, capacitors, resistors) may be expressed using S-parameters, such as gain, return loss, voltage standing wave ratio (VSWR), reflection coefficient and amplifier stability. The term 'scattering' is more common to optical engineering than RF engineering, referring to the effect observed when a plane electromagnetic wave is incident on an obstruction or passes across dissimilar dielectric media. In the context of S-parameters, scattering refers to the way in which the traveling currents and voltages in a transmission line are affected when they meet a discontinuity caused by the insertion of a network into the transmission line. This is equivalent to the wave meeting an impedance differing from the line's characteristic impedance. Although appli
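Since several of the quantities listed above (reflection coefficient, return loss, VSWR) are simple functions of S11, a small Python helper can illustrate the standard conversions for a one-port measurement; the example S11 value is made up purely for illustration.

import cmath
import math

def s11_metrics(s11: complex):
    """Derive reflection metrics from a measured S11 phasor using the standard relations."""
    mag = abs(s11)
    return {
        "reflection_coefficient": mag,
        "return_loss_dB": -20 * math.log10(mag),          # quoted as a positive number of dB
        "vswr": (1 + mag) / (1 - mag) if mag < 1 else math.inf,
        "phase_deg": math.degrees(cmath.phase(s11)),
    }

# Example: |S11| = 0.2 at 45 degrees -> about 14 dB return loss and a VSWR of 1.5.
print(s11_metrics(0.2 * cmath.exp(1j * math.radians(45))))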
https://en.wikipedia.org/wiki/Three-state%20logic
In digital electronics, a tri-state or three-state buffer is a type of digital buffer that has three stable states: a high output state, a low output state, and a high-impedance state. In the high-impedance state, the output of the buffer is disconnected from the output bus, allowing other devices to drive the bus without interference from the tri-state buffer. This can be useful in situations where multiple devices are connected to the same bus and need to take turns accessing it. Systems implementing three-state logic on their bus are known as a three-state bus or tri-state bus. Tri-state buffers are commonly used in bus-based systems, where multiple devices are connected to the same bus and need to share it. For example, in a computer system, multiple devices such as the CPU, memory, and peripherals may be connected to the same data bus. To ensure that only one device can transmit data on the bus at a time, each device is equipped with a tri-state buffer. When a device wants to transmit data, it activates its tri-state buffer, which connects its output to the bus and allows it to transmit data. When the transmission is complete, the device deactivates its tri-state buffer, which disconnects its output from the bus and allows another device to access the bus. Tri-state buffers can be implemented using gates, flip-flops, or other digital logic circuits. They are useful for reducing crosstalk and noise on a bus, and for allowing multiple devices to share the same bus without interference. Uses The basic concept of the third state, high impedance (Hi-Z), is to effectively remove the device's influence from the rest of the circuit. If more than one device is electrically connected to another device, putting an output into the Hi-Z state is often used to prevent short circuits, or one device driving high (logical 1) against another device driving low (logical 0). Three-state buffers can also be used to implement efficient multiplexers, especially those with large
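A behavioural model of the bus-sharing idea can be sketched in Python: each buffer either drives a value or presents high impedance, and a resolver detects contention when two enabled buffers drive the bus at once. The class and signal names are illustrative, not drawn from any real hardware description.

class TriStateBuffer:
    """Drives its input value onto the bus when enabled, otherwise presents Hi-Z."""
    def __init__(self, name):
        self.name = name
        self.enabled = False
        self.value = 0

    def output(self):
        return self.value if self.enabled else None   # None models the high-impedance state

def resolve_bus(buffers):
    """Return the bus level, flagging contention if more than one buffer drives it."""
    drivers = [b for b in buffers if b.output() is not None]
    if len(drivers) > 1:
        raise RuntimeError("bus contention: " + ", ".join(b.name for b in drivers))
    return drivers[0].output() if drivers else None    # floating bus if nobody drives

cpu, dma = TriStateBuffer("cpu"), TriStateBuffer("dma")
cpu.enabled, cpu.value = True, 1
print(resolve_bus([cpu, dma]))    # -> 1; enabling dma as well would raise a contention error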
https://en.wikipedia.org/wiki/Cryptochrome
Cryptochromes (from the Greek κρυπτός χρώμα, "hidden colour") are a class of flavoproteins found in plants and animals that are sensitive to blue light. They are involved in the circadian rhythms and the sensing of magnetic fields in a number of species. The name cryptochrome was proposed as a portmanteau combining the chromatic nature of the photoreceptor, and the cryptogamic organisms on which many blue-light studies were carried out. The genes Cry1 and Cry2 encode the two cryptochrome proteins CRY1 and CRY2, respectively. Cryptochromes are classified into plant Cry and animal Cry. Animal Cry can be further categorized into insect type (Type I) and mammal-like (Type II). CRY1 is a circadian photoreceptor whereas CRY2 is a clock repressor which represses Clock/Cycle (Bmal1) complex in insects and vertebrates. In plants, blue-light photoreception can be used to cue developmental signals. Besides chlorophylls, cryptochromes are the only proteins known to form photoinduced radical-pairs in vivo. These appear to enable some animals to detect magnetic fields. Cryptochromes have been the focus of several current efforts in optogenetics. Employing transfection, initial studies on yeast have capitalized on the potential of CRY2 heterodimerization to control cellular processes, including gene expression, by light. Discovery Although Charles Darwin first documented plant responses to blue light in the 1880s, it was not until the 1980s that research began to identify the pigment responsible. In 1980, researchers discovered that the HY4 gene of the plant Arabidopsis thaliana was necessary for the plant's blue light sensitivity, and, when the gene was sequenced in 1993, it showed high sequence homology with photolyase, a DNA repair protein activated by blue light. Reference sequence analysis of cryptochrome-1 isoform d shows two conserved domains with photolyase proteins. Isoform d nucleotide positions 6 through 491 show a conserved domain with deoxyribodipyrimidine photol
https://en.wikipedia.org/wiki/Phototropin
Phototropins are photoreceptor proteins (more specifically, flavoproteins) that mediate phototropism responses in various species of algae, fungi and higher plants. Phototropins can be found throughout the leaves of a plant. Along with cryptochromes and phytochromes they allow plants to respond and alter their growth in response to the light environment. Phototropins may also be important for the opening of stomata and the movement of chloroplasts. These blue-light receptors are seen across the entire green plant lineage. When phototropins are hit with blue light, they induce a signal transduction pathway that alters the plant cells' functions in different ways. Phototropins are part of the phototropic sensory system in plants that causes various environmental responses in plants. Specifically, phototropins cause stems to bend towards light and stomata to open. Phototropins have also been shown to affect the movement of chloroplasts inside the cell. In addition, phototropins mediate the first changes in stem elongation in blue light prior to cryptochrome activation. Phototropin is also required for blue-light-mediated transcript destabilization of specific mRNAs in the cell. They are present in guard cells. References Other sources Sensory receptors Signal transduction Biological pigments Integral membrane proteins Molecular biology Plant physiology EC 2.7.11
https://en.wikipedia.org/wiki/N%2CN%27-Dicyclohexylcarbodiimide
N,N′-Dicyclohexylcarbodiimide (DCC) is an organic compound with the chemical formula (C6H11N)2C. It is a waxy white solid with a sweet odor. Its primary use is to couple amino acids during artificial peptide synthesis. The low melting point of this material allows it to be melted for easy handling. It is highly soluble in dichloromethane, tetrahydrofuran, acetonitrile and dimethylformamide, but insoluble in water. Structure and spectroscopy The C−N=C=N−C core of carbodiimides (N=C=N) is linear, being related to the structure of allene. The molecule has idealized C2 symmetry. The N=C=N moiety gives a characteristic IR spectroscopic signature at 2117 cm−1. The 15N NMR spectrum shows a characteristic shift of 275 ppm upfield of nitric acid and the 13C NMR spectrum features a peak at about 139 ppm downfield from TMS. Preparation DCC is produced by the decarboxylation of cyclohexyl isocyanate using phosphine oxides as a catalyst: 2 C6H11NCO → (C6H11N)2C + CO2 Alternative catalysts for this conversion include the highly nucleophilic OP(MeNCH2CH2)3N. Other methods Of academic interest, palladium acetate, iodine, and oxygen can be used to couple cyclohexylamine and cyclohexyl isocyanide. Yields of up to 67% have been achieved using this route: C6H11NC + C6H11NH2 + O2 → (C6H11N)2C + H2O DCC has also been prepared from dicyclohexylurea using a phase transfer catalyst. The disubstituted urea, arenesulfonyl chloride, and potassium carbonate react in toluene in the presence of benzyl triethylammonium chloride to give DCC in 50% yield. Reactions Amide, peptide, and ester formation DCC is a dehydrating agent for the preparation of amides, ketones, and nitriles. In these reactions, DCC hydrates to form dicyclohexylurea (DCU), a compound that is nearly insoluble in most organic solvents and insoluble in water. The majority of the DCU is thus readily removed by filtration, although the last traces can be difficult to eliminate from non-polar products. DCC
https://en.wikipedia.org/wiki/Hydroxyl%20tagging%20velocimetry
Hydroxyl tagging velocimetry (HTV) is a velocimetry method used in humid air flows. The method is often used in high-speed combusting flows because the high velocity and temperature accentuate its advantages over similar methods. HTV uses a laser (often an argon-fluoride excimer laser operating at ~193 nm) to dissociate the water in the flow into H + OH. Before entering the flow optics are used to create a grid of laser beams. The water in the flow is dissociated only where beams of sufficient energy pass through the flow, thus creating a grid in the flow where the concentrations of hydroxyl (OH) are higher than in the surrounding flow. Another laser beam (at either ~248 nm or ~308 nm) in the form of a sheet is also passed through the flow in the same plane as the grid. This laser beam is tuned to a wavelength that causes the hydroxyl molecules to fluoresce in the UV spectrum. The fluorescence is then captured by a charge-coupled device (CCD) camera. Using electronic timing methods the picture of the grid can be captured at nearly the same instant that the grid is created. By delaying the pulse of the fluorescence laser and the camera shot, an image of the grid that has now displaced downstream can be captured. Computer programs are then used to compare the two images and determine the displacement of the grid. By dividing the displacement by the known time delay the two dimensional velocity field (in the plane of the grid) can be determined. Flow ratios, however, are shown to affect the impingement locations, where increased air flow ratios can reduce the required combustor size by isolating reaction products solely within the secondary cavity. Other molecular tagging velocimetry (MTV) methods have used ozone (O3), excited oxygen and nitric oxide as the tag instead of hydroxyl. In the case of ozone the method is known as ozone tagging velocimetry or OTV. OTV has been developed and tested in many room air temperature applications with very accurate tes
https://en.wikipedia.org/wiki/Anti-replay
Anti-replay is a sub-protocol of IPsec, the Internet-layer security suite standardized by the Internet Engineering Task Force (IETF). The main goal of anti-replay is to prevent attackers from injecting packets, or making changes to packets, that travel from a source to a destination. The anti-replay protocol uses a unidirectional security association in order to establish a secure connection between two nodes in the network. Once a secure connection is established, the anti-replay protocol uses packet sequence numbers to defeat replay attacks as follows: When the source sends a message, it adds a sequence number to its packet; the sequence number starts at 0 and is incremented by 1 for each subsequent packet. The destination maintains a 'sliding window' record of the sequence numbers of validated received packets; it rejects all packets which have a sequence number which is lower than the lowest in the sliding window (i.e. too old) or already appears in the sliding window (i.e. duplicates/replays). Accepted packets, once validated, update the sliding window (displacing the lowest sequence number out of the window if it was already full). See also Cryptanalysis Man in the middle attack Replay attack Session ID Transport Layer Security References Internet layer protocols Cryptographic protocols Tunneling protocols Network layer protocols
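The sliding-window check described above can be sketched in Python as follows; the window size and the example packet sequence are illustrative, and a real IPsec implementation would verify each packet's integrity check value before updating the window.

class AntiReplayWindow:
    """Sliding-window duplicate/too-old detection keyed on packet sequence numbers."""
    def __init__(self, size=64):
        self.size = size
        self.highest = -1          # highest sequence number validated so far
        self.seen = set()          # validated sequence numbers inside the window

    def accept(self, seq):
        if self.highest >= 0 and seq <= self.highest - self.size:
            return False                           # too old: falls below the window
        if seq in self.seen:
            return False                           # duplicate: a replayed packet
        self.seen.add(seq)
        if seq > self.highest:
            self.highest = seq                     # slide the window forward
            self.seen = {s for s in self.seen if s > self.highest - self.size}
        return True

w = AntiReplayWindow(size=4)
print([w.accept(s) for s in (1, 2, 2, 5, 1, 6)])   # [True, True, False, True, False, True]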
https://en.wikipedia.org/wiki/IPSANET
IPSANET was a packet switching network written by I. P. Sharp Associates (IPSA). Operation began in May 1976. It initially used the IBM 3705 Communications Controller and Computer Automation LSI-2 computers as nodes. An Intel 80286 based-node was added in 1987. It was called the Beta node. The original purpose was to connect low-speed dumb terminals to a central time sharing host in Toronto. It was soon modified to allow a terminal to connect to an alternate host running the SHARP APL software under license. Terminals were initially either 2741-type machines based on the 14.8 characters/s IBM Selectric typewriter or 30 character/s ASCII machines. Link speed was limited to 9600 bit/s until about 1984. Other services including 2780/3780 Bisync support, remote printing, X.25 gateway and SDLC pipe lines were added in the 1978 to 1984 era. There was no general purpose data transport facility until the introduction of Network Shared Variable Processor (NSVP) in 1984. This allowed APL programs running on different hosts to communicate via Shared Variables. The Beta node improved performance and provided new services not tied to APL. An X.25 interface was the most important of these. It allowed connection to a host which was not running SHARP APL. IPSANET allowed for the development of an early yet advanced e-mail service, 666 BOX, which also became a major product for some time, originally hosted on IPSA's system, and later sold to end users to run on their own machines. NSVP allowed these remote e-mail systems to exchange traffic. The network reached its maximum size of about 300 nodes before it was shut down in 1993. External links IPSANET Archives Computer networking Packets (information technology)
https://en.wikipedia.org/wiki/Wilson%20quotient
The Wilson quotient W(p) is defined as: W(p) = ((p − 1)! + 1)/p. If p is a prime number, the quotient is an integer by Wilson's theorem; moreover, if p is composite, the quotient is not an integer. If p divides W(p), it is called a Wilson prime. The integer values of W(p) are: W(2) = 1 W(3) = 1 W(5) = 5 W(7) = 103 W(11) = 329891 W(13) = 36846277 W(17) = 1230752346353 W(19) = 336967037143579 ... It is known that W(p) satisfies congruences involving the Bernoulli numbers Bk (the k-th Bernoulli number); the first such relation can be derived from the second by subtraction after suitable substitutions. See also Fermat quotient References External links MathWorld: Wilson Quotient Integer sequences
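The definition is easy to check numerically; the following Python snippet computes W(p) for the first few primes and flags the Wilson primes among them.

from math import factorial

def wilson_quotient(p):
    """W(p) = ((p-1)! + 1) / p, an integer exactly when p is prime (Wilson's theorem)."""
    q, r = divmod(factorial(p - 1) + 1, p)
    if r:
        raise ValueError(f"{p} is not prime, so W({p}) is not an integer")
    return q

for p in (2, 3, 5, 7, 11, 13):
    w = wilson_quotient(p)
    print(p, w, "Wilson prime" if w % p == 0 else "")
# 5 and 13 are the first two Wilson primes; the values printed match the list above.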
https://en.wikipedia.org/wiki/ATA%20over%20Ethernet
ATA over Ethernet (AoE) is a network protocol developed by the Brantley Coile Company, designed for simple, high-performance access of block storage devices over Ethernet networks. It is used to build storage area networks (SANs) with low-cost, standard technologies. Protocol description AoE runs on layer 2 Ethernet. AoE does not use Internet Protocol (IP); it cannot be accessed over the Internet or other IP networks. In this regard it is more comparable to Fibre Channel over Ethernet than iSCSI. With fewer protocol layers, this approach makes AoE fast and lightweight. It also makes the protocol relatively easy to implement and offers linear scalability with high performance. The AoE specification is 12 pages compared with iSCSI's 257 pages. AoE Header Format: 0 1 2 3 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 2 3 4 5 6 7 8 9 0 1 +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 0 | Ethernet Destination MAC Address | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 4 | Ethernet Destination (cont) | Ethernet Source MAC Address | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 8 | Ethernet Source MAC Address (cont) | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 12 | Ethernet Type (0x88A2) | Ver | Flags | Error | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 16 | Major | Minor | Command | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 20 | Tag | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ 24 | Arg | +-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+ Ao
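Following the header layout reproduced above, a Python sketch can pack the fixed AoE header fields (everything before Arg); the MAC addresses, shelf/slot (major/minor) numbers, command code and tag below are illustrative values, not taken from the specification.

import struct

AOE_ETHERTYPE = 0x88A2

def build_aoe_header(dst_mac, src_mac, major, minor, command, tag, ver=1, flags=0, error=0):
    """Pack the common AoE header fields shown in the layout above (bytes 0..23, before Arg)."""
    ver_flags = ((ver & 0x0F) << 4) | (flags & 0x0F)     # Ver and Flags share one byte
    return struct.pack("!6s6sHBBHBBI",
                       dst_mac, src_mac, AOE_ETHERTYPE,
                       ver_flags, error,
                       major, minor, command, tag)

hdr = build_aoe_header(dst_mac=b"\xff" * 6,              # broadcast, e.g. for discovery
                       src_mac=bytes.fromhex("020000000001"),
                       major=1, minor=1,                  # shelf/slot address of the target
                       command=1,                         # illustrative command code
                       tag=0xDEADBEEF)
assert len(hdr) == 24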
https://en.wikipedia.org/wiki/Depletion%20region
In semiconductor physics, the depletion region, also called depletion layer, depletion zone, junction region, space charge region or space charge layer, is an insulating region within a conductive, doped semiconductor material where the mobile charge carriers have been diffused away, or have been forced away by an electric field. The only elements left in the depletion region are ionized donor or acceptor impurities. This region of uncovered positive and negative ions is called the depletion region due to the depletion of carriers in this region, leaving none to carry a current. Understanding the depletion region is key to explaining modern semiconductor electronics: diodes, bipolar junction transistors, field-effect transistors, and variable capacitance diodes all rely on depletion region phenomena. Formation in a p–n junction A depletion region forms instantaneously across a p–n junction. It is most easily described when the junction is in thermal equilibrium or in a steady state: in both of these cases the properties of the system do not vary in time; they have been called dynamic equilibrium. Electrons and holes diffuse into regions with lower concentrations of them, much as ink diffuses into water until it is uniformly distributed. By definition, the N-type semiconductor has an excess of free electrons (in the conduction band) compared to the P-type semiconductor, and the P-type has an excess of holes (in the valence band) compared to the N-type. Therefore, when N-doped and P-doped semiconductors are placed together to form a junction, free electrons in the N-side conduction band migrate (diffuse) into the P-side conduction band, and holes in the P-side valence band migrate into the N-side valence band. Following transfer, the diffused electrons come into contact with holes and are eliminated by recombination in the P-side. Likewise, the diffused holes are recombined with free electrons so eliminated in the N-side. The net result is that the diffused elect
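To give a sense of the magnitudes involved, the following Python sketch evaluates the standard textbook expressions for the built-in potential and the zero-bias depletion width of an abrupt silicon p–n junction; the doping concentrations are illustrative assumptions, not values from the article.

import math

q = 1.602e-19               # elementary charge, C
k_B = 1.381e-23             # Boltzmann constant, J/K
eps_si = 11.7 * 8.854e-12   # permittivity of silicon, F/m
n_i = 1.0e16                # intrinsic carrier concentration of Si at 300 K, m^-3 (~1e10 cm^-3)

def depletion_width(N_A, N_D, T=300.0, V=0.0):
    """Abrupt-junction depletion width (m) at applied forward bias V (reverse bias: V < 0)."""
    V_bi = (k_B * T / q) * math.log(N_A * N_D / n_i**2)    # built-in potential, V
    W = math.sqrt(2 * eps_si * (V_bi - V) / q * (N_A + N_D) / (N_A * N_D))
    return V_bi, W

V_bi, W = depletion_width(N_A=1e23, N_D=1e22)   # 1e17 and 1e16 cm^-3, illustrative doping
print(f"built-in potential ≈ {V_bi:.2f} V, zero-bias depletion width ≈ {W * 1e9:.0f} nm")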
https://en.wikipedia.org/wiki/Belief%E2%80%93desire%E2%80%93intention%20software%20model
The belief–desire–intention software model (BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer. Overview In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, desire and intention are both pro-attitudes (mental attitudes concerned with action). He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed. An important aspect of the BDI software model (in terms of its research relevance) is the ex
https://en.wikipedia.org/wiki/Aquatic%20ecosystem
An aquatic ecosystem is an ecosystem found in and around a body of water, in contrast to land-based terrestrial ecosystems. Aquatic ecosystems contain communities of organisms—aquatic life—that are dependent on each other and on their environment. The two main types of aquatic ecosystems are marine ecosystems and freshwater ecosystems. Freshwater ecosystems may be lentic (slow moving water, including pools, ponds, and lakes); lotic (faster moving water, for example streams and rivers); and wetlands (areas where the soil is saturated or inundated for at least part of the time). Types Marine ecosystems Marine coastal ecosystem Marine surface ecosystem Freshwater ecosystems Lentic ecosystem (lakes) Lotic ecosystem (rivers) Wetlands Functions Aquatic ecosystems perform many important environmental functions. For example, they recycle nutrients, purify water, attenuate floods, recharge ground water and provide habitats for wildlife. The biota of an aquatic ecosystem contribute to its self-purification, most notably microorganisms, phytoplankton, higher plants, invertebrates, fish, bacteria, protists, aquatic fungi, and more. These organisms are actively involved in multiple self-purification processes, including organic matter destruction and water filtration. It is crucial that aquatic ecosystems are reliably self-maintained, as they also provide habitats for species that reside in them. In addition to environmental functions, aquatic ecosystems are also used for human recreation, and are very important to the tourism industry, especially in coastal regions. They are also used for religious purposes, such as the worshipping of the Jordan River by Christians, and educational purposes, such as the usage of lakes for ecological study. Biotic characteristics (living components) The biotic characteristics are mainly determined by the organisms that occur. For example, wetland plants may produce dense canopies that cover large areas of sediment—or snails or geese
https://en.wikipedia.org/wiki/Generalized%20method%20of%20moments
In econometrics and statistics, the generalized method of moments (GMM) is a generic method for estimating parameters in statistical models. Usually it is applied in the context of semiparametric models, where the parameter of interest is finite-dimensional, whereas the full shape of the data's distribution function may not be known, and therefore maximum likelihood estimation is not applicable. The method requires that a certain number of moment conditions be specified for the model. These moment conditions are functions of the model parameters and the data, such that their expectation is zero at the parameters' true values. The GMM method then minimizes a certain norm of the sample averages of the moment conditions, and can therefore be thought of as a special case of minimum-distance estimation. The GMM estimators are known to be consistent, asymptotically normal, and most efficient in the class of all estimators that do not use any extra information aside from that contained in the moment conditions. GMM was advocated by Lars Peter Hansen in 1982 as a generalization of the method of moments, introduced by Karl Pearson in 1894. However, these estimators are mathematically equivalent to those based on "orthogonality conditions" (Sargan, 1958, 1959) or "unbiased estimating equations" (Huber, 1967; Wang et al., 1997). Description Suppose the available data consists of T observations {Y1, …, YT}, where each observation Yt is an n-dimensional multivariate random variable. We assume that the data come from a certain statistical model, defined up to an unknown parameter θ. The goal of the estimation problem is to find the “true” value of this parameter, θ0, or at least a reasonably close estimate. A general assumption of GMM is that the data Yt be generated by a weakly stationary ergodic stochastic process. (The case of independent and identically distributed (iid) variables Yt is a special case of this condition.) In order to apply GMM, we need to have "moment conditions",
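A compact Python illustration of the procedure, using two moment conditions for the mean and variance of normally distributed data (an assumed toy model, not an example from the article) and minimizing the quadratic form of their sample averages with an identity weighting matrix; because this toy model is just-identified, the choice of weighting matrix does not affect the estimate.

import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
y = rng.normal(loc=2.0, scale=1.5, size=5_000)      # data generated from the assumed model

def moment_conditions(theta, y):
    """g(theta, y): expectation is zero at the true (mu, sigma^2) for a normal model."""
    mu, sigma2 = theta
    return np.column_stack([y - mu, (y - mu) ** 2 - sigma2])

def gmm_objective(theta, y, W=None):
    g_bar = moment_conditions(theta, y).mean(axis=0)    # sample average of the moments
    W = np.eye(len(g_bar)) if W is None else W
    return g_bar @ W @ g_bar                            # quadratic-form norm to minimize

result = minimize(gmm_objective, x0=np.array([0.0, 1.0]), args=(y,), method="Nelder-Mead")
print("estimated (mu, sigma^2):", result.x)             # should be close to (2.0, 2.25)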
https://en.wikipedia.org/wiki/Skew%20lines
In three-dimensional geometry, skew lines are two lines that do not intersect and are not parallel. A simple example of a pair of skew lines is the pair of lines through opposite edges of a regular tetrahedron. Two lines that both lie in the same plane must either cross each other or be parallel, so skew lines can exist only in three or more dimensions. Two lines are skew if and only if they are not coplanar. General position If four points are chosen at random uniformly within a unit cube, they will almost surely define a pair of skew lines. After the first three points have been chosen, the fourth point will define a non-skew line if, and only if, it is coplanar with the first three points. However, the plane through the first three points forms a subset of measure zero of the cube, and the probability that the fourth point lies on this plane is zero. If it does not, the lines defined by the points will be skew. Similarly, in three-dimensional space a very small perturbation of any two parallel or intersecting lines will almost certainly turn them into skew lines. Therefore, any four points in general position always form skew lines. In this sense, skew lines are the "usual" case, and parallel or intersecting lines are special cases. Formulas Testing for skewness If each line in a pair of skew lines is defined by two points that it passes through, then these four points must not be coplanar, so they must be the vertices of a tetrahedron of nonzero volume. Conversely, any two pairs of points defining a tetrahedron of nonzero volume also define a pair of skew lines. Therefore, a test of whether two pairs of points define skew lines is to apply the formula for the volume of a tetrahedron in terms of its four vertices. Denoting one point as the 1×3 vector a whose three elements are the point's three coordinate values, and likewise denoting b, c, and d for the other points, we can check if the line through a and b is skew to the line through c and d by seeing if the tet
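The coplanarity test described above amounts to a 3×3 determinant (six times the signed volume of the tetrahedron); a short Python function makes this concrete, with a tolerance to absorb floating-point error.

import numpy as np

def lines_are_skew(a, b, c, d, tol=1e-9):
    """Line through points a,b versus line through c,d: skew iff the four points
    span a tetrahedron of nonzero volume (assuming each point pair is distinct)."""
    a, b, c, d = (np.asarray(p, dtype=float) for p in (a, b, c, d))
    volume6 = np.linalg.det(np.stack([b - a, c - a, d - a]))   # 6 * signed tetrahedron volume
    return abs(volume6) > tol

print(lines_are_skew((0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 2)))   # True: the lines are skew
print(lines_are_skew((0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)))   # False: coplanar (parallel)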
https://en.wikipedia.org/wiki/Flue%20gas
Flue gas is the gas exiting to the atmosphere via a flue, which is a pipe or channel for conveying exhaust gases, as from a fireplace, oven, furnace, boiler or steam generator. It often refers to the exhaust gas of combustion at power plants. Technology is available to remove pollutants from flue gas at power plants. Combustion of fossil fuels is a common source of flue gas. They are usually combusted with ambient air, with the largest part of the flue gas from most fossil-fuel combustion being nitrogen, carbon dioxide, and water vapor. Description Flue gas is the gas exiting to the atmosphere via a flue, which is a pipe or channel for conveying exhaust gases from combustion, as from a fireplace, oven, furnace, boiler or steam generator. Power plants Quite often, the flue gas refers to the combustion exhaust gas produced at power plants. Its composition depends on what is being burned, but it will usually consist of mostly nitrogen (typically more than two-thirds) derived from the combustion of air, carbon dioxide (), and water vapor as well as excess oxygen (also derived from the combustion air). It further contains a small percentage of a number of pollutants, such as particulate matter (like soot), carbon monoxide, nitrogen oxides, and sulfur oxides. Scrubbing At power plants, flue gas is often treated with a series of chemical processes and scrubbers, which remove pollutants. Electrostatic precipitators or fabric filters remove particulate matter and flue-gas desulfurization captures the sulfur dioxide produced by burning fossil fuels, particularly coal. Nitrogen oxides are treated either by modifications to the combustion process to prevent their formation, or by high temperature or catalytic reaction with ammonia or urea. In either case, the aim is to produce nitrogen gas, rather than nitrogen oxides. In the United States, there is a rapid deployment of technologies to remove mercury from flue gas—typically by absorption on sorbents or by capture in in
https://en.wikipedia.org/wiki/Evil%20bit
The evil bit is a fictional IPv4 packet header field proposed in a humorous April Fools' Day RFC from 2003, authored by Steve Bellovin. The Request for Comments recommended that the last remaining unused bit, the "Reserved Bit" in the IPv4 packet header, be used to indicate whether a packet had been sent with malicious intent, thus making computer security engineering an easy problem: simply ignore any messages with the evil bit set and trust the rest. Influence The evil bit has become a synonym for all attempts to seek simple technical solutions for difficult human social problems which require the willing participation of malicious actors, in particular efforts to implement Internet censorship using simple technical solutions. As a joke, FreeBSD implemented support for the evil bit that day but removed the changes the next day. A Linux patch implementing the iptables module "ipt_evil" was posted the next year. Furthermore, a patch for FreeBSD 7 is available and is kept up-to-date. There is an extension to the XMPP protocol, "XEP-0076: Malicious Stanzas", inspired by the evil bit. This RFC has also been quoted in the otherwise completely serious RFC 3675, ".sex Considered Dangerous", which may have caused the proponents of .xxx to wonder whether the Internet Engineering Task Force (IETF) was commenting on their application for a top-level domain (TLD); the document was not related to their application. For April Fools' 2010, Google added an &evil=true parameter to requests through the Ajax APIs. See also Technological fix Do Not Track HTTP 451 References 2003 in computing April Fools' Day jokes Computer network security Computer humor Censorship 2003 hoaxes
https://en.wikipedia.org/wiki/Divergent%20evolution
Divergent evolution or divergent selection is the accumulation of differences between closely related populations within a species, sometimes leading to speciation. Divergent evolution is typically exhibited when two populations become separated by a geographic barrier (such as in allopatric or peripatric speciation) and experience different selective pressures that drive adaptations to their new environment. After many generations and continual evolution, the populations become less able to interbreed with one another. The American naturalist J. T. Gulick (1832–1923) was the first to use the term "divergent evolution", with its use becoming widespread in modern evolutionary literature. Classic examples of divergence in nature are the adaptive radiation of the finches of the Galapagos or the coloration differences in populations of a species that live in different habitats such as with pocket mice and fence lizards. The term can also be applied in molecular evolution, such as to proteins that derive from homologous genes. Both orthologous genes (resulting from a speciation event) and paralogous genes (resulting from gene duplication) can illustrate divergent evolution. Through gene duplication, it is possible for divergent evolution to occur between two genes within a species. Similarities between species that have diverged are due to their common origin, so such similarities are homologies. In contrast, convergent evolution arises when an adaptation has arisen independently, creating analogous structures such as the wings of birds and of insects. Creation, definition, and usage The term divergent evolution is believed to have been first used by J. T. Gulick. Divergent evolution is commonly defined as what occurs when two groups of the same species evolve different traits within those groups in order to accommodate for differing environmental and social pressures. Various examples of such pressures can include predation, food supplies, and competition for mates
https://en.wikipedia.org/wiki/Littlewood%E2%80%93Offord%20problem
In the mathematical field of combinatorial geometry, the Littlewood–Offord problem is the problem of determining the number of subsums of a set of vectors that fall in a given convex set. More formally, if V is a vector space of dimension d, the problem is to determine, given a finite subset of vectors S and a convex subset A, the number of subsets of S whose summation is in A. The first upper bound for this problem was proven (for d = 1 and d = 2) in 1938 by John Edensor Littlewood and A. Cyril Offord. This Littlewood–Offord lemma states that if S is a set of n real or complex numbers of absolute value at least one and A is any disc of radius one, then not more than O(2^n log n/√n) of the 2^n possible subsums of S fall into the disc. In 1945 Paul Erdős improved the upper bound for d = 1 to the central binomial coefficient C(n, ⌊n/2⌋) using Sperner's theorem. This bound is sharp; equality is attained when all vectors in S are equal. In 1966, Kleitman showed that the same bound held for complex numbers. In 1970, he extended this to the setting when V is a normed space. Suppose S = {v1, …, vn}. By subtracting (v1 + ⋯ + vn)/2 from each possible subsum (that is, by changing the origin and then scaling by a factor of 2), the Littlewood–Offord problem is equivalent to the problem of determining the number of sums of the form ε1v1 + ⋯ + εnvn that fall in the (suitably translated and scaled) target set A, where each εi takes the value 1 or −1. This makes the problem into a probabilistic one, in which the question is of the distribution of these random vectors, and what can be said knowing nothing more about the vi. References Combinatorics Probability problems Lemmas Mathematical problems
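For small n the d = 1 statement can be verified by brute force in Python: with all |vi| ≥ 1, no open interval of length 2 captures more than C(n, ⌊n/2⌋) of the 2^n subsums. The random magnitudes below are an arbitrary illustrative choice.

from itertools import combinations
from math import comb
import random

def max_subsums_in_unit_disc(values):
    """Largest number of subset sums that fit in an open interval of length 2
    (the d = 1 analogue of a disc of radius one)."""
    sums = sorted(sum(c) for r in range(len(values) + 1)
                  for c in combinations(values, r))
    best, lo = 0, 0
    for hi in range(len(sums)):
        while sums[hi] - sums[lo] >= 2:      # keep the window span strictly below 2
            lo += 1
        best = max(best, hi - lo + 1)
    return best

rng = random.Random(1)
n = 12
values = [rng.choice([-1, 1]) * rng.uniform(1, 3) for _ in range(n)]
print(max_subsums_in_unit_disc(values), "<=", comb(n, n // 2))   # Erdős's sharp bound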
https://en.wikipedia.org/wiki/IEEE%20Transactions%20on%20Information%20Theory
IEEE Transactions on Information Theory is a monthly peer-reviewed scientific journal published by the IEEE Information Theory Society. It covers information theory and the mathematics of communications. It was established in 1953 as IRE Transactions on Information Theory. The editor-in-chief is Muriel Médard (Massachusetts Institute of Technology). As of 2007, the journal allows the posting of preprints on arXiv. According to Jack van Lint, it is the leading research journal in the whole field of coding theory. A 2006 study using the PageRank network analysis algorithm found that, among hundreds of computer science-related journals, IEEE Transactions on Information Theory had the highest ranking and was thus deemed the most prestigious. ACM Computing Surveys, with the highest impact factor, was deemed the most popular. References External links List of past editors-in-chief Engineering journals Information theory Transactions on Information Theory Computer science journals Cryptography journals Academic journals established in 1953
https://en.wikipedia.org/wiki/Impunity%20game
The impunity game is a simple game in experimental economics, similar to the Dictator Game. The first player "the proposer" chooses between two possible divisions of some endowment (such as a cash prize): The first choice will be a very unequal division, giving most of the endowment to herself, and sharing little with the second player (the partner or the "responder"). The second choice is a more even division, giving a "fair" proportion of the initial pie to the responder, and keeping the rest for herself. The second and final move of the game is in the hands of the responder: he can accept or reject the amount offered. Unlike the ultimatum game, this has no effect on the proposer, who always keeps the share she originally awarded herself. This game has been studied less intensively than the other standards of experimental economics, but appears to produce the interesting result that proposers typically take the "least fair" option, keeping most of the reward for themselves, a conclusion sharply in contrast to that implied by the ultimatum or dictator games. Notes 1998 Bolton, Katok, Zwick, International Journal of Game Theory, for a dictator game comparison. References Bolton, Gary E., Elena Katok, Rami Zwick. «Dictator game giving: Rules of fairness versus acts of kindness». International Journal of Game Theory 27, n.º 2 (1998): 269-299. Takagishi, Haruto, Taiki Takahashi, Akira Toyomura, Nina Takashino, Michiko Koizumi, Toshio Yamagishi. «Neural Correlates of the Rejection of Unfair Offers in the Impunity Game». Neuro Endocrinology Letters 30, n.o 4 (2009): 496-500. Non-cooperative games
https://en.wikipedia.org/wiki/Simple%20Network%20Paging%20Protocol
Simple Network Paging Protocol (SNPP) is a protocol that defines a method by which a pager can receive a message over the Internet. It is supported by most major paging providers, and serves as an alternative to the paging modems used by many telecommunications services. The protocol was most recently described in RFC 1861. It is a fairly simple protocol that may run over TCP port 444 and sends out a page using only a handful of well-documented commands. Connecting and using SNPP servers It is relatively easy to connect to an SNPP server, requiring only a telnet client and the address of the SNPP server. Port 444 is standard for SNPP servers, and it is free to use from the sender's point of view. Maximum message length can be carrier-dependent. Once connected, a user can simply enter the commands to send a message to a pager connected to that network. For example, you could issue the PAGE command with the number of the device to which you wish to send the message. After that, issue the MESS command with the text of the message you wish to send. You can then issue the SEND command to send out the message to the pager and then QUIT, or send another message to a different device. The protocol also allows you to issue multiple PAGE commands per message, stacking them one after the other, effectively allowing you to send the same message to several devices on that network with one MESS and SEND command pair. References External links rfc-editor.org - RFC 1861 Network protocols
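The command exchange described above is simple enough to script directly over a TCP connection; the following Python sketch sends one page using the PAGE/MESS/SEND/QUIT sequence. The gateway host name and pager number are hypothetical, and response handling is deliberately minimal (the reply lines are just printed).

import socket

def send_page(server, pager_id, message, port=444):
    """Send a single page using the basic PAGE/MESS/SEND/QUIT exchange (RFC 1861)."""
    with socket.create_connection((server, port), timeout=10) as sock:
        f = sock.makefile("rwb")

        def command(line):
            if line is not None:
                f.write(line.encode("ascii") + b"\r\n")
                f.flush()
            reply = f.readline().decode("ascii", "replace").rstrip()
            print(reply)                     # a numeric status line from the gateway
            return reply

        command(None)                        # read the server greeting first
        command(f"PAGE {pager_id}")
        command(f"MESS {message}")
        command("SEND")
        command("QUIT")

# Hypothetical gateway and pager number, purely for illustration:
# send_page("snpp.example.net", "5551234", "Server room temperature alarm")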
https://en.wikipedia.org/wiki/Rapoport%27s%20rule
Rapoport's rule is an ecogeographical rule that states that latitudinal ranges of plants and animals are generally smaller at lower latitudes than at higher latitudes. Background Stevens (1989) named the rule after Eduardo H. Rapoport, who had earlier provided evidence for the phenomenon for subspecies of mammals (Rapoport 1975, 1982). Stevens used the rule to "explain" greater species diversity in the tropics in the sense that latitudinal gradients in species diversity and the rule have identical exceptional data and so must have the same underlying cause. Narrower ranges in the tropics would facilitate more species to coexist. He later extended the rule to altitudinal gradients, claiming that altitudinal ranges are greatest at greater altitudes (Stevens 1992), and to depth gradients in the oceans (Stevens 1996). The rule has been the focus of intense discussion and given much impetus to exploring distributional patterns of plants and animals. Stevens' original paper has been cited about 330 times in the scientific literature. Generality Support for the generality of the rule is at best equivocal. For example, marine teleost fishes have the greatest latitudinal ranges at low latitudes. In contrast, freshwater fishes do show the trend, although only above a latitude of about 40 degrees North. Some subsequent papers have found support for the rule; others, probably even more numerous, have found exceptions to it. For most groups that have been shown to follow the rule, it is restricted to or at least most distinct above latitudes of about 40–50 degrees. Rohde therefore concluded that the rule describes a local phenomenon. Computer simulations using the Chowdhury Ecosystem Model did not find support for the rule. Explanations Rohde (1996) explained the fact that the rule is restricted to very high latitudes by effects of glaciations which have wiped out species with narrow ranges, a view also expressed by Brown (1995). Another explanation of Rapoport's rule is
https://en.wikipedia.org/wiki/Crossed%20module
In mathematics, and especially in homotopy theory, a crossed module consists of groups and , where acts on by automorphisms (which we will write on the left, , and a homomorphism of groups that is equivariant with respect to the conjugation action of on itself: and also satisfies the so-called Peiffer identity: Origin The first mention of the second identity for a crossed module seems to be in footnote 25 on p. 422 of J. H. C. Whitehead's 1941 paper cited below, while the term 'crossed module' is introduced in his 1946 paper cited below. These ideas were well worked up in his 1949 paper 'Combinatorial homotopy II', which also introduced the important idea of a free crossed module. Whitehead's ideas on crossed modules and their applications are developed and explained in the book by Brown, Higgins, Sivera listed below. Some generalisations of the idea of crossed module are explained in the paper of Janelidze. Examples Let be a normal subgroup of a group . Then, the inclusion is a crossed module with the conjugation action of on . For any group G, modules over the group ring are crossed G-modules with d = 0. For any group H, the homomorphism from H to Aut(H) sending any element of H to the corresponding inner automorphism is a crossed module. Given any central extension of groups the surjective homomorphism together with the action of on defines a crossed module. Thus, central extensions can be seen as special crossed modules. Conversely, a crossed module with surjective boundary defines a central extension. If (X,A,x) is a pointed pair of topological spaces (i.e. is a subspace of , and is a point in ), then the homotopy boundary from the second relative homotopy group to the fundamental group, may be given the structure of crossed module. The functor satisfies a form of the van Kampen theorem, in that it preserves certain colimits. The result on the crossed module of a pair can also be phrased as: if is a pointed fibration
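The inline formulas in the definition above did not survive extraction. The following is a reconstruction of the standard axioms as they are usually stated, with the notation (G, H, d) and the action of g in G on h in H written on the left as an assumption:

```latex
% Crossed module (G, H, d): d : H -> G is a group homomorphism and G acts on
% H by automorphisms, written g.h. The two standard axioms are:
\[
  d(\,{}^{g}h\,) \;=\; g\, d(h)\, g^{-1}
  \qquad \text{(equivariance with respect to conjugation),}
\]
\[
  {}^{d(h)}h' \;=\; h\, h'\, h^{-1}
  \qquad \text{(Peiffer identity),}
\]
\[
  \text{for all } g \in G,\ h, h' \in H .
\]
```

For example, a normal subgroup N of G with the inclusion map and the conjugation action of G on N satisfies both axioms, matching the first example in the text.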
https://en.wikipedia.org/wiki/Conic%20constant
In geometry, the conic constant (or Schwarzschild constant, after Karl Schwarzschild) is a quantity describing conic sections, and is represented by the letter K. The constant is given by K = −e², where e is the eccentricity of the conic section. The equation for a conic section with apex at the origin and tangent to the y axis is y² − 2Rx + (K + 1)x² = 0, or alternately x = y² / (R + √(R² − (K + 1)y²)), where R is the radius of curvature at x = 0. This formulation is used in geometric optics to specify oblate elliptical (K > 0), spherical (K = 0), prolate elliptical (−1 < K < 0), parabolic (K = −1), and hyperbolic (K < −1) lens and mirror surfaces. When the paraxial approximation is valid, the optical surface can be treated as a spherical surface with the same radius. Some non-optical design references use the letter p as the conic constant. In these cases, p = K + 1. References Mathematical constants Conic sections Geometrical optics
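In lens-design practice the same relation is usually written as a surface sag equation. The form below is an equivalent rearrangement of the apex equation above (with r the radial distance from the optical axis and z the sag), given here as a supplementary sketch rather than a formula from this article:

```latex
% Conic-section sag form commonly used in geometric optics; equivalent to
% the apex equation y^2 - 2Rx + (K+1)x^2 = 0, with r radial distance, z sag.
\[
  z(r) \;=\; \frac{r^{2}}{R\left(1 + \sqrt{\,1 - (1+K)\,r^{2}/R^{2}\,}\right)},
  \qquad K = -e^{2} .
\]
```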
https://en.wikipedia.org/wiki/Teleoperation
Teleoperation (or remote operation) indicates operation of a system or machine at a distance. It is similar in meaning to the phrase "remote control" but is usually encountered in research, academia and technology. It is most commonly associated with robotics and mobile robots but can be applied to a whole range of circumstances in which a device or machine is operated by a person from a distance. Teleoperation can be considered a human-machine system. For example, ArduPilot provides a spectrum of autonomy ranging from manual control to full autopilot for autonomous vehicles. The term teleoperation is in use in research and technical communities as a standard term for referring to operation at a distance. This is as opposed to telepresence which is a less standard term and might refer to a whole range of existence or interaction that include a remote connotation. History The 19th century saw many inventors working on remotely operated weapons (torpedoes) including prototypes built by John Louis Lay (1872), John Ericsson (1873), Victor von Scheliha (1873), and the first practical wire guided torpedo, the Brennan torpedo, patented by Louis Brennan in 1877. In 1898, Nikola Tesla demonstrated a remotely controlled boat with a patented wireless radio guidance system that he tried to market to the United States military, but was turned down. Teleoperation is now moving into the hobby industry with first-person view (FPV) equipment. FPV equipment mounted on hobby cars, planes and helicopters give a TV-style transmission back to the operator, extending the range of the vehicle to greater than line-of-sight range. ALOHA (A Low-cost Open-source Hardware System for Bimanual Teleoperation) remote manipulator robot is used for data collection to train AI to perform learned skills. Examples There are several particular types of systems that are often controlled remotely: Entertainment systems (i.e. televisions, VCRs, DVD players etc.) are often controlled remotely vi
https://en.wikipedia.org/wiki/Tyan
Tyan Computer Corporation (泰安電腦科技股份有限公司; also known as Tyan Business Unit, or TBU) is a subsidiary of MiTAC International, and a manufacturer of computer motherboards, including models for both AMD and Intel processors. The company develops and produces high-end server, SMP, and desktop barebones systems, as well as providing design and production services to tier 1 global OEMs and a number of other regional OEMs. Founding The company was founded in 1989 by Dr. T. Symon Chang, a veteran of IBM and Intel. At that time, Dr. Chang saw a gap in the market, with no strong players in the SMP server segment, and so founded Tyan to develop, produce and deliver such products, starting with a dual Intel Pentium-series motherboard as well as a number of other single-processor motherboards, all geared towards server applications. Since then, Tyan has produced a number of single- and multi-processor (as well as multi-core) products using technology from many well-known companies (e.g. Intel, AMD, NVIDIA, Broadcom and many more). Notable design wins include that of Dawning Corporation for the fastest supercomputer (twice); first to market with a dual AMD Athlon MP server platform; winner of the Maximum PC Kick-Ass Award (twice) for contributions to the Dream Machine (most recently, the 2005 edition); and first to market with an eight (8) GPU server platform (the FT72-B7015). Later company history Tyan is headquartered in Taipei, Taiwan, spread across three buildings in the Nei-Hu industrial district. All three buildings belong to the parent company, MiTAC. The North American headquarters is in Newark, California, which also serves as MiTAC's North American headquarters. The merger with MiTAC, a Taiwanese OEM which develops and produces a range of products (including servers, notebooks, consumer electronics products, networking and educational products, as well as providing contract manufacturing services), was announced in Ma
https://en.wikipedia.org/wiki/SIGTRAN
SIGTRAN is the name, derived from signaling transport, of the former Internet Engineering Task Force (IETF) working group that produced specifications for a family of protocols that provide reliable datagram service and user layer adaptations for Signaling System 7 (SS7) and ISDN communications protocols. The SIGTRAN protocols are an extension of the SS7 protocol family, and they support the same application and call management paradigms as SS7. However, the SIGTRAN protocols use an Internet Protocol (IP) transport called Stream Control Transmission Protocol (SCTP), instead of TCP or UDP. Indeed, the most significant protocol defined by the SIGTRAN group is SCTP, which is used to carry PSTN signaling over IP. The SIGTRAN group was significantly influenced by telecommunications engineers intent on using the new protocols for adapting IP networks to the PSTN with special regard to signaling applications. SCTP has since found applications beyond its original purpose wherever reliable datagram service is desired. SIGTRAN has been published in RFC 2719, under the title Framework Architecture for Signaling Transport. RFC 2719 also defines the concept of a signaling gateway (SG), which converts Common Channel Signaling (CCS) messages from SS7 to SIGTRAN. Implemented in a variety of network elements including softswitches, the SG function can provide significant value to existing common channel signaling networks, leveraging investments associated with SS7 and delivering the cost/performance values associated with IP transport. SIGTRAN protocols The SIGTRAN family of protocols includes: Stream Control Transmission Protocol (SCTP), RFC 2960, RFC 3873, RFC 4166, RFC 4960. ISDN User Adaptation (IUA), RFC 4233, RFC 5133. Message Transfer Part 2 (MTP2) User Peer-to-Peer Adaptation Layer (M2PA), RFC 4165. Message Transfer Part 2 User Adaptation Layer (M2UA), RFC 3331. Message Transfer Part 3 User Adaptation Layer (M3UA), RFC 4666. Signalling Connection Control Part (SCCP) User Adaptation (SUA
https://en.wikipedia.org/wiki/CableACE%20Award
The CableACE Award (earlier known as the ACE Awards; ACE was an acronym for "Award for Cable Excellence") is a defunct award that was given by what was then the National Cable Television Association from 1978 to 1997 to honor excellence in American cable television programming. The trophy itself was shaped as a glass spade, alluding to the Ace of spades. History The CableACE was created to serve as the cable industry's counterpart to broadcast television's Primetime Emmy Awards. Until the 40th ceremony in 1988, the Emmys refused to honor cable programming. For much of its existence, the ceremony aired on a simulcast on as many as twelve cable networks in some years. The last few years found the ceremony awarded solely to one network, usually Lifetime or TBS. In 1992, the award's official name was changed from ACE to CableACE, agreeing to do so to reduce confusion with the American Cinema Editors (ACE) society. By 1997, the Emmys began to reach a tipping point, where cable programming had grown to hold much more critical acclaim over broadcast programming, and met an even parity, a position that would only hold for a short time before cable programming began to dominate the categories of the Primetime Emmys. Few attended the national CableACE Awards ceremony in November 1997, and the CableACE show had a low 0.6 rating on TNT, compared with a 1.2 rating the year before, while the Emmys had a 13.5 rating that year. Smaller cable networks called for the CableACEs to be saved as their only real forum for recognition. In April 1998, members of the NCTA chose to end the CableACEs. Judging Professionals in the television industry were randomly selected to be judges. A Universal City hotel would be selected, where several rooms would be rented for the day. Individual rooms would be designated for each award category. Judges were discouraged from leaving the rooms at any time during the day-long judging. There were usually eight to 12 judges for each category. Dependi
https://en.wikipedia.org/wiki/Epitaxial%20wafer
An epitaxial wafer (also called epi wafer, epi-wafer, or epiwafer) is a wafer of semiconducting material made by epitaxial growth (epitaxy) for use in photonics, microelectronics, spintronics, or photovoltaics. The epi layer may be the same material as the substrate, typically monocrystaline silicon, or it may be a silicon dioxide (SoI) or a more exotic material with specific desirable qualities. The purpose of epitaxy is to perfect the crystal structure over the bare substrate below and improve the wafer surface's electrical characteristics, making it suitable for highly complex microprocessors and memory devices. History Silicon epi wafers were first developed around 1966 and achieved commercial acceptance by the early 1980s. Methods for growing the epitaxial layer on monocrystalline silicon or other wafers include: various types of chemical vapor deposition (CVD) classified as Atmospheric pressure CVD (APCVD) or metal organic chemical vapor deposition (MOCVD), as well as molecular beam epitaxy (MBE). Two "kerfless" methods (without abrasive sawing) for separating the epitaxial layer from the substrate are called "implant-cleave" and "stress liftoff". A method applicable when the epi-layer and substrate are the same material employs ion implantation to deposit a thin layer of crystal impurity atoms and resulting mechanical stress at the precise depth of the intended epi layer thickness. The induced localized stress provides a controlled path for crack propagation in the following cleavage step. In the dry stress lift-off process applicable when the epi-layer and substrate are suitably different materials, a controlled crack is driven by a temperature change at the epi/wafer interface purely by the thermal stresses due to the mismatch in thermal expansion between the epi layer and substrate, without the necessity for any external mechanical force or tool to aid crack propagation. It was reported that this process yields single atomic plane cleavage, reducing the
https://en.wikipedia.org/wiki/Latin%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
Many letters of the Latin alphabet, both capital and small, are used in mathematics, science, and engineering to denote by convention specific or abstracted constants, variables of a certain type, units, multipliers, or physical entities. Certain letters, when combined with special formatting, take on special meaning. Below is an alphabetical list of the letters of the alphabet with some of their uses. The field in which the convention applies is mathematics unless otherwise noted. Aa A represents: the first point of a triangle the digit "10" in hexadecimal and other positional numeral systems with a radix of 11 or greater the unit ampere for electric current in physics the area of a figure the mass number or nucleon number of an element in chemistry the Helmholtz free energy of a closed thermodynamic system of constant pressure and temperature a vector potential, in electromagnetics it can refer to the magnetic vector potential an Abelian group in abstract algebra the Glaisher–Kinkelin constant atomic weight, denoted by Ar work in classical mechanics the pre-exponential factor in the Arrhenius Equation electron affinity represents the algebraic numbers or affine space in algebraic geometry. A blood type A spectral type a represents: the first side of a triangle (opposite point A) the scale factor of the expanding universe in cosmology the acceleration in mechanics equations the first constant in a linear equation a constant in a polynomial the unit are for area (100 m2) the unit prefix atto (10−18) the first term in a sequence or series Reflectance Bb B represents: the digit "11" in hexadecimal and other positional numeral systems with a radix of 12 or greater the second point of a triangle a ball (also denoted by ℬ () or ) a basis of a vector space or of a filter (both also denoted by ℬ ()) in econometrics and time-series statistics it is often used for the backshift or lag operator, the formal parameter of the lag polynomial the magnetic field, denoted
https://en.wikipedia.org/wiki/Two-fluid%20model
Two-fluid model is a macroscopic traffic flow model to represent traffic in a town/city or metropolitan area, put forward in the 1970s by Ilya Prigogine and Robert Herman. There is also a two-fluid model which helps explain the behavior of superfluid helium. This model states that there will be two components in liquid helium below its lambda point (the temperature where superfluid forms). These components are a normal fluid and a superfluid component. Each liquid has a different density and together their sum makes the total density, which remains constant. The ratio of superfluid density to the total density increases as the temperature approaches absolute zero. External links Two Fluid Model of Superfluid Helium References Mathematical modeling Traffic flow Superfluidity
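The density statement in the superfluid-helium version of the model is often written out explicitly. The relations below are a sketch of the usual notation (rho_n for the normal component, rho_s for the superfluid component, T_lambda for the lambda point), consistent with the description above:

```latex
% Two-fluid model of He II: total density is the sum of a normal and a
% superfluid component; the superfluid fraction grows as T -> 0.
\[
  \rho \;=\; \rho_n + \rho_s ,
  \qquad
  \frac{\rho_s}{\rho} \to 1 \ \text{as } T \to 0 ,
  \qquad
  \rho_s = 0 \ \text{at } T = T_\lambda .
\]
```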
https://en.wikipedia.org/wiki/Color%20model
A color model is an abstract mathematical model describing the way colors can be represented as tuples of numbers, typically as three or four values or color components. When this model is associated with a precise description of how the components are to be interpreted (viewing conditions, etc.), taking account of visual perception, the resulting set of colors is called "color space." This article describes ways in which human color vision can be modeled, and discusses some of the models in common use. Tristimulus color space One can picture this space as a region in three-dimensional Euclidean space if one identifies the x, y, and z axes with the stimuli for the long-wavelength (L), medium-wavelength (M), and short-wavelength (S) light receptors. The origin, (S,M,L) = (0,0,0), corresponds to black. White has no definite position in this diagram; rather it is defined according to the color temperature or white balance as desired or as available from ambient lighting. The human color space is a horse-shoe-shaped cone such as shown here (see also CIE chromaticity diagram below), extending from the origin to, in principle, infinity. In practice, the human color receptors will be saturated or even be damaged at extremely high light intensities, but such behavior is not part of the CIE color space and neither is the changing color perception at low light levels (see: Kruithof curve). The most saturated colors are located at the outer rim of the region, with brighter colors farther removed from the origin. As far as the responses of the receptors in the eye are concerned, there is no such thing as "brown" or "gray" light. The latter color names refer to orange and white light respectively, with an intensity that is lower than the light from surrounding areas. One can observe this by watching the screen of an overhead projector during a meeting: one sees black lettering on a white background, even though the "black" has in fact not become darker than the white scre
https://en.wikipedia.org/wiki/Socket%205
Socket 5 was created for the second generation of Intel P5 Pentium processors operating at speeds from 75 to 133 MHz as well as certain Pentium OverDrive and Pentium MMX processors with core voltage 3.3 V. It superseded the earlier Socket 4. It was released in March 1994. Consisting of 320 pins, this was the first socket to use a staggered pin grid array, or SPGA, which allowed the chip's pins to be spaced closer together than earlier sockets. Socket 5 was replaced by Socket 7 in 1995. External links Differences between Socket 5 and Socket 7 (archived) See also List of Intel microprocessors List of AMD microprocessors References Socket 005
https://en.wikipedia.org/wiki/SMIF%20%28interface%29
SMIF (Standard Mechanical Interface) is an isolation technology developed in the 1980s by a group known as the "micronauts" at Hewlett-Packard in Palo Alto. The system is used in semiconductor wafer fabrication and cleanroom environments. It is a SEMI standard. Development The core development team was led by Ulrich Kaempf as engineering manager, under the direction of Mihir Parikh. The core team that developed the technology was driven by Barclay Tullis, who held most of the patents, with Dave Thrasher, who later joined the Silicon Valley Group, and Thomas Atchison, a member of the technical staff under the direction of Barclay Tullis. Parikh later provided the technology to SEMI, then licensed a copy for himself and spun out Asyst Technologies to provide the technology commercially. Asyst Technologies was subsequently acquired by Brooks Automation, which offers the technology in its Versaport; the interface itself remained the same after the acquisition. Use The purpose of SMIF pods is to isolate wafers from contamination by providing a miniature environment with controlled airflow, pressure and particle count. SMIF pods can be accessed by automated mechanical interfaces on production equipment. The wafers therefore remain in a carefully controlled environment whether in the SMIF pod or in a tool, without being exposed to the surrounding airflow. Each SMIF pod contains a wafer cassette in which the wafers are stored horizontally. The bottom surface of the pod is the opening door, and when a SMIF pod is placed on a load port, the bottom door and cassette are lowered into the tool so that the wafers can be removed. Both wafers and reticles can be handled by SMIF pods in a semiconductor fabrication environment. Used in lithographic tools, reticles or photomasks contain the image that is exposed on a coated wafer in one processing step of a complete integrated semiconductor manufacturing cycle. Because reticles are linked so directly with wafer processing, they also require steps to protect them from contamination
https://en.wikipedia.org/wiki/ShadowCrew
ShadowCrew was a cybercrime forum that operated under the domain name ShadowCrew.com between August 2002 and November 2004. Origins The concept of the ShadowCrew was developed in early 2002 during a series of chat sessions between Brett Johnson (GOllumfun), Seth Sanders (Kidd), and Kim Marvin Taylor (MacGayver). The ShadowCrew website also contained a number of sub-forums on the latest information on hacking tricks, social engineering, credit card fraud, virus development, scams, and phishing. Organizational structure ShadowCrew emerged early in 2002 from another underground site, counterfeitlibrary.com, which was run by Brett Johnson and would be followed up by carderplanet.com owned by Dmitry Golubov a.k.a. Script, a website primarily in the Russian language. The site also facilitated the sale of drugs wholesale. During its early years, the site was hosted in Hong Kong, but shortly before CumbaJohnny (Albert Gonzalez)'s arrest, the server was in his possession somewhere in New Jersey. Aftermath and legacy ShadowCrew was the forerunner of today's cybercrime forums and marketplaces. The structure, marketplace, review system, and other innovations began when Shadowcrew laid the basis of today's underground forums and marketplaces. Likewise, many of today's current scams and computer crimes began with Counterfeitlibrary and Shadowcrew. The site flourished from the time it opened in 2002 until its demise in late October 2004. Even though the site was booming with criminal activity and all seemed well, the members did not know what was going on behind the scenes. Federal agents received their "big break" when they found CumbaJohnny aka Albert Gonzalez. Upon Cumba's arrest, he immediately turned and started working with federal agents. From April 2003 to October 2004, Cumba assisted in gathering information and monitoring the site and those who utilized it. He started by taking out many of the Russians who were hacking databases and selling counterfeit credit cards
https://en.wikipedia.org/wiki/Torricelli%27s%20equation
In physics, Torricelli's equation, or Torricelli's formula, is an equation created by Evangelista Torricelli to find the final velocity of a moving object with constant acceleration along an axis (for example, the x axis) without having a known time interval. The equation itself is: where is the object's final velocity along the x axis on which the acceleration is constant. is the object's initial velocity along the x axis. is the object's acceleration along the x axis, which is given as a constant. is the object's change in position along the x axis, also called displacement. In this and all subsequent equations in this article, the subscript (as in ) is implied, but is not expressed explicitly for clarity in presenting the equations. This equation is valid along any axis on which the acceleration is constant. Derivation Without differentials and integration Begin with the definition of acceleration: where is the time interval. This is true because the acceleration is constant. The left hand side is this constant value of the acceleration and the right hand side is the average acceleration. Since the average of a constant must be equal to the constant value, we have this equality. If the acceleration was not constant, this would not be true. Now solve for the final velocity: Square both sides to get: The term also appears in another equation that is valid for motion with constant acceleration: the equation for the final position of an object moving with constant acceleration, and can be isolated: Substituting () into the original equation () yields: Using differentials and integration Begin with the definition of acceleration as the derivative of the velocity: Now, we multiply both sides by the velocity : In the left hand side we can rewrite the velocity as the derivative of the position: Multiplying both sides by gets us the following: Rearranging the terms in a more traditional manner: Integrating both sides from the initial instant wi
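The displayed equations were lost in extraction. The standard statement the text describes, and the time-free derivation it sketches, can be reconstructed as follows (with the x subscript suppressed, as the article itself does):

```latex
% Torricelli's equation: constant acceleration, no time interval required.
\[
  v^{2} = v_{0}^{2} + 2\,a\,\Delta x .
\]
% Sketch of the derivation without calculus, following the steps in the text:
\[
  a = \frac{v - v_{0}}{\Delta t}
  \;\Rightarrow\;
  v = v_{0} + a\,\Delta t ,
  \qquad
  \Delta x = v_{0}\,\Delta t + \tfrac{1}{2}\,a\,\Delta t^{2} .
\]
\[
  v^{2} = \left(v_{0} + a\,\Delta t\right)^{2}
        = v_{0}^{2} + 2a\!\left(v_{0}\,\Delta t + \tfrac{1}{2}\,a\,\Delta t^{2}\right)
        = v_{0}^{2} + 2\,a\,\Delta x .
\]
```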
https://en.wikipedia.org/wiki/Hatching
Hatching () is an artistic technique used to create tonal or shading effects by drawing (or painting or scribing) closely spaced parallel lines. When lines are placed at an angle to one another, it is called cross-hatching. Hatching is also sometimes used to encode colours in monochromatic representations of colour images, particularly in heraldry. Hatching is especially important in essentially linear media, such as drawing, and many forms of printmaking, such as engraving, etching and woodcut. In Western art, hatching originated in the Middle Ages, and developed further into cross-hatching, especially in the old master prints of the fifteenth century. Master ES and Martin Schongauer in engraving and Erhard Reuwich and Michael Wolgemut in woodcut were pioneers of both techniques, and Albrecht Dürer in particular perfected the technique of crosshatching in both media. Artists use the technique, varying the length, angle, closeness and other qualities of the lines, most commonly in drawing, linear painting and engraving. Technique The main concept is that the quantity, thickness and spacing of the lines will affect the brightness of the overall image and emphasize forms creating the illusion of volume. Hatching lines should always follow (i.e. wrap around) the form. By increasing quantity, thickness and closeness, a darker area will result. An area of shading next to another area which has lines going in another direction is often used to create contrast. Line work can be used to represent colors, typically by using the same type of hatch to represent particular tones. For example, red might be made up of lightly spaced lines, whereas green could be made of two layers of perpendicular dense lines, resulting in a realistic image. Crosshatching is the technique of using line to shade and create value. Variations Representation of materials In technical drawing, the section lining may indicate the material of a component part of an assembly. Many hatching pat
https://en.wikipedia.org/wiki/Deep-level%20trap
Deep-level traps or deep-level defects are a generally undesirable type of electronic defect in semiconductors. They are "deep" in the sense that the energy required to remove an electron or hole from the trap to the valence or conduction band is much larger than the characteristic thermal energy kT, where k is the Boltzmann constant and T is the temperature. Deep traps interfere with more useful types of doping by compensating the dominant charge carrier type, annihilating either free electrons or electron holes depending on which is more prevalent. They also directly interfere with the operation of transistors, light-emitting diodes and other electronic and opto-electronic devices, by offering an intermediate state inside the band gap. Deep-level traps shorten the non-radiative life time of charge carriers, and—through the Shockley–Read–Hall (SRH) process—facilitate recombination of minority carriers, having adverse effects on the semiconductor device performance. Hence, deep-level traps are not appreciated in many opto-electronic devices as it may lead to poor efficiency and reasonably large delay in response. Common chemical elements that produce deep-level defects in silicon include iron, nickel, copper, gold, and silver. In general, transition metals produce this effect, while light metals such as aluminium do not. Surface states and crystallographic defects in the crystal lattice can also play role of deep-level traps. Optoelectronics Semiconductor properties Semiconductor structures
https://en.wikipedia.org/wiki/Benchmark%20%28computing%29
In computing, a benchmark is the act of running a computer program, a set of programs, or other operations, in order to assess the relative performance of an object, normally by running a number of standard tests and trials against it. The term benchmark is also commonly utilized for the purposes of elaborately designed benchmarking programs themselves. Benchmarking is usually associated with assessing performance characteristics of computer hardware, for example, the floating point operation performance of a CPU, but there are circumstances when the technique is also applicable to software. Software benchmarks are, for example, run against compilers or database management systems (DBMS). Benchmarks provide a method of comparing the performance of various subsystems across different chip/system architectures. Purpose As computer architecture advanced, it became more difficult to compare the performance of various computer systems simply by looking at their specifications. Therefore, tests were developed that allowed comparison of different architectures. For example, Pentium 4 processors generally operated at a higher clock frequency than Athlon XP or PowerPC processors, which did not necessarily translate to more computational power; a processor with a slower clock frequency might perform as well as or even better than a processor operating at a higher frequency. See BogoMips and the megahertz myth. Benchmarks are designed to mimic a particular type of workload on a component or system. Synthetic benchmarks do this by specially created programs that impose the workload on the component. Application benchmarks run real-world programs on the system. While application benchmarks usually give a much better measure of real-world performance on a given system, synthetic benchmarks are useful for testing individual components, like a hard disk or networking device. Benchmarks are particularly important in CPU design, giving processor architects the ability to measur
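As a concrete illustration of the synthetic-benchmark idea described above, the following minimal sketch (not any standard suite) times a small, artificial floating-point workload with Python's standard timeit module:

```python
# Minimal synthetic micro-benchmark sketch using only the standard library.
# The workload is artificial and purely illustrative; real benchmark suites
# impose much more carefully designed workloads on the system under test.
import timeit

def fp_workload(n=100_000):
    # A small floating-point loop standing in for a "synthetic" workload.
    acc = 0.0
    for i in range(1, n):
        acc += (i * 0.5) / (i + 1.0)
    return acc

if __name__ == "__main__":
    runs = timeit.repeat(fp_workload, number=10, repeat=5)
    best = min(runs)  # the minimum of several runs is the least noisy estimate
    print(f"best of 5 runs: {best:.4f} s for 10 iterations")
```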
https://en.wikipedia.org/wiki/XrML
XrML is the eXtensible Rights Markup Language which has also been standardized as the Rights Expression Language (REL) for MPEG-21. XrML is owned by ContentGuard. XrML is based on XML and describes rights, fees and conditions together with message integrity and entity authentication information. History and development Xerox PARC and DPRL Mark Stefik, a researcher at Xerox PARC, is known as the originator of the concepts that became the XrML language. Stefik was engaged in research on the topic of trusted systems for secure digital commerce, of which one part was a language to express the rights that the system would allow users to perform on digital resources. The first version of the rights expression language that became XrML was developed at Xerox PARC, and called the Digital Property Rights Language (DPRL). DPRL appears in a patent filed by Xerox in November 1994 (and was granted in February 1998) entitled: "System for Controlling the Distribution and Use of Digital Work Having Attached Usage Rights Where the Usage Rights are Defined by a Usage Rights Grammar" (US Patent 5,715,403, issued to Xerox Corporation). Between 1994 and 1998, Xerox formed its Rights Management Group to continue the work represented in the patent. In November 1998, Xerox issued the first XML version of the Digital Property Rights Language (DPRL), labelled Version 2.0. Prior to that time, DPRL had been written in the LISP programming language. The DPRL 2.0 documentation makes it clear that DPRL was designed for machine-to-machine interaction, with rights expressed as machine actionable functions. It also states clearly that in interpreting a DPRL-based expression of rights, only those rights that are explicitly granted can be acted upon. Any areas where a rights expression is silent must be interpreted as rights not granted, and therefore must be denied by the software enforcing the rights. XrML 1.0 In 1999, version 2 of DPRL was licensed to a new company founded by Microsoft and
https://en.wikipedia.org/wiki/The%20Tower%20of%20Druaga
is a 1984 arcade action role-playing maze game developed and published in Japan by Namco. Controlling the golden-armored knight Gilgamesh, the player is tasked with scaling 60 floors of the titular tower in an effort to rescue the maiden Ki from Druaga, a demon with eight arms and four legs, who plans to use an artifact known as the Blue Crystal Rod to enslave all of mankind. It ran on the Namco Super Pac-Man arcade hardware, modified with a horizontal-scrolling video system used in Mappy. Druaga was designed by Masanobu Endo, best known for creating Xevious (1983). It was conceived as a "fantasy Pac-Man" with combat and puzzle solving, taking inspiration from games such as Wizardry and Dungeons & Dragons, along with Mesopotamian, Sumerian and Babylonian mythology. It began as a prototype game called Quest with interlocking mazes, revised to run on an arcade system; the original concept was scrapped due to Endo disliking the heavy use of role-playing elements, instead becoming a more action-oriented game. In Japan, The Tower of Druaga was widely successful, attracting millions of fans for its use of secrets and hidden items. It is cited as an important game of its genre for laying down the foundation for future games, as well as inspiring the idea of sharing tips with friends and guidebooks. Druaga is noted as being influential for many games to follow, including Ys, Hydlide, Dragon Slayer and The Legend of Zelda. The success of the game in Japan inspired several ports for multiple platforms, as well as spawning a massive franchise known as the Babylonian Castle Saga, including multiple sequels, spin-offs, literature and an anime series produced by Gonzo. However, the 2009 Wii Virtual Console release in North America was met with a largely negative reception for its obtuse design, which many said was near-impossible to finish without a guidebook, alongside its high difficulty and controls. Gameplay The Tower of Druaga is an action role-playing maze video game. C
https://en.wikipedia.org/wiki/SpaceWire
SpaceWire is a spacecraft communication network based in part on the IEEE 1355 standard of communications. It is coordinated by the European Space Agency (ESA) in collaboration with international space agencies including NASA, JAXA, and RKA. Within a SpaceWire network the nodes are connected through low-cost, low-latency, full-duplex, point-to-point serial links, and packet switching wormhole routing routers. SpaceWire covers two (physical and data-link) of the seven layers of the OSI model for communications. Architecture Physical layer SpaceWire's modulation and data formats generally follow the data strobe encoding - differential ended signaling (DS-DE) part of the IEEE Std 1355-1995. SpaceWire utilizes asynchronous communication and allows speeds between 2 Mbit/s and 200 Mbit/s, with initial signalling rate of 10Mbit/s. DS-DE is well-favored because it describes modulation, bit formats, routing, flow control, and error detection in hardware, with little need for software. SpaceWire also has very low error rates, deterministic system behavior, and relatively simple digital electronics. SpaceWire replaced old PECL differential drivers in the physical layer of IEEE 1355 DS-DE by low-voltage differential signaling (LVDS). SpaceWire also proposes the use of space-qualified 9-pin connectors. SpaceWire and IEEE 1355 DS-DE allows for a wider set of speeds for data transmission, and some new features for automatic failover. The fail-over features let data find alternate routes, so a spacecraft can have multiple data buses, and be made fault-tolerant. SpaceWire also allows the propagation of time interrupts over SpaceWire links, eliminating the need for separate time discretes. Link layer Each transferred character starts with a parity bit and a data-control flag bit. If data-control flag is a 0-bit, an 8-bit LSB character follows. Otherwise one of the control codes, including end of packet (EOP). Network layer The network data frames look as follows: One or
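The character format described in the link-layer paragraph can be sketched as follows. The exact bit ordering and the cross-character parity rule are defined by the standard, so treat this as an illustrative encoding of a data character (parity bit, data-control flag, then eight data bits, least significant first), not a conformant implementation:

```python
# Illustrative sketch of a SpaceWire-style data character: a parity bit and a
# data-control flag bit (0 = data character) followed by eight data bits,
# least significant bit first.
# NOTE: real SpaceWire parity spans character boundaries; the simple
# per-character odd parity below is an assumption made for illustration only.
def encode_data_character(byte: int) -> list[int]:
    if not 0 <= byte <= 0xFF:
        raise ValueError("a data character carries exactly one octet")
    data_bits = [(byte >> i) & 1 for i in range(8)]   # LSB first
    control_flag = 0                                  # 0-bit marks a data character
    parity = (sum(data_bits) + control_flag + 1) % 2  # simplified odd parity
    return [parity, control_flag] + data_bits

# Example: encode the ASCII letter 'S'
print(encode_data_character(ord("S")))
```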
https://en.wikipedia.org/wiki/Weather%20Underground%20%28weather%20service%29
Weather Underground is a commercial weather service providing real-time weather information over the Internet. Weather Underground provides weather reports for most major cities around the world on its Web site, as well as local weather reports for newspapers and third-party sites. Its information comes from the National Weather Service (NWS), and over 250,000 personal weather stations (PWS). The site is available in many languages, and customers can access an ad-free version of the site with additional features for an annual fee. Weather Underground is owned by The Weather Company, a subsidiary of IBM. History The company is based in San Francisco, California and was founded in 1995 as an offshoot of the University of Michigan internet weather database. The name is a reference to the 1960s radical left-wing militant organization the Weather Underground, which also originated at the University of Michigan. Jeff Masters, a doctoral candidate in meteorology at the University of Michigan working under the direction of Professor Perry Samson, wrote a menu-based Telnet interface in 1991 that displayed real-time weather information around the world. In 1993, they recruited Alan Steremberg and initiated a project to bring Internet weather into K–12 classrooms. Weather Underground president Alan Steremberg wrote "Blue Skies" for the project, a graphical Mac Gopher client, which won several awards. When the Mosaic Web browser appeared, this provided a natural transition from "Blue Skies" to the Web. In 1995 Weather Underground Inc. became a commercial entity separate from the university. It has grown to provide weather for print sources, in addition to its online presence. In 2005, Weather Underground became the weather provider for the Associated Press; Weather Underground also provides weather reports for some newspapers, including the San Francisco Chronicle and the Google search engine. Alan Steremberg also worked on the early development of the Google search engine
https://en.wikipedia.org/wiki/Push%20technology
Push technology or server push is a style of Internet-based communication where the request for a given transaction is initiated by the publisher or central server. It is contrasted with pull, or get, where the request for the transmission of information is initiated by the receiver or client. Push services are often based on information and data preferences expressed in advance, called the publish-subscribe model. A client "subscribes" to various information channels provided by a server; whenever new content is available on one of those channels, the server "pushes" or "publishes" that information out to the client. Push is sometimes emulated with a polling technique, particularly under circumstances where a real push is not possible, such as sites with security policies that reject incoming HTTP requests. General use Synchronous conferencing and instant messaging are examples of push services. Chat messages and sometimes files are pushed to the user as soon as they are received by the messaging service. Both decentralized peer-to-peer programs (such as WASTE) and centralized programs (such as IRC or XMPP) allow pushing files, which means the sender initiates the data transfer rather than the recipient. Email may also be a push system: SMTP is a push protocol (see Push e-mail). However, the last step—from mail server to desktop computer—typically uses a pull protocol like POP3 or IMAP. Modern e-mail clients make this step seem instantaneous by repeatedly polling the mail server, frequently checking it for new mail. The IMAP protocol includes the IDLE command, which allows the server to tell the client when new messages arrive. The original BlackBerry was the first popular example of push-email in a wireless context. Another example is the PointCast Network, which was widely covered in the 1990s. It delivered news and stock market data as a screensaver. Both Netscape and Microsoft integrated push technology through the Channel Definition Format (CDF) into the
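The polling emulation of push mentioned above can be illustrated with a small sketch. The URL and interval are placeholders, and real systems would typically use long polling, WebSockets or server-sent events rather than this naive loop:

```python
# Naive polling loop that emulates push by repeatedly asking the server for
# new content; the URL and interval are placeholders for illustration only.
import time
import urllib.request

def poll_for_updates(url, interval_s=30):
    last_seen = None
    while True:
        with urllib.request.urlopen(url, timeout=10) as resp:
            body = resp.read()
        if body != last_seen:          # treat any change as newly "pushed" content
            last_seen = body
            print(f"update received ({len(body)} bytes)")
        time.sleep(interval_s)         # pull again after a fixed delay

# poll_for_updates("https://example.org/feed")   # hypothetical endpoint
```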
https://en.wikipedia.org/wiki/Partition%20%28database%29
A partition is a division of a logical database or its constituent elements into distinct independent parts. Database partitioning is normally done for manageability, performance or availability reasons, or for load balancing. It is popular in distributed database management systems, where each partition may be spread over multiple nodes, with users at the node performing local transactions on the partition. This increases performance for sites that have regular transactions involving certain views of data, whilst maintaining availability and security. Partitioning criteria Current high-end relational database management systems provide for different criteria to split the database. They take a partitioning key and assign a partition based on certain criteria. Some common criteria include: Range partitioning: selects a partition by determining if the partitioning key is within a certain range. An example could be a partition for all rows where the "zipcode" column has a value between 70000 and 79999. It distributes tuples based on the value intervals (ranges) of some attribute. In addition to supporting exact-match queries (as in hashing), it is well-suited for range queries. For instance, a query with a predicate “A between A1 and A2” may be processed by the only node(s) containing tuples. List partitioning: a partition is assigned a list of values. If the partitioning key has one of these values, the partition is chosen. For example, all rows where the column Country is either Iceland, Norway, Sweden, Finland or Denmark could build a partition for the Nordic countries. Composite partitioning: allows for certain combinations of the above partitioning schemes, by for example first applying a range partitioning and then a hash partitioning. Consistent hashing could be considered a composite of hash and list partitioning where the hash reduces the key space to a size that can be listed. Round-robin partitioning: the simplest strategy, it ensures uniform data dist
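The range, list, hash and round-robin criteria described above amount to simple functions from a partitioning key to a partition identifier. The sketch below uses the article's own zipcode and Nordic-countries examples; partition names and boundaries are otherwise invented for illustration:

```python
# Minimal sketches of common partitioning criteria: each maps a partitioning
# key to a partition identifier. Names and range boundaries are illustrative.
from itertools import count

RANGES = [(0, 69999, "p_low"), (70000, 79999, "p_mid"), (80000, 99999, "p_high")]
NORDIC = {"Iceland", "Norway", "Sweden", "Finland", "Denmark"}

def range_partition(zipcode: int) -> str:
    for low, high, name in RANGES:
        if low <= zipcode <= high:
            return name                    # supports exact-match and range queries
    raise ValueError("zipcode outside all defined ranges")

def list_partition(country: str) -> str:
    return "p_nordic" if country in NORDIC else "p_other"

def hash_partition(key, n_partitions: int = 4) -> int:
    return hash(key) % n_partitions        # good for exact matches, poor for ranges

_rr = count()
def round_robin_partition(n_partitions: int = 4) -> int:
    return next(_rr) % n_partitions        # spreads rows uniformly, ignores the key

print(range_partition(75001), list_partition("Sweden"), hash_partition("row-42"))
```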
https://en.wikipedia.org/wiki/Flat-field%20correction
Flat-field correction (FFC) is a digital imaging technique to mitigate the image detector pixel-to-pixel sensitivity and distortions in the optical path. It is a standard calibration procedure in everything from personal digital cameras to large telescopes. Overview Flat fielding refers to the process of compensating for different gains and dark currents in a detector. Once a detector has been appropriately flat-fielded, a uniform signal will create a uniform output (hence flat-field). This then means any further signal is due to the phenomenon being detected and not a systematic error. A flat-field image is acquired by imaging a uniformly-illuminated screen, thus producing an image of uniform color and brightness across the frame. For handheld cameras, the screen could be a piece of paper at arm's length, but a telescope will frequently image a clear patch of sky at twilight, when the illumination is uniform and there are few, if any, stars visible. Once the images are acquired, processing can begin. A flat-field consists of two numbers for each pixel, the pixel's gain and its dark current (or dark frame). The pixel's gain is how the amount of signal given by the detector varies as a function of the amount of light (or equivalent). The gain is almost always a linear variable, as such the gain is given simply as the ratio of the input and output signals. The dark-current is the amount of signal given out by the detector when there is no incident light (hence dark frame). In many detectors this can also be a function of time, for example in astronomical telescopes it is common to take a dark-frame of the same time as the planned light exposure. The gain and dark-frame for optical systems can also be established by using a series of neutral density filters to give input/output signal information and applying a least squares fit to obtain the values for the dark current and gain. where: C = corrected image R = raw image F = flat field image D = dark field or da
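The correction formula itself did not survive extraction. The form usually quoted with these symbols is C = (R − D) · m / (F − D), with m the mean of (F − D); that normalisation convention is assumed in the NumPy sketch below:

```python
# Flat-field correction sketch: C = (R - D) * m / (F - D), where m is the
# mean of (F - D). Symbols follow the article: R raw, F flat field, D dark
# frame, C corrected image. The normalisation by m is the common convention.
import numpy as np

def flat_field_correct(raw: np.ndarray, flat: np.ndarray, dark: np.ndarray) -> np.ndarray:
    flat_minus_dark = flat.astype(float) - dark
    gain_norm = flat_minus_dark.mean()
    # Guard against division by zero in dead or unexposed pixels.
    safe = np.where(flat_minus_dark == 0, 1.0, flat_minus_dark)
    return (raw.astype(float) - dark) * gain_norm / safe

# Tiny synthetic example: a uniform scene seen through an uneven flat field.
raw = np.array([[110.0, 90.0], [100.0, 100.0]])
flat = np.array([[1.1, 0.9], [1.0, 1.0]]) * 100
dark = np.zeros((2, 2))
print(flat_field_correct(raw, flat, dark))   # roughly uniform 100s after correction
```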
https://en.wikipedia.org/wiki/Air%20interface
The air interface, or access mode, is the communication link between the two stations in mobile or wireless communication. The air interface involves both the physical and data link layers (layer 1 and 2) of the OSI model for a connection. Physical Layer The physical connection of an air interface is generally radio-based. This is usually a point to point link between an active base station and a mobile station. Technologies like Opportunity-Driven Multiple Access (ODMA) may have flexibility regarding which devices serve in which roles. Some types of wireless connections possess the ability to broadcast or multicast. Multiple links can be created in limited spectrum through FDMA, TDMA, or SDMA. Some advanced forms of transmission multiplexing combine frequency- and time-division approaches like OFDM or CDMA. In cellular telephone communications, the air interface is the radio-frequency portion of the circuit between the cellular phone set or wireless modem (usually portable or mobile) and the active base station. As a subscriber moves from one cell to another in the system, the active base station changes periodically. Each changeover is known as a handoff. In radio and electronics, an antenna (plural antennae or antennas), or aerial, is an electrical device which converts electric power into radio waves, and vice versa. It is usually used with a radio transmitter or radio receiver. In transmission, a radio transmitter supplies an electric current oscillating at radio frequency to the antenna's terminals, and the antenna radiates the energy from the current as electromagnetic waves (radio waves). An antenna focuses the radio waves in a certain direction. Usually, this is called the main direction. Because of that, in other directions less energy will be emitted. The gain of an antenna, in a given direction, is usually referenced to an (hypothetical) isotropic antenna, which emits the radiation evenly strong in all directions. The antenna gain is the power in the
https://en.wikipedia.org/wiki/148%20%28number%29
148 (one hundred [and] forty-eight) is the natural number following 147 and before 149. In mathematics 148 is the second number to be both a heptagonal number and a centered heptagonal number (the first is 1). It is the twelfth member of the Mian–Chowla sequence, the lexicographically smallest sequence of distinct positive integers with distinct pairwise sums. There are 148 perfect graphs with six vertices, and 148 ways of partitioning four people into subsets, ordering the subsets, and selecting a leader for each subset. In other fields In the Book of Nehemiah 7:44 there are 148 singers, sons of Asaph, at the census of men of Israel upon return from exile. This differs from Ezra 2:41, where the number is given as 128. Dunbar's number is a theoretical cognitive limit to the number of people with whom one can maintain stable interpersonal relationships. Dunbar predicted a "mean group size" of 148, but this is commonly rounded to 150. See also The year AD 148 or 148 BC List of highways numbered 148 References Integers
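The two figurate-number claims are easy to check directly with the standard formulas n(5n − 3)/2 for heptagonal numbers and 7n(n − 1)/2 + 1 for centered heptagonal numbers; a small verification sketch:

```python
# Verify that 148 is both heptagonal and centered heptagonal, and that 1 is
# the only smaller number with this property.
def heptagonal(limit):
    return {n * (5 * n - 3) // 2 for n in range(1, limit)}

def centered_heptagonal(limit):
    return {7 * n * (n - 1) // 2 + 1 for n in range(1, limit)}

both = sorted(heptagonal(100) & centered_heptagonal(100))
print(both[:2])   # [1, 148]
```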
https://en.wikipedia.org/wiki/Nukernel
NuKernel is a microkernel which was developed at Apple Computer during the early 1990s. Written from scratch and designed using concepts from the Mach 3.0 microkernel, with extensive additions for soft real-time scheduling to improve multimedia performance, it was the basis for the Copland operating system. Only one NuKernel version was released, with a Copland alpha release. Development ended in 1996 with the cancellation of Copland. The External Reference Specification (ERS) for NuKernel is contained in its entirety in its patent. The one-time technical lead for NuKernel, Jeff Robbin, later became one of the leaders of the iTunes and iPod projects. Apple's NuKernel is unrelated to the similarly named BeOS kernel, nukernel. See also XNU, the hybrid kernel used in Mac OS X References Apple Inc. operating systems Microkernels
https://en.wikipedia.org/wiki/F-coalgebra
In mathematics, specifically in category theory, an -coalgebra is a structure defined according to a functor , with specific properties as defined below. For both algebras and coalgebras, a functor is a convenient and general way of organizing a signature. This has applications in computer science: examples of coalgebras include lazy evaluation, infinite data structures, such as streams, and also transition systems. -coalgebras are dual to -algebras. Just as the class of all algebras for a given signature and equational theory form a variety, so does the class of all -coalgebras satisfying a given equational theory form a covariety, where the signature is given by . Definition Let be an endofunctor on a category . An -coalgebra is an object of together with a morphism of , usually written as . An -coalgebra homomorphism from to another -coalgebra is a morphism in such that . Thus the -coalgebras for a given functor F constitute a category. Examples Consider the endofunctor that sends a set to its disjoint union with the singleton set . A coalgebra of this endofunctor is given by , where is the so-called conatural numbers, consisting of the nonnegative integers and also infinity, and the function is given by , for and . In fact, is the terminal coalgebra of this endofunctor. More generally, fix some set , and consider the functor that sends to . Then an -coalgebra is a finite or infinite stream over the alphabet where is the set of states and is the state-transition function. Applying the state-transition function to a state may yield two possible results: either an element of together with the next state of the stream, or the element of the singleton set as a separate "final state" indicating that there are no more values in the stream. In many practical applications, the state-transition function of such a coalgebraic object may be of the form , which readily factorizes into a collection of "selectors", "observers", "me
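The symbols in the definition and in the stream example were stripped during extraction. The following reconstructs the standard statements, assuming the usual notation (F an endofunctor on a category C, A the carrier in the definition, and A also the alphabet with S the set of states in the stream example):

```latex
% F-coalgebra: an object A of C together with a structure map alpha : A -> F(A).
% A homomorphism f : (A, alpha) -> (B, beta) is a morphism f : A -> B with
% F(f) . alpha = beta . f.
\[
  \alpha : A \longrightarrow F(A),
  \qquad
  F(f) \circ \alpha \;=\; \beta \circ f .
\]
% Stream example: for a fixed alphabet A, take F(X) = (A \times X) + 1.
% A coalgebra c : S -> (A x S) + 1 either emits a letter together with the
% next state, or returns the element of the singleton 1, signalling that the
% (finite) stream has ended.
\[
  c : S \longrightarrow (A \times S) + 1 .
\]
```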
https://en.wikipedia.org/wiki/Frequency%20changer
A frequency changer or frequency converter is an electronic or electromechanical device that converts alternating current (AC) of one frequency to alternating current of another frequency. The device may also change the voltage, but if it does, that is incidental to its principal purpose, since voltage conversion of alternating current is much easier to achieve than frequency conversion. Traditionally, these devices were electromechanical machines called a motor-generator set. Also devices with mercury arc rectifiers or vacuum tubes were in use. With the advent of solid state electronics, it has become possible to build completely electronic frequency changers. These devices usually consist of a rectifier stage (producing direct current) which is then inverted to produce AC of the desired frequency. The inverter may use thyristors, IGCTs or IGBTs. If voltage conversion is desired, a transformer will usually be included in either the AC input or output circuitry and this transformer may also provide galvanic isolation between the input and output AC circuits. A battery may also be added to the DC circuitry to improve the converter's ride-through of brief outages in the input power. Frequency changers vary in power-handling capability from a few watts to megawatts. Applications Frequency changers are used for converting bulk AC power from one frequency to another, when two adjacent power grids operate at different utility frequency. A variable-frequency drive (VFD) is a type of frequency changer used for speed control of AC motors such as used for pumps and fans. The speed of a Synchronous AC motor is dependent on the frequency of the AC power supply, so changing frequency allows the motor speed to be changed. This allows fan or pump output to be varied to match process conditions, which can provide energy savings. A cycloconverter is also a type of frequency changer. Unlike a VFD, which is an indirect frequency changer since it uses an AC-DC stage and then a D
https://en.wikipedia.org/wiki/Biomineralization
Biomineralization, also written biomineralisation, is the process by which living organisms produce minerals, often resulting in hardened or stiffened mineralized tissues. It is an extremely widespread phenomenon: all six taxonomic kingdoms contain members that are able to form minerals, and over 60 different minerals have been identified in organisms. Examples include silicates in algae and diatoms, carbonates in invertebrates, and calcium phosphates and carbonates in vertebrates. These minerals often form structural features such as sea shells and the bone in mammals and birds. Organisms have been producing mineralized skeletons for the past 550 million years. Calcium carbonates and calcium phosphates are usually crystalline, but silica organisms (sponges, diatoms...) are always non-crystalline minerals. Other examples include copper, iron, and gold deposits involving bacteria. Biologically formed minerals often have special uses such as magnetic sensors in magnetotactic bacteria (Fe3O4), gravity-sensing devices (CaCO3, CaSO4, BaSO4) and iron storage and mobilization (Fe2O3•H2O in the protein ferritin). In terms of taxonomic distribution, the most common biominerals are the phosphate and carbonate salts of calcium that are used in conjunction with organic polymers such as collagen and chitin to give structural support to bones and shells. The structures of these biocomposite materials are highly controlled from the nanometer to the macroscopic level, resulting in complex architectures that provide multifunctional properties. Because this range of control over mineral growth is desirable for materials engineering applications, there is interest in understanding and elucidating the mechanisms of biologically-controlled biomineralization. Types Mineralization can be subdivided into different categories depending on the following: the organisms or processes that create chemical conditions necessary for mineral formation, the origin of the substrate at the site of m
https://en.wikipedia.org/wiki/Aleph%20kernel
Aleph is a discontinued operating system kernel developed at the University of Rochester as part of their Rochester's Intelligent Gateway (RIG) project in 1975. Aleph was an early step on the road to the creation of the first practical microkernel operating system, Mach. Aleph used inter-process communication to move data between programs and the kernel, so applications could transparently access resources on any machine on the local area network (which at the time was a 3-Mbit/s experimental Xerox Ethernet). The project eventually petered out after several years due to rapid changes in the computer hardware market, but the ideas led to the creation of Accent at Carnegie Mellon University, leading in turn to Mach. Applications written for the RIG system communicated via ports. Ports were essentially message queues maintained by the Aleph kernel, identified by a machine-unique (as opposed to globally unique) ID consisting of a (process ID, port ID) pair. Processes were automatically assigned a process number, or pid, on startup, and could then ask the kernel to open ports. Processes could open several ports and then "read" them, automatically blocking and allowing other programs to run until data arrived. A process could also "shadow" another, receiving a copy of every message sent to the process it was shadowing. Similarly, a program could "interpose" on another, receiving its messages and essentially cutting the original recipient out of the conversation. RIG was implemented on a number of Data General Eclipse minicomputers. The ports were implemented using memory buffers limited to 2 kB in size, which produced significant overhead when copying large amounts of data. Another problem, realized only in retrospect, was that the use of global IDs allowed malicious software to "guess" at ports and thereby gain access to resources it should not have had. And since those IDs were based on the program ID, the port IDs changed if the program was restarted, making it dif
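The port mechanism described above can be sketched as a toy model in Python. All class and method names below are invented for illustration and are not taken from RIG or Aleph; the sketch only shows machine-unique (pid, port id) identifiers, blocking reads, the 2 kB buffer limit, and shadowing.

```python
import queue
from collections import defaultdict

class AlephLikePorts:
    """Toy model of RIG-style ports: message queues keyed by (pid, port_id)."""

    def __init__(self):
        self._ports = {}                    # (pid, port_id) -> queue of messages
        self._shadows = defaultdict(list)   # port -> ports that receive copies

    def open_port(self, pid: int, port_id: int) -> tuple:
        key = (pid, port_id)
        self._ports[key] = queue.Queue()
        return key

    def send(self, dest: tuple, message: bytes) -> None:
        if len(message) > 2048:             # RIG buffers were limited to 2 kB
            raise ValueError("message larger than 2 kB buffer")
        self._ports[dest].put(message)
        for shadow in self._shadows[dest]:  # shadows get a copy of every message
            self._ports[shadow].put(message)

    def receive(self, port: tuple) -> bytes:
        return self._ports[port].get()      # blocks until a message arrives

    def shadow(self, watcher: tuple, target: tuple) -> None:
        self._shadows[target].append(watcher)

# Example: process 7 opens port 1, process 9 shadows it, and a message is sent.
k = AlephLikePorts()
p = k.open_port(7, 1)
w = k.open_port(9, 1)
k.shadow(w, p)
k.send(p, b"hello")
print(k.receive(p), k.receive(w))           # both receive b"hello"
```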
https://en.wikipedia.org/wiki/User%20operation%20prohibition
The user operation prohibition (abbreviated UOP) is a form of use restriction used on video DVD discs and Blu-ray discs. Most DVD players and Blu-ray players prohibit the viewer from performing most actions during sections of a DVD that are protected or restricted by this feature, and will display the no symbol or a message to that effect if any of these actions are attempted. It is used mainly for copyright notices or warnings, such as an FBI warning in the United States, and "protected" (i.e., unskippable) commercials. Countermeasures Some DVD players ignore the UOP flag, allowing the user full control over DVD playback. Virtually all players that are not purpose-built DVD player hardware (for example, a player program running on a general-purpose computer) ignore the flag. There are also modchips available for some standard DVD players for the same purpose. The UOP flag can be removed in DVD ripper software such as DVD Decrypter, DVD Shrink, AnyDVD, AVS Video Converter, Digiarty WinX DVD Ripper Platinum, MacTheRipper, HandBrake and K9Copy. On many DVD players, pressing stop-stop-play will cause the DVD player to play the movie immediately, ignoring any UOP flags that would otherwise make advertisements, piracy warnings or trailers unskippable. Nevertheless, removing UOP does not always restore navigation functions in the restricted parts of the DVD, because those parts sometimes lack the navigation commands that allow skipping to the menu or other parts of the DVD. This has become more common in recent titles, in order to circumvent the UOP disabling that many applications or DVD players offer. Newer DVD players (roughly those made from late 2010 onward) have, however, been designed to override the aforementioned counter-countermeasures. The DVD reader software inside the DVD player automatically generates chapters for parts of the DVD lacking navigation commands, allowing them to be fast-forwarded or skipped; pressing the menu button, even in the
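Conceptually, a UOP table is a set of prohibition bits attached to a section of the disc, which a compliant player consults before honouring a user request. The following Python sketch models that idea only as an illustration: the bit names and assignments are invented here and do not reproduce the actual DVD-Video bit layout.

```python
from enum import IntFlag

class UOP(IntFlag):
    """Illustrative prohibition bits (not the real DVD-Video bit assignments)."""
    CHAPTER_SKIP = 1 << 0
    FAST_FORWARD = 1 << 1
    MENU_CALL    = 1 << 2
    STOP         = 1 << 3

def allowed(requested: UOP, prohibitions: UOP, ignore_uop: bool = False) -> bool:
    """Return True if the player should honour the user's request.

    A player that ignores the UOP flag (ignore_uop=True) behaves like the
    software players and modchipped units described above.
    """
    return ignore_uop or not (requested & prohibitions)

# Example: an unskippable warning section prohibits skipping and fast-forwarding.
warning_section = UOP.CHAPTER_SKIP | UOP.FAST_FORWARD
print(allowed(UOP.CHAPTER_SKIP, warning_section))                    # False
print(allowed(UOP.CHAPTER_SKIP, warning_section, ignore_uop=True))   # True
```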
https://en.wikipedia.org/wiki/Contract%20research%20organization
In the life sciences, a contract research organization (CRO) is a company that provides support to the pharmaceutical, biotechnology, and medical device industries in the form of research services outsourced on a contract basis. A CRO may provide such services as biopharmaceutical development, biological assay development, commercialization, clinical development, clinical trials management, pharmacovigilance, outcomes research, and real-world evidence. CROs are designed to reduce costs for companies developing new medicines and drugs in niche markets. They aim to simplify entry into drug markets and to simplify development, since large pharmaceutical companies no longer need to do everything ‘in house’. CROs also support foundations, research institutions, and universities, in addition to governmental organizations (such as the NIH, EMA, etc.). Many CROs specifically provide clinical-study and clinical-trial support for drugs and/or medical devices. However, the sponsor of the trial retains responsibility for the quality of the CRO's work. CROs range from large, international full-service organizations to small, niche specialty groups. CROs that specialize in clinical-trials services can offer their clients the expertise of moving a new drug or device from its conception to FDA/EMA marketing approval, without the drug sponsor having to maintain a staff for these services. Organizations that have had success in working with a particular CRO in a particular context (e.g. a therapeutic area) might be tempted or encouraged to expand their engagement with that CRO into other, unrelated areas; however, caution is required, as CROs are always seeking to expand their experience, and success in one area cannot reliably predict success in unrelated areas that might be new to the organization. Definition, regulatory aspects The International Council on Harmonisation of Technical Requirements for Registration of Pharmaceuticals for Human Use, a 2015 Swiss NGO of ph
https://en.wikipedia.org/wiki/Equivalent%20annual%20cost
In finance, the equivalent annual cost (EAC) is the cost per year of owning and operating an asset over its entire lifespan. It is calculated by dividing the negative NPV of a project by the "present value of annuity factor": EAC = NPV / A_{t,r}, where A_{t,r} = (1 - (1 + r)^{-t}) / r, r is the annual interest rate and t is the number of years. Alternatively, EAC can be obtained by multiplying the NPV of the project by the "loan repayment factor". EAC is often used as a decision-making tool in capital budgeting when comparing investment projects of unequal lifespans. However, the projects being compared must have equal risk; otherwise, EAC must not be used. The technique was first discussed in 1923 in the engineering literature and, as a consequence, EAC appears to be a favoured technique among engineers, while accountants tend to prefer net present value (NPV) analysis. Such preference has been described as being a matter of professional education, rather than an assessment of the actual merits of either method. Within the latter group, however, the Society of Management Accountants of Canada endorses EAC, having discussed it as early as 1959 in a published monograph (a year before the first mention of NPV in accounting textbooks). Application EAC can be used in the following scenarios: Assessing alternative projects of unequal lives (where only the costs are relevant) in order to address any built-in bias favouring the longer-term investment. Determining the optimum economic life of an asset, by charting the change in EAC that may occur due to the fluctuation of operating costs and salvage values over time. Assessing whether leasing an asset would be more economical than purchasing it. Assessing whether increased maintenance costs will economically change the useful life of an asset. Calculating how much should be invested in an asset in order to achieve a desired result (i.e., purchasing a storage tank with a 20-year life, as opposed to one with a 5-year life, in order to achieve a si
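A minimal Python sketch of the calculation above, assuming the annuity-factor form A(t, r) = (1 - (1 + r)^(-t)) / r reconstructed in the text; the asset costs and lifetimes used in the example are invented for illustration.

```python
def annuity_factor(t: float, r: float) -> float:
    """Present value of an annuity of 1 per year for t years at rate r."""
    return (1 - (1 + r) ** -t) / r

def equivalent_annual_cost(npv_of_costs: float, t: float, r: float) -> float:
    """Spread a lifetime cost (expressed as an NPV) evenly over t years."""
    return npv_of_costs / annuity_factor(t, r)

# Example: compare two machines with unequal lives at a 5% discount rate.
# Machine A: NPV of costs 50,000 over a 3-year life; Machine B: 90,000 over 6 years.
r = 0.05
print(equivalent_annual_cost(50_000, t=3, r=r))  # about 18,360 per year
print(equivalent_annual_cost(90_000, t=6, r=r))  # about 17,731 per year, so B costs less per year
```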
https://en.wikipedia.org/wiki/Gelfond%27s%20constant
In mathematics, Gelfond's constant, named after Aleksandr Gelfond, is e^π, that is, e raised to the power π. Like both e and π, this constant is a transcendental number. This was first established by Gelfond and may now be considered an application of the Gelfond–Schneider theorem, noting that e^π = (e^{iπ})^{-i} = (-1)^{-i}, where i is the imaginary unit. Since -i is algebraic but not rational, e^π is transcendental. The constant was mentioned in Hilbert's seventh problem. A related constant is 2^{√2}, known as the Gelfond–Schneider constant. The related value e^π + π is also irrational. Numerical value The decimal expansion of Gelfond's constant begins 23.14069263... Construction If one defines k_0 = 1/√2 and k_{n+1} = (1 - √(1 - k_n^2)) / (1 + √(1 - k_n^2)) for n ≥ 0, then the sequence (4/k_{n+1})^{2^{-n}} converges rapidly to e^π. Continued fraction expansion The terms of the simple continued fraction expansion of e^π are given by the integer sequence A058287. Geometric property The volume of the n-dimensional ball (or n-ball) is given by V_n = π^{n/2} R^n / Γ(n/2 + 1), where R is its radius and Γ is the gamma function. Any even-dimensional ball (n = 2k) has volume V_{2k} = π^k R^{2k} / k!, and summing up the unit-ball (R = 1) volumes of all even dimensions gives Σ_{k≥0} π^k / k! = e^π. Similar or related constants Ramanujan's constant The number e^{π√163} is known as Ramanujan's constant. It is an application of Heegner numbers, where 163 is the Heegner number in question. Similar to e^π − π (which is close to 20), e^{π√163} is very close to an integer: 262537412640768743.99999999999925... This number was discovered in 1859 by the mathematician Charles Hermite. In a 1975 April Fools' Day article in Scientific American magazine, "Mathematical Games" columnist Martin Gardner made the hoax claim that the number was in fact an integer, and that the Indian mathematical genius Srinivasa Ramanujan had predicted it, hence its name. The coincidental closeness, to within 0.000 000 000 000 75, of e^{π√163} to the integer 640320^3 + 744 = 262537412640768744 is explained by complex multiplication and the q-expansion of the j-invariant; specifically, e^{π√163} = 640320^3 + 744 - ε, where ε ≈ 0.000 000 000 000 75 is the error term, which explains why e^{π√163} lies 0.000 000 000 000 75 below 640320^3 + 744. The number The decimal
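A short Python check of the k-sequence construction and the even-dimensional ball-volume sum as reconstructed above (the recurrence and formulas are those stated in the text; only a few iterations of the recurrence are run because double precision cannot represent k_n once it becomes extremely small).

```python
import math

# 1. The construction: k0 = 1/sqrt(2),
#    k_{n+1} = (1 - sqrt(1 - k_n^2)) / (1 + sqrt(1 - k_n^2)),
#    and (4 / k_{n+1}) ** (2 ** -n) converges to e**pi.
k = 1 / math.sqrt(2)
for n in range(4):          # k_n underflows catastrophically beyond this in doubles
    k = (1 - math.sqrt(1 - k * k)) / (1 + math.sqrt(1 - k * k))
    print(n, (4 / k) ** (2.0 ** -n))

# 2. Summing the volumes of even-dimensional unit balls:
#    V_{2k}(1) = pi**k / k!, so the sum over k >= 0 equals e**pi.
total = sum(math.pi ** i / math.factorial(i) for i in range(60))
print(total, math.exp(math.pi))   # both approximately 23.140692632...
```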