https://en.wikipedia.org/wiki/Singleton%20bound
In coding theory, the Singleton bound, named after Richard Collom Singleton, is a relatively crude upper bound on the size of an arbitrary block code C with block length n, size M and minimum distance d. It is also known as the Joshi bound, proved by Joshi (1958) and even earlier by Komamiya (1953). Statement of the bound The minimum distance of a set C of codewords of length n is defined as d = min{d(x, y) : x, y ∈ C, x ≠ y}, where d(x, y) is the Hamming distance between x and y. The expression A_q(n, d) represents the maximum number of possible codewords in a q-ary block code of length n and minimum distance d. Then the Singleton bound states that A_q(n, d) ≤ q^(n−d+1). Proof First observe that the number of q-ary words of length n is q^n, since each letter in such a word may take one of q different values, independently of the remaining letters. Now let C be an arbitrary q-ary block code of minimum distance d. Clearly, all codewords are distinct. If we puncture the code by deleting the first d − 1 letters of each codeword, then all resulting codewords must still be pairwise different, since all of the original codewords in C have Hamming distance at least d from each other. Thus the size of the altered code is the same as the original code. The newly obtained codewords each have length n − (d − 1) = n − d + 1, and thus there can be at most q^(n−d+1) of them. Since C was arbitrary, this bound must hold for the largest possible code with these parameters, thus A_q(n, d) ≤ q^(n−d+1). Linear codes If C is a linear [n, k, d] code with block length n, dimension k and minimum distance d over the finite field with q elements, then the maximum number of codewords is q^k and the Singleton bound implies q^k ≤ q^(n−d+1), so that k ≤ n − d + 1, which is usually written as d ≤ n − k + 1. In the linear code case a different proof of the Singleton bound can be obtained by observing that the rank of the parity check matrix is n − k. Another simple proof follows from observing that the rows of any generator matrix in standard form have weight at most n − k + 1. History The usual citation given for this result is Singleton (1964), but it was proven earlier by Joshi (1958). Joshi notes that the result was obtained earlier by Komamiya (1953) using a more complex proof.
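Restated as display formulas in LaTeX notation (a compact summary of the bound as reconstructed above; A_q(n,d), n, k, d and q are the standard symbols used in the article, not new assumptions):

    A_q(n,d) \le q^{\,n-d+1}

and, for a linear [n,k,d]_q code,

    k \le n-d+1 \quad\Longleftrightarrow\quad d \le n-k+1.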
https://en.wikipedia.org/wiki/Cut%20and%20fill
In earthmoving, cut and fill is the process of constructing a railway, road or canal whereby the amount of material from cuts roughly matches the amount of fill needed to make nearby embankments to minimize the amount of construction labor. Overview Cut sections of roadway or rail are areas where the roadway has a lower elevation than the surrounding terrain. Fill sections are elevated sections of a roadway or trackbed. Cut and fill takes material from cut excavations and uses this to make fill sections. It costs resources to excavate material, relocate it, and to compact and otherwise prepare the filled sections. The technique aims to minimise the effort of relocating excavated material while also taking into account other constraints such as maintaining a specified grade over the route. Other considerations In addition to minimising construction cost, other factors influence the placement of cut or filled sections. For example, air pollutants can concentrate in the valleys created by the cut section. Conversely, noise pollution is mitigated by cut sections since an effective blockage of line-of-sight sound propagation is created by the depressed roadway design. The environmental effects of fill sections are typically favorable with respect to air pollution dispersal, but in respect to sound propagation, exposure of nearby residents is generally increased, since sound walls and other forms of sound path blockage are less effective in this geometry. The reasons for creating fills include the reduction of grade along a route or the elevation of the route above water, swampy ground, or areas where snow drifts frequently collect. Fills can also be used to cover tree stumps, rocks, or unstable soil, in which case material with a higher bearing capacity is placed on top of the obstacle in order to carry the weight of the roadway or railway and reduce differential settlement. History The practice of cut-and-fill was widely utilized to construct tracks along rollin
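A common way to make the cut/fill balance concrete is a simple grid estimate: sample existing and design elevations on a grid and sum the positive and negative differences. The Python sketch below illustrates this idea; the grid method itself, the function name and all elevation and cell-size values are illustrative assumptions, not details taken from the article.

import math  # not strictly needed here, kept for clarity if grades are computed

# Illustrative grid-method estimate of cut and fill volumes.
# Existing and design elevations are sampled on a square grid;
# cells where the design surface is below existing ground are "cut",
# cells where it is above are "fill".

def cut_and_fill_volumes(existing, design, cell_area):
    """Return (cut_volume, fill_volume) for two equally sized
    grids of spot elevations, in consistent units."""
    cut = fill = 0.0
    for row_e, row_d in zip(existing, design):
        for ze, zd in zip(row_e, row_d):
            depth = ze - zd          # positive: material to excavate
            if depth > 0:
                cut += depth * cell_area
            else:
                fill += -depth * cell_area
    return cut, fill

existing = [[10.2, 10.0, 9.8],
            [10.1, 9.9, 9.7],
            [10.0, 9.8, 9.6]]
design = [[9.9] * 3] * 3             # flat design grade at elevation 9.9
cut, fill = cut_and_fill_volumes(existing, design, cell_area=25.0)
print(f"cut {cut:.1f} m3, fill {fill:.1f} m3")

In this example the cut and fill volumes come out equal, which is the balanced situation the technique aims for.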
https://en.wikipedia.org/wiki/Reconstruction%20filter
In a mixed-signal system (analog and digital), a reconstruction filter, sometimes called an anti-imaging filter, is used to construct a smooth analog signal from a digital input, as in the case of a digital to analog converter (DAC) or other sampled data output device. Sampled data reconstruction filters The sampling theorem describes why the input of an ADC requires a low-pass analog electronic filter, called the anti-aliasing filter: the sampled input signal must be bandlimited to prevent aliasing (here meaning waves of higher frequency being recorded as a lower frequency). For the same reason, the output of a DAC requires a low-pass analog filter, called a reconstruction filter, because the output signal must be bandlimited, to prevent imaging (meaning Fourier coefficients being reconstructed as spurious high-frequency 'mirrors'). This is an implementation of the Whittaker–Shannon interpolation formula. Ideally, both filters should be brickwall filters, with constant phase delay in the pass-band, constant flat frequency response, and zero response from the Nyquist frequency onward. This can be achieved by a filter with a 'sinc' impulse response. Implementation While in theory a DAC outputs a series of discrete Dirac impulses, in practice, a real DAC outputs pulses with finite bandwidth and width. Both idealized Dirac pulses, zero-order held steps and other output pulses, if unfiltered, would contain spurious high-frequency replicas, or "images", of the original bandlimited signal. Thus, the reconstruction filter smooths the waveform to remove image frequencies (copies) above the Nyquist limit. In doing so, it reconstructs the continuous time signal (whether originally sampled, or modelled by digital logic) corresponding to the digital time sequence. Practical filters have non-flat frequency or phase response in the pass band and incomplete suppression of the signal elsewhere. The ideal sinc waveform has an infinite response to a signal, in both the positive and ne
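As a small numerical illustration of the ideal (Whittaker–Shannon) reconstruction described above, the following Python/NumPy sketch interpolates a sampled tone with shifted sinc kernels; the sample rate, tone frequency and variable names are illustrative assumptions rather than anything specified in the article.

import numpy as np

# Whittaker-Shannon (sinc) reconstruction of a bandlimited signal
# from its samples: the ideal "brick-wall" reconstruction filter.

fs = 8.0                                 # sample rate (Hz), assumed for illustration
n = np.arange(16)                        # sample indices
x = np.sin(2 * np.pi * 1.0 * n / fs)     # 1 Hz tone, well below Nyquist (4 Hz)

t = np.linspace(0, (len(n) - 1) / fs, 500)   # dense output time axis
# Each sample contributes a shifted sinc kernel; summing them interpolates.
y = np.sum(x[:, None] * np.sinc(fs * t[None, :] - n[:, None]), axis=0)

# y now approximates the original continuous-time 1 Hz sine between samples
# (edge effects remain because the sinc kernel is truncated to 16 samples).
print(y[:5])

A practical reconstruction filter can only approximate this behaviour with a finite, causal response, which is why real converters show some passband droop and incomplete image suppression.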
https://en.wikipedia.org/wiki/Software%20industry
The software industry includes businesses for development, maintenance and publication of software that use different business models, mainly either "license/maintenance based" (on-premises) or "Cloud based" (such as SaaS, PaaS, IaaS, MBaaS, MSaaS, DCaaS etc.). The industry also includes software services, such as training, documentation, consulting and data recovery. The software and computer services industry spends more than 11% of its net sales on Research & Development, which is, in comparison with other industries, the second highest share after pharmaceuticals & biotechnology. History The first company founded to provide software products and services was Computer Usage Company in 1955. Before that time, computers were programmed either by customers, or the few commercial computer vendors of the time, such as Sperry Rand and IBM. The software industry expanded in the early 1960s, almost immediately after computers were first sold in mass-produced quantities. Universities, government, and business customers created a demand for software. Many of these programs were written in-house by full-time staff programmers. Some were distributed freely between users of a particular machine for no charge. Others were done on a commercial basis, and other firms such as Computer Sciences Corporation (founded in 1959) started to grow. Other influential or typical software companies begun in the early 1960s included Advanced Computer Techniques, Automatic Data Processing, Applied Data Research, and Informatics General. The computer/hardware makers started bundling operating systems, systems software and programming environments with their machines. When Digital Equipment Corporation (DEC) brought a relatively low-priced minicomputer to market, it brought computing within the reach of many more companies and universities worldwide, and it spawned great innovation in terms of new, powerful programming languages and methodologies. New software was built for microcomputer
https://en.wikipedia.org/wiki/Elliptic%20filter
An elliptic filter (also known as a Cauer filter, named after Wilhelm Cauer, or as a Zolotarev filter, after Yegor Zolotarev) is a signal processing filter with equalized ripple (equiripple) behavior in both the passband and the stopband. The amount of ripple in each band is independently adjustable, and no other filter of equal order can have a faster transition in gain between the passband and the stopband, for the given values of ripple (whether the ripple is equalized or not). Alternatively, one may give up the ability to adjust independently the passband and stopband ripple, and instead design a filter which is maximally insensitive to component variations. As the ripple in the stopband approaches zero, the filter becomes a type I Chebyshev filter. As the ripple in the passband approaches zero, the filter becomes a type II Chebyshev filter and finally, as both ripple values approach zero, the filter becomes a Butterworth filter. The gain of a lowpass elliptic filter as a function of angular frequency ω is given by: G_n(ω) = 1/√(1 + ε² R_n²(ξ, ω/ω₀)), where R_n is the nth-order elliptic rational function (sometimes known as a Chebyshev rational function), ω₀ is the cutoff frequency, ε is the ripple factor, and ξ is the selectivity factor. The value of the ripple factor specifies the passband ripple, while the combination of the ripple factor and the selectivity factor specify the stopband ripple. Properties In the passband, the elliptic rational function varies between zero and unity. The gain of the passband therefore will vary between 1 and 1/√(1 + ε²). In the stopband, the elliptic rational function varies between infinity and the discrimination factor L_n, which is defined as: L_n = R_n(ξ, ξ). The gain of the stopband therefore will vary between 0 and 1/√(1 + ε² L_n²). In the limit of ξ → ∞ the elliptic rational function becomes a Chebyshev polynomial, and therefore the filter becomes a Chebyshev type I filter, with ripple factor ε. Since the Butterworth filter is a limiting form of the Chebyshev filter, it follows that in the limit of ξ → ∞, ω₀ → 0 and ε → 0 such that ε R_n(ξ, 1/ω₀) = 1, the filter becomes a Butterworth filter.
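As a concrete illustration of these trade-offs, filter-design libraries let the passband ripple, stopband attenuation and order be specified directly. The Python sketch below uses SciPy's elliptic design routine; the particular order, ripple values and cutoff are arbitrary example choices, not values from the article.

import numpy as np
from scipy import signal

# Design a 4th-order lowpass elliptic (Cauer) filter:
# 1 dB of passband ripple, 40 dB of stopband attenuation,
# cutoff at 0.3 times the Nyquist frequency.
b, a = signal.ellip(N=4, rp=1, rs=40, Wn=0.3, btype='low')

w, h = signal.freqz(b, a, worN=2048)
gain_db = 20 * np.log10(np.maximum(np.abs(h), 1e-12))

# The response is equiripple in both bands: it oscillates between
# 0 and -1 dB in the passband and stays at or below -40 dB in the stopband.
print(f"max passband gain: {gain_db[w < 0.3 * np.pi].max():.2f} dB")
print(f"min passband gain: {gain_db[w < 0.3 * np.pi].min():.2f} dB")
print(f"max stopband gain: {gain_db[w > 0.45 * np.pi].max():.2f} dB")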
https://en.wikipedia.org/wiki/Address%20geocoding
Address geocoding, or simply geocoding, is the process of taking a text-based description of a location, such as an address or the name of a place, and returning geographic coordinates, frequently a latitude/longitude pair, to identify a location on the Earth's surface. Reverse geocoding, on the other hand, converts geographic coordinates to a description of a location, usually the name of a place or an addressable location. Geocoding relies on a computer representation of address points, the street / road network, together with postal and administrative boundaries. Geocode (verb): provide geographical coordinates corresponding to (a location). Geocode (noun): a code that represents a geographic entity (location or object). In general it is a human-readable and short identifier, like a nominal geocode such as ISO 3166-1 alpha-2, or a grid geocode such as a Geohash. Geocoder (noun): a piece of software or a (web) service that implements a geocoding process, i.e. a set of interrelated components in the form of operations, algorithms, and data sources that work together to produce a spatial representation for descriptive locational references. The geographic coordinates representing locations often vary greatly in positional accuracy. Examples include building centroids, land parcel centroids, interpolated locations based on thoroughfare ranges, street segment centroids, postal code centroids (e.g. ZIP codes, CEDEX), and administrative division centroids. History Geocoding – a subset of Geographic Information System (GIS) spatial analysis – has been a subject of interest since the early 1960s. 1960s In 1960, the first operational GIS – named the Canada Geographic Information System (CGIS) – was invented by Dr. Roger Tomlinson, who has since been acknowledged as the father of GIS. The CGIS was used to store and analyze data collected for the Canada Land Inventory, which mapped information about agriculture, wildlife, and forestry at a scale of 1:50,000, in order
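A minimal sketch of forward and reverse geocoding from a program, using the geopy client for the OpenStreetMap Nominatim service (the choice of library and service, the address string and the user-agent name are illustrative assumptions; any geocoder with a similar interface would do):

# Illustrative forward and reverse geocoding with the geopy library
# against the public OpenStreetMap Nominatim service (assumed choice;
# subject to the provider's usage policy).
from geopy.geocoders import Nominatim

geolocator = Nominatim(user_agent="geocoding-example")   # identify your application

# Forward geocoding: text description -> coordinates
location = geolocator.geocode("10 Downing Street, London")
if location is not None:
    print(location.latitude, location.longitude)

    # Reverse geocoding: coordinates -> addressable description
    address = geolocator.reverse((location.latitude, location.longitude))
    print(address.address)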
https://en.wikipedia.org/wiki/Next-generation%20network
The next-generation network (NGN) is a body of key architectural changes in telecommunication core and access networks. The general idea behind the NGN is that one network transports all information and services (voice, data, and all sorts of media such as video) by encapsulating these into IP packets, similar to those used on the Internet. NGNs are commonly built around the Internet Protocol, and therefore the term all IP is also sometimes used to describe the transformation of formerly telephone-centric networks toward NGN. NGN is a different concept from Future Internet, which is more focused on the evolution of Internet in terms of the variety and interactions of services offered. Introduction of NGN According to ITU-T, the definition is: A next-generation network (NGN) is a packet-based network which can provide services including Telecommunication Services and is able to make use of multiple broadband, quality of service-enabled transport technologies and in which service-related functions are independent from underlying transport-related technologies. It offers unrestricted access by users to different service providers. It supports generalized mobility which will allow consistent and ubiquitous provision of services to users. From a practical perspective, NGN involves three main architectural changes that need to be looked at separately: In the core network, NGN implies a consolidation of several (dedicated or overlay) transport networks each historically built for a different service into one core transport network (often based on IP and Ethernet). It implies, among other things, the migration of voice from a circuit-switched architecture (PSTN) to VoIP, and also migration of legacy services such as X.25 and Frame Relay (either commercial migration of the customer to a new service like IP VPN, or technical migration by emulation of the "legacy service" on the NGN). In the wired access network, NGN implies the migration from the dual system of legacy voice n
https://en.wikipedia.org/wiki/LOM%20port
The LOM port (Lights Out Management port) is a remote access facility on a Sun Microsystems server. When the main processor is switched off, or when it is impossible to telnet to the server, an operator would use a link to the LOM port to access the server. As long as the server has power, the LOM facility will work, regardless of whether or not the main processor is switched on. To use the LOM port, a rollover cable is connected to the LOM port, which is located at the back of the Sun server. The other end of the cable is connected to a terminal or a PC running a terminal emulator. The terminal or emulator must be set to a transmission rate of 9600 bits per second, and hardware flow control enabled. Implementations Specific implementations include: Advanced Lights Out Management (ALOM), Sun Microsystems-specific and comes standard on newer Sun servers (SunFire V125/V210/V215/V240/V245/V250/V440/T1000/T2000, Sun Netra 210/240/440). Integrated Lights Out Management (ILOM), Sun Microsystems's ALOM replacement on Sun x64 server SunFire X4100(M2)/X4200(M2)/X4600(M2)/X4140/X4240/X4440/X4150/X4250/X4450/X4170/X4270/X2250/X2270, Sun Blade 6000 Chassis Management Module/Blade Module(X6220/X6420/X6240/X6440/X6250/X6450/X6270/X6275), Sun CMT servers/blades (Sun T5120, T5220, T5240, T6340, T6320). Not to be confused with the similar-sounding HP Integrated Lights-Out management technology. Lomlite and Lomlite2 Single-chip implementations on the Netra T1 and possibly others. In the cases of the T1-200 and X1, the OpenBoot firmware implements lom@ and lom! commands allowing access to the registers representing temperature, voltage, etc. See also Out-of-band management Power distribution unit External links Netra-T1 AC200 LOM Usage Computer buses Out-of-band management Sun Microsystems hardware
https://en.wikipedia.org/wiki/Permissible%20stress%20design
Permissible stress design is a design philosophy used by mechanical engineers and civil engineers. The civil designer ensures that the stresses developed in a structure due to service loads do not exceed the elastic limit. Compliance with this limit is usually ensured by keeping stresses within allowable values through the use of factors of safety. In structural engineering, the permissible stress design approach has generally been replaced internationally by limit state design (also known as ultimate stress design, or in the USA, Load and Resistance Factor Design, LRFD), except for some isolated cases. In US structural engineering construction, allowable stress design (ASD) has not yet been completely superseded by limit state design except in the case of suspension bridges, which changed from allowable stress design to limit state design in the 1960s. Wood, steel, and other materials are still frequently designed using allowable stress design, although LRFD is probably more commonly taught in the USA university system. In mechanical engineering design such as the design of pressure equipment, the method uses the actual loads predicted to be experienced in practice to calculate stress and deflection. Such loads may include pressure thrusts and the weight of materials. The predicted stresses and deflections are compared with allowable values that have a "factor" against various failure mechanisms such as leakage, yield, ultimate load prior to plastic failure, buckling, brittle fracture, fatigue, and vibration/harmonic effects. However, the prediction of stresses almost always assumes that the material is linear elastic. The "factor" is sometimes called a factor of safety, although this is technically incorrect because the factor includes allowance for matters such as local stresses and manufacturing imperfections that are not specifically calculated; exceeding the allowable values is not considered to be good practice (i.e. is not "safe
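At its simplest, the permissible stress check reduces to comparing a computed service-load stress against a limiting stress divided by the factor. The Python sketch below shows that comparison; the member geometry, load, yield strength and factor are illustrative assumptions only.

# Minimal allowable-stress check: a member passes if the stress produced
# by the service loads does not exceed the material limit divided by the
# "factor" (often loosely called a factor of safety).  All numbers below
# are illustrative assumptions, not values from the article.

def allowable_stress_ok(service_stress, limit_stress, factor):
    """True if service_stress <= limit_stress / factor (consistent units)."""
    return service_stress <= limit_stress / factor

axial_force = 110e3          # N, predicted service load
area = 8e-4                  # m^2, cross-sectional area
service_stress = axial_force / area          # Pa

yield_strength = 250e6       # Pa (nominal mild steel)
print(allowable_stress_ok(service_stress, yield_strength, factor=1.67))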
https://en.wikipedia.org/wiki/PGLO
The pGLO plasmid is an engineered plasmid used in biotechnology as a vector for creating genetically modified organisms. The plasmid contains several reporter genes, most notably the green fluorescent protein (GFP) and the ampicillin resistance gene. GFP was isolated from the jellyfish Aequorea victoria. Because it shares a bidirectional promoter with a gene for metabolizing arabinose, the GFP gene is expressed in the presence of arabinose, which makes the transgenic organism express its fluorescence under UV light. GFP can be induced in bacteria containing the pGLO plasmid by growing them on +arabinose plates. pGLO is made by Bio-Rad Laboratories. Structure pGLO is made up of three genes that are joined together using recombinant DNA technology. They are as follows: Bla, which codes for the enzyme beta-lactamase, giving the transformed bacteria resistance to the beta-lactam family of antibiotics (such as those of the penicillin family) araC, a promoter region that regulates the expression of GFP (specifically, the GFP gene will be expressed only in the presence of arabinose) GFP, the green fluorescent protein, which gives a green glow if cells produce this type of protein Like most other circular plasmids, the pGLO plasmid contains an origin of replication (ori), which is a region of the plasmid where replication will originate. The pGLO plasmid was made famous by researchers in France who used it to produce a green fluorescent rabbit named Alba. Other features on pGLO, like most other plasmids, include a selectable marker and an MCS (multiple cloning site) located at the end of the GFP gene. The plasmid is 5371 base pairs long. In supercoiled form, it runs on an agarose gel in the 4200–4500 bp range. Discovery of GFP The GFP gene was first observed by Osamu Shimomura and his team in 1962 while studying the jellyfish Aequorea victoria that have a ring of blue light under their umbrella. Shimomura and his team isolated the protein aequorin from thousands of jellyfis
https://en.wikipedia.org/wiki/Geographic%20information%20system%20software
A GIS software program is a computer program to support the use of a geographic information system, providing the ability to create, store, manage, query, analyze, and visualize geographic data, that is, data representing phenomena for which location is important. The GIS software industry encompasses a broad range of commercial and open-source products that provide some or all of these capabilities within various information technology architectures. History The earliest geographic information systems, such as the Canadian Geographic Information System started in 1963, were bespoke programs developed specifically for a single installation (usually a government agency), based on custom-designed data models. During the 1950s and 1960s, academic researchers during the quantitative revolution of geography began writing computer programs to perform spatial analysis, especially at the University of Washington and the University of Michigan, but these were also custom programs that were rarely available to other potential users. Perhaps the first general-purpose software that provided a range of GIS functionality was the Synagraphic Mapping Package (SYMAP), developed by Howard T. Fisher and others at the nascent Harvard Laboratory for Computer Graphics and Spatial Analysis starting in 1965. While not a true full-range GIS program, it included some basic mapping and analysis functions, and was freely available to other users. Through the 1970s, the Harvard Lab continued to develop and publish other packages focused on automating specific operations, such as SYMVU (3-D surface visualization), CALFORM (choropleth maps), POLYVRT (topological vector data management), WHIRLPOOL (vector overlay), GRID and IMGRID (raster data management), and others. During the late 1970s, several of these modules were brought together into Odyssey, one of the first commercial complete GIS programs, released in 1980. During the late 1970s and early 1980s, GIS was emerging in many large gover
https://en.wikipedia.org/wiki/Multiple%20cloning%20site
A multiple cloning site (MCS), also called a polylinker, is a short segment of DNA which contains many (up to ~20) restriction sites - a standard feature of engineered plasmids. Restriction sites within an MCS are typically unique, occurring only once within a given plasmid. The purpose of an MCS in a plasmid is to allow a piece of DNA to be inserted into that region. An MCS is found in a variety of vectors, including cloning vectors to increase the number of copies of target DNA, and in expression vectors to create a protein product. In expression vectors, the MCS is located downstream of the promoter. Creating a multiple cloning site In some instances, a vector may not contain an MCS. Rather, an MCS can be added to a vector. The first step is designing complementary oligonucleotide sequences that contain restriction enzyme sites, along with additional bases on the ends that are complementary to the vector after digestion. The oligonucleotide sequences can then be annealed and ligated into the digested and purified vector. The digested vector is cut with a restriction enzyme that complements the oligonucleotide insert overhangs. After ligation, the vector is transformed into bacteria and the insert verified by sequencing. This method can also be used to add new restriction sites to a multiple cloning site. Uses Multiple cloning sites are a feature that allows for the insertion of foreign DNA without disrupting the rest of the plasmid, which makes them extremely useful in biotechnology, bioengineering, and molecular genetics. An MCS can aid in making transgenic organisms, more commonly known as genetically modified organisms (GMOs), using genetic engineering. To take advantage of the MCS in genetic engineering, a gene of interest has to be added to the vector during production when the MCS is cut open. After the MCS is cut and the gene of interest ligated in, the vector will include the gene of interest and can be amplified to increase gene copy number in a bacterium host. After the bacterium replicates, the
https://en.wikipedia.org/wiki/Endace
Endace Ltd is a privately owned network monitoring company, based in New Zealand and founded in 2001. It provides network visibility and network recording products to large organizations. The company was listed on the London Stock Exchange in 2005 and then delisted in 2013 when it was acquired by Emulex. In 2016 Endace was spun out of Emulex and is currently a private company. In October 2016, The Intercept revealed that some Endace clients were intelligence agencies, including the British GCHQ (known for conducting massive surveillance on network communications) and the Moroccan DGST, likewise known for mass surveillance of its citizens. Background and history Endace was founded after the DAG project at the School of Computing and Mathematical Sciences at the University of Waikato in New Zealand. The first cards designed at the university were intended to measure latency in ATM networks. In 2006, Endace transitioned from component manufacturer to appliance manufacturer to managed infrastructure provider. The company now sells network visibility fabrics, based on its range of network recorders, to large corporations and government agencies. Endace was the first New Zealand company to list on London's Alternative Investment Market when it floated in mid-June 2005 a move which was not without controversy. Poor share price performance in the early years and a seeming failure to attract a broad enough shareholder base lent weight to the criticism that Endace should have focused initially on developing its local profile (via NZX) rather than pushing for overseas investment (via London AIM). Endace is headquartered in Auckland, New Zealand, and has an R&D centre in Hamilton, New Zealand, and offices in Australia, United States and Great Britain. Key innovations of the DAG The DAG project grew from academic research at Waikato University. Having found that software measurements of ATM cells (or packets) were unsatisfactory, both for reasons of accuracy and lack of
https://en.wikipedia.org/wiki/Nernst%20effect
In physics and chemistry, the Nernst effect (also termed first Nernst–Ettingshausen effect, after Walther Nernst and Albert von Ettingshausen) is a thermoelectric (or thermomagnetic) phenomenon observed when a sample allowing electrical conduction is subjected to a magnetic field and a temperature gradient normal (perpendicular) to each other. An electric field will be induced normal to both. This effect is quantified by the Nernst coefficient |N|, which is defined to be |N| = E_y / (B_z ∂T/∂x), where E_y is the y-component of the electric field that results from the magnetic field's z-component B_z and the x-component ∂T/∂x of the temperature gradient. The reverse process is known as the Ettingshausen effect and also as the second Nernst–Ettingshausen effect. Physical picture Mobile energy carriers (for example conduction-band electrons in a semiconductor) will move along temperature gradients due to statistics and the relationship between temperature and kinetic energy. If there is a magnetic field transversal to the temperature gradient and the carriers are electrically charged, they experience a force perpendicular to their direction of motion (also the direction of the temperature gradient) and to the magnetic field. Thus, a perpendicular electric field is induced. Sample types Semiconductors exhibit the Nernst effect. This has been studied in the 1950s by Krylova, Mochan and many others. In metals however, it is almost non-existent. It appears in the vortex phase of type-II superconductors due to vortex motion. This has been studied by Huebener et al. High-temperature superconductors exhibit the Nernst effect both in the superconducting and in the pseudogap phase, as was first found by Xu et al. Heavy-fermion superconductors can show a strong Nernst signal which is likely not due to the vortices, as was found by Bel et al. See also Spin Nernst effect Seebeck effect Peltier effect Hall effect Righi–Leduc effect Journal articles Walther Nernst Electrodynamics Thermoelectr
https://en.wikipedia.org/wiki/Kenbak-1
The Kenbak-1 is considered by the Computer History Museum, the Computer Museum of America and the American Computer Museum to be the world's first "personal computer", invented by John Blankenbaker (born 1929) of Kenbak Corporation in 1970 and first sold in early 1971. Fewer than 50 machines were ever built, using Bud Industries enclosures as a housing. The system first sold for US$750. Today, only 14 machines are known to exist worldwide, in the hands of various collectors and museums. Production of the Kenbak-1 stopped in 1973, as Kenbak failed and was taken over by CTI Education Products, Inc. CTI rebranded the inventory and renamed it the 5050, though sales remained elusive. Since the Kenbak-1 was invented before the first microprocessor, the machine didn't have a one-chip CPU but was instead based purely on small-scale integration TTL chips. The 8-bit machine offered 256 bytes of memory, implemented on Intel's type 1404A silicon gate MOS shift registers. The clock signal period was 1 microsecond (equivalent to a clock speed of 1 MHz), but the program speed averaged below 1,000 instructions per second due to the many clock cycles needed for each operation and slow access to serial memory. The machine was programmed in pure machine code using an array of buttons and switches. Output consisted of a row of lights. Internally, the Kenbak-1 has a serial computer architecture, processing one bit at a time. Technical description Registers The Kenbak-1 has a total of nine registers. All are memory mapped. It has three general-purpose registers: A, B and X. Register A is the implicit destination of some operations. Register X is also known as the index register and turns the direct and indirect modes into indexed direct and indexed indirect modes. It also has a program counter, called Register P, three "overflow and carry" registers for A, B and X, respectively, as well as an Input Register and an Output Register. Addressing modes Add, Subtract, Load, Store, Load Com
https://en.wikipedia.org/wiki/Digital%20loop%20carrier
A digital loop carrier (DLC) is a system which uses digital transmission to extend the range of the local loop farther than would be possible using only twisted pair copper wires. A DLC digitizes and multiplexes the individual signals carried by the local loops onto a single datastream on the DLC segment. Reasons for using DLCs Subscriber Loop Carrier systems address a number of problems: Electrical constraints on long loops. Insufficient available cable pairs. Cable route congestion (inability to add cable due to lack of space, particularly in urban street, bridge, and building conduit) Construction challenges (in areas of difficult terrain) when limited cable pairs are already available Expense due to cable cost and the associated labour-intensive installation work (especially to solve the specific problems listed above) Long loops, such as those terminating at more than 18,000 feet (5.49 kilometres) from the central office, pose electrical challenges. When the subscriber goes off-hook, a cable pair behaves like a single loop inductance coil with a -48 V dc potential and an electric current of between 20 and 50 mA dc. Current values vary with cable length and gauge. A minimum current of around 20 mA dc is required to convey terminal signalling information to the network. There is also a minimum power level required to provide adequate volume for the voice signal. A variety of schemes were implemented before DLC technology to offset the impedance long loops offered to signalling and volume levels. They included the following: Use heavy-gauge conductors – Up to 19 gauge (approximately the gauge of pencil lead), which is costly and bulky. The heavy-gauge cables yielded far fewer pairs per cable and led to early congestion in cable routes, especially in bridge crossings and other areas of limited space. Increase battery voltage – This violation of operating standards could pose a safety hazard. Add amplifiers to power the voice signal on long loops. This requi
https://en.wikipedia.org/wiki/X%20Font%20Server
The X font server (xfs) provides a standard mechanism for an X server to communicate with a font renderer, frequently one running on a remote machine. It usually runs on TCP port 7100. Current status The use of server-side fonts is currently considered deprecated in favour of client-side fonts. Such fonts are rendered by the client, not by the server, with the support of the Xft2 or Cairo libraries and the XRender extension. For the few cases in which server-side fonts are still needed, the new servers have their own integrated font renderer, so that no external one is needed. Server-side fonts can now be configured in the X server configuration files. For example, FontPath entries in the Files section of the configuration file will set the server-side fonts for Xorg. No specification on client-side fonts is given in the core protocol. Future As of October 2006, the manpage for xfs on Debian states that: FUTURE DIRECTIONS Significant further development of xfs is unlikely. One of the original motivations behind xfs was the single-threaded nature of the X server — a user’s X session could seem to "freeze up" while the X server took a moment to rasterize a font. This problem with the X server (which remains single-threaded in all popular implementations to this day) has been mitigated on two fronts: machines have gotten much faster, and client-side font rendering (particularly via the Xft library) has become the norm in contemporary software. Deployment issues So the choice between local filesystem font access and xfs-based font access is purely a local deployment choice. Using xfs does not make much sense in a single-computer scenario. See also X Window System core protocol X logical font description References X Window System Servers (computing)
https://en.wikipedia.org/wiki/Hamming%20bound
In mathematics and computer science, in the field of coding theory, the Hamming bound is a limit on the parameters of an arbitrary block code: it is also known as the sphere-packing bound or the volume bound from an interpretation in terms of packing balls in the Hamming metric into the space of all possible words. It gives an important limitation on the efficiency with which any error-correcting code can utilize the space in which its code words are embedded. A code that attains the Hamming bound is said to be a perfect code. Background on error-correcting codes An original message and an encoded version are both composed in an alphabet of q letters. Each code word contains n letters. The original message (of length m) is shorter than n letters. The message is converted into an n-letter codeword by an encoding algorithm, transmitted over a noisy channel, and finally decoded by the receiver. The decoding process interprets a garbled codeword, referred to as simply a word, as the valid codeword "nearest" the n-letter received string. Mathematically, there are exactly q^m possible messages of length m, and each message can be regarded as a vector of length m. The encoding scheme converts an m-dimensional vector into an n-dimensional vector. Exactly q^m valid codewords are possible, but any one of q^n words can be received because the noisy channel might distort one or more of the n letters when a codeword is transmitted. Statement of the bound Preliminary definitions An alphabet set A_q is a set of q symbols. The set of strings of length n on the alphabet set A_q is denoted A_q^n. (There are q^n distinct strings in this set of strings.) A q-ary block code of length n is a subset of the strings of A_q^n, where the alphabet set A_q is any alphabet set having q elements. Defining the bound Let A_q(n,d) denote the maximum possible size of a q-ary block code C of length n and minimum Hamming distance d between elements of the block code (necessarily positive when the code has more than one codeword). Then, the Hamming bound is A_q(n,d) ≤ q^n / Σ_{k=0}^{t} C(n,k)(q−1)^k, where t = ⌊(d−1)/2⌋ is the number of errors the code can correct.
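Numerically, the bound is easy to evaluate. The Python sketch below computes the sphere-packing bound for given (q, n, d) and checks it against the classic binary [7,4,3] Hamming code, a standard example of a perfect code (the function name and example parameters are illustrative, not taken from the article):

from math import comb

def hamming_bound(q, n, d):
    """Sphere-packing upper bound on A_q(n, d)."""
    t = (d - 1) // 2                      # number of correctable errors
    ball = sum(comb(n, k) * (q - 1) ** k for k in range(t + 1))
    return q ** n // ball                 # floor of q^n / |ball|

# The binary [7,4,3] Hamming code has 2^4 = 16 codewords and meets the
# bound exactly, i.e. it is a perfect code.
print(hamming_bound(2, 7, 3))             # -> 16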
https://en.wikipedia.org/wiki/Java%20Anon%20Proxy
Java Anon Proxy (JAP), also known as JonDonym, was a proxy system designed to allow browsing the Web with revocable pseudonymity. It was originally developed as part of a project of the Technische Universität Dresden, the Universität Regensburg and the Privacy Commissioner of the state of Schleswig-Holstein. The client software is written in the Java programming language. The service has been closed since August 2021. Cross-platform and free, it sends requests through a Mix Cascade and mixes the data streams of multiple users in order to further obfuscate the data to outsiders. JonDonym is available for all platforms that support Java. Furthermore, ANONdroid is a JonDonym proxy client for Android. Design The JonDonym client program allows the user to choose among several Mix Cascades (i.e. a group of anonymization proxies) offered by independent organisations. Users may choose by themselves which of these operators they will trust, and which they won't. This is different from peer-to-peer based anonymity networks like Tor and I2P, whose anonymization proxies are anonymous themselves, which means the users have to rely on unknown proxy operators. However, it means that all the relays used for JonDonym-mediated connections are known and identified, and therefore potentially targeted very easily by hackers, governmental agencies or lobbying groups. This has, for example, led to the issues mentioned below, where court orders essentially gave all control over the whole system to the German government. As discussed below, solutions like international distribution of the relays and the additional use of Tor can somewhat mitigate this loss of independence. The speed and availability of the service depends on the operators of the Mixes in the cascades, and therefore varies. More users on a cascade improve anonymity, but a large number of users might diminish the speed and bandwidth available for a single user. Cost, name change and commercial service Use of JonDonym has been
https://en.wikipedia.org/wiki/Peter%20Barham
Peter Barham (born 1950) is emeritus professor of physics at the University of Bristol. He was visiting professor of Molecular Gastronomy at the University of Copenhagen, Denmark. Early life Peter Barham was born in 1950. He received his BSc from the University of Warwick, and his MSc and PhD from the University of Bristol. Career Peter Barham's research at the University of Bristol is concerned with polymer physics. He found ways to connect his research with his love of penguins, including the creation of silicone-based flipper bands which can be used for monitoring penguin populations. The silicone bands are designed to minimize the potential impact of carrying an external marking device and are currently in use on African penguins (Spheniscus demersus) at Bristol Zoo, UK and in the wild in South Africa. More recently, together with colleagues in the Computer Science Department at the University of Bristol, he has developed a computer vision system for the automatic recognition of African penguins. As of 2008, this system was undergoing trials in South Africa. Barham has contributed to the development of the new science of molecular gastronomy and has authored the book The Science of Cooking. He has collaborated with a number of chefs including Heston Blumenthal, the chef/owner of The Fat Duck and also a proponent of molecular gastronomy. He is editor-in-chief of a new journal, Flavour, which covers the science of molecular gastronomy. In 1994 he appeared as the Scientific Cook in a regular feature on the Channel 4 food magazine series Food File, in which he explained some of the chemical mysteries that take place during the cooking process. Peter Barham contributes to the public understanding of science by giving public lectures on molecular gastronomy and penguin conservation biology. He has addressed audiences in both the UK and further afield. Titles of previous public lectures include "Ice cream delights", "Why do we like some foods and hate others?", "Kitchen
https://en.wikipedia.org/wiki/Flexi%20disc
The flexi disc (also known as a phonosheet, Sonosheet or Soundsheet, a trademark) is a phonograph record made of a thin, flexible vinyl sheet with a molded-in spiral stylus groove, and is designed to be playable on a normal phonograph turntable. Flexible records were commercially introduced as the Eva-tone Soundsheet in 1962. They were very popular among children and teenagers and were mass-produced by the state publisher in the Soviet Union. History Before the advent of the compact disc, flexi discs were sometimes used as a means to include sound with printed material such as magazines and music instruction books. A flexi disc could be moulded with speech or music and bound into the text with a perforated seam, at very little cost and without any requirement for a hard binding. One problem with using the thinner vinyl was that the stylus's weight, combined with the flexi disc's low mass, would sometimes cause the disc to stop spinning on the turntable and become held in place by the stylus. For this reason, most flexi discs had a spot on the face of the disc for a coin, or other small, flat, weighted object to increase the friction with the turntable surface and enforce consistent rotation. If the turntable's surface is not completely flat, it is recommended that the flexi disc be placed on top of a full-sized record. In Japan, starting in the early 1960s, Asahi Sonorama published the monthly Asahi Sonorama magazine which included an inserted flexi disc ("Sonosheet"). Every year between 1963 and 1969, The Beatles made a special Christmas recording which was made into a flexi disc and sent to members of their fan club. While the earlier discs largely contained 'thank you' messages to their fans, the later Christmas flexis were used as an outlet for the Beatles to explore more experimental areas; the 1967 disc, for example, became a pastiche of a BBC Radio show and even included a specially recorded song entitled "Christmas Time (Is Here Again)." In 1964, the Na
https://en.wikipedia.org/wiki/Graphophone
The Graphophone was the name and trademark of an improved version of the phonograph. It was invented at the Volta Laboratory established by Alexander Graham Bell in Washington, D.C., United States. Its trademark usage was acquired successively by the Volta Graphophone Company, the American Graphophone Company, the North American Phonograph Company, and finally by the Columbia Phonograph Company (known today as Columbia Records), all of which either produced or sold Graphophones. Research and development It took five years of research under the directorship of Benjamin Hulme, Harvey Christmas, Charles Sumner Tainter and Chichester Bell at the Volta Laboratory to develop and distinguish their machine from Thomas Edison's Phonograph. Among their innovations, the researchers experimented with lateral recording techniques as early as 1881. Contrary to the vertically-cut grooves of Edison Phonographs, the lateral recording method used a cutting stylus that moved from side to side in a "zig zag" pattern across the record. While cylinder phonographs never employed the lateral cutting process commercially, this later became the primary method of phonograph disc recording. Bell and Tainter also developed wax-coated cardboard cylinders for their record cylinder. Edison's grooved mandrel covered with a removable sheet of tinfoil (the actual recording medium) was prone to damage during installation or removal. Tainter received a separate patent for a tube assembly machine to automatically produce the coiled cardboard tube cores of the wax cylinder records. The shift from tinfoil to wax resulted in increased sound fidelity and record longevity. Besides being far easier to handle, the wax recording medium also allowed for lengthier recordings and created superior playback quality. Additionally the Graphophones initially deployed foot treadles to rotate the recordings, then wind-up clockwork drive mechanisms, and finally migrated to electric motors, instead of the manual cr
https://en.wikipedia.org/wiki/Phantasmagoria
Phantasmagoria, alternatively fantasmagorie and/or fantasmagoria, was a form of horror theatre that (among other techniques) used one or more magic lanterns to project frightening images, such as skeletons, demons, and ghosts, onto walls, smoke, or semi-transparent screens, typically using rear projection to keep the lantern out of sight. Mobile or portable projectors were used, allowing the projected image to move and change size on the screen, and multiple projecting devices allowed for quick switching of different images. In many shows, the use of spooky decoration, total darkness, (auto-)suggestive verbal presentation, and sound effects were also key elements. Some shows added a variety of sensory stimulation, including smells and electric shocks. Such elements as required fasting, fatigue (late shows), and drugs have been mentioned as methods of making sure spectators would be more convinced of what they saw. The shows started under the guise of actual séances in Germany in the late 18th century and gained popularity through most of Europe (including Britain) throughout the 19th century. The word "phantasmagoria" has also been commonly used to indicate changing successions or combinations of fantastic, bizarre, or imagined imagery. Etymology From French phantasmagorie, from Ancient Greek φάντασμα (phántasma, “ghost”) + possibly either αγορά (agorá, “assembly, gathering”) + the suffix -ia, or ἀγορεύω (agoreúō, “to speak publicly”). Paul Philidor (also known simply as "Phylidor") announced his show of ghost apparitions and evocation of the shadows of famous people as Phantasmagorie in the Parisian periodical Affiches, annonces et avis divers of December 16, 1792. About two weeks earlier the term had been the title of a letter by a certain "A.L.M.", published in Magazin Encyclopédique. The letter also promoted Phylidor's show. Phylidor had previously advertised his show as Phantasmorasi in Vienna in March 1790. The English variation Phantasmagoria was introd
https://en.wikipedia.org/wiki/Opte%20Project
The Opte Project, created in 2003 by Barrett Lyon, seeks to generate an accurate representation of the breadth of the Internet using visual graphics. Lyon believes that his network mapping can help teach students more about the Internet while also acting as a gauge illustrating both overall Internet growth and the specific areas where that growth occurs. It was not the first such project; others predated it, such as the Bell Labs Internet Mapping Project. Lyon has been generating image maps using traceroute, and later switched to mapping using BGP routes. The generated images were published on the Opte Project website. In 2021, Lyon created different video animations, using his mapping technique: shedding light on internet growth between 1997 and 2021, the Iranian internet shutdown of 2019, the United States Department of Defense's place on the internet as well as the few entry points into the Chinese internet. The project has gathered notice worldwide having been featured by Time, Cornell University, New Scientist, and Kaspersky Lab. In addition, Opte Project maps have found homes in at least two art galleries and exhibits such as The Museum of Modern Art and the Museum of Science's Mapping the World Around Us permanent exhibit. Opte images are licensed under a Creative Commons license and while use of The Opte Image is free for all non-commercial applications, a license fee is required for all others. References External links A "snapshot" version (courtesy of the "Wayback Machine") Internet architecture Visualization (web)
https://en.wikipedia.org/wiki/Backchannel
Backchannel is the use of networked computers to maintain a real-time online conversation alongside the primary group activity or live spoken remarks. The term was coined from the linguistics term to describe listeners' behaviours during verbal communication. The term "backchannel" generally refers to online conversation about the conference topic or speaker. Occasionally backchannel provides audience members a chance to fact-check the presentation. First growing in popularity at technology conferences, backchannel is increasingly a factor in education, where WiFi connections and laptop computers allow participants to use ordinary chat tools like IRC or AIM to actively communicate during a presentation. More recent research includes works where the backchannel is made publicly visible, such as the ClassCommons, backchan.nl and Fragmented Social Mirror. Twitter is also widely used today by audiences to create backchannels during broadcasting of content or at conferences, for example during television dramas, other forms of entertainment and magazine programs. This practice is often also called live tweeting. Many conferences nowadays also have a hashtag that can be used by the participants to share notes and experiences; furthermore such hashtags can be user generated. History Victor Yngve first used the phrase "back channel" in 1970 in a linguistic meaning, in the following passage: "In fact, both the person who has the turn and his partner are simultaneously engaged in both speaking and listening. This is because of the existence of what I call the back channel, over which the person who has the turn receives short messages such as 'yes' and 'uh-huh' without relinquishing the turn." Such systems were widely imagined and tested in the late 1990s and early 2000s. These cases include researchers' installations at conferences and in classroom settings. The first famous instance of backchannel communications influencing a talk occurred on March 26, 2002, at the PC Forum conference,
https://en.wikipedia.org/wiki/Historical%20ecology
Historical ecology is a research program that focuses on the interactions between humans and their environment over long-term periods of time, typically over the course of centuries. In order to carry out this work, historical ecologists synthesize long-series data collected by practitioners in diverse fields. Rather than concentrating on one specific event, historical ecology aims to study and understand this interaction across both time and space in order to gain a full understanding of its cumulative effects. Through this interplay, humans adapt to and shape the environment, continuously contributing to landscape transformation. Historical ecologists recognize that humans have had world-wide influences, that they impact landscapes in dissimilar ways which increase or decrease species diversity, and that a holistic perspective is critical to understanding that system. Piecing together landscapes requires a sometimes difficult union between natural and social sciences, close attention to geographic and temporal scales, a knowledge of the range of human ecological complexity, and the presentation of findings in a way that is useful to researchers in many fields. Those tasks require theory and methods drawn from geography, biology, ecology, history, sociology, anthropology, and other disciplines. Common methods include historical research, climatological reconstructions, plant and animal surveys, archaeological excavations, ethnographic interviews, and landscape reconstructions. History The discipline has several sites of origin, developed by researchers who shared a common interest in the problem of ecology and history but took a diversity of approaches. Edward Smith Deevey, Jr. used the term in the 1960s to describe a methodology that had been in long development. Deevey wished to bring together the practices of "general ecology", which was studied in an experimental laboratory, with a "historical ecology" which relied on evidence collected through fieldwork. For example, D
https://en.wikipedia.org/wiki/FireHOL
FireHOL is a shell script designed as a wrapper for iptables, written to ease the customization of the Linux kernel's firewall, netfilter. FireHOL is free software and open-source, distributed under the terms of the GNU General Public License. FireHOL does not have a graphical user interface, but is configured through an easy-to-understand plain-text configuration file. FireHOL first parses the configuration file and then sets the appropriate iptables rules to achieve the expected firewall behavior. It is a large, complex BASH script file, depending on the iptables console tools rather than communicating with the kernel directly. Any Linux system with iptables, BASH, and the appropriate tools can run it. Its main drawback is slower starting times, particularly on older systems. FireHOL's configuration files are fully functional BASH scripts in and of themselves. External links Firewall software Free security software
https://en.wikipedia.org/wiki/External%20memory%20algorithm
In computing, external memory algorithms or out-of-core algorithms are algorithms that are designed to process data that are too large to fit into a computer's main memory at once. Such algorithms must be optimized to efficiently fetch and access data stored in slow bulk memory (auxiliary memory) such as hard drives or tape drives, or when memory is on a computer network. External memory algorithms are analyzed in the external memory model. Model External memory algorithms are analyzed in an idealized model of computation called the external memory model (or I/O model, or disk access model). The external memory model is an abstract machine similar to the RAM machine model, but with a cache in addition to main memory. The model captures the fact that read and write operations are much faster in a cache than in main memory, and that reading long contiguous blocks is faster than reading randomly using a disk read-and-write head. The running time of an algorithm in the external memory model is defined by the number of reads and writes to memory required. The model was introduced by Alok Aggarwal and Jeffrey Vitter in 1988. The external memory model is related to the cache-oblivious model, but algorithms in the external memory model may know both the block size and the cache size. For this reason, the model is sometimes referred to as the cache-aware model. The model consists of a processor with an internal memory or cache of size , connected to an unbounded external memory. Both the internal and external memory are divided into blocks of size . One input/output or memory transfer operation consists of moving a block of contiguous elements from external to internal memory, and the running time of an algorithm is determined by the number of these input/output operations. Algorithms Algorithms in the external memory model take advantage of the fact that retrieving one object from external memory retrieves an entire block of size . This property is sometimes referred
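To make the block-transfer accounting concrete, the toy Python sketch below charges one I/O per block of B contiguous items read from an "external" array, so a simple scan of N items costs about N/B transfers; the class and function names, block size and data are illustrative assumptions rather than anything from the model's original formulation.

import math

# Toy illustration of the external-memory (I/O) model: an algorithm is
# charged one unit per block of B contiguous items moved between external
# and internal memory, regardless of how much CPU work it does on them.

class ExternalArray:
    """External memory holding `data`; reads happen a block at a time."""
    def __init__(self, data, block_size):
        self.data = list(data)
        self.B = block_size
        self.io_count = 0                 # number of block transfers so far

    def read_block(self, block_index):
        self.io_count += 1
        start = block_index * self.B
        return self.data[start:start + self.B]

def external_sum(ext):
    """Scan: touches each block once, so it costs ceil(N / B) I/Os."""
    total = 0
    n_blocks = math.ceil(len(ext.data) / ext.B)
    for i in range(n_blocks):
        total += sum(ext.read_block(i))
    return total

ext = ExternalArray(range(1000), block_size=64)
print(external_sum(ext), "computed with", ext.io_count, "block I/Os")
# ceil(1000 / 64) = 16 block transfers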
https://en.wikipedia.org/wiki/Grating%20light%20valve
The "'grating light valve'" ("'GLV'") is a "micro projection" technology that operates using a dynamically adjustable diffraction grating. It competes with other light valve technologies such as Digital Light Processing (DLP) and liquid crystal on silicon (LCoS) for implementation in video projector devices such as rear-projection televisions. The use of microelectromechanical systems (MEMS) in optical applications, which is known as optical MEMS or micro-opto-electro-mechanical structures (MOEMS), has enabled the possibility to combine the mechanical, electrical, and optical components in tiny-scale. Silicon Light Machines (SLM), in Sunnyvale CA, markets and licenses GLV technology with the capitalised trademarks "'Grated Light Valve'" and GLV, previously Grating Light Valve. The valve diffracts laser light using an array of tiny movable ribbons mounted on a silicon base. The GLV uses six ribbons as each pixel's diffraction gratings. Electronic signals alter the alignment of the gratings, and this displacement controls the intensity of the diffracted light in a very smooth gradation. Brief history The light valve was initially developed at Stanford University, in California, by electrical engineering professor David M. Bloom, along with William C. Banyai, Raj Apte, Francisco Sandejas, and Olav Solgaard, professor in the Stanford Department of Electrical Engineering. In 1994, the start-up company Silicon Light Machines was founded by Bloom to develop and commercialize the technology. Cypress Semiconductor acquired Silicon Light Machines in 2000 and sold the company to Dainippon Screen. Before the acquisition by Dainippon Screen, several marketing articles were published in EETimes, EETimes China, EETimes Taiwan, Electronica Olgi, and Fibre Systems Europe, highlighting Cypress Semiconductor's new MEMS manufacturing capabilities. The company is now wholly owned by Dainippon Screen Manufacturing Co., Ltd. In July 2000, Sony announced the signing of a technology li
https://en.wikipedia.org/wiki/List%20of%20algebraic%20coding%20theory%20topics
This is a list of algebraic coding theory topics. Algebraic coding theory
https://en.wikipedia.org/wiki/Public%20goods%20game
The public goods game is a standard experiment in experimental economics. In the basic game, subjects secretly choose how many of their private tokens to put into a public pot. The tokens in this pot are multiplied by a factor (greater than one and less than the number of players, N) and this "public good" payoff is evenly divided among players. Each subject also keeps the tokens they do not contribute. Introduction Public goods games are fundamental in experimental economics. The experiment is designed to study incentives and the problem of free riding: it investigates the incentives of individuals who free-ride off those who contribute to the common pool. Drawing on behavioural economics, the game is used to understand the decisions of its players, and in particular the conditions under which they free-ride, which has far-reaching applications to environmental, managerial and social economics. Public goods games are valuable in understanding the role of incentives in individual behaviour and have broad applications to societal challenges, including environmental policy, legal and justice issues, and workplace and organisational structures. Results The group's total payoff is maximized when everyone contributes all of their tokens to the public pool. However, the Nash equilibrium in this game is simply zero contributions by all; if the experiment were a purely analytical exercise in game theory it would resolve to zero contributions because any rational agent does best contributing zero, regardless of whatever anyone else does. This only holds if the multiplication factor is less than the number of players; otherwise, the Nash equilibrium is for all players to contribute all of their tokens to the public pool. In fact, the Nash equilibrium is rarely seen in experiments; people do tend to add something into
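A minimal Python sketch of the payoff rule described above, using illustrative values (an endowment of 20 tokens, four players and a multiplication factor of 1.6) that are not taken from any particular experiment:

    def payoffs(contributions, endowment=20, multiplier=1.6):
        """Payoff of each player: kept tokens plus an equal share of the multiplied pot."""
        n = len(contributions)
        pot_share = multiplier * sum(contributions) / n
        return [endowment - c + pot_share for c in contributions]

    # Everyone free-rides (the Nash equilibrium when 1 < multiplier < n):
    print(payoffs([0, 0, 0, 0]))        # [20.0, 20.0, 20.0, 20.0]
    # Everyone contributes everything (the social optimum):
    print(payoffs([20, 20, 20, 20]))    # [32.0, 32.0, 32.0, 32.0]
    # A lone defector among contributors earns the most individually:
    print(payoffs([0, 20, 20, 20]))     # [44.0, 24.0, 24.0, 24.0]

The last call shows why free-riding is individually tempting: the lone non-contributor earns more than the contributors, even though universal contribution maximizes the group total.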
https://en.wikipedia.org/wiki/Gilbert%E2%80%93Varshamov%20bound
In coding theory, the Gilbert–Varshamov bound (due to Edgar Gilbert and independently Rom Varshamov) is a limit on the parameters of a (not necessarily linear) code. It is occasionally known as the Gilbert–Shannon–Varshamov bound (or the GSV bound), but the name "Gilbert–Varshamov bound" is by far the most popular. Varshamov proved this bound by using the probabilistic method for linear codes. For more about that proof, see Gilbert–Varshamov bound for linear codes. Statement of the bound Let denote the maximum possible size of a q-ary code with length n and minimum Hamming distance d (a q-ary code is a code over the field of q elements). Then: Proof Let be a code of length and minimum Hamming distance having maximal size: Then for all  , there exists at least one codeword such that the Hamming distance between and satisfies since otherwise we could add x to the code whilst maintaining the code's minimum Hamming distance – a contradiction on the maximality of . Hence the whole of is contained in the union of all balls of radius having their centre at some : Now each ball has size since we may allow (or choose) up to of the components of a codeword to deviate (from the value of the corresponding component of the ball's centre) to one of possible other values (recall: the code is q-ary: it takes values in ). Hence we deduce That is: An improvement in the prime power case For q a prime power, one can improve the bound to where k is the greatest integer for which See also Singleton bound Hamming bound Johnson bound Plotkin bound Griesmer bound Grey–Rankin bound Gilbert–Varshamov bound for linear codes Elias-Bassalygo bound References Coding theory Articles containing proofs
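In its usual form the bound reads A_q(n, d) ≥ q^n / Σ_{j=0}^{d−1} C(n, j)(q − 1)^j, i.e. the number of words divided by the volume of a Hamming ball of radius d − 1. A short Python sketch evaluating this lower bound (illustrative only):

    from math import comb

    def gv_lower_bound(q, n, d):
        """Gilbert-Varshamov bound: A_q(n, d) >= q^n / V_q(n, d-1)."""
        ball_volume = sum(comb(n, j) * (q - 1) ** j for j in range(d))
        return -(-q ** n // ball_volume)   # ceiling, since A_q(n, d) is an integer

    # Binary codes of length 10 and minimum distance 3 must have at least 19 codewords.
    print(gv_lower_bound(2, 10, 3))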
https://en.wikipedia.org/wiki/Littlewood%27s%20three%20principles%20of%20real%20analysis
Littlewood's three principles of real analysis are heuristics of J. E. Littlewood to help teach the essentials of measure theory in mathematical analysis. The principles Littlewood stated the principles in his 1944 Lectures on the Theory of Functions. Roughly, they say that every measurable set is nearly a finite union of intervals; every measurable function is nearly continuous; and every convergent sequence of measurable functions is nearly uniformly convergent. The first principle is based on the fact that the inner measure and outer measure are equal for measurable sets, the second is based on Lusin's theorem, and the third is based on Egorov's theorem. Example Littlewood's three principles are quoted in several real analysis texts, for example Royden, Bressoud, and Stein & Shakarchi. Royden gives the bounded convergence theorem as an application of the third principle. The theorem states that if a uniformly bounded sequence of functions converges pointwise, then their integrals on a set of finite measure converge to the integral of the limit function. If the convergence were uniform this would be a trivial result, and Littlewood's third principle tells us that the convergence is almost uniform, that is, uniform outside of a set of arbitrarily small measure. Because the sequence is bounded, the contribution to the integrals of the small set can be made arbitrarily small, and the integrals on the remainder converge because the functions are uniformly convergent there. Notes Real analysis Heuristics Measure theory Mathematical principles
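The way the third principle powers Royden's argument can be written as a single estimate. Assuming |f_n| ≤ M on a set E of finite measure, f_n → f pointwise, and (by Egorov's theorem) a measurable subset A of E with m(A) < ε on whose complement the convergence is uniform, then for all sufficiently large n

    \left|\int_E f_n - \int_E f\right| \le \int_{E\setminus A} |f_n - f| + \int_A |f_n - f| \le \varepsilon\, m(E) + 2M\, m(A) < \varepsilon\,\bigl(m(E) + 2M\bigr),

which can be made arbitrarily small, giving the bounded convergence theorem.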
https://en.wikipedia.org/wiki/IBRIX%20Fusion
IBRIX Fusion is a parallel file system combined with a logical volume manager, availability features and a management interface. The software was produced, sold, and supported by IBRIX Incorporated of Billerica, Massachusetts. HP announced on July 17, 2009 that it had reached a definitive agreement to acquire IBRIX. Subsequent to the acquisition, the software components of IBRIX have been combined with ProLiant servers to form the X9000 series of storage systems. The X9000 storage systems are designed to provide network-attached storage over both standard protocols (SMB, NFS, HTTP and NDMP) as well as a proprietary protocol. Architecturally, the file system is limited to 16 petabytes under a single namespace, and is based upon a design described in . It was used in the HPE StoreOnce (former D2D) products. See also List of file systems Distributed file system References External links Product information Network file systems
https://en.wikipedia.org/wiki/Algebraic%20geometry%20code
Algebraic geometry codes, often abbreviated AG codes, are a type of linear code that generalize Reed–Solomon codes. The Russian mathematician V. D. Goppa constructed these codes for the first time in 1982. History The name of these codes has evolved since the publication of Goppa's paper describing them. Historically these codes have also been referred to as geometric Goppa codes; however, this is no longer the standard term used in coding theory literature. This is due to the fact that Goppa codes are a distinct class of codes which were also constructed by Goppa in the early 1970s. These codes attracted interest in the coding theory community because they have the ability to surpass the Gilbert–Varshamov bound; at the time this was discovered, the Gilbert–Varshamov bound had not been broken in the 30 years since its discovery. This was demonstrated by Tsfasman, Vladut, and Zink in the same year as the code construction was published, in their paper "Modular curves, Shimura curves, and Goppa codes, better than Varshamov-Gilbert bound". The name of this paper may be one source of confusion affecting references to algebraic geometry codes throughout the 1980s and 1990s coding theory literature. Construction In this section the construction of algebraic geometry codes is described. The section starts with the ideas behind Reed–Solomon codes, which are used to motivate the construction of algebraic geometry codes. Reed–Solomon codes Algebraic geometry codes are a generalization of Reed–Solomon codes. Constructed by Irving Reed and Gustave Solomon in 1960, Reed–Solomon codes use univariate polynomials to form codewords, by evaluating polynomials of sufficiently small degree at the points in a finite field . Formally, Reed–Solomon codes are defined in the following way. Let . Set positive integers . Let The Reed–Solomon code is the evaluation code Codes from algebraic curves Goppa observed that can be considered as an affine line, with corresponding projective lin
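A toy Python illustration of the evaluation-code idea behind Reed–Solomon codes; for simplicity it works over the prime field GF(13) rather than the GF(2^m) fields normally used in practice, and all parameters are illustrative:

    # A toy Reed-Solomon-style evaluation code over the prime field GF(p).
    p = 13                       # field size (a prime, chosen for simple arithmetic)
    alphas = list(range(1, 11))  # n = 10 distinct evaluation points in GF(13)
    k = 4                        # polynomials of degree < k give a [10, 4] code

    def encode(message):
        """Message = k coefficients; codeword = evaluations at the alphas."""
        assert len(message) == k
        return [sum(c * pow(a, i, p) for i, c in enumerate(message)) % p
                for a in alphas]

    print(encode([5, 1, 0, 7]))
    # Two distinct polynomials of degree < k agree on at most k-1 points, so
    # distinct codewords differ in at least n - k + 1 = 7 positions.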
https://en.wikipedia.org/wiki/Cool%27n%27Quiet
AMD Cool'n'Quiet is a CPU dynamic frequency scaling and power saving technology introduced by AMD with its Athlon XP processor line. It works by reducing the processor's clock rate and voltage when the processor is idle. The aim of this technology is to reduce overall power consumption and lower heat generation, allowing for slower (thus quieter) cooling fan operation. The objectives of cooler and quieter result in the name Cool'n'Quiet. The technology is similar to Intel's SpeedStep and AMD's own PowerNow!, which were developed with the aim of increasing laptop battery life by reducing power consumption. Due to their different usage, Cool'n'Quiet refers to desktop and server chips, while PowerNow! is used for mobile chips; the technologies are similar but not identical. This technology was also introduced on "e-stepping" Opterons, however it is called Optimized Power Management, which is essentially a re-tooled Cool'n'Quiet scheme designed to work with registered memory. Cool'n'Quiet is fully supported in the Linux kernel from version 2.6.18 onward (using the powernow-k8 driver) and FreeBSD from 6.0-CURRENT onward. Implementation In-order to take advantage of Cool'n'Quiet Technology in Microsoft's Operating Systems: Cool'n'Quiet should be Enabled in system BIOS In Windows XP and 2000: Operating Systems "Minimal Power Management" profile must be active in "Power Schemes". A PPM driver was also released by AMD that facilitates this. In Windows Vista and 7: "Minimum processor state" found in "Processor Power Management" of "Advanced Power Settings" should be lower than "100%". Also In Windows Vista and 7 the "Power Saver" power profile allows much lower power state (frequency and voltage) than in the "High Performance" power state. Unlike Windows XP, Windows Vista only supports Cool'n'Quiet on motherboards that support ACPI 2.0 or later. With earlier versions of Windows, processor drivers along with Cool'n'Quiet software also need to be installed. The latest v
https://en.wikipedia.org/wiki/Same-origin%20policy
In computing, the same-origin policy (SOP) is an important concept in the web application security model. Under the policy, a web browser permits scripts contained in a first web page to access data in a second web page, but only if both web pages have the same origin. An origin is defined as a combination of URI scheme, host name, and port number. This policy prevents a malicious script on one page from obtaining access to sensitive data on another web page through that page's Document Object Model (DOM). This mechanism bears a particular significance for modern web applications that extensively depend on HTTP cookies to maintain authenticated user sessions, as servers act based on the HTTP cookie information to reveal sensitive information or take state-changing actions. A strict separation between content provided by unrelated sites must be maintained on the client-side to prevent the loss of data confidentiality or integrity. It is very important to remember that the same-origin policy applies only to scripts. This means that resources such as images, CSS, and dynamically-loaded scripts can be accessed across origins via the corresponding HTML tags (with fonts being a notable exception). Attacks take advantage of the fact that the same origin policy does not apply to HTML tags. History The concept of same-origin policy was introduced by Netscape Navigator 2.02 in 1995, shortly after the introduction of JavaScript in Netscape 2.0. JavaScript enabled scripting on web pages, and in particular programmatic access to the Document Object Model (DOM). The policy was originally designed to protect access to the DOM, but has since been broadened to protect sensitive parts of the global JavaScript object. Implementation All modern browsers implement some form of the same-origin policy as it is an important security cornerstone. The policies are not required to match an exact specification but are often extended to define roughly compatible security boundaries for
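A simplified Python sketch of the origin comparison (it ignores browser-specific relaxations such as the legacy document.domain mechanism, and the default-port table below is an assumption of the sketch):

    from urllib.parse import urlsplit

    DEFAULT_PORTS = {"http": 80, "https": 443}   # assumed defaults for the examples

    def origin(url):
        """Reduce a URL to its (scheme, host, port) origin triple."""
        parts = urlsplit(url)
        port = parts.port or DEFAULT_PORTS.get(parts.scheme)
        return (parts.scheme, parts.hostname, port)

    def same_origin(a, b):
        return origin(a) == origin(b)

    print(same_origin("http://www.example.com/a", "http://www.example.com:80/b"))  # True
    print(same_origin("http://www.example.com/", "https://www.example.com/"))      # False: scheme differs
    print(same_origin("http://www.example.com/", "http://news.example.com/"))      # False: host differs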
https://en.wikipedia.org/wiki/Shulba%20Sutras
The Shulva Sutras or Śulbasūtras (Sanskrit: शुल्बसूत्र; śulba: "string, cord, rope") are sutra texts belonging to the Śrauta ritual and containing geometry related to fire-altar construction. Purpose and origins The Shulba Sutras are part of the larger corpus of texts called the Shrauta Sutras, considered to be appendices to the Vedas. They are the only sources of knowledge of Indian mathematics from the Vedic period. Unique fire-altar shapes were associated with unique gifts from the Gods. For instance, "he who desires heaven is to construct a fire-altar in the form of a falcon"; "a fire-altar in the form of a tortoise is to be constructed by one desiring to win the world of Brahman" and "those who wish to destroy existing and future enemies should construct a fire-altar in the form of a rhombus". The four major Shulba Sutras, which are mathematically the most significant, are those attributed to Baudhayana, Manava, Apastamba and Katyayana. Their language is late Vedic Sanskrit, pointing to a composition roughly during the 1st millennium BCE. The oldest is the sutra attributed to Baudhayana, possibly compiled around 800 BCE to 500 BCE. Pingree says that the Apastamba is likely the next oldest; he places the Katyayana and the Manava third and fourth chronologically, on the basis of apparent borrowings. According to Plofker, the Katyayana was composed after "the great grammatical codification of Sanskrit by Pāṇini in probably the mid-fourth century BCE", but she places the Manava in the same period as the Baudhayana. With regard to the composition of Vedic texts, Plofker writes, The Vedic veneration of Sanskrit as a sacred speech, whose divinely revealed texts were meant to be recited, heard, and memorized rather than transmitted in writing, helped shape Sanskrit literature in general. ... Thus texts were composed in formats that could be easily memorized: either condensed prose aphorisms (sūtras, a word later applied to mean a rule or algorithm in general) or ver
https://en.wikipedia.org/wiki/Brahmagupta%20theorem
In geometry, Brahmagupta's theorem states that if a cyclic quadrilateral is orthodiagonal (that is, has perpendicular diagonals), then the perpendicular to a side from the point of intersection of the diagonals always bisects the opposite side. It is named after the Indian mathematician Brahmagupta (598-668). More specifically, let A, B, C and D be four points on a circle such that the lines AC and BD are perpendicular. Denote the intersection of AC and BD by M. Drop the perpendicular from M to the line BC, calling the intersection E. Let F be the intersection of the line EM and the edge AD. Then, the theorem states that F is the midpoint of AD. Proof We need to prove that AF = FD. We will prove that both AF and FD are in fact equal to FM. To prove that AF = FM, first note that the angles FAM and CBM are equal, because they are inscribed angles that intercept the same arc of the circle. Furthermore, the angles CBM and CME are both complementary to angle BCM (i.e., they add up to 90°), and are therefore equal. Finally, the angles CME and FMA are the same. Hence, AFM is an isosceles triangle, and thus the sides AF and FM are equal. The proof that FD = FM goes similarly: the angles FDM, BCM, BME and DMF are all equal, so DFM is an isosceles triangle, so FD = FM. It follows that AF = FD, as the theorem claims. See also Brahmagupta's formula for the area of a cyclic quadrilateral References External links Brahmagupta's Theorem at cut-the-knot Brahmagupta Theorems about quadrilaterals and circles Articles containing proofs
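A quick numerical check of the statement (not a proof) in Python; the circle, the point M and the chord direction are arbitrary choices:

    import numpy as np

    # Four points on the unit circle with perpendicular chords AC and BD meeting at M.
    M = np.array([0.3, 0.1])
    u = np.array([np.cos(0.7), np.sin(0.7)])    # direction of chord AC
    v = np.array([-u[1], u[0]])                 # direction of chord BD (perpendicular)

    def chord(p, d):
        """Endpoints of the chord of the unit circle through p along unit vector d."""
        b, c = 2 * p.dot(d), p.dot(p) - 1.0
        t1, t2 = (-b - np.sqrt(b*b - 4*c)) / 2, (-b + np.sqrt(b*b - 4*c)) / 2
        return p + t1 * d, p + t2 * d

    A, C = chord(M, u)
    B, D = chord(M, v)

    # E: foot of the perpendicular dropped from M onto side BC.
    bc = C - B
    E = B + bc * (M - B).dot(bc) / bc.dot(bc)

    # F: intersection of line EM with side AD.
    s, r = np.linalg.solve(np.column_stack([M - E, -(D - A)]), A - E)
    F = A + r * (D - A)

    print(F, (A + D) / 2)   # F coincides with the midpoint of AD (up to rounding)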
https://en.wikipedia.org/wiki/Wigner%20quasiprobability%20distribution
The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution, after Eugene Wigner and Jean-André Ville) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space. It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction . Thus, it maps on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (see Weyl quantization). In effect, it is the Wigner–Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram. In 1949, José Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields, such as electrical engineering, seismology, time–frequency analysis for music signals, spectrograms in biology and speech processing, and engine design. Relation to classical mechanics A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum p
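For a pure state with wavefunction ψ(x), the definition usually quoted (in Wigner's original normalization) is

    W(x, p) = \frac{1}{\pi\hbar} \int_{-\infty}^{\infty} \psi^*(x + y)\, \psi(x - y)\, e^{2ipy/\hbar}\, dy,

so that integrating W over p recovers the position density |ψ(x)|² and integrating over x recovers the momentum density.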
https://en.wikipedia.org/wiki/Comparison%20of%20file%20managers
The following tables compare general and technical information for a number of notable file managers. General information Operating system support Cross-platform file managers This table shows the operating systems that the file managers can run on, without emulation. Mac-only file managers Finder ForkLift Path Finder Xfile Commander One *nix-only file managers emelFM2 Gentoo file manager Konqueror Krusader nnn Nautilus Nemo PCMan File Manager Ranger ROX-Filer Thunar SpaceFM worker Windows-only file managers Altap Salamander Directory Opus Explorer++ File Manager Nomad.NET SE-Explorer STDU Explorer Total Commander File Explorer xplorer² XYplorer ZTreeWin iOS-only file managers Files (Apple) Android-only file managers Files by Google Ghost Commander Manager views Information about what common file manager views are implemented natively (without third-party add-ons). Note that the "Column View" does not refer to the Miller Columns browsing / visualization technique that can be applied to tree structures / folders. Twin-panel file managers have obligatory connected panels where action in one panel results in reaction in the second. Konqueror supports multiple panels divided horizontally, vertically or both, but these panels do not act as twin panels by default (the user has to mark the panels he wants to act as twin-panels). Network protocols Information on what networking protocols the file managers support. Note that many of these protocols might be supported, in part or in whole, by software layers below the file manager, rather than by the file manager itself; for example, the macOS Finder doesn't implement those protocols, and the Windows Explorer doesn't implement most of them, they just make ordinary file system calls to access remote files, and Konqueror either uses ordinary file system calls or KIO slave calls to access remote files. Some functions, such as browsing for servers or shares, might be implemented in the file manager even if mo
https://en.wikipedia.org/wiki/CD4
In molecular biology, CD4 (cluster of differentiation 4) is a glycoprotein that serves as a co-receptor for the T-cell receptor (TCR). CD4 is found on the surface of immune cells such as T helper cells, monocytes, macrophages, and dendritic cells. It was discovered in the late 1970s and was originally known as leu-3 and T4 (after the OKT4 monoclonal antibody that reacted with it) before being named CD4 in 1984. In humans, the CD4 protein is encoded by the CD4 gene. CD4+ T helper cells are white blood cells that are an essential part of the human immune system. They are often referred to as CD4 cells, T-helper cells or T4 cells. They are called helper cells because one of their main roles is to send signals to other types of immune cells, including CD8 killer cells, which then destroy the infectious particle. If CD4 cells become depleted, for example in untreated HIV infection, or following immune suppression prior to a transplant, the body is left vulnerable to a wide range of infections that it would otherwise have been able to fight. Structure Like many cell surface receptors/markers, CD4 is a member of the immunoglobulin superfamily. It has four immunoglobulin domains (D1 to D4) that are exposed on the extracellular surface of the cell: D1 and D3 resemble immunoglobulin variable (IgV) domains. D2 and D4 resemble immunoglobulin constant (IgC) domains. The immunoglobulin variable (IgV) domain of D1 adopts an immunoglobulin-like β-sandwich fold with seven β-strands in 2 β-sheets, in a Greek key topology. CD4 interacts with the β2-domain of MHC class II molecules through its D1 domain. T cells displaying CD4 molecules (and not CD8) on their surface, therefore, are specific for antigens presented by MHC II and not by MHC class I (they are MHC class II-restricted). MHC class I contains Beta-2 microglobulin. The short cytoplasmic/intracellular tail (C) of CD4 contains a special sequence of amino acids that allow it to recruit and interact with the tyrosine ki
https://en.wikipedia.org/wiki/Cortex%20%28anatomy%29
In anatomy and zoology, the cortex (plural: cortices) is the outermost (or superficial) layer of an organ. Organs with well-defined cortical layers include kidneys, adrenal glands, ovaries, the thymus, and portions of the brain, including the cerebral cortex, the best-known of all cortices. Etymology The word is of Latin origin and means bark, rind, shell or husk. Notable examples The renal cortex, between the renal capsule and the renal medulla; assists in ultrafiltration The adrenal cortex, situated along the perimeter of the adrenal gland; mediates the stress response through the production of various hormones The thymic cortex, mainly composed of lymphocytes; functions as a site for somatic recombination of T cell receptors, and positive selection The cerebral cortex, the outer layer of the cerebrum, plays a key role in memory, attention, perceptual awareness, thought, language, and consciousness. Cortical bone is the hard outer layer of bone; distinct from the spongy, inner cancellous bone tissue Ovarian cortex is the outer layer of the ovary and contains the follicles. The lymph node cortex is the outer layer of the lymph node. Cerebral cortex The cerebral cortex is typically described as comprising three parts: the sensory, motor, and association areas. These sensory areas receive and process information from the senses. The senses of vision, audition, and touch are served by the primary visual cortex, the primary auditory cortex, and primary somatosensory cortex. The cerebellar cortex is the thin gray surface layer of the cerebellum, consisting of an outer molecular layer or stratum moleculare, a single layer of Purkinje cells (the ganglionic layer), and an inner granular layer or stratum granulosum. The cortex is the outer surface of the cerebrum and is composed of gray matter. The motor areas are located in both hemispheres of the cerebral cortex. Two areas of the cortex are commonly referred to as motor: the primary motor cortex, which executes v
https://en.wikipedia.org/wiki/Rate%20of%20climb
In aeronautics, the rate of climb (RoC) is an aircraft's vertical speed, that is the positive or negative rate of altitude change with respect to time. In most ICAO member countries, even in otherwise metric countries, this is usually expressed in feet per minute (ft/min); elsewhere, it is commonly expressed in metres per second (m/s). The RoC in an aircraft is indicated with a vertical speed indicator (VSI) or instantaneous vertical speed indicator (IVSI). The temporal rate of decrease in altitude is referred to as the rate of descent (RoD) or sink rate. A negative rate of climb corresponds to a positive rate of descent: RoD = −RoC. Speed and rate of climb There are a number of designated airspeeds relating to optimum rates of ascent, the two most important of these are VX and VY. VX is the indicated forward airspeed for best angle of climb. This is the speed at which an aircraft gains the most altitude in a given horizontal distance, typically used to avoid a collision with an object a short distance away. By contrast, VY is the indicated airspeed for best rate of climb, a rate which allows the aircraft to climb to a specified altitude in the minimum amount of time regardless of the horizontal distance required. Except at the aircraft's ceiling, where they are equal, VX is always lower than VY. Climbing at VX allows pilots to maximize altitude gain per horizontal distance. This occurs at the speed for which the difference between thrust and drag is the greatest (maximum excess thrust). In a jet airplane, this is approximately minimum drag speed, occurring at the bottom of the drag vs. speed curve. Climbing at VY allows pilots to maximize altitude gain per time. This occurs at the speed where the difference between engine power and the power required to overcome the aircraft's drag is greatest (maximum excess power). VX increases with altitude and VY decreases with altitude until they converge at the airplane's absolute ceiling, the altitude above which the airplane c
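These two speeds correspond to two standard steady-climb relations (point-mass approximations, not taken from the article): with thrust T, drag D, weight W, true airspeed V and climb angle γ,

    \sin\gamma \approx \frac{T - D}{W}, \qquad \text{RoC} = V\sin\gamma \approx \frac{(T - D)\,V}{W} = \frac{P_\text{avail} - P_\text{req}}{W},

so VX occurs where the excess thrust T − D is greatest and VY where the excess power P_avail − P_req is greatest.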
https://en.wikipedia.org/wiki/Service%20control%20point
A service control point (SCP) is a standard component of the Intelligent Network (IN) telephone system which is used to control the service. Standard SCPs in the telecom industry today are deployed using SS7, SIGTRAN or SIP technologies. The SCP queries the service data point (SDP) which holds the actual database and directory. SCP, using the database from the SDP, identifies the geographical number to which the call is to be routed. This is the same mechanism that is used to route 800 numbers. SCP may also communicate with an intelligent peripheral (IP) to play voice messages, or prompt for information from the user, such as prepaid long distance using account codes. This is done by implementing telephone feature codes like "#", which can be used to terminate the input for a user name or password or can be used for call forwarding. These are realized using Intelligent Network Application Part (INAP) that sits above Transaction Capabilities Application Part (TCAP) on the SS7 protocol stack. The TCAP is part of the top or 7th layer of the OSI layer breakdown. SCPs are connected with either SSPs or STPs. This is dependent upon the network architecture that the network service provider wants. The most common implementation uses STPs. SCP and SDP split is becoming a common industry practice. This is known generally in the industry by split architecture. Reason is that operators want to decouple the dependency between the two functionality to facilitate upgrades and possibly rely on different vendors. External links See Telcordia GR-1299-CORE, for Service Control Point/Adjunct Interface generic requirements. References Network architecture Telephony equipment Signaling System 7
https://en.wikipedia.org/wiki/Core%20%28game%20theory%29
In cooperative game theory, the core is the set of feasible allocations or imputations where no coalition of agents can benefit by breaking away from the grand coalition. One can think of the core as corresponding to situations where it is possible to sustain cooperation among all agents. A coalition is said to improve upon or block a feasible allocation if the members of that coalition can generate more value among themselves than they are allocated in the original allocation. As such, that coalition is not incentivized to stay with the grand coalition. An allocation is said to be in the core of a game if there is no coalition that can improve upon it. The core is then the set of all feasible allocations that cannot be improved upon by any coalition. Origin The idea of the core already appeared in the writings of Edgeworth (1881), at the time referred to as the contract curve. Even though von Neumann and Morgenstern considered it an interesting concept, they only worked with zero-sum games where the core is always empty. The modern definition of the core is due to Gillies. Definition Consider a transferable utility cooperative game where denotes the set of players and is the characteristic function. An imputation is dominated by another imputation if there exists a coalition , such that each player in weakly-prefers ( for all ) and there exists that strictly-prefers (), and can enforce by threatening to leave the grand coalition to form (). The core is the set of imputations that are not dominated by any other imputation. Weak core An imputation is strongly-dominated by another imputation if there exists a coalition , such that each player in strictly-prefers ( for all ). The weak core is the set of imputations that are not strongly-dominated. Properties Another definition, equivalent to the one above, states that the core is a set of payoff allocations satisfying Efficiency: , Coalitional rationality: for all subsets (coalitions) . The core is always well-defined, but can be empty. The core is a se
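A small Python sketch that checks the two defining conditions (efficiency and coalitional rationality) for a three-player transferable-utility game; the characteristic function used here is purely illustrative:

    from itertools import combinations

    def in_core(allocation, v):
        """allocation maps each player to a payoff; v maps each coalition
        (a frozenset of players) to its worth."""
        players = frozenset(allocation)
        if abs(sum(allocation.values()) - v[players]) > 1e-9:      # efficiency
            return False
        for size in range(1, len(players)):
            for coalition in combinations(players, size):
                s = frozenset(coalition)
                if sum(allocation[i] for i in s) < v[s] - 1e-9:    # coalitional rationality
                    return False
        return True

    # Example: any two players together create 80, all three create 120.
    v = {frozenset("A"): 0, frozenset("B"): 0, frozenset("C"): 0,
         frozenset("AB"): 80, frozenset("AC"): 80, frozenset("BC"): 80,
         frozenset("ABC"): 120}
    print(in_core({"A": 40, "B": 40, "C": 40}, v))   # True: every pair receives its 80
    print(in_core({"A": 60, "B": 40, "C": 20}, v))   # False: coalition {B, C} gets 60 < 80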
https://en.wikipedia.org/wiki/Entropy%20unit
The entropy unit is a non-S.I. unit of thermodynamic entropy, usually denoted "e.u." or "eU" and equal to one calorie per kelvin per mole, or 4.184 joules per kelvin per mole. Entropy units are primarily used in chemistry to describe entropy changes. Sources Units of measurement
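The conversion is a single multiplication, shown here as a trivial Python snippet (the factor 4.184 J/cal is exact for the thermochemical calorie):

    EU_IN_SI = 4.184                     # J K^-1 mol^-1 per entropy unit

    def eu_to_si(delta_s_in_eu):
        """Convert an entropy change from e.u. (cal/(K·mol)) to J/(K·mol)."""
        return delta_s_in_eu * EU_IN_SI

    print(eu_to_si(10.0))                # 41.84 J/(K·mol)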
https://en.wikipedia.org/wiki/Electronic%20component
An electronic component is any basic discrete electronic device or physical entity part of an electronic system used to affect electrons or their associated fields. Electronic components are mostly industrial products, available in a singular form and are not to be confused with electrical elements, which are conceptual abstractions representing idealized electronic components and elements. A datasheet for an electronic component is a technical document that provides detailed information about the component's specifications, characteristics, and performance. Electronic components have a number of electrical terminals or leads. These leads connect to other electrical components, often over wire, to create an electronic circuit with a particular function (for example an amplifier, radio receiver, or oscillator). Basic electronic components may be packaged discretely, as arrays or networks of like components, or integrated inside of packages such as semiconductor integrated circuits, hybrid integrated circuits, or thick film devices. The following list of electronic components focuses on the discrete version of these components, treating such packages as components in their own right. Classification Components can be classified as passive, active, or electromechanical. The strict physics definition treats passive components as ones that cannot supply energy themselves, whereas a battery would be seen as an active component since it truly acts as a source of energy. However, electronic engineers who perform circuit analysis use a more restrictive definition of passivity. When only concerned with the energy of signals, it is convenient to ignore the so-called DC circuit and pretend that the power supplying components such as transistors or integrated circuits is absent (as if each such component had its own battery built in), though it may in reality be supplied by the DC circuit. Then, the analysis only concerns the AC circuit, an abstraction that ignores DC voltages a
https://en.wikipedia.org/wiki/AARON
AARON is the collective name for a series of computer programs written by artist Harold Cohen that create original artistic images. Proceeding from Cohen's initial question "What are the minimum conditions under which a set of marks functions as an image?", AARON was in development between 1972 and the 2010s. As the software is not open source, its development effectively ended with Cohen's death in 2016. The name "AARON" does not seem to be an acronym; rather, it was a name chosen to start with the letter "A" so that the names of successive programs could follow it alphabetically. However, Cohen did not create any other major programs. Initial versions of AARON created abstract drawings that grew more complex through the 1970s. More representational imagery was added in the 1980s; first rocks, then plants, then people. In the 1990s more representational figures set in interior scenes were added, along with color. AARON returned to more abstract imagery, this time in color, in the early 2000s. Cohen used machines that allowed AARON to produce physical artwork. The first machines drew in black and white using a succession of custom-built "turtle" and flatbed plotter devices. Cohen would sometimes color these images by hand in fabric dye (Procion), or scale them up to make larger paintings and murals. In the 1990s Cohen built a series of digital painting machines to output AARON's images in ink and fabric dye. His later work used a large-scale inkjet printer on canvas. Development of AARON began in the C programming language then switched to Lisp in the early 1990s. Cohen credits Lisp with helping him solve the challenges he faced in adding color capabilities to AARON. An article about Cohen appeared in Computer Answers that describes AARON and shows two line drawings that were exhibited at the Tate gallery. The article goes on to describe the workings of AARON, then running on a DEC VAX 750 minicomputer. Raymond Kurzweil's company has produced a downloadable sc
https://en.wikipedia.org/wiki/Stein%27s%20example
In decision theory and estimation theory, Stein's example (also known as Stein's phenomenon or Stein's paradox) is the observation that when three or more parameters are estimated simultaneously, there exist combined estimators more accurate on average (that is, having lower expected mean squared error) than any method that handles the parameters separately. It is named after Charles Stein of Stanford University, who discovered the phenomenon in 1955. An intuitive explanation is that optimizing for the mean-squared error of a combined estimator is not the same as optimizing for the errors of separate estimators of the individual parameters. In practical terms, if the combined error is in fact of interest, then a combined estimator should be used, even if the underlying parameters are independent. If one is instead interested in estimating an individual parameter, then using a combined estimator does not help and is in fact worse. Formal statement The following is the simplest form of the paradox, the special case in which the number of observations is equal to the number of parameters to be estimated. Let be a vector consisting of unknown parameters. To estimate these parameters, a single measurement is performed for each parameter , resulting in a vector of length . Suppose the measurements are known to be independent, Gaussian random variables, with mean and variance 1, i.e., . Thus, each parameter is estimated using a single noisy measurement, and each measurement is equally inaccurate. Under these conditions, it is intuitive and common to use each measurement as an estimate of its corresponding parameter. This so-called "ordinary" decision rule can be written as , which is the maximum likelihood estimator (MLE). The quality of such an estimator is measured by its risk function. A commonly used risk function is the mean squared error, defined as . Surprisingly, it turns out that the "ordinary" decision rule is suboptimal (inadmissible) in terms of mean
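A Monte Carlo illustration in Python of the basic (non-positive-part) James–Stein shrinkage estimator beating the coordinate-wise maximum likelihood estimator in total mean squared error; the true parameter values below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(0)
    theta = np.array([1.0, -0.5, 2.0, 0.0, 1.5])   # five arbitrary true parameters
    trials = 100_000

    x = rng.normal(loc=theta, scale=1.0, size=(trials, theta.size))

    mle = x                                                      # each parameter estimated by its own observation
    shrink = 1.0 - (theta.size - 2) / np.sum(x**2, axis=1, keepdims=True)
    james_stein = shrink * x                                     # shrink all coordinates toward the origin

    print(np.mean(np.sum((mle - theta) ** 2, axis=1)))           # about d = 5
    print(np.mean(np.sum((james_stein - theta) ** 2, axis=1)))   # strictly smaller on average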
https://en.wikipedia.org/wiki/HP%20Time-Shared%20BASIC
HP Time-Shared BASIC (HP TSB) is a BASIC programming language interpreter for Hewlett-Packard's HP 2000 line of minicomputer-based time-sharing computer systems. TSB is historically notable as the platform that released the first public versions of the game Star Trek. The system implements a dialect of BASIC as well as a rudimentary user account and program library that allows multiple people to use the system at once. The systems were a major force in the early-to-mid 1970s and generated a large number of programs. HP maintained a database of contributed-programs and customers could order them on punched tape for a nominal fee. Most BASICs of the 1970s trace their history to the original Dartmouth BASIC of the 1960s, but early versions of Dartmouth did not handle string variables or offer string manipulation features. Vendors added their own solutions; HP used a system similar to Fortran and other languages with array slicing, while DEC later introduced the MID/LEFT/RIGHT functions. As microcomputers began to enter the market in the mid-1970s, many new BASICs appeared that based their parsers on DEC's or HP's syntax. Altair BASIC, the original version of what became Microsoft BASIC, was patterned on DEC's BASIC-PLUS. Others, including Apple's Integer BASIC, Atari BASIC and North Star BASIC were patterned on the HP style. This made conversions between these platforms somewhat difficult if string handling was encountered. Nomenclature The software was also known by its versioned name, tied to the hardware version on which it ran, such as HP 2000C Time-Shared BASIC and the operating system came in different varieties — 2000A, 2000B, 2000C, High-Speed 2000C, 2000E, and 2000F. HP also referred to the language as "Access BASIC" in some publications. This matched the naming of the machines on which it ran, known as the "2000/Access" in some publications. This terminology appears to have been used only briefly when the platform was first launched. Platform details
https://en.wikipedia.org/wiki/GlobeXplorer
GlobeXplorer was an online spatial data company that compiled and distributed aerial photos, satellite imagery, and map data from their online spatial archives. GlobeXplorer has been credited as the first company to establish a business around compiling and distributing online aerial and satellite imagery. In 2007, the company was acquired by DigitalGlobe. GlobeXplorer's imagery and property data was licensed to many online information websites. GlobeXplorer obtained its content through online distribution relationships with about 30 of the world's top acquirers of aerial, satellite, and property data. GlobeXplorer's primary products were the ImageAtlas ecommerce storefront and ImageBuilder web developer toolkit. It also provided ImageConnect extensions and web services for GIS and Computer-aided design. GlobeXplorer's defensible core competence was its ability to meter custom profiles of content for consumers and pay royalties to providers based on 512x512 "standard image units" (SIU). This was accomplished through content hosting, delivery via APIs and application plugins, and building a custom billing system modeled around TELCO call rating and inter-bank settlement accounting. Beyond imagery, the billing system also supported metering of vector data and usage of floating license tokens for programs such as Arc/INFO, PCI and ERDAS. History GlobeXplorer was founded in 1999 by Rob Shanks, Michael Fisher, Chris Nicholas, and Paul Smith (former executives at HJW GeoSpatial, Inc.) through partnerships with Sun Microsystems, NASA, and Oracle Corporation. It grew from a NASA EOCAP project of HJW's to place ready-made orthophotography online, and an internal project of Sun Microsystems and Oracle to counter the Microsoft "Terraserver" technology demonstration. The EOCAP project concluded with the creation of an 'earth imagery' searchable website in 1998. HJW was acquired by Harrods of London, who provided financial backing to spin out the online effort into GlobeXplor
https://en.wikipedia.org/wiki/Apache%20Axis
Apache Axis (Apache eXtensible Interaction System) is an open-source, XML based Web service framework. It consists of a Java and a C++ implementation of the SOAP server, and various utilities and APIs for generating and deploying Web service applications. Using Apache Axis, developers can create interoperable, distributed computing applications. Axis development takes place under the auspices of the Apache Software Foundation. Axis for Java When using the Java version of Axis, there are two ways to expose Java code as Web service. The easiest one is to use Axis native JWS (Java Web Service) files. Another way is to use custom deployment. Custom deployment enables you to customize resources that should be exposed as Web services. See also Apache Axis2. JWS Web service creation JWS files contain Java class source code that should be exposed as Web service. The main difference between an ordinary java file and jws file is the file extension. Another difference is that jws files are deployed as source code and not compiled class files. The following example will expose methods add and subtract of class Calculator. public class Calculator { public int add(int i1, int i2) { return i1 + i2; } public int subtract(int i1, int i2) { return i1 - i2; } } JWS Web service deployment Once the Axis servlet is deployed, you need only to copy the jws file to the Axis directory on the server. This will work if you are using an Apache Tomcat container. In the case that you are using another web container, custom WAR archive creation will be required. JWS Web service access JWS Web service is accessible using the URL http://localhost:8080/axis/Calculator.jws. If you are running a custom configuration of Apache Tomcat or a different container, the URL might be different. Custom deployed Web service Custom Web service deployment requires a specific deployment descriptor called WSDD (Web Service Deployment Descriptor) syntax. It can be used to sp
https://en.wikipedia.org/wiki/Character%20displacement
Character displacement is the phenomenon where differences among similar species whose distributions overlap geographically are accentuated in regions where the species co-occur, but are minimized or lost where the species' distributions do not overlap. This pattern results from evolutionary change driven by biological competition among species for a limited resource (e.g. food). The rationale for character displacement stems from the competitive exclusion principle, also called Gause's Law, which contends that to coexist in a stable environment two competing species must differ in their respective ecological niche; without differentiation, one species will eliminate or exclude the other through competition. Character displacement was first explicitly explained by William L. Brown Jr. and E. O. Wilson in 1956: "Two closely related species have overlapping ranges. In the parts of the ranges where one species occurs alone, the populations of that species are similar to the other species and may even be very difficult to distinguish from it. In the area of overlap, where the two species occur together, the populations are more divergent and easily distinguished, i.e., they 'displace' one another in one or more characters. The characters involved can be morphological, ecological, behavioral, or physiological; they are assumed to be genetically based." Brown and Wilson used the term character displacement to refer to instances of both reproductive character displacement, or reinforcement of reproductive barriers, and ecological character displacement driven by competition. As the term character displacement is commonly used, it generally refers to morphological differences due to competition. Brown and Wilson viewed character displacement as a phenomenon involved in speciation, stating, "we believe that it is a common aspect of geographical speciation, arising most often as a product of the genetic and ecological interaction of two (or more) newly evolved, cognate spec
https://en.wikipedia.org/wiki/TRON%20%28encoding%29
TRON Code is a multi-byte character encoding used in the TRON project. It is similar to Unicode but does not use Unicode's Han unification process: each character from each CJK character set is encoded separately, including archaic and historical equivalents of modern characters. This means that Chinese, Japanese, and Korean text can be mixed without any ambiguity as to the exact form of the characters; however, it also means that many characters with equivalent semantics will be encoded more than once, complicating some operations. TRON has room for 150 million code points. Separate code points for Chinese, Korean, and Japanese variants of the 70,000+ Han characters in Unicode 4.1 (if that were deemed necessary) would require more than 200,000 code points in TRON. TRON includes the non-Han characters from Unicode 2.0, but it has not been keeping up to date with recent editions to Unicode as Unicode expands beyond the Basic Multilingual Plane and adds characters to existing scripts. The TRON encoding has been updated to include other recent code page updates like JIS X 0213. Fonts for the TRON encoding are available, but they have restrictions for commercial use. Structure Each character in TRON Code is two bytes. Similarly to ISO/IEC 2022, the TRON character encoding handles characters in multiple character sets within a single character encoding by using escape sequences, referred to as language specifier codes, to switch between planes of 48,400 code points. Character sets incorporated into TRON Code include existing character sets such as JIS X 0208 and GB 2312, as well as other character sources such as the Dai Kan-Wa Jiten, and some scripts not included in other encodings such as Dongba symbols. Owing to the incorporation of entire character sets into TRON Code, many characters with equivalent semantics are encoded multiple times; for example, all of the kanji characters in the GT Typeface receive their own codepoints, despite many of them overlapping wit
https://en.wikipedia.org/wiki/Interactive%20whiteboard
An interactive whiteboard (IWB), also known as interactive board or smart board, is a large interactive display board in the form factor of a whiteboard. It can either be a standalone touchscreen computer used independently to perform tasks and operations, or a connectable apparatus used as a touchpad to control computers from a projector. They are used in a variety of settings, including classrooms at all levels of education, in corporate board rooms and work groups, in training rooms for professional sports coaching, in broadcasting studios, and others. The first interactive whiteboards were designed and manufactured for use in the office. They were developed by PARC around 1990. This board was used in small group meetings and round-tables. The interactive whiteboard industry was expected to reach sales of US$1 billion worldwide by 2008; one of every seven classrooms in the world was expected to feature an interactive whiteboard by 2011 according to market research by Futuresource Consulting. In 2004, 26% of British primary classrooms had interactive whiteboards. The Becta Harnessing Technology Schools Survey 2007 indicated that 98% of secondary and 100% of primary schools had IWBs. By 2008, the average numbers of interactive whiteboards rose in both primary schools (18 compared with just over six in 2005, and eight in the 2007 survey) and secondary schools (38, compared with 18 in 2005 and 22 in 2007). General operation and use An interactive whiteboard (IWB) device can either be a standalone computer or a large, functioning touchpad for computers to use. A device driver is usually installed on the attached computer so that the interactive whiteboard can act as a Human Input Device (HID), like a mouse. The computer's video output is connected to a digital projector so that images may be projected on the interactive whiteboard surface, although interactive whiteboards with LCD displays also exist. The user then calibrates the whiteboard image by matching the
https://en.wikipedia.org/wiki/Butterfly%20net
A butterfly net (sometimes called an aerial insect net) is one of several kinds of nets used to collect insects. The entire bag of the net is generally constructed from a lightweight mesh to minimize damage to delicate butterfly wings. Other types of nets used in insect collecting include beat nets, aquatic nets, and sweep nets. Nets for catching different insects have different mesh sizes. Aquatic nets usually have bigger, more 'open' mesh. Catching small aquatic creatures usually requires an insect net. The mesh is smaller and can capture more. References Taron, Doug. "Some Thoughts on Butterfly Nets." Gossamer Tapestry, 10 April 2009. External links Collecting and Preserving Insects and Mites USDA Anti Insect Net & Sun Shade Net Collecting Butterflies Nets (devices) Entomology equipment Environmental Sampling Equipment
https://en.wikipedia.org/wiki/Amelogenin
Amelogenins are a group of protein isoforms produced by alternative splicing or proteolysis from the AMELX gene, on the X chromosome, and also the AMELY gene in males, on the Y chromosome. They are involved in amelogenesis, the development of enamel. Amelogenins are a type of extracellular matrix protein, which, together with ameloblastins, enamelins and tuftelins, direct the mineralization of enamel to form a highly organized matrix of rods, interrod crystal and proteins. Although the precise role of amelogenin(s) in regulating the mineralization process is unknown, it is known that amelogenins are abundant during amelogenesis. Developing human enamel contains about 70% protein, 90% of which are amelogenins. Function Amelogenins are believed to be involved in the organizing of enamel rods during tooth development. The latest research indicates that these proteins regulate the initiation and growth of hydroxyapatite crystals during the mineralization of enamel. In addition, amelogenins appear to aid in the development of cementum by directing cementoblasts to the tooth's root surface. Variants The amelogenin gene has been most widely studied in humans, where it is a single copy gene, located on the X and Y chromosomes at Xp22.1–Xp22.3 and Yp 11.2 [5]. The amelogenin gene's location on sex chromosomes has implications for variability both between the X chromosome form (AMELX) and the Y chromosome form (AMELY), and between alleles of AMELY among different populations. This is because AMELY exists in the non-recombining region of chromosome Y, effectively isolating it from normal selection pressures. Other sources of amelogenin variation arise from the various isoforms of AMELX obtained from alternative splicing of mRNA transcripts. Specific roles for isoforms have yet to be established. Among other organisms, amelogenin is well conserved among eutherians, and has homologs in monotremes, reptiles and amphibians. Application in sex determination Differences between
https://en.wikipedia.org/wiki/Usenet%20II
Usenet II was a proposed alternative to the classic Usenet hierarchy, started in 1998. Unlike the original Usenet, it was peered only between "sound sites" and employed a system of rules to keep out spam. Usenet II was backed by influential Usenetters like Russ Allbery. Sometime between 2010 and 2011, the web page for Usenet II went offline. The newsgroup hierarchy in Usenet II revived the old naming system used by Usenet before the Great Renaming. All groups had names starting "net.", which serve to distinguish them from the "Big 8" (misc.*, sci.*, news.*, rec.*, soc.*, talk.*, comp.*, humanities.*). A separate checkgroup system, using the same technical mechanism as the one produced by David C. Lawrence for the Big 8, enforced the Usenet II hierarchy and prevents the creation of unauthorized newsgroups within it. The basic principles of operation were controlled by a Steering Committee, which appointed "hierarchy czars" who were responsible for the content of specific portions of the namespace, or hierarchies. Usenet II had strictly enforced rules. Messages in Usenet II had to be fully compliant with the RFC 1036 (Usenet) standard plus some additional format compliance rules that were specific to Usenet II. A message header had to contain a valid email address in the From field. It was required to have an NNTP-Posting-Host header field containing a sound site. The distribution field was to be set to "4gh" (a reference to Shockwave Rider by John Brunner). If the Subject field started with "Re:", indicating a follow-up, there had to be a valid "References" field that contained the Message-ID of a previous message. Crossposts to groups outside the net.* hierarchy were cancelled automatically. No message was allowed to spawn a discussion in more than three newsgroups. This applied both to the "newsgroups" field and the "Followup-To" field. It was permissible to post the same message three times. Posting the same message every day or every we
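A Python sketch of the header checks implied by these rules; the field names follow common Usenet conventions and the function is illustrative rather than a reconstruction of the actual Usenet II software:

    import re

    def check_usenet2_headers(headers):
        """Return a list of rule violations for a message's header fields."""
        problems = []
        if not re.search(r"[^@\s]+@[^@\s]+\.[^@\s]+", headers.get("From", "")):
            problems.append("From must contain a valid email address")
        if "NNTP-Posting-Host" not in headers:
            problems.append("NNTP-Posting-Host header is required")
        if headers.get("Distribution") != "4gh":
            problems.append('Distribution must be set to "4gh"')
        if headers.get("Subject", "").startswith("Re:") and not headers.get("References"):
            problems.append("follow-ups need a References field with a Message-ID")
        groups = [g.strip() for g in headers.get("Newsgroups", "").split(",") if g.strip()]
        if any(not g.startswith("net.") for g in groups):
            problems.append("crossposts outside net.* would be cancelled")
        if len(groups) > 3:
            problems.append("a message may spawn discussion in at most three newsgroups")
        return problems

    msg = {"From": "alice@example.org", "NNTP-Posting-Host": "news.example.org",
           "Distribution": "4gh", "Subject": "Re: hello", "References": "<1@example.org>",
           "Newsgroups": "net.news.admin"}
    print(check_usenet2_headers(msg))   # [] -- no violations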
https://en.wikipedia.org/wiki/Great%20American%20Interchange
The Great American Biotic Interchange (commonly abbreviated as GABI), also known as the Great American Interchange and the Great American Faunal Interchange, was an important late Cenozoic paleozoogeographic biotic interchange event in which land and freshwater fauna migrated from North America via Central America to South America and vice versa, as the volcanic Isthmus of Panama rose up from the sea floor and bridged the formerly separated continents. Although earlier dispersals had occurred, probably over water, the migration accelerated dramatically about 2.7 million years (Ma) ago during the Piacenzian age. It resulted in the joining of the Neotropic (roughly South American) and Nearctic (roughly North American) biogeographic realms definitively to form the Americas. The interchange is visible from observation of both biostratigraphy and nature (neontology). Its most dramatic effect is on the zoogeography of mammals, but it also gave an opportunity for reptiles, amphibians, arthropods, weak-flying or flightless birds, and even freshwater fish to migrate. Coastal and marine biota, however, was affected in the opposite manner; the formation of the Central American Isthmus caused what has been termed the Great American Schism, with significant diversification and extinction occurring as a result of the isolation of the Caribbean from the Pacific. The occurrence of the interchange was first discussed in 1876 by the "father of biogeography", Alfred Russel Wallace. Wallace had spent five years exploring and collecting specimens in the Amazon basin. Others who made significant contributions to understanding the event in the century that followed include Florentino Ameghino, W. D. Matthew, W. B. Scott, Bryan Patterson, George Gaylord Simpson and S. David Webb. The Pliocene timing of the formation of the connection between North and South America was discussed in 1910 by Henry Fairfield Osborn. Analogous interchanges occurred earlier in the Cenozoic, when the formerly
https://en.wikipedia.org/wiki/IBM%20OfficeVision
OfficeVision was an IBM proprietary office support application. History PROFS, DISOSS and Office/36 OfficeVision started as a product for the VM operating system named PROFS (for PRofessional OFfice System) and was initially made available in 1981. Before that it was just a PRPQ (Programming Request for Price Quotation), an IBM administrative term for non-standard software offerings with unique features, support and pricing. The first release of PROFS was developed by IBM in Poughkeepsie, NY, in conjunction with Amoco, from a prototype developed years earlier by Paul Gardner and others. Subsequent development took place in Dallas. The editor XEDIT was the basis of the word processing function in PROFS. PROFS itself had descended from OFS (Office System), also developed at the same laboratory and first installed in October 1974. This was a primitive solution for office automation created between 1970 and 1972, which was a replacement for an in-house system. Compared to Poughkeepsie's original in-house system, the distinctive new features added by OFS were a centralised database virtual machine (data base manager or DBM) for shared permanent storage of documents, instead of storing all documents in user's personal virtual machines; and a centralised virtual machine (mailman master machine or distribution virtual machine) to manage mail transfer between individuals, instead of relying on direct communication between the personal virtual machines of individual users. By 1981, IBM's Poughkeepsie site had over 500 PROFS users. In 1983, IBM introduced release 2 of PROFS, along with auxiliary software to enable document interchange between PROFS, DISOSS, Displaywriter, IBM 8100 and IBM 5520 systems. PROFS and its e-mail component, known colloquially as PROFS Notes, featured prominently in the investigation of the Iran-Contra scandal. Oliver North believed he had deleted his correspondence, but the system archived it anyway. Congress subsequently examined the e-mail
https://en.wikipedia.org/wiki/Cleavage%20%28embryo%29
In embryology, cleavage is the division of cells in the early development of the embryo, following fertilization. The zygotes of many species undergo rapid cell cycles with no significant overall growth, producing a cluster of cells the same size as the original zygote. The different cells derived from cleavage are called blastomeres and form a compact mass called the morula. Cleavage ends with the formation of the blastula, or of the blastocyst in mammals. Depending mostly on the concentration of yolk in the egg, the cleavage can be holoblastic (total or entire cleavage) or meroblastic (partial cleavage). The pole of the egg with the highest concentration of yolk is referred to as the vegetal pole while the opposite is referred to as the animal pole. Cleavage differs from other forms of cell division in that it increases the number of cells and nuclear mass without increasing the cytoplasmic mass. This means that with each successive subdivision, there is roughly half the cytoplasm in each daughter cell than before that division, and thus the ratio of nuclear to cytoplasmic material increases. Mechanism The rapid cell cycles are facilitated by maintaining high levels of proteins that control cell cycle progression such as the cyclins and their associated cyclin-dependent kinases (CDKs). The complex cyclin B/CDK1 also known as MPF (maturation promoting factor) promotes entry into mitosis. The processes of karyokinesis (mitosis) and cytokinesis work together to result in cleavage. The mitotic apparatus is made up of a central spindle and polar asters made up of polymers of tubulin protein called microtubules. The asters are nucleated by centrosomes and the centrosomes are organized by centrioles brought into the egg by the sperm as basal bodies. Cytokinesis is mediated by the contractile ring made up of polymers of actin protein called microfilaments. Karyokinesis and cytokinesis are independent but spatially and temporally coordinated processes. While mitosis
https://en.wikipedia.org/wiki/Tally%20light
In a television studio, a tally light (or on air indicator) is a small signal lamp on a professional video camera or monitor. It is usually located just above the lens or on the electronic viewfinder (EVF) and communicates, for the benefit of those in front of the camera as well as the camera operator, that the camera is “live,” i.e. its signal is being used for the “main program” at that moment. Many non-studio (i.e. intended for offline recording) video cameras—and even digital photo cameras capable of filming video—also feature some sort of recording indication. For television productions with more than one camera in a multiple-camera setup, the tally lights are generally illuminated automatically by a vision mixer trigger that is fed to a tally breakout board and then to a special video router designed for tally signals. The video switcher keeps track of which video sources are selected by the technical director and output to the main program bus. A switch automatically closes the appropriate electrical contacts to create a circuit, which activates the tally unit located in the camera control units (CCU). If more than one camera is on-air simultaneously (as in the case of a dissolve transition), during the duration of the transition the tally lights of both cameras will remain lit until transition completion. This is also the case when multiple cameras are placed in “boxes” on screen, sometimes referred to as “picture-in-picture” (PiP) mode. Colours & usage In the active (“on air”) mode, tally lights are typically red. Some cameras and video switchers are capable of additionally showing a “preview” tally signal (typically green) for when the camera is about to be switched to and become the main source of video signal. Once the switch happens, green changes to red. This feature allows the presenter to be aware of the upcoming transition, and, for example, change their posture. In addition to the tally lights, an additional light called ISO is sometimes
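The program/preview routing logic described above can be sketched in a few lines of Python. This is only an illustration, not any particular switcher's API; the function name and the idea of passing bus selections as sets are assumptions made for the example.

```python
def tally_states(cameras, program_sources, preview_sources):
    """Return a hypothetical red/green tally state for each camera.

    program_sources: set of camera IDs currently on the program bus
      (more than one during a dissolve or picture-in-picture).
    preview_sources: set of camera IDs selected on the preview bus.
    """
    states = {}
    for cam in cameras:
        if cam in program_sources:
            states[cam] = "RED (on air)"
        elif cam in preview_sources:
            states[cam] = "GREEN (preview)"
        else:
            states[cam] = "off"
    return states

# Mid-dissolve from camera 1 to camera 2: both stay red until the
# transition completes, as described above.
print(tally_states([1, 2, 3], program_sources={1, 2}, preview_sources={3}))
```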
https://en.wikipedia.org/wiki/Yaw%20drive
The yaw drive is an important component of a horizontal-axis wind turbine's yaw system. To ensure the wind turbine produces the maximum amount of electric energy at all times, the yaw drive is used to keep the rotor facing into the wind as the wind direction changes. This only applies to wind turbines with a horizontal-axis rotor. The wind turbine is said to have a yaw error if the rotor is not aligned to the wind. A yaw error implies that a lower share of the energy in the wind will be running through the rotor area. (The generated energy will be approximately proportional to the cosine of the yaw error.) History When the windmills of the 18th century included the feature of rotor orientation via the rotation of the nacelle, an actuation mechanism able to provide that turning moment was necessary. Initially the windmills used ropes or chains extending from the nacelle to the ground in order to allow the rotation of the nacelle by means of human or animal power. Another historical innovation was the fantail. This device was actually an auxiliary rotor equipped with a plurality of blades and located downwind of the main rotor, behind the nacelle, in an approximately 90° orientation to the main rotor sweep plane. In the event of a change in wind direction, the fantail would rotate, transmitting its mechanical power through a gearbox (and via a gear-rim-to-pinion mesh) to the tower of the windmill. The effect of this transmission was the rotation of the nacelle towards the direction of the wind, at which point the fantail would no longer face the wind and would stop turning (i.e. the nacelle would stop at its new position). Modern yaw drives, even though electronically controlled and equipped with large electric motors and planetary gearboxes, have great similarities to the old windmill concept. Types The main categories of yaw drives are: The Electric Yaw Drives: Commonly used in almost all modern turbines. The Hydraulic Yaw Drive: Hardly ever used anymore
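As a worked illustration of the cosine relation noted above, the sketch below uses the simple approximation that captured power scales with cos(yaw error); real turbine models may use more detailed relations, so treat this only as an order-of-magnitude guide.

```python
import math

def relative_power(yaw_error_deg):
    """Fraction of aligned power captured, using the approximation
    P proportional to cos(yaw error)."""
    return math.cos(math.radians(yaw_error_deg))

for err in (0, 5, 10, 20, 30):
    print(f"yaw error {err:2d} deg -> about {relative_power(err):.3f} of aligned power")
```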
https://en.wikipedia.org/wiki/NABTS
NABTS, the North American Broadcast Teletext Specification, is a protocol used for encoding NAPLPS-encoded teletext pages, as well as other types of digital data, within the vertical blanking interval (VBI) of an analog video signal. It is standardized under standard EIA-516, and has a rate of 15.6 kbit/s per line of video (with error correction). It was adopted into the international standard CCIR 653 (now ITU-R BT.653) of 1986 as CCIR Teletext System C. History NABTS was originally developed as a protocol by the Canadian Department of Communications, with their industry partner Norpak, for the Telidon system. Similar systems had been developed by the BBC in Europe for their Ceefax system, and were later standardized as the World System Teletext (WST, aka CCIR Teletext System B), but differences in European and North American television standards and the greater flexibility of the Telidon standard led to the creation of a new delivery mechanism that was tuned for speed. NABTS was the standard used for both CBS's ExtraVision and NBC's very short-lived NBC Teletext services in the mid-1980s. The short-lived Time Teletext service, operated by the Time Video Information Services division of Time, Inc. and several experimental services launched by Boston's PBS station WGBH, also used NABTS. Due to teletext in general not really catching on in North America, NABTS saw a new use for the datacasting features of WebTV for Windows, under Windows 98, as well as for the now-defunct Intercast system. Canadian company Norpak sold and manufactured encoders and decoders for NABTS until the end of analog broadcasting in North America in the early 2010s; it was acquired by the Ross Video consortium in 2010. NABTS is still used in legacy analog video systems for private closed-circuit data delivery over a television broadcast or video signal. Description In a normal NTSC video signal there are 525 "lines" of video signal. These are split into two half-images, known as "fields", s
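A back-of-the-envelope capacity calculation based on the 15.6 kbit/s-per-line figure quoted above; the number of VBI lines allocated here is an arbitrary example, since broadcasters could dedicate different numbers of lines to NABTS data.

```python
# Rough NABTS throughput: about 15.6 kbit/s per video line used
# (after error correction), multiplied by the lines allocated.
RATE_PER_LINE_KBPS = 15.6
lines_allocated = 4  # example value only
print(f"{lines_allocated} VBI lines -> about {RATE_PER_LINE_KBPS * lines_allocated:.1f} kbit/s")
```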
https://en.wikipedia.org/wiki/Logical%20shift
In computer science, a logical shift is a bitwise operation that shifts all the bits of its operand. The two base variants are the logical left shift and the logical right shift. This is further modulated by the number of bit positions a given value shall be shifted, such as shift left by 1 or shift right by n. Unlike an arithmetic shift, a logical shift does not preserve a number's sign bit or distinguish a number's exponent from its significand (mantissa); every bit in the operand is simply moved a given number of bit positions, and the vacant bit-positions are filled, usually with zeros, and possibly ones (contrast with a circular shift). A logical shift is often used when its operand is being treated as a sequence of bits instead of as a number. Logical shifts can be useful as efficient ways to perform multiplication or division of unsigned integers by powers of two. Shifting left by n bits on a signed or unsigned binary number has the effect of multiplying it by 2^n. Shifting right by n bits on an unsigned binary number has the effect of dividing it by 2^n (rounding towards 0). Logical right shift differs from arithmetic right shift. Thus, many languages have different operators for them. For example, in Java and JavaScript, the logical right shift operator is >>>, but the arithmetic right shift operator is >>. (Java has only one left shift operator, <<, because logical and arithmetic left shifts have the same effect.) The programming languages C, C++, and Go, however, have only one right shift operator, >>. Most C and C++ implementations, and Go, choose which right shift to perform depending on the type of integer being shifted: signed integers are shifted using the arithmetic shift, and unsigned integers are shifted using the logical shift. All currently relevant C standards (ISO/IEC 9899:1999 to 2011) leave a definition gap for cases where the number of shifts is equal to or greater than the number of bits in the operands, so that the result is undefined. Th
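The difference between the two right shifts can be demonstrated in Python, which has arbitrary-precision integers and therefore no dedicated logical right shift operator; the 32-bit width and the helper names below are choices made for this sketch.

```python
def logical_right_shift_32(value, n):
    """Emulate a 32-bit logical right shift: mask to 32 bits so the shift
    operates on a non-negative value and fills with zeros."""
    return (value & 0xFFFFFFFF) >> n

def arithmetic_right_shift_32(value, n):
    """32-bit arithmetic right shift: reinterpret as signed, shift with
    sign extension, then mask back to 32 bits."""
    value &= 0xFFFFFFFF
    if value & 0x80000000:          # sign bit set: negative in two's complement
        value -= 1 << 32            # reinterpret as a signed Python int
    return (value >> n) & 0xFFFFFFFF

x = 0xF0000000  # two's-complement bit pattern of a negative 32-bit number
print(hex(logical_right_shift_32(x, 4)))     # 0xf000000  (zero fill)
print(hex(arithmetic_right_shift_32(x, 4)))  # 0xff000000 (sign fill)
```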
https://en.wikipedia.org/wiki/Clock%20network
A clock network or clock system is a set of synchronized clocks designed to always show exactly the same time by communicating with each other. Clock networks usually consist of a central master clock kept in sync with an official time source, and one or more slave clocks which receive and display the time from the master. Synchronization sources The master clock in a clock network can receive accurate time in a number of ways: through the United States GPS satellite constellation, a Network Time Protocol server, the CDMA cellular phone network, a modem connection to a time source, or by listening to radio transmissions from WWV or WWVH, or a special signal from an upstream broadcast network. Some master clocks don't determine the time automatically. Instead, they rely on an operator to manually set them. Clock networks in critical applications often include a backup source to receive the time, or provisions to allow the master clock to maintain the time even if it loses access to its primary time source. For example, many master clocks can use the reliable frequency of the alternating current line they are connected to. Slave clocks Slave clocks come in many shapes and sizes. They can connect to the master clock through either a cable or a short-range wireless signal. In the 19th century Paris used a series of pneumatic tubes to transmit the signal. Some slave clocks will run independently if they lose the master signal, often with a warning light lit. Others will freeze until the connection is restored. Clock synchronization Many master clocks include the capability to synchronize devices like computers to the master clock signal. Common features include the transmission of the time via RS-232, a Network Time Protocol, or a Pulse Per Second (PPS) contact. Others provide SMPTE time code outputs, which are often used in television settings to synchronize the video from multiple sources. Master Clocks often come equipped with programmable relay o
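As an illustration of how a master clock might fetch time from a Network Time Protocol source, here is a minimal SNTP query in Python. It is a sketch only: it reads just the server's transmit timestamp and ignores the round-trip-delay and offset calculations a real master clock (or full NTP client) would perform; the server name is a public example.

```python
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2208988800  # seconds between 1900-01-01 and 1970-01-01

def query_sntp(server="pool.ntp.org", port=123, timeout=5.0):
    """Minimal SNTP query: send a 48-byte client request and read the
    server's transmit timestamp (seconds field only)."""
    packet = b"\x1b" + 47 * b"\0"   # LI=0, VN=3, Mode=3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(packet, (server, port))
        data, _ = sock.recvfrom(512)
    transmit_seconds = struct.unpack("!I", data[40:44])[0]
    return transmit_seconds - NTP_EPOCH_OFFSET

if __name__ == "__main__":
    print("server time:", time.ctime(query_sntp()))
```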
https://en.wikipedia.org/wiki/Joseph%20Goguen
Joseph Amadee Goguen ( ; June 28, 1941 – July 3, 2006) was an American computer scientist. He was professor of Computer Science at the University of California and University of Oxford, and held research positions at IBM and SRI International. In the 1960s, along with Lotfi Zadeh, Goguen was one of the earliest researchers in fuzzy logic and made profound contributions to fuzzy set theory. In the 1970s Goguen's work was one of the earliest approaches to the algebraic characterisation of abstract data types and he originated and helped develop the OBJ family of programming languages. He was author of A Categorical Manifesto and founder and Editor-in-Chief of the Journal of Consciousness Studies. His development of institution theory impacted the field of universal logic. Standard implication in product fuzzy logic is often called "Goguen implication". Goguen categories are named after him. He was married to Ryoko Amadee Goguen, who is a composer, pianist, and vocalist. Education and academic career Goguen received his bachelor's degree in mathematics from Harvard University in 1963, and his PhD in mathematics from the University of California, Berkeley in 1968, where he was a student of the founder of fuzzy set theory, Lotfi Zadeh. He taught at UC Berkeley, the University of Chicago and University of California, Los Angeles, where he was a full professor of computer science. He held a Research Fellowship in the Mathematical Sciences at the IBM Watson Research Center, where he organised the "ADJ" group. He also visited the University of Edinburgh in Scotland on three Senior Visiting Fellowships. From 1979 to 1988, Goguen worked at SRI International in Menlo Park, California. From 1988 to 1996, he was a professor at the Oxford University Computing Laboratory (now the Department of Computer Science, University of Oxford) in England and a Fellow at St Anne's College, Oxford. In 1996 he became professor of Computer Science at the University of California, San Die
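The Goguen implication mentioned above is the residuum of the product t-norm on truth degrees in [0, 1]; a direct transcription in Python (the function name is just for this sketch):

```python
def goguen_implication(a, b):
    """Goguen implication, the residuum of the product t-norm:
    1 when a <= b, otherwise b / a (a and b are truth degrees in [0, 1])."""
    return 1.0 if a <= b else b / a

print(goguen_implication(0.3, 0.6))  # 1.0  (antecedent no stronger than consequent)
print(goguen_implication(0.8, 0.4))  # 0.5  (= 0.4 / 0.8)
```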
https://en.wikipedia.org/wiki/UML%20tool
A UML tool is a software application that supports some or all of the notation and semantics associated with the Unified Modeling Language (UML), which is the industry standard general-purpose modeling language for software engineering. UML tool is used broadly here to include application programs which are not exclusively focused on UML, but which support some functions of the Unified Modeling Language, either as an add-on, as a component or as a part of their overall functionality. Kinds of Functionality UML tools support the following kinds of functionality: Diagramming Diagramming in this context means creating and editing UML diagrams; that is diagrams that follow the graphical notation of the Unified Modeling Language. The use of UML diagrams as a means to draw diagrams of – mostly – object-oriented software is generally agreed upon by software developers. When developers draw diagrams of object-oriented software, they usually follow the UML notation. On the other hand, it is often debated whether those diagrams are needed at all, during what stages of the software development process they should be used, and how (if at all) they should be kept up to date. The primacy of software code often leads to the diagrams being deprecated. Round-trip engineering Round-trip engineering refers to the ability of a UML tool to perform code generation from models, and model generation from code (a.k.a., reverse engineering), while keeping both the model and the code semantically consistent with each other. Code generation and reverse engineering are explained in more detail below. Code generation Code generation in this context means that the user creates UML diagrams, which have some connected model data, and the UML tool derives from the diagrams part or all of the source code for the software system. In some tools the user can provide a skeleton of the program source code, in the form of a source code template, where predefined tokens are then replaced with program
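A toy sketch of the code-generation idea described above: a tiny class "model" (here just a Python dictionary, standing in for the much richer model data a real UML tool would hold, typically as XMI) is turned into a source-code skeleton whose method bodies are left for the developer.

```python
# Hypothetical miniature "model" of a single UML class.
model = {
    "name": "Account",
    "attributes": [("owner", "str"), ("balance", "float")],
    "operations": ["deposit", "withdraw"],
}

def generate_class(model):
    """Derive a source-code skeleton from the model data."""
    lines = [f"class {model['name']}:"]
    params = ", ".join(f"{name}: {typ}" for name, typ in model["attributes"])
    lines.append(f"    def __init__(self, {params}):")
    for name, _ in model["attributes"]:
        lines.append(f"        self.{name} = {name}")
    for op in model["operations"]:
        lines.append(f"    def {op}(self):")
        lines.append("        raise NotImplementedError  # body left to the developer")
    return "\n".join(lines)

print(generate_class(model))
```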
https://en.wikipedia.org/wiki/Eidophor
An Eidophor was a video projector used to create theater-sized images from an analog video signal. The name Eidophor is derived from the Greek word-roots eido and phor meaning 'image' and 'bearer' (carrier). Its basic technology was the use of electrostatic charges to deform an oil surface. Origins and use The idea for the original Eidophor was conceived in 1939 in Zurich by Swiss physicist Fritz Fischer, professor at the Labor für technische Physik of the Swiss Federal Institute of Technology, with the first prototype being unveiled in 1943. A basic patent was filed on November 8, 1939, in Switzerland and granted by the United States Patent and Trademark Office (patent no. 2,391,451) to Friederich Ernst Fischer for the Process and appliance for projecting television pictures on 25 December 1945. During the Second World War, Edgar Gretener worked together with Fischer at the Institute of Technical Physics to develop a prototype. When Gretener launched his own company Dr. Edgar Gretener AG in 1941 to develop enciphering equipment for the Swiss army, he stopped working on Eidophor. Hugo Thiemann took over this responsibility at the ETH. After six years of work on this project at the ETH, Thiemann moved together with the project to the company Dr. Edgar Gretener AG, which was licensed by the ETH to further develop Eidophor, following Fischer's death in 1947. An original August 1952 magazine article in the Radio and Television News credits the development of the Eidophor to Edgar Gretener. Following the Second World War, a first demonstration of an Eidophor system as a cinema video projector was organized in the Cinema Theater REX in Zurich to show successfully a TV broadcast in April 1958. An even more promising perspective was the interest of Paramount Pictures and 20th Century Fox which experimented with the concept of "theatre television", where television images would be broadcast onto cinema screens. Over 100 cinemas were set up for the project, which fa
https://en.wikipedia.org/wiki/H.E.R.O.%20%28video%20game%29
H.E.R.O. (standing for Helicopter Emergency Rescue Operation) is a video game written by John Van Ryzin and published by Activision for the Atari 2600 in March 1984. It was ported to the Apple II, Atari 5200, Atari 8-bit family, ColecoVision, Commodore 64, MSX, and ZX Spectrum. The player uses a helicopter backpack and other tools to rescue victims trapped deep in a mine. The mine is made up of multiple screens using a flip screen style. Sega released a version of the game for its SG-1000 console in Japan in 1985. While the gameplay was identical, Sega changed the backpack from a helicopter to a jetpack. Gameplay The player assumes control of Roderick Hero (sometimes styled as "R. Hero"), a one-man rescue team. Miners working in Mount Leone are trapped, and it's up to Roderick to reach them. The player is equipped with a backpack-mounted helicopter unit, which allows him to hover and fly, along with a helmet-mounted laser and a limited supply of dynamite. Each level consists of a maze of mine shafts that Roderick must safely navigate in order to reach the miner trapped at the bottom. The backpack has a limited amount of power, so the player must reach the miner before the power supply is exhausted; if it runs out, the player restarts the level from the beginning. The player only needs enough power to reach the trapped miner - not to return with him as well. Mine shafts may be blocked by cave-ins or magma, which require dynamite to clear. The helmet laser can also destroy cave-ins, but far more slowly than dynamite. Unlike a cave-in, magma is lethal when touched. Later levels include walls of magma with openings that alternate between open and closed, requiring skillful navigation. The mine shafts are populated by spiders, bats and other unknown creatures that are deadly to the touch; these creatures can be destroyed using the laser or dynamite. Some deep mines are flooded, forcing players to hover safely above the water. In later levels, monsters st
https://en.wikipedia.org/wiki/Opportunistic%20infection
An opportunistic infection is an infection caused by pathogens (bacteria, fungi, parasites or viruses) that take advantage of an opportunity not normally available. These opportunities can stem from a variety of sources, such as a weakened immune system (as can occur in acquired immunodeficiency syndrome or when being treated with immunosuppressive drugs, as in cancer treatment), an altered microbiome (such as a disruption in gut microbiota), or breached integumentary barriers (as in penetrating trauma). Many of these pathogens do not necessarily cause disease in a healthy host that has a non-compromised immune system, and can, in some cases, act as commensals until the balance of the immune system is disrupted. Opportunistic infections can also be attributed to pathogens which cause mild illness in healthy individuals but lead to more serious illness when given the opportunity to take advantage of an immunocompromised host. Types of opportunistic infections A wide variety of pathogens are involved in opportunistic infection and can cause a similarly wide range in pathologies. A partial list of opportunistic pathogens and their associated presentations includes: Bacteria Clostridioides difficile (formerly known as Clostridium difficile) is a species of bacteria that is known to cause gastrointestinal infection and is typically associated with the hospital setting. Legionella pneumophila is a bacterium that causes Legionnaire’s disease, a respiratory infection. Mycobacterium avium complex (MAC) is a group of two bacteria, M. avium and M. intracellulare, that typically co-infect, leading to a lung infection called mycobacterium avium-intracellulare infection. Mycobacterium tuberculosis is a species of bacteria that causes tuberculosis, a respiratory infection. Pseudomonas aeruginosa is a bacterium that can cause respiratory infections. It is frequently associated with cystic fibrosis and hospital-acquired infections. Salmonella is a genus of bacteria, known
https://en.wikipedia.org/wiki/Riemann%20series%20theorem
In mathematics, the Riemann series theorem, also called the Riemann rearrangement theorem, named after 19th-century German mathematician Bernhard Riemann, says that if an infinite series of real numbers is conditionally convergent, then its terms can be arranged in a permutation so that the new series converges to an arbitrary real number, or diverges. This implies that a series of real numbers is absolutely convergent if and only if it is unconditionally convergent. As an example, the series 1 − 1 + 1/2 − 1/2 + 1/3 − 1/3 + ⋯ converges to 0 (for a sufficiently large number of terms, the partial sum gets arbitrarily near to 0); but replacing all terms with their absolute values gives 1 + 1 + 1/2 + 1/2 + 1/3 + 1/3 + ⋯, which sums to infinity. Thus the original series is conditionally convergent, and can be rearranged (by taking the first two positive terms followed by the first negative term, followed by the next two positive terms and then the next negative term, etc.) to give a series that converges to a different sum: 1 + 1/2 − 1 + 1/3 + 1/4 − 1/2 + ⋯ = ln 2. More generally, using this procedure with p positives followed by q negatives gives the sum ln(p/q). Other rearrangements give other finite sums or do not converge to any sum. History It is a basic result that the sum of finitely many numbers does not depend on the order in which they are added. For example, . The observation that the sum of an infinite sequence of numbers can depend on the ordering of the summands is commonly attributed to Augustin-Louis Cauchy in 1833. He analyzed the alternating harmonic series, showing that certain rearrangements of its summands result in different limits. Around the same time, Peter Gustav Lejeune Dirichlet highlighted that such phenomena are ruled out in the context of absolute convergence, and gave further examples of Cauchy's phenomenon for some other series which fail to be absolutely convergent. In the course of his analysis of Fourier series and the theory of Riema
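The rearrangement described above is easy to check numerically. The sketch below takes p positive terms followed by q negative terms of the series 1 − 1 + 1/2 − 1/2 + ⋯ and compares the partial sums with ln(p/q); function names and the number of blocks are choices made for this illustration.

```python
import math
from itertools import count

def rearranged_partial_sum(p, q, blocks):
    """Partial sum of 1 - 1 + 1/2 - 1/2 + ... rearranged so that p positive
    terms are followed by q negative terms, repeated `blocks` times."""
    total = 0.0
    pos = count(1)   # positive terms 1, 1/2, 1/3, ...
    neg = count(1)   # negative terms -1, -1/2, -1/3, ...
    for _ in range(blocks):
        for _ in range(p):
            total += 1.0 / next(pos)
        for _ in range(q):
            total -= 1.0 / next(neg)
    return total

for p, q in [(2, 1), (1, 2), (4, 1)]:
    approx = rearranged_partial_sum(p, q, 100000)
    print(f"p={p}, q={q}: partial sum {approx:.4f}  vs  ln(p/q) = {math.log(p / q):.4f}")
```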
https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics
In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field). Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks. Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material. Stochastic processes topics This list is currently incomplete. See also :Category:Stochastic processes Basic affine jump diffusion Bernoulli process: discrete-time processes with two possible states. Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa. Bessel process Birth–death process Branching process Branching random walk Brownian bridge Brownian motion Chinese restaurant process CIR process Continuous stochastic process Cox process Dirichlet processes Finite-dimensional distribution First passage time Galton–Watson process Gamma process Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables. Gauss–Markov process (cf. below) GenI process Girsanov's theorem Hawkes process Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous. Karhunen–Loève theorem Lévy process Local time (mathematics) Loop-erased random walk Markov processes are those in which the future is conditionally independent of the past given the present. Markov chain Markov chain central limit theorem Conti
https://en.wikipedia.org/wiki/Holonomic%20brain%20theory
Holonomic brain theory is a branch of neuroscience investigating the idea that human consciousness is formed by quantum effects in or between brain cells. Holonomic refers to representations in a Hilbert phase space defined by both spectral and space-time coordinates. Holonomic brain theory is opposed by traditional neuroscience, which investigates the brain's behavior by looking at patterns of neurons and the surrounding chemistry. This specific theory of quantum consciousness was developed by neuroscientist Karl Pribram initially in collaboration with physicist David Bohm building on the initial theories of holograms originally formulated by Dennis Gabor. It describes human cognition by modeling the brain as a holographic storage network. Pribram suggests these processes involve electric oscillations in the brain's fine-fibered dendritic webs, which are different from the more commonly known action potentials involving axons and synapses. These oscillations are waves and create wave interference patterns in which memory is encoded naturally, and the wave function may be analyzed by a Fourier transform. Gabor, Pribram and others noted the similarities between these brain processes and the storage of information in a hologram, which can also be analyzed with a Fourier transform. In a hologram, any part of the hologram with sufficient size contains the whole of the stored information. In this theory, a piece of a long-term memory is similarly distributed over a dendritic arbor so that each part of the dendritic network contains all the information stored over the entire network. This model allows for important aspects of human consciousness, including the fast associative memory that allows for connections between different pieces of stored information and the non-locality of memory storage (a specific memory is not stored in a specific location, i.e. a certain cluster of neurons). Origins and development In 1946 Dennis Gabor invented the hologram mathematically,
https://en.wikipedia.org/wiki/KisMAC
KisMAC is a wireless network discovery tool for Mac OS X. It has a wide range of features, similar to those of Kismet (its Linux/BSD namesake). The program is geared toward network security professionals, and is not as novice-friendly as similar applications. Distributed under the GNU General Public License, KisMAC is free software. KisMAC will scan for networks passively on supported cards (including Apple's AirPort, AirPort Extreme, and many third-party cards) and actively on any card supported by Mac OS X itself. Cracking of WEP and WPA keys, both by brute force and by exploiting flaws such as weak scheduling and badly generated keys, is supported when a card capable of monitor mode is used, and packet reinjection can be done with a supported card (Prism2 and some Ralink cards). GPS mapping can be performed when an NMEA compatible GPS receiver is attached. Kismac2 is a fork of the original software with a new GUI and new features that works on OS X 10.7 through 10.10, 64-bit only. It is no longer maintained. Data can also be saved in pcap format and loaded into programs such as Wireshark. KisMAC Features Reveals hidden / cloaked / closed SSIDs Shows logged in clients (with MAC addresses, IP addresses and signal strengths) Mapping and GPS support Can draw area maps of network coverage PCAP import and export Support for 802.11b/g Different attacks against encrypted networks Deauthentication attacks AppleScript-able Kismet drone support (capture from a Kismet drone) KisMAC and Germany The project was created and led by Michael Rossberg until July 27, 2007, when he removed himself from the project due to changes in German law (specifically, StGB Section 202c) that "prohibits the production and distribution of security software". On this date, project lead was passed on to Geoffrey Kruse, maintainer of KisMAC since 2003, and active developer since 2001. KisMAC is no longer being actively developed. Primary development, and the relocated KisMA
https://en.wikipedia.org/wiki/List%20of%20circle%20topics
This list of circle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or concretely in physical space. It does not include metaphors like "inner circle" or "circular reasoning" in which the word does not refer literally to the geometric shape. Geometry and other areas of mathematics Circle Circle anatomy Annulus (mathematics) Area of a disk Bipolar coordinates Central angle Circular sector Circular segment Circumference Concentric Concyclic Degree (angle) Diameter Disk (mathematics) Horn angle Measurement of a Circle List of topics related to Pole and polar Power of a point Radical axis Radius Radius of convergence Radius of curvature Sphere Tangent lines to circles Versor Specific circles Apollonian circles Circles of Apollonius Archimedean circle Archimedes' circles – the twin circles doubtfully attributed to Archimedes Archimedes' quadruplets Circle of antisimilitude Bankoff circle Brocard circle Carlyle circle Circumscribed circle (circumcircle) Midpoint-stretching polygon Coaxal circles Director circle Fermat–Apollonius circle Ford circle Fuhrmann circle Generalised circle GEOS circle Great circle Great-circle distance Circle of a sphere Horocycle Incircle and excircles of a triangle Inscribed circle Johnson circles Magic circle (mathematics) Malfatti circles Nine-point circle Orthocentroidal circle Osculating circle Riemannian circle Schinzel circle Schoch circles Spieker circle Tangent circles Twin circles Unit circle Van Lamoen circle Villarceau circles Woo circles Circle-derived entities Apollonian gasket Arbelos Bicentric polygon Bicentric quadrilateral Coxeter's loxodromic sequence of tangent circles Cyclic quadrilateral Cycloid Ex-tangential quadrilateral Hawaiian earring Inscribed angle Inscribed angle theorem Inversive distance Inversive geometry Irrational rotation Lens (geometry) Lune Lune of
https://en.wikipedia.org/wiki/Osculating%20circle
An osculating circle is a circle that best approximates the curvature of a curve at a specific point. It is tangent to the curve at that point and has the same curvature as the curve at that point. The osculating circle provides a way to understand the local behavior of a curve and is commonly used in differential geometry and calculus. More formally, in differential geometry of curves, the osculating circle of a sufficiently smooth plane curve at a given point p on the curve has been traditionally defined as the circle passing through p and a pair of additional points on the curve infinitesimally close to p. Its center lies on the inner normal line, and its curvature defines the curvature of the given curve at that point. This circle, which is the one among all tangent circles at the given point that approaches the curve most tightly, was named circulus osculans (Latin for "kissing circle") by Leibniz. The center and radius of the osculating circle at a given point are called center of curvature and radius of curvature of the curve at that point. A geometric construction was described by Isaac Newton in his Principia: Nontechnical description Imagine a car moving along a curved road on a vast flat plane. Suddenly, at one point along the road, the steering wheel locks in its present position. Thereafter, the car moves in a circle that "kisses" the road at the point of locking. The curvature of the circle is equal to that of the road at that point. That circle is the osculating circle of the road curve at that point. Mathematical description Let be a regular parametric plane curve, where is the arc length (the natural parameter). This determines the unit tangent vector , the unit normal vector , the signed curvature and the radius of curvature at each point for which is composed: Suppose that P is a point on γ where . The corresponding center of curvature is the point Q at distance R along N, in the same direction if k is positive and in the opposite
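The center and radius of curvature can be computed directly from the first and second derivatives of a parametric plane curve, using the standard signed-curvature formula k = (x′y″ − y′x″)/(x′² + y′²)^(3/2). The sketch below applies it to the parabola (t, t²) at its vertex, where the osculating circle is known to have radius 1/2 and center (0, 1/2); the function name and argument layout are choices made for this example.

```python
import math

def osculating_circle(x, y, dx, dy, ddx, ddy):
    """Radius and center of the osculating circle of a parametric plane
    curve (x(t), y(t)) at a point, from first and second derivatives."""
    speed_sq = dx * dx + dy * dy
    k = (dx * ddy - dy * ddx) / speed_sq ** 1.5   # signed curvature
    radius = 1.0 / abs(k)
    # Unit normal obtained by rotating the unit tangent by +90 degrees;
    # the center lies at distance 1/k along it (the sign picks the side).
    nx, ny = -dy / math.sqrt(speed_sq), dx / math.sqrt(speed_sq)
    center = (x + nx / k, y + ny / k)
    return radius, center

# Parabola (t, t^2) at t = 0: first derivative (1, 0), second derivative (0, 2).
print(osculating_circle(0.0, 0.0, 1.0, 0.0, 0.0, 2.0))
# -> (0.5, (0.0, 0.5)), matching the known osculating circle at the vertex.
```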
https://en.wikipedia.org/wiki/Area%20of%20a%20circle
In geometry, the area enclosed by a circle of radius r is πr². Here the Greek letter π represents the constant ratio of the circumference of any circle to its diameter, approximately equal to 3.14159. One method of deriving this formula, which originated with Archimedes, involves viewing the circle as the limit of a sequence of regular polygons with an increasing number of sides. The area of a regular polygon is half its perimeter multiplied by the distance from its center to its sides, and because the sequence tends to a circle, the corresponding formula–that the area is half the circumference times the radius–namely, A = 1/2 × 2πr × r = πr², holds for a circle. Terminology Although often referred to as the area of a circle in informal contexts, strictly speaking the term disk refers to the interior region of the circle, while circle is reserved for the boundary only, which is a curve and covers no area itself. Therefore, the area of a disk is the more precise phrase for the area enclosed by a circle. History Modern mathematics can obtain the area using the methods of integral calculus or its more sophisticated offspring, real analysis. However, the area of a disk was studied by the Ancient Greeks. Eudoxus of Cnidus in the fifth century B.C. had found that the area of a disk is proportional to its radius squared. Archimedes used the tools of Euclidean geometry to show that the area inside a circle is equal to that of a right triangle whose base has the length of the circle's circumference and whose height equals the circle's radius in his book Measurement of a Circle. The circumference is 2πr, and the area of a triangle is half the base times the height, yielding the area πr² for the disk. Prior to Archimedes, Hippocrates of Chios was the first to show that the area of a disk is proportional to the square of its diameter, as part of his quadrature of the lune of Hippocrates, but did not identify the constant of proportionality. Historical arguments A variety of arguments have been advance
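The limit-of-polygons argument sketched above can be illustrated numerically: the area of a regular n-gon inscribed in a unit circle, ½ n sin(2π/n), approaches π as n grows. The inscribed form is one convenient choice for this sketch; Archimedes also used circumscribed polygons.

```python
import math

def inscribed_polygon_area(n_sides, radius=1.0):
    """Area of a regular n-gon inscribed in a circle of the given radius:
    n congruent isosceles triangles with apex angle 2*pi/n."""
    return 0.5 * n_sides * radius ** 2 * math.sin(2 * math.pi / n_sides)

for n in (6, 24, 96, 1536):
    print(f"{n:5d} sides: area = {inscribed_polygon_area(n):.6f}")
print(f"circle (pi * r^2): {math.pi:.6f}")
```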
https://en.wikipedia.org/wiki/Engineers%20India
Engineers India Limited (EIL) is an Indian public sector engineering consultancy and technology licensing company. It was set up in 1965 with the mandate of providing indigenous technology solutions across hydrocarbon projects. Over the years, it has also diversified into synergic sectors like non-ferrous metallurgy, infrastructure, water and wastewater management and fertilizers. EIL is headquartered at Bhikaji Cama Place, New Delhi. EIL also has an R&D complex at Gurgaon, a branch office at Mumbai, regional offices at Kolkata, Chennai, Vadodara, inspection offices at all major equipment manufacturing locations in India and overseas offices in London (England), Milan (Italy), Shanghai (China), Abu Dhabi (UAE). EIL has a wholly-owned subsidiary Certification Engineers International Limited (CEIL). It has set up a joint venture company namely Ramagundam Fertilizers and Chemicals Limited (RFCL) for enhancing the presence in fertilizers sector. As of March 2021, EIL has more than 2400 engineers & professionals in its employee base of over 2800 employees. Navratna status was accorded by Government of India in 2014. History EIL was incorporated on March 15, 1965, as a private limited company under the name Engineers India Private Limited pursuant to a memorandum of agreement dated June 27, 1964 between the Government of India and Bechtel International Corporation. In May 1967, EIL became a wholly-owned Government of India (GoI) enterprise. The following table illustrates the major events in the history of the company. 1970 - Commenced first international assignment 1972 - Commenced work in the metallurgy segment 1978 - Commenced work in onshore oil and gas processing segment 1989 - Set up its own Research and Development centre at Gurugram 1994 - Subsidiary, Certification Engineers International Limited was set up 1997 - Listing on the BSE and NSE; Mini Ratna status accorded by Government of India 2006 - Commenced work in underground crude oil storages 2
https://en.wikipedia.org/wiki/Coating
A coating is a covering that is applied to the surface of an object, usually referred to as the substrate. The purpose of applying the coating may be decorative, functional, or both. Coatings may be applied as liquids, gases or solids e.g. Powder coatings. Paints and lacquers are coatings that mostly have dual uses, which are protecting the substrate and being decorative, although some artists paints are only for decoration, and the paint on large industrial pipes is for preventing corrosion and identification e.g. blue for process water, red for fire-fighting control. Functional coatings may be applied to change the surface properties of the substrate, such as adhesion, wettability, corrosion resistance, or wear resistance. In other cases, e.g. semiconductor device fabrication (where the substrate is a wafer), the coating adds a completely new property, such as a magnetic response or electrical conductivity, and forms an essential part of the finished product. A major consideration for most coating processes is that the coating is to be applied at a controlled thickness, and a number of different processes are in use to achieve this control, ranging from a simple brush for painting a wall, to some very expensive machinery applying coatings in the electronics industry. A further consideration for 'non-all-over' coatings is that control is needed as to where the coating is to be applied. A number of these non-all-over coating processes are printing processes. Many industrial coating processes involve the application of a thin film of functional material to a substrate, such as paper, fabric, film, foil, or sheet stock. If the substrate starts and ends the process wound up in a roll, the process may be termed "roll-to-roll" or "web-based" coating. A roll of substrate, when wound through the coating machine, is typically called a web. Applications Coating applications are diverse and serve many purposes. Coatings can be both decorative and have other functions. A
https://en.wikipedia.org/wiki/Wheel%20tractor-scraper
In civil engineering, a wheel tractor-scraper (also known as a land scraper, land leveler or 'tournapull') is a type of heavy equipment used for earthmoving. It has a pan/hopper for loading and carrying material. The pan has a tapered horizontal front cutting edge that cuts into the soil like a carpenter's plane or cheese slicer and fills the hopper, which has a movable ejection system. The horsepower of the machine, depth of the cut, type of material, and slope of the cut area affect how quickly the pan is filled. When full, the pan is raised, the apron is closed, and the scraper transports its load to the fill area. There the pan height is set and the lip is opened (the lip is what the bottom edge of the apron is called), so that the ejection system can be engaged for dumping the load. The forward momentum or speed of the machine affects how big an area is covered with the load. A high pan height and slow speed will dump the load over a short distance. With the pan set close to the ground, a higher speed will spread the material more thinly over a larger area. In an "elevating scraper" a conveyor moves material from the cutting edge into the hopper. History R. G. LeTourneau conceived the idea of the self-propelled motor scraper while recovering from a near-fatal auto accident. He was an earthmoving contractor and dealer in bulldozer accessories and envisaged a pulled trailer that could excavate and pick up earth as it moved. He approached bulldozer manufacturer Caterpillar in the mid-1930s with his idea, but it was turned down, so he founded his own company. The first Tournapull, called the Model A, was rolled out of his factory and into trials in 1937. Design The scraper is a large piece of equipment which is used in mining, construction, agriculture and other earthmoving applications. The rear part has a vertically moveable hopper (also known as the bowl) with a sharp horizontal front edge. The hopper can be hydraulically lowered and raised. When the hop
https://en.wikipedia.org/wiki/Arc%20length
Arc length is the distance between two points along a section of a curve. Determining the length of an irregular arc segment by approximating the arc segment as connected (straight) line segments is also called curve rectification. A curve is rectifiable if the lengths of these polygonal approximations are bounded above (so the curve has a finite length). If a curve can be parameterized as an injective and continuously differentiable function (i.e., the derivative is a continuous function), then the curve is rectifiable (i.e., it has a finite length). The advent of infinitesimal calculus led to a general formula that provides closed-form solutions in some cases. General approach A curve in the plane can be approximated by connecting a finite number of points on the curve using (straight) line segments to create a polygonal path. Since it is straightforward to calculate the length of each linear segment (using the Pythagorean theorem in Euclidean space, for example), the total length of the approximation can be found by summation of the lengths of each linear segment; that approximation is known as the (cumulative) chordal distance. If the curve is not already a polygonal path, then using a progressively larger number of line segments of smaller lengths will result in better curve length approximations. Such a curve length determination by approximating the curve as connected (straight) line segments is called rectification of a curve. The lengths of the successive approximations will not decrease and may keep increasing indefinitely, but for smooth curves they will tend to a finite limit as the lengths of the segments get arbitrarily small. For some curves, there is a smallest number L that is an upper bound on the length of all polygonal approximations (rectification). These curves are called rectifiable and the arc length is defined as the number L. A signed arc length can be defined to convey a sense of orientation or "direction" with respect to a reference point taken as origin in the curve
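A small numerical illustration of rectification as described above: summing chord lengths over progressively finer subdivisions of a quarter of the unit circle, whose exact arc length is π/2. The function and parameter names are just for this sketch.

```python
import math

def polygonal_arc_length(f, a, b, segments):
    """Approximate the length of the curve y = f(x) on [a, b] by summing
    the lengths of `segments` straight chords (the rectification idea)."""
    total = 0.0
    x_prev, y_prev = a, f(a)
    for i in range(1, segments + 1):
        x = a + (b - a) * i / segments
        y = f(x)
        total += math.hypot(x - x_prev, y - y_prev)
        x_prev, y_prev = x, y
    return total

# Quarter of the unit circle, y = sqrt(1 - x^2) on [0, 1]; exact length is pi/2.
f = lambda x: math.sqrt(max(0.0, 1.0 - x * x))
for n in (10, 100, 10000):
    print(f"{n:6d} segments: {polygonal_arc_length(f, 0.0, 1.0, n):.6f}")
print(f"exact pi/2:      {math.pi / 2:.6f}")
```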
https://en.wikipedia.org/wiki/Opaque%20predicate
In computer programming, an opaque predicate is a predicate, an expression that evaluates to either "true" or "false", for which the outcome is known by the programmer a priori, but which, for a variety of reasons, still needs to be evaluated at run time. Opaque predicates have been used as watermarks, as they will be identifiable in a program's executable. They can also be used to prevent an overzealous optimizer from optimizing away a portion of a program. Another use is in obfuscating the control or dataflow of a program to make reverse engineering harder. External links "A Method for Watermarking Java Programs via Opaque Predicates"
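A minimal sketch of an opaque predicate of the always-true variety (shown here in Python, though obfuscators typically insert such constructs into compiled code): the programmer knows that x(x + 1) is always even, but the expression is still evaluated at run time and the dead branch remains in the program to mislead analysis.

```python
def opaquely_true(x: int) -> bool:
    """An opaque predicate: the product of two consecutive integers is
    always even, so this returns True for every integer x -- a fact known
    to the programmer but not obvious to a casual analyzer."""
    return (x * (x + 1)) % 2 == 0

def protected_routine(data, x):
    if opaquely_true(x):
        # Real control flow: this branch is always taken.
        return sum(data)
    else:
        # Dead branch inserted only to complicate reverse engineering.
        return max(data) * 42

print(protected_routine([1, 2, 3], x=7))  # always 6
```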
https://en.wikipedia.org/wiki/4x4%20Evo
4x4 Evo (also re-released as 4x4 Evolution) is a video game developed by Terminal Reality for the Windows, Macintosh, Sega Dreamcast, and PlayStation 2 platforms. It is one of the first console games to have cross-platform online play where Dreamcast, Macintosh, and Windows versions of the game appear online at the same time. The game can use maps created by users to download onto a hard drive as well as a Dreamcast VMU. All versions of the game are similar in quality and gameplay although the online systems feature a mode to customize the players' own truck and use it online. The game is still online-capable on all systems except for PlayStation 2. This was Terminal Reality's only video game to be released for the Dreamcast. Gameplay Gameplay features off-road racing of over 70 licensed truck manufacturers. Modes featured in the game were Career Mode, Online Mode, Map editor, and versus mode. The career mode is the most important part of the game to feature a way to buy better trucks similar to the Gran Turismo series. The Career mode also gives the player six purpose-built race vehicles: Chevrolet TrailBlazer Race SUV 2WD, Dodge Dakota Race Truck 4WD, Ford F-150 Race Truck 2WD, Mitsubishi Pajero Rally 4WD, Nissan Xterra Race SUV 4WD, and the Toyota Tundra Race Truck 2WD. They cost anywhere from $350,000 up to $850,000. These are the fastest vehicles in the game. Recently, KC Vale acquired permission from Terminal Reality, Incorporated to upload the game to his Web server, but the original vehicles have been removed due to an expired license. Multiplayer Although this game was released many years ago, the online community still exists with a fair number of players and some moderators who manage chat rooms. Dedicated servers are long gone, but it is possible to host games over the Internet and join other player-hosted games. The game has been brought back online thanks to the Dreamcast community as one of the more than 20 games so far to be brought back online fo
https://en.wikipedia.org/wiki/Lettering
Lettering is an umbrella term that covers the art of drawing letters, instead of simply writing them. Lettering is considered an art form, where each letter in a phrase or quote acts as an illustration. Each letter is created with attention to detail and has a unique role within a composition. Lettering is created as an image, with letters that are meant to be used in a unique configuration. Lettering words do not always translate into alphabets that can later be used in a typeface, since they are created with a specific word in mind. Examples Lettering includes that used for purposes of blueprints and comic books, as well as decorative lettering such as sign painting and custom graphics. For instance; on posters, for a letterhead or business wordmark, lettering in stone, lettering for advertisements, tire lettering, fileteado, graffiti, or on chalkboards. Lettering may be drawn, incised, applied using stencils, using a digital medium with a stylus, or a vector program. Lettering that was not created using digital tools is commonly referred to as hand-lettering. In the past, almost all decorative lettering other than that on paper was created as custom or hand-painted lettering. The use of fonts in place of lettering has increased due to new printing methods, phototypesetting, and digital typesetting, which allow fonts to be printed at any desired size. Lettering has been particularly important in Islamic art, due to the Islamic practice of avoiding depictions of sentient beings generally and of Muhammad in particular, and instead using representations in the form of Islamic calligraphy, including hilyes, or artforms based on written descriptions of Muhammed. More recently, there has been an influx of aspiring artists attempting hand-lettering with brush pens and digital mediums. Some popular styles are sans serif, serif, cursive/script, vintage, blackletter ("gothic") calligraphy, graffiti, and creative lettering. Related artforms Lettering can be confused
https://en.wikipedia.org/wiki/Remote%20Database%20Access
Remote database access (RDA) is a protocol standard for database access produced in 1993 by the International Organization for Standardization (ISO). Despite early efforts to develop proof of concept implementations of RDA for major commercial remote database management systems (RDBMSs) (including Oracle, Rdb, NonStop SQL and Teradata), this standard has not found commercial support from database vendors. The standard has since been withdrawn, and replaced by ISO/IEC 9579:1999 - Information technology -- Remote Database Access for SQL, which has also been withdrawn, and replaced by ISO/IEC 9579:2000 Information technology -- Remote database access for SQL with security enhancement. Purpose The purpose of RDA is to describe the connection of a database client to a database server. It includes features for: communicating database operations and parameters from the client to the server, in return, transporting result data from the server to the client, database transaction management, and exchange of information. RDA is an application-level protocol, inasmuch that it builds on an existing network connection between client and server. In the case of TCP/IP connections, RFC 1066 is used for implementing RDA. History RDA was published in 1993 as a combined standard of ANSI, ISO (International Organization for Standardization) and IEC (International Electrotechnical Commission). The standards definition comprises two parts: ANSI/ISO/IEC 9579-1:1993 - Remote Database Access -- Part 1: Generic Model, Service and Protocol ANSI/ISO/IEC 9579-2:1993 References Sources Clients (computing) Servers (computing) Data access technologies OSI protocols Database access protocols
https://en.wikipedia.org/wiki/Robinson%E2%80%93Schensted%20correspondence
In mathematics, the Robinson–Schensted correspondence is a bijective correspondence between permutations and pairs of standard Young tableaux of the same shape. It has various descriptions, all of which are of algorithmic nature, it has many remarkable properties, and it has applications in combinatorics and other areas such as representation theory. The correspondence has been generalized in numerous ways, notably by Knuth to what is known as the Robinson–Schensted–Knuth correspondence, and a further generalization to pictures by Zelevinsky. The simplest description of the correspondence is using the Schensted algorithm , a procedure that constructs one tableau by successively inserting the values of the permutation according to a specific rule, while the other tableau records the evolution of the shape during construction. The correspondence had been described, in a rather different form, much earlier by Robinson , in an attempt to prove the Littlewood–Richardson rule. The correspondence is often referred to as the Robinson–Schensted algorithm, although the procedure used by Robinson is radically different from the Schensted algorithm, and almost entirely forgotten. Other methods of defining the correspondence include a nondeterministic algorithm in terms of jeu de taquin. The bijective nature of the correspondence relates it to the enumerative identity where denotes the set of partitions of (or of Young diagrams with squares), and denotes the number of standard Young tableaux of shape . The Schensted algorithm The Schensted algorithm starts from the permutation written in two-line notation where , and proceeds by constructing sequentially a sequence of (intermediate) ordered pairs of Young tableaux of the same shape: where are empty tableaux. The output tableaux are and . Once is constructed, one forms by inserting into , and then by adding an entry to in the square added to the shape by the insertion (so that and have equal shapes for all
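A compact implementation of the Schensted row-insertion procedure described above, producing the insertion tableau P and the recording tableau Q for a permutation given in one-line notation. Representing tableaux as lists of lists and using bisect for the bumping step are implementation choices for this sketch.

```python
from bisect import bisect_right

def robinson_schensted(permutation):
    """Schensted row insertion: return the pair (P, Q) of standard Young
    tableaux of the same shape associated with the permutation."""
    P, Q = [], []          # insertion tableau and recording tableau
    for position, value in enumerate(permutation, start=1):
        row = 0
        while True:
            if row == len(P):              # start a new row
                P.append([value])
                Q.append([position])
                break
            idx = bisect_right(P[row], value)
            if idx == len(P[row]):         # value goes at the end of this row
                P[row].append(value)
                Q[row].append(position)
                break
            # Bump the leftmost entry greater than value into the next row.
            value, P[row][idx] = P[row][idx], value
            row += 1
    return P, Q

P, Q = robinson_schensted([3, 1, 2, 5, 4])
print(P)  # [[1, 2, 4], [3, 5]]
print(Q)  # [[1, 3, 4], [2, 5]]
```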
https://en.wikipedia.org/wiki/Qualitative%20property
Qualitative properties are properties that are observed and can generally not be measured with a numerical result. They are contrasted to quantitative properties which have numerical characteristics. Some engineering and scientific properties are qualitative. A test method can result in qualitative data about something. This can be a categorical result or a binary classification (e.g., pass/fail, go/no go, conform/non-conform). It can sometimes be an engineering judgement. The data that all share a qualitative property form a nominal category. A variable which codes for the presence or absence of such a property is called a binary categorical variable, or equivalently a dummy variable. In businesses Some important qualitative properties that concern businesses are: Human factors, 'human work capital' is probably one of the most important issues that deals with qualitative properties. Some common aspects are work, motivation, general participation, etc. Although all of these aspects are not measurable in terms of quantitative criteria, the general overview of them could be summarized as a quantitative property. Environmental issues are in some cases quantitatively measurable, but other properties are qualitative e.g.: environmentally friendly manufacturing, responsibility for the entire life of a product (from the raw-material till scrap), attitudes towards safety, efficiency, and minimum waste production. Ethical issues are closely related to environmental and human issues, and may be covered in corporate governance. Child labour and illegal dumping of waste are examples of ethical issues. The way a company deals with its stockholders (the 'acting' of a company) is probably the most obvious qualitative aspect of a business. Although measuring something in qualitative terms is difficult, most people can (and will) make a judgement about a behaviour on the basis of how they feel treated. This indicates that qualitative properties are closely related to emotiona
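Coding a qualitative pass/fail result as a dummy (binary categorical) variable, as described above; the inspection outcomes below are invented purely for illustration.

```python
# A qualitative pass/fail property coded as a dummy variable (1 = pass, 0 = fail).
inspections = ["pass", "fail", "pass", "pass", "fail"]
dummy = [1 if outcome == "pass" else 0 for outcome in inspections]
print(dummy)                       # [1, 0, 1, 1, 0]
print(sum(dummy) / len(dummy))     # share of conforming items: 0.6
```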
https://en.wikipedia.org/wiki/Lean%20software%20development
Lean software development is a translation of lean manufacturing principles and practices to the software development domain. Adapted from the Toyota Production System, it is emerging with the support of a pro-lean subculture within the agile community. Lean offers a solid conceptual framework, values and principles, as well as good practices, derived from experience, that support agile organizations. Origin The expression "lean software development" originated in a book by the same name, written by Mary Poppendieck and Tom Poppendieck in 2003. The book restates traditional lean principles, as well as a set of 22 tools and compares the tools to corresponding agile practices. The Poppendiecks' involvement in the agile software development community, including talks at several Agile conferences has resulted in such concepts being more widely accepted within the agile community. Lean principles Lean development can be summarized by seven principles, very close in concept to lean manufacturing principles: Eliminate waste Amplify learning Decide as late as possible Deliver as fast as possible Empower the team Build integrity in Optimize the whole Eliminate waste Lean philosophy regards everything not adding value to the customer as waste (muda). Such waste may include: Partially done work Extra features Relearning Task switching Waiting Handoffs Defects Management activities In order to eliminate waste, one should be able to recognize it. If some activity could be bypassed or the result could be achieved without it, it is waste. Partially done coding eventually abandoned during the development process is waste. Extra features like paperwork and features not often used by customers are waste. Switching people between tasks is waste (because of time spent, and often lost, by people involved in context-switching). Waiting for other activities, teams, processes is waste. Relearning requirements to complete work is waste. Defects and lower quality are wast
https://en.wikipedia.org/wiki/Monoscope
A monoscope was a special form of video camera tube which displayed a single still video image. The image was built into the tube, hence the name. The tube resembled a small cathode ray tube (CRT). Monoscopes were used beginning in the 1950s to generate TV test patterns and station logos. This type of test card generation system was technologically obsolete by the 1980s. Design The monoscope was similar in construction to a CRT, with an electron gun at one end and at the other, a metal target screen with an image formed on it. This was in the position where a CRT would have its phosphor-coated display screen. As the electron beam scanned the target, varying numbers of electrons were reflected from the different areas of the image. The reflected electrons were picked up by an internal electrode ring, producing a varying electrical signal which was amplified to become the video output of the tube. This signal reproduced an accurate still image of the target, so the monoscope was used to produce still images such as test patterns and station logo cards. For example, the classic Indian Head test card as used by many television stations in North America, was often produced using a monoscope. Usage Monoscopes were available with a wide variety of standard patterns and messages, and could be ordered with a custom image such as a station logo. Monoscope "cameras" were widely used to produce test cards, station logos, special signals for test purposes and standard announcements like "Please stand by" and "normal service will be resumed....". They had many advantages over using a live camera pointed at a card; an expensive camera was not tied up, they were always ready, and were never misframed or out of focus. Indeed, monoscopes were often used to calibrate the live cameras, by comparing the monoscope image and the live camera image of the same test pattern. Pointing an electronic camera at the same stationary monochrome caption for a long period of time could res
https://en.wikipedia.org/wiki/Feature-driven%20development
Feature-driven development (FDD) is an iterative and incremental software development process. It is a lightweight or Agile method for developing software. FDD blends a number of industry-recognized best practices into a cohesive whole. These practices are driven from a client-valued functionality (feature) perspective. Its main purpose is to deliver tangible, working software repeatedly in a timely manner in accordance with the Principles behind the Agile Manifesto. History FDD was initially devised by Jeff De Luca to meet the specific needs of a 15-month, 50-person software development project at a large Singapore bank in 1997. This resulted in a set of five processes that covered the development of an overall model and the listing, planning, design, and building of features. The first process is heavily influenced by Peter Coad's approach to object modelling. The second process incorporates Coad's ideas of using a feature list to manage functional requirements and development tasks. The other processes are a result of Jeff De Luca's experience. There have been several implementations of FDD since its successful use on the Singapore project. The description of FDD was first introduced to the world in Chapter 6 of the book Java modelling in Color with UML by Peter Coad, Eric Lefebvre, and Jeff De Luca in 1999. Later, in Stephen Palmer and Mac Felsing's book A Practical Guide to Feature-Driven Development (published in 2002), a more general description of FDD was given decoupled from Java modelling. Overview FDD is a model-driven short-iteration process that consists of five basic activities. For accurate state reporting and keeping track of the software development project, milestones that mark the progress made on each feature are defined. This section gives a high level overview of the activities. In the figure on the right, the meta-process model for these activities is displayed. During the first two sequential activities, an overall model shape is estab
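As a rough sketch of milestone-based state reporting in the spirit described above (the milestone names, weights, and feature names below are invented for illustration and are not taken from the FDD literature), each feature can be tracked as a set of weighted milestones whose completed weights add up to the feature's percent-complete.

# Illustrative only: milestone names and weights are assumptions for this sketch.
MILESTONE_WEIGHTS = {
    "walkthrough": 10,
    "design": 30,
    "code": 40,
    "inspection": 10,
    "promote_to_build": 10,
}

def percent_complete(done_milestones):
    """Percent-complete of one feature, given the milestones finished so far."""
    return sum(MILESTONE_WEIGHTS[m] for m in done_milestones)

# Hypothetical client-valued features and their completed milestones.
features = {
    "List the items in a shopping cart": ["walkthrough", "design", "code"],
    "Calculate the total of a sale": ["walkthrough"],
}

for name, done in features.items():
    print(f"{name}: {percent_complete(done)}% complete")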
https://en.wikipedia.org/wiki/Betavoltaic%20device
A betavoltaic device (betavoltaic cell or betavoltaic battery) is a type of nuclear battery which generates electric current from beta particles (electrons) emitted from a radioactive source, using semiconductor junctions. A common source used is the hydrogen isotope tritium. Unlike most nuclear power sources, which use nuclear radiation to generate heat that is then used to generate electricity, betavoltaic devices use a non-thermal conversion process, converting the electron-hole pairs produced by the ionization trail of beta particles traversing a semiconductor. Betavoltaic power sources (and the related technology of alphavoltaic power sources) are particularly well-suited to low-power electrical applications where long life of the energy source is needed, such as implantable medical devices or military and space applications. History Betavoltaics were invented in the 1970s. Some pacemakers in the 1970s used betavoltaics based on promethium, but these were phased out as cheaper lithium batteries were developed. Early semiconducting materials were not efficient at converting electrons from beta decay into usable current, so higher-energy, more expensive, and potentially hazardous isotopes were used. More efficient semiconducting materials can now be paired with relatively benign isotopes such as tritium, which produce less radiation. The Betacel was considered the first successfully commercialized betavoltaic battery. Proposals The primary use for betavoltaics is for remote and long-term use, such as spacecraft requiring electrical power for a decade or two. Recent progress has prompted some to suggest using betavoltaics to trickle-charge conventional batteries in consumer devices, such as cell phones and laptop computers. As early as 1973, betavoltaics were suggested for use in long-term medical devices such as pacemakers. In 2018 a Russian design based on 2-micron-thick nickel-63 slabs sandwiched between 10-micron diamond layers was introduced. It produc
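As a rough, back-of-the-envelope illustration of why betavoltaic outputs are so small, the electrical power is approximately the source activity times the average beta energy times the conversion efficiency. In the sketch below the choice of a 1-curie tritium source and the 5% conversion efficiency are assumptions made purely for illustration.

# Order-of-magnitude estimate for a tritium betavoltaic source.
ACTIVITY_BQ = 3.7e10          # 1 curie, in decays per second (illustrative source size)
MEAN_BETA_ENERGY_EV = 5.7e3   # average tritium beta energy, roughly 5.7 keV
EV_TO_JOULE = 1.602e-19
EFFICIENCY = 0.05             # assumed beta-to-electric conversion efficiency

beta_power_w = ACTIVITY_BQ * MEAN_BETA_ENERGY_EV * EV_TO_JOULE
electric_power_w = beta_power_w * EFFICIENCY
print(f"Beta power:        {beta_power_w * 1e6:.1f} microwatts")
print(f"Electrical output: {electric_power_w * 1e6:.2f} microwatts")

Even a full curie of tritium therefore yields only on the order of a microwatt of electrical power, which is why betavoltaics are proposed for trickle-charging and long-lived low-power loads rather than bulk power generation.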
https://en.wikipedia.org/wiki/Burst%20noise
Burst noise is a type of electronic noise that occurs in semiconductors and ultra-thin gate oxide films. It is also called random telegraph noise (RTN), popcorn noise, impulse noise, bi-stable noise, or random telegraph signal (RTS) noise. It consists of sudden step-like transitions between two or more discrete voltage or current levels, as high as several hundred microvolts, at random and unpredictable times. Each shift in offset voltage or current often lasts from several milliseconds to seconds, and sounds like popcorn popping if hooked up to an audio speaker. Burst noise was first observed in early point contact diodes, then re-discovered during the commercialization of one of the first semiconductor op-amps, the 709. No single source of burst noise is theorized to explain all occurrences; however, the most commonly invoked cause is the random trapping and release of charge carriers at thin-film interfaces or at defect sites in the bulk semiconductor crystal. In cases where these charges have a significant impact on transistor performance (such as under a MOS gate or in a bipolar base region), the output signal can be substantial. These defects can be caused by manufacturing processes, such as heavy ion implantation, or by unintentional side-effects such as surface contamination. Individual op-amps can be screened for burst noise with peak detector circuits, to minimize the amount of noise in a specific application. Burst noise is modeled mathematically by means of the telegraph process, a Markovian continuous-time stochastic process that jumps discontinuously between two distinct values. See also Atomic electron transition Telegraph process References External links A review of popcorn noise and smart filtering, www.advsolned.com Noise (electronics)
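A minimal simulation of the two-level telegraph model is sketched below; the dwell-time constants, sampling rate and voltage levels are arbitrary illustrative values, not measurements of any particular device. The signal stays at one of two levels for an exponentially distributed time and then jumps to the other, reproducing the step-like, random-duration shifts described above.

import numpy as np

rng = np.random.default_rng(0)

MEAN_DWELL = (0.05, 0.2)     # mean dwell time in each state, seconds (illustrative)
LEVELS = (0.0, 200e-6)       # baseline and a 200-microvolt burst level (illustrative)
FS = 10_000                  # sampling rate, Hz
DURATION = 2.0               # seconds of simulated signal

t = np.arange(0, DURATION, 1 / FS)
signal = np.empty_like(t)

state, i = 0, 0
while i < len(t):
    # Exponentially distributed dwell time in the current state (Markov property).
    dwell = rng.exponential(MEAN_DWELL[state])
    n = max(1, int(dwell * FS))
    signal[i:i + n] = LEVELS[state]
    i += n
    state = 1 - state

# "signal" now contains random telegraph (burst) noise: abrupt jumps between two
# discrete levels at random, unpredictable times.
print(f"{len(t)} samples, fraction of time in the high state: {np.mean(signal > 0):.2f}")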
https://en.wikipedia.org/wiki/Wave%20front%20set
In mathematical analysis, more precisely in microlocal analysis, the wave front (set) WF(f) characterizes the singularities of a generalized function f, not only in space, but also with respect to its Fourier transform at each point. The term "wave front" was coined by Lars Hörmander around 1970. Introduction In more familiar terms, WF(f) tells not only where the function f is singular (which is already described by its singular support), but also how or why it is singular, by being more exact about the direction in which the singularity occurs. This concept is mostly useful in dimension at least two, since in one dimension there are only two possible directions. The complementary notion of a function being non-singular in a direction is microlocal smoothness. Intuitively, as an example, consider a function ƒ whose singular support is concentrated on a smooth curve in the plane at which the function has a jump discontinuity. In the direction tangent to the curve, the function remains smooth. By contrast, in the direction normal to the curve, the function has a singularity. To decide whether the function is smooth in another direction v, one can try to smooth the function out by averaging in directions perpendicular to v. If the resulting function is smooth, then we regard ƒ as smooth in the direction of v. Otherwise, v is in the wavefront set. Formally, in Euclidean space, the wave front set of ƒ is defined as the complement of the set of all pairs (x0, v) such that there exists a smooth, compactly supported test function φ with φ(x0) ≠ 0 and an open cone Γ containing v such that the estimate |F(φƒ)(ξ)| ≤ C_N (1 + |ξ|)^(−N) holds for all ξ ∈ Γ and all positive integers N, with constants C_N that may depend on N. Here F denotes the Fourier transform. Observe that the wavefront set is conical in the sense that if (x, v) ∈ WF(ƒ), then (x, λv) ∈ WF(ƒ) for all λ > 0. In the example discussed in the previous paragraph, the wavefront set consists of the points of the curve paired with the nonzero directions normal to the curve at those points (the conormal directions); tangent directions are excluded, in line with the intuitive description above. Because the def
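For readers who prefer the condition written out symbolically, the defining rapid-decay estimate and a standard textbook example (the Dirac delta at the origin, which is singular at 0 in every direction) can be stated in LaTeX as follows; the example is a well-known fact quoted here for illustration rather than material from the article above.

\[
(x_0, v) \notin \operatorname{WF}(f)
\iff
\exists\, \varphi \in C_c^{\infty}(\mathbb{R}^n),\ \varphi(x_0) \neq 0,\ \exists\ \text{open cone } \Gamma \ni v:\quad
|\mathcal{F}(\varphi f)(\xi)| \le C_N \,(1 + |\xi|)^{-N}
\quad \text{for all } \xi \in \Gamma \text{ and all } N \in \mathbb{N}.
\]
\[
\text{Example:}\qquad \operatorname{WF}(\delta_0) = \{\, (0, \xi) : \xi \in \mathbb{R}^n \setminus \{0\} \,\}.
\]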
https://en.wikipedia.org/wiki/Brassica%20rapa
Brassica rapa is a plant species growing in various widely cultivated forms including the turnip (a root vegetable), komatsuna, napa cabbage, bomdong, bok choy, and rapini. Brassica rapa subsp. oleifera is an oilseed which has many common names, including rape, field mustard, bird's rape, and keblock. The term rapeseed oil is a general term for oil from Brassica species. Food-grade oil made from the seed of low-erucic-acid Canadian-developed strains is also called canola oil, while non-food oil is called colza oil. Canola oil is sourced from three species of Brassica plants: Brassica rapa and Brassica napus are commonly grown in Canada, while Brassica juncea (brown mustard) is a minor crop for oil production. History The origin of B. rapa, both geographically and in terms of any surviving wild relatives, has been difficult to identify because it has been developed by humans into many types of vegetables, is now found in most parts of the world, and has returned to the wild many times as a feral plant. A study of genetic sequences from over 400 domesticated and feral B. rapa individuals, along with environmental modelling, has provided more information about this complex history. These analyses indicate that the ancestral B. rapa probably originated 4000 to 6000 years ago in the Hindu Kush area of Central Asia, and had three sets of chromosomes. This provided the genetic potential for a diversity of form, flavour and growth requirements. Domestication has produced modern vegetables and oil-seed crops, all with two sets of chromosomes. Oilseed subspecies (oleifera) of Brassica rapa may have been domesticated several times from the Mediterranean to India, starting as early as 2000 BC. Edible turnips were possibly first cultivated in northern Europe, and were an important food in ancient Rome. The turnip then spread east to China, and reached Japan by 700 AD. There are descriptions of B. rapa vegetables in Indian and Chinese documents from around 1000 BC. In the 18th century, the turnip
https://en.wikipedia.org/wiki/Transformation%20%28function%29
In mathematics, a transformation is a function f, usually with some geometrical underpinning, that maps a set X to itself, i.e. f: X → X. Examples include linear transformations of vector spaces and geometric transformations, which include projective transformations, affine transformations, and specific affine transformations, such as rotations, reflections and translations. Partial transformations While it is common to use the term transformation for any function of a set into itself (especially in terms like "transformation semigroup" and similar), there exists an alternative form of terminological convention in which the term "transformation" is reserved only for bijections. When such a narrow notion of transformation is generalized to partial functions, then a partial transformation is a function f: A → B, where both A and B are subsets of some set X. Algebraic structures The set of all transformations on a given base set, together with function composition, forms a regular semigroup. Combinatorics For a finite set of cardinality n, there are nⁿ transformations and (n + 1)ⁿ partial transformations. See also Coordinate transformation Data transformation (statistics) Geometric transformation Infinitesimal transformation Linear transformation Rigid transformation Transformation geometry Transformation semigroup Transformation group Transformation matrix References External links Functions and mappings
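These counts are easy to verify by brute force for a small set; the throwaway sketch below fixes X = {0, 1, 2} (an arbitrary choice) and enumerates every map from X to itself, and every partial map in which an element may also be left undefined.

from itertools import product

X = [0, 1, 2]
n = len(X)

# Transformations: total functions from X to X, encoded as tuples of images.
transformations = list(product(X, repeat=n))
assert len(transformations) == n ** n                    # 3**3 = 27

# Partial transformations: each element is either mapped to an element of X or
# left undefined (represented by None), giving n + 1 choices per element.
partial_transformations = list(product(X + [None], repeat=n))
assert len(partial_transformations) == (n + 1) ** n      # 4**3 = 64

print(len(transformations), len(partial_transformations))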
https://en.wikipedia.org/wiki/Pannus
Pannus is an abnormal layer of fibrovascular tissue or granulation tissue. Common sites for pannus formation include over the cornea, over a joint surface (as seen in rheumatoid arthritis), or on a prosthetic heart valve. Pannus may grow in a tumor-like fashion, as in joints where it may erode articular cartilage and bone. In common usage, the term pannus is often used to refer to a panniculus (a hanging flap of tissue). Pannus in rheumatoid arthritis The term "pannus" is derived from the Latin for "tablecloth". Chronic inflammation and exuberant proliferation of the synovium leads to formation of pannus and destruction of cartilage, bone, tendons, ligaments, and blood vessels. Pannus tissue is composed of aggressive macrophage- and fibroblast-like mesenchymal cells, macrophage-like cells and other inflammatory cells that release collagenolytic enzymes. In people suffering from rheumatoid arthritis, pannus tissue eventually forms in the joint affected by the disease, causing bony erosion and cartilage loss via release of IL-1, prostaglandins, and substance P by macrophages. Pannus in ophthalmology In ophthalmology, pannus refers to the growth of blood vessels into the peripheral cornea. In normal individuals, the cornea is avascular. Chronic local hypoxia (such as that occurring with overuse of contact lenses) or inflammation may lead to peripheral corneal vascularization, or pannus. Pannus may also develop in diseases of the corneal stem cells, such as aniridia. It is often resolved by peritomy. References Medical terminology Pathology
https://en.wikipedia.org/wiki/Mantis%20Bug%20Tracker
Mantis Bug Tracker is a free and open source, web-based bug tracking system. The most common use of MantisBT is to track software defects. However, MantisBT is often configured by users to serve as a more generic issue tracking system and project management tool. The name Mantis and the logo of the project refer to the insect family Mantidae, known for the tracking of and feeding on other insects, colloquially referred to as "bugs". The name of the project is typically abbreviated to either MantisBT or just Mantis. History Kenzaburo Ito started development of the Mantis Bug Tracking project in 2000. In 2002, Kenzaburo was joined by Jeroen Latour, Victor Boctor and Julian Fitzell to be the administrators and it became a team project. Version 1.0.0 was released in February 2006. Version 1.1.0 was released in December 2007. In November 2008, after a long discussion, the project switched from using the Subversion revision control tool to Git, a distributed revision control tool. In February 2010, version 1.2.0 was released. In July 2012, the MantisBT organization on GitHub became the official repository for the Project's source code. Features Plug-ins An event-driven plug-in system was introduced with the release of version 1.2.0. This plug-in system allows extension of MantisBT through both officially maintained and third party plug-ins. As of November 2013, there are over 50 plug-ins available on the MantisBT-plugins organization on GitHub. Prior to version 1.2.0, a third party plug-in system created by Vincent Debout was available to users along with a variety of different plug-ins. This system was not officially supported by the MantisBT project and is incompatible with MantisBT 1.2.0 and later. Notifications MantisBT supports the sending of e-mail notifications upon changes being made to issues in the system. Users have the ability to specify the type of e-mails they receive and set filters to define the minimum severity of issues to receive notifications abo
https://en.wikipedia.org/wiki/Flexible-fuel%20vehicle
A flexible-fuel vehicle (FFV) or dual-fuel vehicle (colloquially called a flex-fuel vehicle) is an alternative fuel vehicle with an internal combustion engine designed to run on more than one fuel, usually gasoline blended with either ethanol or methanol fuel, and both fuels are stored in the same common tank. Modern flex-fuel engines are capable of burning any proportion of the resulting blend in the combustion chamber as fuel injection and spark timing are adjusted automatically according to the actual blend detected by a fuel composition sensor. In many designs the blend is inferred indirectly from the exhaust oxygen (lambda) sensor, which reads the oxygen level in the stream of exhaust gases; its signal is used to enrich or lean the fuel mixture going into the engine. Flex-fuel vehicles are distinguished from bi-fuel vehicles, where two fuels are stored in separate tanks and the engine runs on one fuel at a time, for example, compressed natural gas (CNG), liquefied petroleum gas (LPG), or hydrogen. The most common commercially available FFV in the world market is the ethanol flexible-fuel vehicle, with about 60 million automobiles, motorcycles and light-duty trucks manufactured and sold worldwide by March 2018, and concentrated in four markets: Brazil (30.5 million light-duty vehicles and over 6 million motorcycles), the United States (21 million by the end of 2017), Canada (1.6 million by 2014), and Europe, led by Sweden (243,100). In addition to flex-fuel vehicles running on ethanol, in Europe and the US, mainly in California, there have been successful test programs with methanol flex-fuel vehicles, known as M85 flex-fuel vehicles. There have also been successful tests using P-series fuels with E85 flex-fuel vehicles, but as of June 2008, this fuel is not yet available to the general public. These successful tests with P-series fuels were conducted on Ford Taurus and Dodge Caravan flexible-fuel vehicles. Though technology exists to allow ethanol FFVs to run on any mixture of gasoline and ethanol, from pu