source | text
---|---|
https://en.wikipedia.org/wiki/Resolution%20enhancement%20technologies
|
Resolution enhancement technologies are methods used to modify the photomasks in the lithographic processes used to make integrated circuits (ICs or "chips") to compensate for limitations in the optical resolution of the projection systems. These processes allow the creation of features well beyond the limit that would normally apply due to the Rayleigh criterion. Modern technologies allow the creation of features on the order of 5 nanometers (nm), far below the normal resolution possible using deep ultraviolet (DUV) light.
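As a rough, back-of-the-envelope illustration (not taken from the article), the printable half-pitch of a projection system is commonly estimated with the Rayleigh-style scaling R = k1·λ/NA. The sketch below uses assumed values for k1, wavelength and numerical aperture to show why features around 5 nm lie far below what DUV optics resolve directly:

```python
# Minimal sketch of the Rayleigh-style resolution scaling R = k1 * wavelength / NA.
# The numeric values below (k1, wavelength, NA) are illustrative assumptions,
# not figures taken from the article.

def min_half_pitch_nm(k1: float, wavelength_nm: float, numerical_aperture: float) -> float:
    """Estimate the minimum printable half-pitch in nanometres."""
    return k1 * wavelength_nm / numerical_aperture

# Deep-UV ArF immersion scanner, aggressive k1 thanks to resolution enhancement:
duv = min_half_pitch_nm(k1=0.28, wavelength_nm=193.0, numerical_aperture=1.35)

# Extreme-UV scanner for comparison:
euv = min_half_pitch_nm(k1=0.4, wavelength_nm=13.5, numerical_aperture=0.33)

print(f"DUV estimate: {duv:.1f} nm half-pitch")   # ~40 nm, well above 5 nm
print(f"EUV estimate: {euv:.1f} nm half-pitch")   # ~16 nm
```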
Background
Integrated circuits are created in a multi-step process known as photolithography. This process starts with the design of the IC circuitry as a series of layers that will be patterned onto the surface of a sheet of silicon or other semiconductor material known as a wafer.
Each layer of the ultimate design is patterned onto a photomask, which in modern systems is made of fine lines of chromium deposited on highly purified quartz glass. Chromium is used because it is highly opaque to UV light, and quartz because it has limited thermal expansion under the intense heat of the light sources as well as being highly transparent to ultraviolet light. The mask is positioned over the wafer and then exposed to an intense UV light source. With a proper optical imaging system between the mask and the wafer (or no imaging system if the mask is sufficiently closely positioned to the wafer such as in early lithography machines), the mask pattern is imaged on a thin layer of photoresist on the surface of the wafer and a light (UV or EUV)-exposed part of the photoresist experiences chemical reactions causing the photographic pattern to be physically created on the wafer.
When light shines on a pattern like that on a mask, diffraction effects occur. This causes the sharply focused light from the UV lamp to spread out on the far side of the mask, becoming increasingly unfocused with distance. In early systems in the 1970s, avoiding these effects re
|
https://en.wikipedia.org/wiki/Mask%20data%20preparation
|
Mask data preparation (MDP), also known as layout post processing, is the procedure of translating a file containing the intended set of polygons from an integrated circuit layout into set of instructions that a photomask writer can use to generate a physical mask. Typically, amendments and additions to the chip layout are performed in order to convert the physical layout into data for mask production.
Mask data preparation requires an input file which is in a GDSII or OASIS format, and produces a file that is in a proprietary format specific to the mask writer.
MDP procedures
Although converting the physical layout into data for mask production was historically relatively simple, modern MDP involves several steps:
Chip finishing which includes custom designations and structures to improve manufacturability of the layout. Examples of the latter are a seal ring and filler structures.
Producing a reticle layout with test patterns and alignment marks.
Layout-to-mask preparation that enhances layout data with graphics operations and adjusts the data to mask production devices. This step includes resolution enhancement technologies (RET), such as optical proximity correction (OPC) or inverse lithography technology (ILT).
Special considerations must also be made in each of these steps to mitigate the negative effects of the enormous amounts of data they can produce; too much data can make it difficult for the mask writer to create a mask in a reasonable amount of time.
Mask Fracturing
MDP usually involves mask fracturing where complex polygons are translated into simpler shapes, often rectangles and trapezoids, that can be handled by the mask writing hardware. Because mask fracturing is such a common procedure within the whole MDP, the term fracture, used as a noun, is sometimes used inappropriately in place of the term mask data preparation. The term fracture does however accurately describe that sub-proc
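As a loose sketch of the fracturing idea only (not any particular mask writer's algorithm or format), the snippet below cuts an axis-aligned rectilinear polygon into rectangles by slicing at every distinct vertex y-coordinate; production fracture engines also emit trapezoids and honor writer-specific constraints:

```python
# Sketch of mask fracturing for an axis-aligned rectilinear polygon:
# cut the polygon into horizontal slabs at every distinct vertex y,
# then turn each slab's covered x-intervals into rectangles.
# Input is a simple (non-self-intersecting) rectilinear polygon given
# as a list of (x, y) vertices in order.

def fracture_rectilinear(vertices):
    n = len(vertices)
    # Collect vertical edges as (x, ylo, yhi).
    vedges = []
    for i in range(n):
        (x0, y0), (x1, y1) = vertices[i], vertices[(i + 1) % n]
        if x0 == x1 and y0 != y1:
            vedges.append((x0, min(y0, y1), max(y0, y1)))

    ys = sorted({y for _, y in vertices})
    rects = []
    for ylo, yhi in zip(ys, ys[1:]):
        # Vertical edges spanning this slab, sorted by x; consecutive
        # pairs bound the covered regions (even-odd rule).
        xs = sorted(x for x, elo, ehi in vedges if elo <= ylo and ehi >= yhi)
        for xl, xr in zip(xs[0::2], xs[1::2]):
            rects.append((xl, ylo, xr, yhi))
    return rects

# An L-shaped polygon fractures into two rectangles.
lshape = [(0, 0), (4, 0), (4, 2), (2, 2), (2, 4), (0, 4)]
print(fracture_rectilinear(lshape))
# [(0, 0, 4, 2), (0, 2, 2, 4)]
```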
|
https://en.wikipedia.org/wiki/Real-time%20polymerase%20chain%20reaction
|
A real-time polymerase chain reaction (real-time PCR, or qPCR when used quantitatively) is a laboratory technique of molecular biology based on the polymerase chain reaction (PCR). It monitors the amplification of a targeted DNA molecule during the PCR (i.e., in real time), not at its end, as in conventional PCR. Real-time PCR can be used quantitatively and semi-quantitatively (i.e., above/below a certain amount of DNA molecules).
Two common methods for the detection of PCR products in real-time PCR are (1) non-specific fluorescent dyes that intercalate with any double-stranded DNA and (2) sequence-specific DNA probes consisting of oligonucleotides that are labelled with a fluorescent reporter, which permits detection only after hybridization of the probe with its complementary sequence.
The Minimum Information for Publication of Quantitative Real-Time PCR Experiments (MIQE) guidelines propose that the abbreviation qPCR be used for quantitative real-time PCR and that RT-qPCR be used for reverse transcription–qPCR. The acronym "RT-PCR" commonly denotes reverse transcription polymerase chain reaction and not real-time PCR, but not all authors adhere to this convention.
Background
Cells in all organisms regulate gene expression by turnover of gene transcripts (single-stranded RNA): The amount of an expressed gene in a cell can be measured by the number of copies of an RNA transcript of that gene present in a sample. In order to robustly detect and quantify gene expression from small amounts of RNA, amplification of the gene transcript is necessary. The polymerase chain reaction (PCR) is a common method for amplifying DNA; for RNA-based PCR the RNA sample is first reverse-transcribed to complementary DNA (cDNA) with reverse transcriptase.
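As a back-of-the-envelope sketch (not part of the excerpt), the exponential character of PCR can be written as N = N0·(1 + E)^c with per-cycle efficiency E; real-time instruments exploit exactly this relationship when they relate the cycle at which fluorescence crosses a threshold to the starting template amount. The copy numbers and efficiency below are illustrative assumptions:

```python
# Sketch of exponential PCR amplification: copies after c cycles with
# per-cycle efficiency E (E = 1.0 means perfect doubling).  The starting
# copy numbers and threshold below are made-up illustrative values.
import math

def copies_after(n0: float, cycles: int, efficiency: float = 1.0) -> float:
    return n0 * (1.0 + efficiency) ** cycles

def cycles_to_threshold(n0: float, threshold: float, efficiency: float = 1.0) -> float:
    """Cycle number at which amplification crosses a detection threshold (a Ct-like value)."""
    return math.log(threshold / n0) / math.log(1.0 + efficiency)

print(copies_after(100, 30))             # 100 copies -> ~1.07e11 after 30 cycles
print(cycles_to_threshold(100, 1e9))     # ~23.3 cycles
print(cycles_to_threshold(1000, 1e9))    # ~19.9 cycles: 10x more template crosses ~3.3 cycles earlier
```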
In order to amplify small amounts of DNA, the same methodology is used as in conventional PCR using a DNA template, at least one pair of specific primers, deoxyribonucleotide triphosphates, a suitable buffer solution and a thermo
|
https://en.wikipedia.org/wiki/List%20of%20systems%20engineers
|
This is a list of notable systems engineers, people who were trained in or practice systems engineering, and made notable contributions to this field in theory or practice.
A
James S. Albus (1935–2011), American engineer, founder of NIST Intelligent Systems Division
Genrich Altshuller (1926–1998), Russian engineer; inventor of TRIZ, Theory of Inventive Problem Solving
Arnaldo Maria Angelini (1909–1999), Italian engineer; Professor of Electrotechnics at the Sapienza University of Rome
Fred Ascani (1917–2010), American Major General, "father of systems engineering at Wright Field"
B
Dave Bennett (born 1963)
Benjamin Blanchard (1929–2019), Virginia Polytechnic Institute; SE educator; author of texts on systems engineering and related disciplines
Wernher von Braun (1912–1977), chief architect of the Saturn V launch vehicle
C
Peter Checkland (born 1930), British management scientist and emeritus professor of Systems at Lancaster University; developer of soft systems methodology (SSM), a methodology based on a way of systems thinking
Boris Chertok (1912–2011), Rocket Space Corporation "Energy", Moscow, Russia; 2004 Simon Ramo Medal winner for significant contributions to systems engineering and technical leadership of control systems design for the orbiting space station Mir
Harold Chestnut (1918–2001), American electrical engineer and systems engineer; first president of the International Federation of Automatic Control (IFAC)
John R. Clymer (born 1942), researcher, practitioner, and teacher in the field of systems engineering; INCOSE Fellow; expert in conceiving, engineering, and demonstrating computer-aided design tools for context-sensitive, self-adaptive systems
Mary (Missy) Cummings (born ), Associate Professor of Aeronautics and Astronautics at the Massachusetts Institute of Technology; one of the first female fighter pilots in the U.S. Navy
E
F
Wolt Fabrycky (born 1932), Virginia Polytechnic Institute; SE educator; author of texts on system
|
https://en.wikipedia.org/wiki/Fillet%20%28mechanics%29
|
In mechanical engineering, a fillet is a rounding of an interior or exterior corner of a part designed in CAD. By contrast, an interior or exterior corner cut at an angle or bevel is called a "chamfer". Fillet geometry on an interior corner is concave, whereas a fillet on an exterior corner is convex (in these cases, fillets are typically referred to as rounds). Fillets commonly appear on welded, soldered, or brazed joints.
Depending on the geometric modelling kernel, different CAD software products may provide different fillet functionality. Usually fillets can be quickly designed onto parts in 3D solid modeling software by picking the edges of interest and invoking the function. Smooth edges connecting two simple flat features are generally simple for a computer to create and fast for a human user to specify. It is pronounced "fill-et", like the fillet in picture framing. Once these features are included in the CAD design of a part, they are often manufactured automatically using computer numerical control.
Applications
Stress concentration is a problem of load-bearing mechanical parts which is reduced by employing fillets on points and lines of expected high stress. The fillets distribute the stress over a broader area and effectively make the parts more durable and capable of bearing larger loads.
For considerations in aerodynamics, fillets are employed to reduce interference drag where aircraft components such as wings, struts, and other surfaces meet one another.
For manufacturing, concave corners are sometimes filleted to allow the use of round-tipped end mills to cut out an area of a material. This has a cycle time benefit if the round mill is simultaneously being used to mill complex curved surfaces.
Radii are used to eliminate sharp edges that can be easily damaged or that can cause injury when the part is handled.
Terminology
Different design packages use different names for the same operations.
A
|
https://en.wikipedia.org/wiki/Reynolds%20transport%20theorem
|
In differential calculus, the Reynolds transport theorem (also known as the Leibniz–Reynolds transport theorem), or simply the Reynolds theorem, named after Osborne Reynolds (1842–1912), is a three-dimensional generalization of the Leibniz integral rule. It is used to recast time derivatives of integrated quantities and is useful in formulating the basic equations of continuum mechanics.
Consider integrating $f = f(\mathbf{x}, t)$ over the time-dependent region $\Omega(t)$ that has boundary $\partial\Omega(t)$, then taking the derivative with respect to time:
$$\frac{d}{dt}\int_{\Omega(t)} f\, dV.$$
If we wish to move the derivative into the integral, there are two issues: the time dependence of $f$, and the introduction of and removal of space from $\Omega$ due to its dynamic boundary. Reynolds transport theorem provides the necessary framework.
General form
Reynolds transport theorem can be expressed as follows:
$$\frac{d}{dt}\int_{\Omega(t)} \mathbf{f}\, dV = \int_{\Omega(t)} \frac{\partial \mathbf{f}}{\partial t}\, dV + \int_{\partial\Omega(t)} \left(\mathbf{v}_b \cdot \mathbf{n}\right)\mathbf{f}\, dA$$
in which $\mathbf{n}(\mathbf{x},t)$ is the outward-pointing unit normal vector, $\mathbf{x}$ is a point in the region and is the variable of integration, $dV$ and $dA$ are volume and surface elements at $\mathbf{x}$, and $\mathbf{v}_b(\mathbf{x},t)$ is the velocity of the area element (not the flow velocity). The function $\mathbf{f}$ may be tensor-, vector- or scalar-valued. Note that the integral on the left hand side is a function solely of time, and so the total derivative has been used.
Form for a material element
In continuum mechanics, this theorem is often used for material elements. These are parcels of fluids or solids which no material enters or leaves. If $\Omega(t)$ is a material element then there is a velocity function $\mathbf{v} = \mathbf{v}(\mathbf{x},t)$, and the boundary elements obey
$$\mathbf{v}_b \cdot \mathbf{n} = \mathbf{v} \cdot \mathbf{n}.$$
This condition may be substituted to obtain:
$$\frac{d}{dt}\int_{\Omega(t)} \mathbf{f}\, dV = \int_{\Omega(t)} \frac{\partial \mathbf{f}}{\partial t}\, dV + \int_{\partial\Omega(t)} \left(\mathbf{v} \cdot \mathbf{n}\right)\mathbf{f}\, dA.$$
A special case
If we take $\Omega$ to be constant with respect to time, then $\mathbf{v}_b = \mathbf{0}$ and the identity reduces to
$$\frac{d}{dt}\int_{\Omega} \mathbf{f}\, dV = \int_{\Omega} \frac{\partial \mathbf{f}}{\partial t}\, dV,$$
as expected. (This simplification is not possible if the flow velocity is incorrectly used in place of the velocity of an area element.)
Interpretation and reduction to one dimension
The theorem is the higher-dimensional extension of differentiation under the integral sign and reduces to that expression in some cases. Suppose $f$ is independent of $y$ and
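For reference, a sketch of the one-dimensional case the truncated passage points toward: assuming $f$ depends only on $x$ and $t$ and the region projects to the interval $[a(t), b(t)]$, the theorem collapses to the Leibniz integral rule:

```latex
\frac{d}{dt}\int_{a(t)}^{b(t)} f(x,t)\,dx
  = \int_{a(t)}^{b(t)} \frac{\partial f}{\partial t}\,dx
  + f\bigl(b(t),t\bigr)\,\frac{db}{dt}
  - f\bigl(a(t),t\bigr)\,\frac{da}{dt}
```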
|
https://en.wikipedia.org/wiki/Tamga
|
A tamga or tamgha (from ; ; ; ; ; ) was an abstract seal or stamp used by Eurasian nomads and by cultures influenced by them. The tamga was normally the emblem of a particular tribe, clan or family. They were common among the Eurasian nomads throughout Classical Antiquity and the Middle Ages. As clan and family identifiers, the collection and systematic comparison of tamgas is regarded to provide insights into relations between families, individuals and ethnic groups in the steppe territory.
Similar tamga-like symbols were sometimes adopted by sedentary peoples adjacent to the Pontic–Caspian steppe both in Eastern Europe and Central Asia.
It has been speculated that Turkic tamgas represent one of the sources of the Old Turkic script of the 6th–10th centuries, but since the mid-20th century, this hypothesis is widely rejected as being unverifiable.
Tamgas in the steppe tradition
Ancient origins
Tamgas originate in pre-historic times, but their exact usage and development cannot be continuously traced over time. There are, however, symbols represented in rock art that are referred to as tamgas or tamga-like. If they serve to record the presence of individuals at a particular place, they may be functionally equivalent with medieval tamgas.
In the later phases of the Bosporan Kingdom, the ruling dynasty applied personal tamgas, composed of a fragment representing the family and a fragment representing the individual king, apparently in continuation of steppe traditions and in an attempt to consolidate sedentary and nomadic factions within the kingdom.
Turkic peoples
According to Clauson (1972, p.504f.), Common Turkic tamga means "originally a `brand' or mark of ownership placed on horses, cattle, and other livestock; it became at a very early date something like a European coat of arms or crest, and as such appears at the head of several Türkü and many O[ld] Kir[giz] funerary monuments".
Among modern Turkic peoples, the tamga is a design identifying property or ca
|
https://en.wikipedia.org/wiki/Service%20Data%20Objects
|
Service Data Objects is a technology that allows heterogeneous data to be accessed in a uniform way. The SDO specification was originally developed in 2004 as a joint collaboration between Oracle (BEA) and IBM and approved by the Java Community Process in JSR 235. Version 2.0 of the specification was introduced in November 2005 as a key part of the Service Component Architecture.
Relation to other technologies
Originally, the technology was known as Web Data Objects, or WDO, and was shipped in IBM WebSphere Application Server 5.1 and IBM WebSphere Studio Application Developer 5.1.2. Other similar technologies are JDO, EMF, JAXB and ADO.NET.
Design
Service Data Objects denote the use of language-agnostic data structures that facilitate communication between structural tiers and various service-providing entities. They require the use of a tree structure with a root node and provide traversal mechanisms (breadth/depth-first) that allow client programs to navigate the elements. Objects can be static (fixed number of fields) or dynamic with a map-like structure allowing for unlimited fields. The specification defines meta-data for all fields and each object graph can also be provided with change summaries that can allow receiving programs to act more efficiently on them.
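As an illustrative sketch only (the class and method names below are invented, and this is not the commonj.sdo interface), the idea of a dynamic, tree-shaped data object that records a change summary for its consumers could be modelled roughly as follows:

```python
# Rough sketch of an SDO-style dynamic data object: a tree of named
# properties plus a change summary listing old values, so a receiving
# program can apply only what changed.  Class and method names here are
# invented for illustration; this is not the commonj.sdo interface.

class DataObject:
    def __init__(self, name: str):
        self.name = name
        self.properties = {}      # dynamic, map-like fields
        self.children = []        # tree structure below this node
        self.change_summary = {}  # property -> old value

    def set(self, prop: str, value):
        if prop in self.properties and prop not in self.change_summary:
            self.change_summary[prop] = self.properties[prop]
        self.properties[prop] = value

    def add_child(self, child: "DataObject"):
        self.children.append(child)

    def walk(self):
        """Depth-first traversal from this node."""
        yield self
        for child in self.children:
            yield from child.walk()

root = DataObject("customer")
root.add_child(DataObject("address"))
root.set("name", "Ada")
root.set("name", "Ada Lovelace")            # old value recorded in the change summary
print([node.name for node in root.walk()])  # ['customer', 'address']
print(root.change_summary)                  # {'name': 'Ada'}
```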
Developers
The specification is now being developed by IBM, Rogue Wave, Oracle, SAP, Siebel, Sybase, Xcalia, Software AG within the OASIS Member Section Open CSA since April 2007. Collaborative work and materials remain on the collaboration platform of Open SOA, an informal group of actors of the industry.
Implementations
The following SDO products are available:
Rogue Wave Software HydraSDO
Xcalia (for Java and .Net)
Oracle (Data Service Integrator)
IBM (Virtual XML Garden)
IBM (WebSphere Process Server)
There are open source implementations of SDO from:
The Eclipse Persistence Services Project (EclipseLink)
The Apache Tuscany project for Java and C++
The fcl-sdo library included with
|
https://en.wikipedia.org/wiki/Lin%20Hsin%20Hsin
|
Lin Hsin Hsin () is an IT inventor, artist, poet and composer from Singapore, deeply rooted in mathematics and information technology.
Early life and education
Lin was born in Singapore. She graduated in mathematics from the University of Singapore and received a postgraduate degree in computer science from Newcastle University, England. She studied music and art in Singapore, printmaking at the University of Ulster, papermaking in Ogawamachi, Japan and paper conservation at the University of Melbourne Conservation Services.
Career
Lin is a digital native. Lin builds paradigm-shifting, patent-grade inventions. She is an IT visionary some 20 years ahead of her time, who pens her IT vision in computing, poems, and paintings.
In 1976, Lin painted "Distillation of an Apple", an oil painting claimed to visualise the construction and use of the Apple computer seven days before the birth of Apple Computer. In 1977, she painted "The Computer as Architect", an oil painting depicting the vision of the power of the computer in architecture. Lin claimed she had never seen nor used a computer-aided design (CAD) system prior to her painting, although commercial CAD systems had been available since the early 1970s.
1988 March organized 1st Artificial Intelligence conference in Singapore
1991 February 1 poem titled "Cellular Phone Galore" predicted mobile phone, & cellular network BEFORE 2G GSM launch, 27 March 1991, p. 54,55, "from time to time"
1992 wanted to build a multimedia museum (letter to National Computer Board, Singapore)
1993 February, predicted the Y2K bug while building a ten-year forecasting model on an IBM i486 PC, Journal of the Asia Pacific Economic Conference (APEC), 1999
1993 August 21, poem title "Online Intimacy" on Online dating service, p. 235, "Sunny Side Up"
1993 August 23, poem titled "Till Bankrupt Do Us Part", on online shopping & e-commerce, p. 241, "Sunny Side Up"
1994 May, painted "Voices of the Future" – oil painting depicted the wireless and mobile entertainment futu
|
https://en.wikipedia.org/wiki/Knight%20Tyme
|
Knight Tyme is a computer game released for the ZX Spectrum, Amstrad CPC, Commodore 64 and MSX compatibles in 1986. It was published by Mastertronic as part of their Mastertronic Added Dimension label. Two versions of the ZX Spectrum release were published: a full version for the 128K Spectrum (which was published first) and a cut-down version for the 48K Spectrum that removed the music, some graphics and some locations (which was published later).
It was programmed by David Jones and is the third game in the Magic Knight series. The in-game music was written by David Whittaker on the C64 version and Rob Hubbard on the Spectrum and Amstrad versions. Graphics were by Ray Owen.
Plot
Having rescued his friend Gimbal the wizard from a self-inflicted white-out spell, the Magic Knight finds himself transported into the far future aboard the starship USS Pisces. Magic Knight must find a way back to his own time, with the help of the Tyme Guardians, before he is apprehended by the Paradox Police. On board the USS Pisces, the Magic Knight is first not recognized at all by the crew of the ship, and must create an ID Card, which he receives a template of from Derby IV, the ship's main computer. After getting his ID completed, he then takes command of the ship, first arriving at Starbase 1 to refuel the ship. After refueling, the Magic Knight collects the pieces of the Golden Sundial from Monopole, Retreat and Outpost. Returning to the ship with all the pieces of the sundial, he discovers that a time machine has appeared inside the USS Pisces to take him back to his own time.
Gameplay
Gameplay is very similar to that of Knight Tyme's predecessor, Spellbound. Once again, the game's wide range of commands are carried out using "Windimation", a system whereby text commands are carried out through choosing options in command windows.
The importance of watching Magic Knight's energy level and keeping him from harm is rather different this time around. Whilst Spellbound required the pl
|
https://en.wikipedia.org/wiki/Signal%20integrity
|
Signal integrity or SI is a set of measures of the quality of an electrical signal. In digital electronics, a stream of binary values is represented by a voltage (or current) waveform. However, digital signals are fundamentally analog in nature, and all signals are subject to effects such as noise, distortion, and loss. Over short distances and at low bit rates, a simple conductor can transmit this with sufficient fidelity. At high bit rates and over longer distances or through various mediums, various effects can degrade the electrical signal to the point where errors occur and the system or device fails. Signal integrity engineering is the task of analyzing and mitigating these effects. It is an important activity at all levels of electronics packaging and assembly, from internal connections of an integrated circuit (IC), through the package, the printed circuit board (PCB), the backplane, and inter-system connections. While there are some common themes at these various levels, there are also practical considerations, in particular the interconnect flight time versus the bit period, that cause substantial differences in the approach to signal integrity for on-chip connections versus chip-to-chip connections.
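A rough sketch, with assumed trace parameters, of the flight-time-versus-bit-period comparison mentioned above: when the interconnect delay is a large fraction of the bit period (or exceeds it), the connection must be treated as a transmission line rather than a lumped node:

```python
# Back-of-the-envelope signal-integrity check: compare interconnect flight
# time against the bit period.  Trace length, effective dielectric constant,
# and bit rate below are illustrative assumptions.

C = 299_792_458.0  # speed of light, m/s

def flight_time_s(length_m: float, eps_r_eff: float) -> float:
    """Propagation delay of a trace, assuming v = c / sqrt(eps_r_eff)."""
    return length_m * (eps_r_eff ** 0.5) / C

length_m = 0.15          # 15 cm PCB trace
eps_r_eff = 4.0          # FR-4-like effective permittivity
bit_rate = 5e9           # 5 Gbit/s

t_flight = flight_time_s(length_m, eps_r_eff)
t_bit = 1.0 / bit_rate

print(f"flight time: {t_flight * 1e12:.0f} ps, bit period: {t_bit * 1e12:.0f} ps")
# flight time: 1000 ps, bit period: 200 ps -> several bits are "in flight"
# on the trace at once, so transmission-line effects dominate.
```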
Some of the main issues of concern for signal integrity are ringing, crosstalk, ground bounce, distortion, signal loss, and power supply noise.
History
Signal integrity primarily involves the electrical performance of the wires and other packaging structures used to move signals about within an electronic product. Such performance is a matter of basic physics and as such has remained relatively unchanged since the inception of electronic signaling. The first transatlantic telegraph cable suffered from severe signal integrity problems, and analysis of the problems yielded many of the mathematical tools still used today to analyze signal integrity problems, such as the telegrapher's equations. Products as old as the Western Electric crossbar telephone
|
https://en.wikipedia.org/wiki/Transmission%20disequilibrium%20test
|
The transmission disequilibrium test (TDT) was proposed by Spielman, McGinnis and Ewens (1993) as a family-based association test for the presence of genetic linkage between a genetic marker and a trait. It is an application of McNemar's test.
A specificity of the TDT is that it will detect genetic linkage only in the presence of genetic association.
While genetic association can be caused by population structure, genetic linkage will not be affected, which makes the TDT robust to the presence of population structure.
The case of trios: one affected child per family
Description of the test
We first describe the TDT in the case where families consist of trios (two parents and one affected child). Our description follows the notations used in Spielman, McGinnis & Ewens (1993).
The TDT measures the over-transmission of an allele from heterozygous parents to affected offspring.
The n affected offspring have 2n parents. These can be represented by the transmitted and the non-transmitted alleles M1 and M2 at some genetic locus. Summarizing the data in a 2 by 2 table of transmitted allele (rows M1, M2) against non-transmitted allele (columns M1, M2) gives the four cell counts a, b, c and d, where b counts heterozygous parents transmitting M1 (not M2) and c counts heterozygous parents transmitting M2 (not M1).
The derivation of the TDT shows that one should only use the heterozygous parents (total number b+c).
The TDT tests whether the proportions b/(b+c) and c/(b+c) are compatible with probabilities (0.5, 0.5).
This hypothesis can be tested using a binomial (asymptotically chi-square) test with one degree of freedom:
$$\chi^2 = \frac{(b-c)^2}{b+c}.$$
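A minimal sketch of the computation, assuming the b and c counts above have already been tallied from heterozygous parents; the one-degree-of-freedom chi-square p-value is obtained via the complementary error function so no external statistics library is needed:

```python
# Sketch of the TDT statistic chi^2 = (b - c)^2 / (b + c) for counts of
# transmissions from heterozygous parents; b and c below are made-up values.
import math

def tdt(b: int, c: int):
    chi2 = (b - c) ** 2 / (b + c)
    # p-value for a chi-square variable with 1 degree of freedom
    p = math.erfc(math.sqrt(chi2 / 2.0))
    return chi2, p

chi2, p = tdt(b=60, c=35)
print(f"chi2 = {chi2:.2f}, p = {p:.4f}")   # chi2 = 6.58, p ~ 0.0103
```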
Outline of the test derivation
A derivation of the test consists of using a population genetics model to obtain the expected proportions for the quantities b and c in the table above. In particular, one can show that under nearly all disease models the expected proportions of b and c are identical. This result motivates the use of a binomial (asymptotically $\chi^2$) test to test whether these proportions are equal.
On the other hand, one can also show that under such models the cell proportions are not equal to the products of the corresponding marginal probabilities. A rewording of this statement wou
|
https://en.wikipedia.org/wiki/Poly-Bernoulli%20number
|
In mathematics, poly-Bernoulli numbers, denoted as $B_n^{(k)}$, were defined by M. Kaneko as
$$\frac{\mathrm{Li}_k(1-e^{-x})}{1-e^{-x}} = \sum_{n=0}^{\infty} B_n^{(k)} \frac{x^n}{n!},$$
where Li is the polylogarithm. The $B_n^{(1)}$ are the usual Bernoulli numbers.
Moreover, a generalization of the poly-Bernoulli numbers with parameters a, b and c has been defined by means of a similar generating function, in which Li is again the polylogarithm.
Kaneko also gave two combinatorial formulas:
$$B_n^{(-k)} = \sum_{m=0}^{n} (-1)^{m+n}\, m!\, S(n,m)\, (m+1)^{k},$$
$$B_n^{(-k)} = \sum_{j=0}^{\min(n,k)} (j!)^2\, S(n+1,j+1)\, S(k+1,j+1),$$
where $S(n,k)$ is the number of ways to partition a set of size $n$ into $k$ non-empty subsets (the Stirling number of the second kind).
A combinatorial interpretation is that the poly-Bernoulli numbers of negative index enumerate the set of $n$ by $k$ (0,1)-matrices uniquely reconstructible from their row and column sums. Also it is the number of open tours by a biased rook on a board (see A329718 for definition).
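A small sketch that evaluates the negative-index values through the second Stirling-number formula above, which doubles as a check of the lonesum-matrix interpretation:

```python
# Sketch: compute poly-Bernoulli numbers of negative index via
# B_n^{(-k)} = sum_j (j!)^2 * S(n+1, j+1) * S(k+1, j+1),
# where S is the Stirling number of the second kind.
from functools import lru_cache
from math import factorial

@lru_cache(maxsize=None)
def stirling2(n: int, k: int) -> int:
    if n == k:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def poly_bernoulli_neg(n: int, k: int) -> int:
    return sum(factorial(j) ** 2 * stirling2(n + 1, j + 1) * stirling2(k + 1, j + 1)
               for j in range(min(n, k) + 1))

# B_n^{(-k)} counts n-by-k (0,1)-matrices reconstructible from row/column sums.
print([poly_bernoulli_neg(n, n) for n in range(5)])  # [1, 2, 14, 230, 6902]
```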
The Poly-Bernoulli number satisfies the following asymptotic:
For a positive integer n and a prime number p, the poly-Bernoulli numbers satisfy
$$B_n^{(-p)} \equiv 2^n \pmod{p},$$
which can be seen as an analog of Fermat's little theorem. Further, the equation
$$B_x^{(-n)} + B_y^{(-n)} = B_z^{(-n)}$$
has no solution for integers x, y, z, n > 2; an analog of Fermat's Last Theorem.
Moreover, there is an analogue of Poly-Bernoulli numbers (like Bernoulli numbers and Euler numbers) which is known as Poly-Euler numbers.
See also
Bernoulli numbers
Stirling numbers
Gregory coefficients
Bernoulli polynomials
Bernoulli polynomials of the second kind
Stirling polynomials
References
Integer sequences
Enumerative combinatorics
|
https://en.wikipedia.org/wiki/Fiber%20to%20the%20x
|
Fiber to the x (FTTX; also spelled "fibre") or fiber in the loop is a generic term for any broadband network architecture using optical fiber to provide all or part of the local loop used for last mile telecommunications. As fiber optic cables are able to carry much more data than copper cables, especially over long distances, copper telephone networks built in the 20th century are being replaced by fiber.
FTTX is a generalization for several configurations of fiber deployment, arranged into two groups: FTTP/FTTH/FTTB (Fiber laid all the way to the premises/home/building) and FTTC/N (fiber laid to the cabinet/node, with copper wires completing the connection).
Residential areas already served by balanced pair distribution plant call for a trade-off between cost and capacity. The closer the fiber head, the higher the cost of construction and the higher the channel capacity. In places not served by metallic facilities, little cost is saved by not running fiber to the home.
Fiber to the x is the key method used to drive next-generation access (NGA), which describes a significant upgrade to the broadband available by making a step change in speed and quality of the service. This is typically thought of as asymmetrical with a download speed of 24 Mbit/s plus and a fast upload speed.
Ofcom have defined super-fast broadband as "broadband products that provide a maximum download speed that is greater than 24 Mbit/s - this threshold is commonly considered to be the maximum speed that can be supported on current generation (copper-based) networks."
A similar network called a hybrid fiber-coaxial (HFC) network is used by cable television operators but is usually not synonymous with "fiber in the loop", although similar advanced services are provided by the HFC networks. Fixed wireless and mobile wireless technologies such as Wi-Fi, WiMAX and 3GPP Long Term Evolution (LTE) are an alternative for providing Internet access.
Definitions
The telecommunications industry differe
|
https://en.wikipedia.org/wiki/Unified%20Display%20Interface
|
Unified Display Interface (UDI) was a digital video interface specification released in 2006 which was based on Digital Visual Interface (DVI). It was intended to be a lower cost implementation while providing compatibility with existing High-Definition Multimedia Interface (HDMI) and DVI displays. Unlike HDMI, which is aimed at high-definition multimedia consumer electronics devices such as television monitors and DVD players, UDI was specifically targeted towards computer monitor and video card manufacturers and did not support the transfer of audio data. A contemporary rival standard, DisplayPort, gained significant industry support starting in 2007 and the UDI specification was abandoned shortly thereafter without having released any products.
Development
On December 20, 2005, the UDI Special Interest Group (UDI SIG) was announced, along with a tentative specification called version 0.8. The group, which worked on refining the specification and promoting the interface, was led by Intel and also included Apple Computer, LG, NVIDIA, Samsung, and Silicon Image Inc.
The announcement of UDI lagged the DisplayPort standard by a few months, which had been unveiled by the Video Electronics Standards Association (VESA) in May 2005. DisplayPort was being developed by a rival consortium including ATI Technologies, Samsung, NVIDIA, Dell, Hewlett-Packard, and Molex. Fundamentally, DisplayPort transmits video in packets of data, while the preceding DVI and HDMI standards transmit raw video as a digital signal; UDI took an approach closer to DVI/HDMI. The UDI specification 1.0 was finalized and released in July 2006. The differences between UDI and HDMI were kept to a minimum since both specifications were designed for long-term compatibility. Again, UDI lagged DisplayPort by a few months, which had released its finalized version 1.0 specification in May 2006.
The group changed its title in late 2006 from "special interest group" to "working group" and contemporary pres
|
https://en.wikipedia.org/wiki/Error%20vector%20magnitude
|
The error vector magnitude or EVM (sometimes also called relative constellation error or RCE) is a measure used to quantify the performance of a digital radio transmitter or receiver. A signal sent by an ideal transmitter or received by a receiver would have all constellation points precisely at the ideal locations; however, various imperfections in the implementation (such as carrier leakage, low image rejection ratio, phase noise, etc.) cause the actual constellation points to deviate from the ideal locations. Informally, EVM is a measure of how far the points are from the ideal locations.
Noise, distortion, spurious signals, and phase noise all degrade EVM, and therefore EVM provides a comprehensive measure of the quality of the radio receiver or transmitter for use in digital communications. Transmitter EVM can be measured by specialized equipment, which demodulates the received signal in a similar way to how a real radio demodulator does it. One of the stages in a typical phase-shift keying demodulation process produces a stream of I-Q points which can be used as a reasonably reliable estimate for the ideal transmitted signal in EVM calculation.
Definition
An error vector is a vector in the I-Q plane between the ideal constellation point and the point received by the receiver. In other words, it is the difference between actual received symbols and ideal symbols. The root mean square (RMS) average amplitude of the error vector, normalized to ideal signal amplitude reference, is the EVM. EVM is generally expressed in percent by multiplying the ratio by 100.
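A short sketch of the calculation with synthetic QPSK symbols and RMS-of-constellation normalization (the symbol count and noise level are made up):

```python
# Sketch of an EVM calculation: RMS error-vector magnitude of received
# symbols relative to their ideal constellation points, normalized to the
# RMS amplitude of the ideal constellation.  The symbols are synthetic.
import numpy as np

rng = np.random.default_rng(0)

ideal_qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
tx = rng.choice(ideal_qpsk, size=1000)             # transmitted symbols
rx = tx + 0.05 * (rng.standard_normal(1000) + 1j * rng.standard_normal(1000))

# Decide each received symbol to the nearest ideal constellation point.
decided = ideal_qpsk[np.argmin(np.abs(rx[:, None] - ideal_qpsk[None, :]), axis=1)]

error = rx - decided
evm_rms = np.sqrt(np.mean(np.abs(error) ** 2) / np.mean(np.abs(ideal_qpsk) ** 2))
print(f"EVM = {100 * evm_rms:.1f} %")               # roughly 7 % for this noise level
```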
The ideal signal amplitude reference can either be the maximum ideal signal amplitude of the constellation, or it can be the root mean square (RMS) average amplitude of all possible ideal signal amplitude values in the constellation. For many common constellations including BPSK, QPSK, and 8PSK, these two methods for finding the reference give the same result, but for higher-order QAM constellations incl
|
https://en.wikipedia.org/wiki/Eigenvector%20centrality
|
In graph theory, eigenvector centrality (also called eigencentrality or prestige score) is a measure of the influence of a node in a network. Relative scores are assigned to all nodes in the network based on the concept that connections to high-scoring nodes contribute more to the score of the node in question than equal connections to low-scoring nodes. A high eigenvector score means that a node is connected to many nodes who themselves have high scores.
Google's PageRank and the Katz centrality are variants of the eigenvector centrality.
Using the adjacency matrix to find eigenvector centrality
For a given graph $G := (V, E)$ with $|V|$ vertices let $A = (a_{v,t})$ be the adjacency matrix, i.e. $a_{v,t} = 1$ if vertex $v$ is linked to vertex $t$, and $a_{v,t} = 0$ otherwise. The relative centrality score, $x_v$, of vertex $v$ can be defined as:
$$x_v = \frac{1}{\lambda} \sum_{t \in M(v)} x_t = \frac{1}{\lambda} \sum_{t \in G} a_{v,t}\, x_t$$
where $M(v)$ is the set of neighbors of $v$ and $\lambda$ is a constant. With a small rearrangement this can be rewritten in vector notation as the eigenvector equation
$$A\mathbf{x} = \lambda \mathbf{x}.$$
In general, there will be many different eigenvalues $\lambda$ for which a non-zero eigenvector solution exists. However, the additional requirement that all the entries in the eigenvector be non-negative implies (by the Perron–Frobenius theorem) that only the greatest eigenvalue results in the desired centrality measure. The $v$-th component of the related eigenvector then gives the relative centrality score of the vertex $v$ in the network. The eigenvector is only defined up to a common factor, so only the ratios of the centralities of the vertices are well defined. To define an absolute score, one must normalise the eigenvector, e.g. such that the sum over all vertices is 1 or the total number of vertices n. Power iteration is one of many eigenvalue algorithms that may be used to find this dominant eigenvector. Furthermore, this can be generalized so that the entries in A can be real numbers representing connection strengths, as in a stochastic matrix.
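A brief sketch of the power-iteration approach on a small, made-up undirected graph:

```python
# Sketch of eigenvector centrality via power iteration: repeatedly apply the
# adjacency matrix to a positive start vector and renormalize until it settles
# on the dominant eigenvector.  The example graph is illustrative.
import numpy as np

A = np.array([[0, 1, 1, 0],
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)

def eigenvector_centrality(adj, iterations=200, tol=1e-10):
    x = np.ones(adj.shape[0])
    for _ in range(iterations):
        x_new = adj @ x
        x_new /= np.linalg.norm(x_new)
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x / x.sum()        # normalize so the scores sum to 1

print(eigenvector_centrality(A))
# Vertex 2, connected to every other vertex, receives the highest score.
```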
Normalized eigenvector centrality scoring
Google's PageRank is based on the normalized eigenvector
|
https://en.wikipedia.org/wiki/Design%20for%20manufacturability
|
Design for manufacturability (also sometimes known as design for manufacturing or DFM) is the general engineering practice of designing products in such a way that they are easy to manufacture. The concept exists in almost all engineering disciplines, but the implementation differs widely depending on the manufacturing technology. DFM describes the process of designing or engineering a product so as to facilitate the manufacturing process and reduce manufacturing costs. DFM allows potential problems to be fixed in the design phase, which is the least expensive place to address them. Other factors may affect manufacturability, such as the type of raw material, the form of the raw material, dimensional tolerances, and secondary processing such as finishing.
Depending on various types of manufacturing processes there are set guidelines for DFM practices. These DFM guidelines help to precisely define various tolerances, rules and common manufacturing checks related to DFM.
While DFM is applicable to the design process, a similar concept called DFSS (design for Six Sigma) is also practiced in many organizations.
For printed circuit boards (PCB)
In the PCB design process, DFM leads to a set of design guidelines that attempt to ensure manufacturability. By doing so, probable production problems may be addressed during the design stage.
Ideally, DFM guidelines take into account the processes and capabilities of the manufacturing industry. Therefore, DFM is constantly evolving.
As manufacturing companies evolve and automate more and more stages of their processes, these processes tend to become cheaper. DFM is usually used to reduce these costs. For example, if a process can be done automatically by machines (e.g. SMT component placement and soldering), it is likely to be cheaper than doing the same work by hand.
For integrated circuits (IC)
Achieving high-yielding designs, in the state of the art VLSI technology has become an extremely challenging t
|
https://en.wikipedia.org/wiki/Rubber%20diode
|
In electronics, a rubber diode or VBE multiplier is a bipolar junction transistor circuit that serves as a voltage reference. It consists of one transistor and two resistors, and the reference voltage across the circuit is determined by the selected resistor values and the base-to-emitter voltage (VBE) of the transistor. The circuit behaves as a voltage divider, but with the voltage across the base-emitter resistor determined by the forward base-emitter junction voltage.
It is commonly used in the biasing of push-pull output stages of amplifiers, where one benefit is thermal compensation: the temperature-dependent variation in the multiplier's VBE, approximately -2.2 mV/°C, can be made to match variations occurring in the VBE of the power transistors by mounting the circuit on the same heat sink. In this context, it is sometimes called a bias servo.
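Under the usual idealization (base current neglected), the voltage across the multiplier is VBE·(1 + Rcb/Rbe), where Rbe is the base-emitter resistor; a tiny sketch with assumed component values:

```python
# Sketch of the idealized VBE-multiplier relation (base current neglected):
# the base-emitter resistor sees VBE, so the whole circuit sees
# V_out ~ VBE * (1 + R_cb / R_be).  Resistor and VBE values are assumptions.

def vbe_multiplier_voltage(vbe: float, r_cb: float, r_be: float) -> float:
    """Voltage across a VBE multiplier: r_cb is collector-to-base, r_be is base-to-emitter."""
    return vbe * (1.0 + r_cb / r_be)

v_25c = vbe_multiplier_voltage(vbe=0.65, r_cb=4700.0, r_be=2200.0)
v_75c = vbe_multiplier_voltage(vbe=0.65 - 0.0022 * 50, r_cb=4700.0, r_be=2200.0)
print(f"{v_25c:.2f} V at 25 degC, {v_75c:.2f} V at 75 degC")
# ~2.04 V dropping to ~1.69 V as the junction warms (the -2.2 mV/degC slope scaled up).
```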
References
Semiconductors
|
https://en.wikipedia.org/wiki/The%20Fourth%20Dimension%20%28book%29
|
The Fourth Dimension: Toward a Geometry of Higher Reality (1984) is a popular mathematics book by Rudy Rucker, a Silicon Valley professor of mathematics and computer science. It provides a popular presentation of set theory and four dimensional geometry as well as some mystical implications. A foreword is provided by Martin Gardner and the 200+ illustrations are by David Povilaitis.
The Fourth Dimension: Toward a Geometry of Higher Reality was reprinted in 1985 as the paperback The Fourth Dimension: A Guided Tour of the Higher Universes. It was again reprinted in paperback in 2014 by Dover Publications with its original subtitle.
Like other Rucker books, The Fourth Dimension is dedicated to Edwin Abbott Abbott, author of the novella Flatland.
Synopsis
The Fourth Dimension teaches readers about the concept of a fourth spatial dimension. Several analogies are made to Flatland; in particular, Rucker compares how a square in Flatland would react to a cube in Spaceland to how a cube in Spaceland would react to a hypercube from the fourth dimension.
The book also includes multiple puzzles.
Reception
Kirkus Reviews called it "animated, often amusing", and a "rare treat", but noted that the book eventually leaves mathematical topics behind to focus instead on "mysticism of the all-is-one-one-is-all thinking of an Ouspensky." The Quarterly Review of Biology declared it to be "nice", and "at times (...) enchanting", comparing it to The Tao of Physics.
See also
Hiding in the Mirror, a similar book by Lawrence M. Krauss.
References
1984 non-fiction books
Books by Rudy Rucker
Mathematics books
|
https://en.wikipedia.org/wiki/Building%20envelope
|
A building envelope or building enclosure is the physical separator between the conditioned and unconditioned environment of a building, including the resistance to air, water, heat, light, and noise transfer.
Discussion
The building envelope or enclosure is all of the elements of the outer shell that maintain a dry, heated, or cooled indoor environment and facilitate its climate control. Building envelope design is a specialized area of architectural and engineering practice that draws from all areas of building science and indoor climate control.
The many functions of the building envelope can be separated into three categories:
Support (to resist and transfer structural and dynamic loads)
Control (the flow of matter and energy of all types)
Finish (to meet desired aesthetics on the inside and outside)
The control function is at the core of good performance, and in practice focuses, in order of importance, on rain control, air control, heat control, and vapor control.
Water and water vapor control
Control of rain is most fundamental, and there are numerous strategies to this end, namely, perfect barriers, drained screens, and mass / storage systems.
One of the main purposes of a roof is to resist water. Two broad categories of roofs are flat and pitched. Flat roofs actually slope up to 10° or 15° but are built to resist intrusion from standing water. Pitched roofs are designed to shed water but not to resist intrusion by standing water, which can occur during wind-driven rain or ice damming. Pitched roofs, typical of residential construction, are covered with an underlayment material beneath the roof covering material as a second line of defense. Domestic roof construction may also be ventilated to help remove moisture from leakage and condensation.
Walls do not get as severe water exposure as roofs but still leak water. Types of wall systems with regard to water penetration are barrier, drainage and surface-sealed walls. Barrier walls are designed to allow water to be absorbe
|
https://en.wikipedia.org/wiki/Vx32
|
The Vx32 virtual extension environment is an application-level virtual machine implemented as an ordinary user-mode library and designed to run native x86 code. Applications can link with and use Vx32 in order to create safe, OS-independent execution environments, in which to run untrusted plug-ins or other extensions written in any language that compiles to x86 code.
From the host processor's viewpoint, plug-ins running under the Vx32 virtual machine monitor run in the context of the application process itself, but the Vx32 library uses dynamic recompilation to prevent the "guest" plug-in code from accessing memory or jumping to instructions outside its designated sandbox. The Vx32 library redirects any system calls the plug-in makes to the application itself rather than to the host operating system, thereby giving the application exclusive control over the API and security environment in which the plug-in code executes.
Vx32 thus provides an application extension facility comparable in function to the Java virtual machine (JVM) or the Common Language Runtime (CLR), but with less overhead and with the ability to run code written in any language, safe or unsafe. Vx32's primary disadvantage is that it is more difficult to make it run on non-x86 host processors.
Criticism
There are some disadvantages that have been proposed by critics of Vx32:
Vx32 is closely tied to the IA-32 instruction set, which makes it difficult to use on non-x86 architectures
The IA-32e (AMD64) mode cannot be used by guests (the host can still run in 64-bit mode), because of the use of segmentation which is inherent to Vx32's design
External links
The Vx32 Virtual Extension Environment
Vx32: Lightweight User-level Sandboxing on the x86 - Paper presented at USENIX 2008
9vx - A port of Plan 9 from Bell Labs to vx32.
vx32 for Win32
Virtualization software
Virtualization software for Linux
X86 emulators
|
https://en.wikipedia.org/wiki/Radeon%20R300%20series
|
The R300 GPU, introduced in August 2002 and developed by ATI Technologies, is its third generation of GPU used in Radeon graphics cards. This GPU features 3D acceleration based upon Direct3D 9.0 and OpenGL 2.0, a major improvement in features and performance compared to the preceding R200 design. R300 was the first fully Direct3D 9-capable consumer graphics chip. The processors also include 2D GUI acceleration, video acceleration, and multiple display outputs.
The first graphics card using the R300 to be released was the Radeon 9700. It was the first time that ATI marketed its GPU as a Visual Processing Unit (VPU). R300 and its derivatives would form the basis for ATI's consumer and professional product lines for over 3 years.
The integrated graphics processor based upon R300 is the Xpress 200.
Development
ATI had held the lead for a while with the Radeon 8500 but Nvidia retook the performance crown with the launch of the GeForce 4 Ti line. A new high-end refresh part, the 8500XT (R250) was supposedly in the works, ready to compete against NVIDIA's high-end offerings, particularly the top line Ti 4600. Pre-release information listed a 300 MHz core and RAM clock speed for the R250 chip. ATI, perhaps mindful of what had happened to 3dfx when they took focus off their Rampage processor, abandoned it in favor of finishing off their next-generation R300 card. This proved to be a wise move, as it enabled ATI to take the lead in development for the first time instead of trailing NVIDIA. The R300, with its next-generation architecture giving it unprecedented features and performance, would have been superior to any R250 refresh.
The R3xx chip was designed by ATI's West Coast team (formerly ArtX Inc.), and the first product to use it was the Radeon 9700 PRO (internal ATI code name: R300; internal ArtX codename: Khan), launched in August 2002. The architecture of R300 was quite different from its predecessor, Radeon 8500 (R200), in nearly every way. The core of 9700 P
|
https://en.wikipedia.org/wiki/Shift%20rule
|
The shift rule is a mathematical rule for sequences and series.
Here $n$ and $k$ are natural numbers.
For sequences, the rule states that if $(a_n)$ is a sequence, then it converges if and only if $(a_{n+k})$ also converges, and in this case both sequences always converge to the same number.
For series, the rule states that the series $\sum_{n=1}^{\infty} a_n$ converges to a number if and only if $\sum_{n=1}^{\infty} a_{n+k}$ converges.
References
Sequences and series
|
https://en.wikipedia.org/wiki/Conidiation
|
Conidiation is a biological process in which filamentous fungi reproduce asexually from spores. Rhythmic conidiation is the most obvious output of fungal circadian rhythms. Neurospora species are most often used to study this rhythmic conidiation. Physical stimuli, such as light exposure and mechanical injury to the mycelium trigger conidiation; however, conidiogenesis itself is a holistic response determined by the cell's metabolic state, as influenced by the environment and endogenous biological rhythms.
See also
Conidium
References
Further reading
Mycology
|
https://en.wikipedia.org/wiki/Webisode
|
A webisode (a portmanteau of "web" and "episode") is an episode of a series that is distributed as part of a web series or on streaming television. It is available either for download or in streaming, as opposed to first airing on broadcast or cable television. The format can be used as a preview, as a promotion, as part of a collection of shorts, or as a commercial. A webisode may or may not have been broadcast on TV. What defines it is its online distribution on the web, or through video-sharing web sites such as Vimeo or YouTube. While there is no set standard for length, most webisodes are relatively short, ranging from 3–15 minutes in length. It is a single web episode, but collectively webisodes form a web series. The term webisode was first introduced in Merriam-Webster's Collegiate Dictionary in 2009.
History
Webisodes have become increasingly common in the post-broadcast era, in which audiences are drifting away from traditional free-to-use television. The post-broadcast era has been shaped by new media formats such as the Internet. Contemporary trends indicate that the Internet has become the dominant mechanism for accessing media content. In 2012, the Nielsen Company reported that the number of American households with television access had diminished for the second straight year, showing that viewers are transitioning away from broadcast television. The post-broadcast era is best characterized by a complex mediascape that cannot be served by broadcast television alone; in its wake, the popularity of webisodes has expanded because the Internet has become a potential solution to television's ailments by combining interpersonal communication and multimedia elements alongside entertainment programming.
These original web series are a means to monetize this transitional audience and produce new celebrities, both independently on the web and working in accordance to the previous m
|
https://en.wikipedia.org/wiki/Open%20Transport%20Network
|
Open Transport Network (OTN) is a flexible private communication network based on fiber optic technology, manufactured by OTN Systems.
It is a networking technology used in vast, private networks with a great diversity of communication requirements, such as subway systems, pipelines, the mining industry, tunnels and the like. It permits all kinds of applications, such as video images, different forms of speech and data traffic, and information for process management, to be sent flawlessly and transparently over a practically unlimited distance. The system is a mix of transmission and access network elements (NE), communicating over an optical fiber. The communication protocols include serial protocols (e.g. RS232) as well as telephony (POTS/ISDN), audio, Ethernet, video and video-over-IP (via M-JPEG, MPEG2/4, H.264 or DVB).
Open Transport Network is a brand name and not to be mistaken with Optical Transport Network.
Concept
The basic building block of OTN is called a node. It is a 19" frame that houses and interconnects the building blocks that produce the OTN functionality. Core building blocks are the power supply and the optical ring adapter (called BORA: Broadband Optical Ring Adapter). The remaining node space can be configured with up to 8 (different) layer 1 interfaces as required.
OTN nodes are interconnected using pluggable optical fibers in a dual counterrotating ring topology. The primary ring consists of fibers carrying data from node to node in one direction, the secondary ring runs parallel with the primary ring but carries data in the opposite direction. Under normal circumstances, only one ring carries active data. If a failure is detected in this data path, the secondary ring is activated. This hot standby topology results in a 1 + 1 path redundancy. The switchover mechanism is hardware based and results in ultrafast (50ms) switchover without service loss.
Virtual bidirectional point-to-point or point-to-multipoint connections (se
|
https://en.wikipedia.org/wiki/EDA%20database
|
An EDA database is a database specialized for the purpose of electronic design automation. These application specific databases are required because general purpose databases have historically not provided enough performance for EDA applications.
In examining EDA design databases, it is useful to look at EDA tool architecture, to determine which parts are to be considered part of the design database, and which parts are the application levels. In addition to the database itself, many other components are needed for a useful EDA application. Associated with a database are one or more language systems (which, although not directly part of the database, are used by EDA applications such as parameterized cells and user scripts). On top of the database are built the algorithmic engines within the tool (such as timing, placement, routing, or simulation engines ), and the highest level represents the applications built from these component blocks, such as floorplanning. The scope of the design database includes the actual design, library information, technology information, and the set of translators to and from external formats such as Verilog and GDSII.
Mature design databases
Many instances of mature design databases exist in the EDA industry, both as a basis for commercial EDA tools as well as proprietary EDA tools developed by the CAD groups of major electronics companies.
IBM, Hewlett-Packard, SDA Systems and ECAD (now Cadence Design Systems), High Level Design Systems, and many other companies developed EDA specific databases over the last 20 years, and these continue to be the basis of IC-design systems today. Many of these systems took ideas from university research and successfully productized them. Most of the mature design databases have evolved to the point where they can represent netlist data, layout data, and the ties between the two. They are hierarchical to allow for reuse and smaller designs. They can support styles of layout from digital through pur
|
https://en.wikipedia.org/wiki/Nord-100
|
The Nord-100 was a 16-bit minicomputer series made by Norsk Data, introduced in 1979. It shipped with the Sintran III operating system, and the architecture was based on, and backward compatible with, the Nord-10 line.
The Nord-100 was originally named the Nord-10/M (M for Micro) as a bit sliced OEM processor. The board was laid out, finished, and tested when they realized that the central processing unit (CPU) was far faster than the Nord-10/S. The result was that all the marketing material for the new NORD-10/M was discarded, the board was rechristened the Nord-100, and extensively advertised as the successor of the Nord-10 line. Later, in an effort to internationalize their line, the machine was renamed ND-100.
Performance
CPU
The ND-100 line used a custom processor, and like the PDP-11 line, the CPU decided the name of the computer.
Nord-100/CE, Commercial Extended, with decimal arithmetic instructions (The decimal instruction set was later renamed CX)
ND-110, incrementally improved ND-100
ND-110/CX, an ND-110 with decimal instructions
ND-120/CX, full redesign
The ND-100 line was machine-instruction compatible with the Nord-10 line, except for some extended instructions, all in supervisor mode, mostly used by the operating system. Like most processors of its time, the native bit grouping was octal, despite the 16-bit word length.
The ND-100 series had a microcoded CPU, with downloadable microcode, and was considered a complex instruction set computer (CISC) processor.
ND-100
The ND-100 was implemented using medium-scale integration (MSI) logic and bit-slice processors.
The ND-100 was frequently sold together with a memory management unit card, the MMS. The combined power use of these boards was 90 watts. The boards would usually occupy slots 2 and 3, for the CPU and MMS, respectively. Slot 1 was reserved for the Tracer, a hardware debugger system.
ND-100/CE
The CE stood for Commercial Extended. The processor was upgraded by replacing the microcod
|
https://en.wikipedia.org/wiki/ND-500
|
The ND-500 was a 32-bit superminicomputer delivered in 1981 by Norsk Data. It relied on a ND-100 to do housekeeping tasks and run the OS, SINTRAN III. A configuration could feature up to four ND-500 CPUs in a shared-memory configuration.
Hardware implementations
The ND-500 architecture lived through four distinct implementations. Each implementation was sold under a variety of different model numbers.
ND also sold multiprocessor configurations, naming them ND-580/n and an ND-590n, where n represented the number of CPUs in a given configuration, 2, 3, or 4.
ND-500/1
Sold as the ND-500, ND-520, ND-540, and ND-560.
ND-500/2
Sold as the ND-570, ND-570/CX, and ND-570/ACX.
ND-505
A 28-bit version of the ND-500 machine. Pins were snipped on the backplane, removing its status as a superminicomputer, allowing it to legally pass through the CoCom embargo.
Samson
Sold as the ND-5200, ND-5400, ND-5500, ND-5700, and ND-5800. The ND-120 CPU line, which constituted the ND-100 side of most ND-5000 computers, was named Delilah. As the 5000 line progressed in speed, the dual-arch ND-100/500 configuration increasingly became bottlenecked by all input/output (I/O) having to go through the ND-100.
Rallar
Sold as the ND-5830 and ND-5850. The Rallar processor consisted of two main VLSI gate arrays, KUSK (En: Jockey) and GAMP (En: Horse).
Software
LED was a programmer's source-code editor by Norsk Data running on the ND-500 computers running Sintran III. It featured automatic indenting, pretty-printing of source code, and integration with the compiler environment. It was sold as an advanced alternative to PED. Several copies exist, and it is installed on the NODAF public access ND-5700.
In 1982–83, Logica PLC in London undertook a project, on behalf of ND, to port Unix Berkeley Software Distribution (BSD) 4.2 to the ND-500. A C compiler from Luleå University College in Northern Sweden was used. The goal was to port Unix BSD to the ND-500 and use the ND-100 running Sintran-III as t
|
https://en.wikipedia.org/wiki/Anonymous%20pipe
|
In computer science, an anonymous pipe is a simplex FIFO communication channel that may be used for one-way interprocess communication (IPC). An implementation is often integrated into the operating system's file IO subsystem. Typically a parent program opens anonymous pipes, and creates a new process that inherits the other ends of the pipes, or creates several new processes and arranges them in a pipeline.
Full-duplex (two-way) communication normally requires two anonymous pipes.
Pipelines are supported in most popular operating systems, from Unix and DOS onwards, and are created using the "|" character in many shells.
Unix
Pipelines are an important part of many traditional Unix applications and support for them is well integrated into most Unix-like operating systems. Pipes are created using the pipe system call, which creates a new pipe and returns a pair of file descriptors referring to the read and write ends of the pipe. Many traditional Unix programs are designed as filters to work with pipes.
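The following minimal sketch, using Python's os module (which wraps the underlying POSIX calls) on a Unix-like system, shows a parent process writing to a child over an anonymous pipe created with the pipe system call; the message text is arbitrary.

import os

r, w = os.pipe()                          # create the pipe; r and w are file descriptors
pid = os.fork()                           # the child inherits both descriptors
if pid == 0:                              # child: keep the read end
    os.close(w)
    data = os.read(r, 1024)
    print("child received:", data.decode())
    os._exit(0)
else:                                     # parent: keep the write end
    os.close(r)
    os.write(w, b"hello through the pipe")
    os.close(w)                           # closing signals end-of-file to the reader
    os.waitpid(pid, 0)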
Microsoft Windows
Like many other device IO and IPC facilities in the Windows API, anonymous pipes are created and configured with API functions that are specific to the IO facility. In this case CreatePipe is used to create an anonymous pipe with separate handles for the read and write ends of the pipe. Read and write IO operations on the pipe are performed with the standard IO facility API functions ReadFile and WriteFile.
On Microsoft Windows, reads and writes to anonymous pipes are always blocking. In other words, a read from an empty pipe will cause the calling thread to wait until at least one byte becomes available or an end-of-file is received as a result of the write handle of the pipe being closed. Likewise, a write to a full pipe will cause the calling thread to wait until space becomes available to store the data being written. Reads may return with fewer than the number of bytes requested (also called a short read).
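As a hedged illustration, Python's subprocess module creates anonymous pipes for a child's standard streams (using CreatePipe when run on Windows); the child command below is an arbitrary example.

import subprocess

# The parent reads the child's standard output through an anonymous pipe.
child = subprocess.Popen(
    ["cmd", "/c", "echo hello from the child"],   # assumed Windows shell command
    stdout=subprocess.PIPE,
)
output, _ = child.communicate()   # blocks until the write end of the pipe is closed
print(output.decode().strip())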
New processes can inherit hand
|
https://en.wikipedia.org/wiki/Multi-core%20processor
|
A multi-core processor is a microprocessor on a single integrated circuit with two or more separate processing units, called cores (for example, dual-core or quad-core), each of which reads and executes program instructions. The instructions are ordinary CPU instructions (such as add, move data, and branch) but the single processor can run instructions on separate cores at the same time, increasing overall speed for programs that support multithreading or other parallel computing techniques. Manufacturers typically integrate the cores onto a single integrated circuit die (known as a chip multiprocessor or CMP) or onto multiple dies in a single chip package. The microprocessors currently used in almost all personal computers are multi-core.
A multi-core processor implements multiprocessing in a single physical package. Designers may couple cores in a multi-core device tightly or loosely. For example, cores may or may not share caches, and they may implement message passing or shared-memory inter-core communication methods. Common network topologies used to interconnect cores include bus, ring, two-dimensional mesh, and crossbar. Homogeneous multi-core systems include only identical cores; heterogeneous multi-core systems have cores that are not identical (e.g. big.LITTLE have heterogeneous cores that share the same instruction set, while AMD Accelerated Processing Units have cores that do not share the same instruction set). Just as with single-processor systems, cores in multi-core systems may implement architectures such as VLIW, superscalar, vector, or multithreading.
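A program only benefits from multiple cores when its work is expressed in parallel. The sketch below uses Python's standard multiprocessing module to spread a CPU-bound function across worker processes, one per core by default; the function and inputs are illustrative, not taken from the article.

from multiprocessing import Pool

def count_primes(limit):
    # Count primes below limit by trial division (deliberately CPU-bound).
    count = 0
    for n in range(2, limit):
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            count += 1
    return count

if __name__ == "__main__":
    limits = [50_000, 60_000, 70_000, 80_000]
    with Pool() as pool:                  # one worker process per core by default
        results = pool.map(count_primes, limits)
    print(dict(zip(limits, results)))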
Multi-core processors are widely used across many application domains, including general-purpose, embedded, network, digital signal processing (DSP), and graphics (GPU). Core counts reach into the dozens for general-purpose processors, exceed 10,000 in some specialized chips, and in supercomputers (i.e. clusters of chips) the total can exceed 10 million (and in one case up to 20 million processing elements total in addition to h
|
https://en.wikipedia.org/wiki/Alpha%20Microsystems
|
Alpha Microsystems, Inc., often shortened to Alpha Micro, was an American computer company founded in 1977 in Costa Mesa, California, by John French, Dick Wilcox and Bob Hitchcock. During the dot-com boom, the company changed its name to AlphaServ, then NQL Inc., reflecting its pivot toward being a provider of Internet software. However, the company reverted to its original Alpha Microsystems name soon after the dot-com bubble burst.
Products
The first Alpha Micro computer was the S-100 AM-100, based upon the WD16 microprocessor chipset from Western Digital. As of 1982, AM-100/L and the AM-1000 were based on the Motorola 68000 and succeeding processors, though Alpha Micro swapped several addressing lines to create byte-ordering compatibility with their earlier processor.
Early peripherals included standard computer terminals (such models as Soroc, Hazeltine 1500, and Wyse WY50), Fortran punch card readers, 100 baud rate acoustic coupler modems (later upgraded to 300 baud modems), and 10 MB CDC Hawk hard drives with removable disk packs.
The company's primary claim to fame was selling inexpensive minicomputers that provided multi-user power using a proprietary operating system called AMOS (Alpha Micro Operating System). The operating system on the 68000 machines was called AMOS/L. The operating system had major similarities to the operating system of the DEC DECsystem-10. This may not be coincidental; legend has it that the founders based their operating system on "borrowed" source code from DEC, and DEC, perceiving the same, unsuccessfully tried to sue Alpha Micro over the similarities in 1984.
As Motorola stopped developing their 68000 product, Alpha Micro started to move to the x86 CPU family, used in common PCs. This was initially done with the Falcon cards, allowing standard DOS and later Windows-based PCs to run AMOS applications on the 68000-series CPU on the Falcon card. The work done on AMPC became the fo
|
https://en.wikipedia.org/wiki/%E2%86%92
|
→ or -> may refer to:
one of the arrow symbols, characters of Unicode
one of the arrow keys, on a keyboard
→ or ->, representing the assignment operator in various programming languages
->, a pointer operator in C and C++ where a->b is synonymous with (*a).b (except when either -> or * has been overloaded in C++).
→, goto in the APL programming language
→, representing the direction of a chemical reaction in a chemical equation
→, representing the set of all mathematical functions that map from one set to another in set theory
→, representing a material implication in logic
→, representing morphism in category theory
→, representing a vector in physics and mathematics
the relative direction of right or forward
→, a notation of Conway chained arrow notation for very large integers
"Due to" (and other meanings), in medical notation
the button that starts playback of a recording on a media player
See also
Arrow (disambiguation)
↑ (disambiguation)
↓ (disambiguation)
← (disambiguation)
"Harpoons":
↼
↽
↾
↿
⇀
⇁
⇂
⇃
⇋
⇌
Logic symbols
|
https://en.wikipedia.org/wiki/Congruum
|
In number theory, a congruum (plural congrua) is the difference between successive square numbers in an arithmetic progression of three squares.
That is, if x², y², and z² (for integers x, y, and z) are three square numbers that are equally spaced apart from each other, then the spacing between them, y² − x² = z² − y², is called a congruum.
The congruum problem is the problem of finding squares in arithmetic progression and their associated congrua. It can be formalized as a Diophantine equation: find integers x, y, and z such that
y² − x² = z² − y².
When this equation is satisfied, both sides of the equation equal the congruum.
Fibonacci solved the congruum problem by finding a parameterized formula for generating all congrua, together with their associated arithmetic progressions. According to this formula, each congruum is four times the area of a Pythagorean triangle. Congrua are also closely connected with congruent numbers: every congruum is a congruent number, and every congruent number is a congruum multiplied by the square of a rational number.
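A minimal sketch in Python of this parameterized construction: for integers m > n > 0 (illustrative variable names, not from the article), the value 4mn(m² − n²) is a congruum (four times the area of the Pythagorean triangle with legs 2mn and m² − n²), and the associated progression of squares is centred on (m² + n²)².

import math

def congruum(m, n):
    # Fibonacci-style parameterization: 4*m*n*(m**2 - n**2) is a congruum, and the
    # three squares in arithmetic progression are centred on (m**2 + n**2)**2.
    assert m > n > 0
    h = 4 * m * n * (m * m - n * n)      # the congruum
    mid = (m * m + n * n) ** 2           # middle square of the progression
    return mid - h, mid, mid + h, h

# m = 3, n = 1 reproduces the progression 4, 100, 196 with congruum 96.
lo, mid, hi, h = congruum(3, 1)
assert all(math.isqrt(v) ** 2 == v for v in (lo, mid, hi))
print(lo, mid, hi, "congruum:", h)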
Examples
As an example, the number 96 is a congruum because it is the difference between adjacent squares in the sequence 4, 100, and 196 (the squares of 2, 10, and 14 respectively).
The first few congrua are:
History
The congruum problem was originally posed in 1225, as part of a mathematical tournament held by Frederick II, Holy Roman Emperor, and answered correctly at that time by Fibonacci, who recorded his work on this problem in his Book of Squares.
Fibonacci was already aware that it is impossible for a congruum to itself be a square, but did not give a satisfactory proof of this fact. Geometrically, this means that it is not possible for the pair of legs of a Pythagorean triangle to be the leg and hypotenuse of another Pythagorean triangle. A proof was eventually given by Pierre de Fermat, and the result is now known as Fermat's right triangle theorem. Fermat also conjectured, and Leonhard Euler proved, that there is no sequence of four squares in
|
https://en.wikipedia.org/wiki/Differential%20entropy
|
Differential entropy (also referred to as continuous entropy) is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy, a measure of the average surprisal of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy (described here) is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.
In terms of measure theory, the differential entropy of a probability measure is the negative relative entropy from that measure to the Lebesgue measure, where the latter is treated as if it were a probability measure, despite being unnormalized.
Definition
Let X be a random variable with a probability density function f whose support is a set S. The differential entropy h(X) or h(f) is defined as
h(X) = −∫ f(x) log f(x) dx,
with the integral taken over the support S.
For probability distributions which do not have an explicit density function expression, but do have an explicit quantile function expression Q(p), h(Q) can be defined in terms of the derivative of Q(p), i.e. the quantile density function q(p) = dQ(p)/dp, as
h(Q) = ∫ log q(p) dp, with the integral taken over 0 ≤ p ≤ 1.
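As a quick numerical check of this definition (a sketch, not from the article): for a normal distribution with standard deviation σ, the differential entropy has the closed form ½ ln(2πeσ²) nats, which the snippet below compares against a direct numerical evaluation of −∫ f ln f.

import numpy as np

sigma = 2.0
x = np.linspace(-12 * sigma, 12 * sigma, 200_001)
f = np.exp(-x ** 2 / (2 * sigma ** 2)) / (sigma * np.sqrt(2 * np.pi))

numeric = -np.sum(f * np.log(f)) * (x[1] - x[0])        # Riemann-sum approximation
closed_form = 0.5 * np.log(2 * np.pi * np.e * sigma ** 2)
print(numeric, closed_form)                             # both ≈ 2.112 nats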
As with its discrete analog, the units of differential entropy depend on the base of the logarithm, which is usually 2 (i.e., the units are bits). See logarithmic units for logarithms taken in different bases. Related concepts such as joint, conditional differential entropy, and relative entropy are defined in a similar fashion. Unlike the discrete analog, the differential entropy has an offset that depends on the units used to measure X. For example, the differential entropy of a quantity measured in millimeters will be log(1000) more than the same quantity measured in meters; a dimensionless quantity will have differential entropy log(1000) more than the s
|
https://en.wikipedia.org/wiki/Transylvania%20lottery
|
In mathematical combinatorics, the Transylvania lottery is a lottery in which players select three numbers from 1 to 14 for each ticket, and three numbers are then drawn at random. A ticket wins if two of its numbers match the drawn numbers. The problem asks how many tickets the player must buy in order to be certain of winning.
An upper bound can be given using the Fano plane with a collection of 14 tickets in two sets of seven. Each set of seven uses every line of a Fano plane, one labelled with the numbers 1 to 7 and the other with 8 to 14.
At least two of the three randomly chosen numbers must lie in one Fano plane set, and any two points of a Fano plane lie on a line, so there will be a ticket in the collection containing those two numbers. There is a (6/13)*(5/12) = 5/26 chance that all three randomly chosen numbers are in the same Fano plane set. In that case, there is a 1/5 chance that they are on a line, and hence all three numbers are on one ticket; otherwise the three pairs lie on three different tickets.
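This construction can be checked by brute force, as in the sketch below; the particular labelling of the seven Fano plane lines is one valid choice, not taken from the article.

from itertools import combinations

# One labelling of the seven lines of the Fano plane on the points 1..7.
fano = [{1, 2, 3}, {1, 4, 5}, {1, 6, 7}, {2, 4, 6}, {2, 5, 7}, {3, 4, 7}, {3, 5, 6}]
tickets = fano + [{p + 7 for p in line} for line in fano]   # second copy on 8..14

# Every possible draw of three numbers from 1..14 shares at least two numbers
# with some ticket, so the 14 tickets guarantee a win.
assert all(
    any(len(set(draw) & ticket) >= 2 for ticket in tickets)
    for draw in combinations(range(1, 15), 3)
)
print("14 tickets guarantee a win")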
See also
Combinatorial design
Lottery Wheeling
References
Combinatorics
|
https://en.wikipedia.org/wiki/IStock
|
iStock is an online royalty free, international micro stock photography provider based in Calgary, Alberta, Canada. The firm offers millions of photos, illustrations, clip art, videos and audio tracks. Artists, designers and photographers worldwide contribute their work to iStock collections in return for royalties. Nearly half a million new photos, illustrations, videos and audio files are added each month.
History
The company was founded by Bruce Livingstone in May 2000, as iStockphoto, a free stock imagery website supported by Livingstone's web development firm, Evolvs Media. iStock pioneered the crowd-sourced stock industry and became the original source for user-generated stock photos, vectors and illustrations, and video clips. It began charging money in 2001 and quickly became profitable.
On February 9, 2006, the firm was acquired by Getty Images for $50 million USD. Livingstone promised that the site would continue "functioning independently with the benefits of Getty Images, yet, very importantly for them and us, autonomy."
On September 18, 2006, the site experienced the first benefits of the new ownership: a Controlled vocabulary keyword taxonomy borrowed from Getty Images.
iStockpro closed. iStockpro was a more expensive version of iStockphoto that was never as popular as iStockphoto, and became redundant after the acquisition by Getty Images.
On April 1, 2008, Getty Images disclosed, as part of its agreement to be sold to a private equity firm, that iStockphoto's revenue in 2007 was $71.9 million USD of which $20.9 million (29%) was paid to contributors.
Founder and CEO Livingstone left iStockphoto in 2009. He went on to co-found competitor Stocksy United in 2013.
In 2013, iStockphoto was rebranded as iStock by Getty Images, removing the word 'photo' to convey that the company offers stock media other than just photography, such as vector illustrations, audio, and video.
In 2020, iStock began offering weekly complimentary stock photos from it
|
https://en.wikipedia.org/wiki/Game%20server
|
A game server (also sometimes referred to as a host) is a server which is the authoritative source of events in a multiplayer video game. The server transmits enough data about its internal state to allow its connected clients to maintain their own accurate version of the game world for display to players. It also receives and processes each player's input.
Types
Dedicated server
Dedicated servers simulate game worlds without supporting direct input or output, except that required for their administration. Players must connect to the server with separate client programs in order to see and interact with the game.
The foremost advantage of dedicated servers is their suitability for hosting in professional data centers, with all of the reliability and performance benefits that entails. Remote hosting also eliminates the low-latency advantage that would otherwise be held by any player who hosts and connects to a server from the same machine or local network.
Dedicated servers cost money to run, however. Cost is sometimes met by a game's developers (particularly on consoles) and sometimes by clan groups, but in either case, the public is reliant on third parties providing servers to connect to. For this reason, most games which use dedicated servers also provide listen server support. Players of these games will oftentimes host servers for the public and their clans, either by hosting a server instance from their own hardware, or by renting from a game server hosting provider.
Listen server
Listen servers run in the same process as a game client. They otherwise function like dedicated servers, but typically have the disadvantage of having to communicate with remote players over the residential internet connection of the hosting player. Performance is also reduced by the simple fact that the machine running the server is also generating an output image. Furthermore, listen servers grant anyone playing on them directly a large latency advantage over other players an
|
https://en.wikipedia.org/wiki/Loop%20maintenance%20operations%20system
|
The Loop Maintenance Operations System (LMOS) is a telephone company trouble ticketing system to coordinate repairs of local loops (telephone lines). When a problem is reported by a subscriber, it is filed and relayed through the Cross Front End, which is a link from the CRSAB (Centralized Repair Service Answering Bureau) to the LMOS network. The trouble report is then sent to the Front End via the Datakit network, where a Basic Output Report is requested (usually by a screening agent or lineman). The BOR provides line information including past trouble history and MLT (Mechanized Loop Testing) tests. As LMOS is responsible for trouble reports, analysis, and similar related functions, MLT does the actual testing of customer loops. MLT hardware is located in the Repair Service Bureau. Test trunks connect MLT hardware to the telephone exchanges or wire centers, which in turn connect with the subscriber loops.
The LMOS database is a proprietary file system, designed with 11 access methods (variable index, index, hash tree, fixed partition file, etc.). This is highly tuned for the various pieces of data used by LMOS.
LMOS, which was first brought on line as a mainframe application in the 1970s, was one of the first telephone company operations support systems to be ported to the UNIX operating system. The first port of LMOS was to Digital Equipment Corporation's PDP 11/70 machines and was completed in 1981. Later versions used VAX-11/780s. Today, LMOS runs on HP-UX 11i systems.
References
See also
Operations support systems
Local loop
|
https://en.wikipedia.org/wiki/Video%20Encoded%20Invisible%20Light
|
Video Encoded Invisible Light (VEIL) is a technology for encoding a low-bandwidth digital data bitstream in a video signal, developed by VEIL Interactive Technologies. VEIL is compatible with multiple formats of video signals, including PAL, SECAM, and NTSC. The technology is based on a steganographically encoded data stream in the luminance of the video signal.
A recent application of VEIL, the VEIL Rights Assertion Mark (VRAM or V-RAM) is a copy-restriction signal that can be used to ask devices to apply DRM technology. This has been seen as analogous to the broadcast flag. It is also known as "CGMS-A plus VEIL" and "broadcast flag on steroids."
There are two versions of VEIL on the market:
VEIL-I, or VEIL 1, has a raw speed of 120 bits per second. It is used for unidirectional communication (TV→devices) with simple devices or toys, and to deliver coupons with TV advertising. It manipulates the luminance of the video signal in ways that are difficult for the human eye to perceive.
VEIL-II, or VEIL 2, has a speed of 7200 bit/s and is one of the technologies of choice for interactive television, as it allows communication with VEIL servers through devices equipped with backchannels. VEIL-II-capable set-top boxes can communicate with other devices via WiFi, Bluetooth, or other short-range wireless technologies. VEIL 2 manipulates the average luminance of alternate lines of the signal, where one is slightly raised and the other slightly lowered (or vice versa), encoding a bit in every pair of lines.
The symbols (groups of 4 data bits) transmitted by VEIL-II system are encoded as "PN sequences", sequences of 16 "chips". Groups of 4 chips are encoded in pairs of lines. Each line pair is split to 4 parts, where the luminance is raised or lowered (correspondingly vice versa in the other line). In NTSC, 4-bit symbols are encoded in groups of 8 scan lines. With 224 lines per field this equals 112 bits per field, or 7200 bits per second of broadcast. VEIL-II uses scan lines 34 to
|
https://en.wikipedia.org/wiki/Laplace%20expansion
|
In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i, the Laplace expansion along the i-th row is the equality
det(B) = Σ_{j=1}^{n} (−1)^(i+j) b_{i,j} m_{i,j},
where b_{i,j} is the entry of the i-th row and j-th column of B, and m_{i,j} is the determinant of the submatrix obtained by removing the i-th row and the j-th column of B. Similarly, the Laplace expansion along the j-th column is the equality
det(B) = Σ_{i=1}^{n} (−1)^(i+j) b_{i,j} m_{i,j}.
(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)
The term (−1)^(i+j) m_{i,j} is called the cofactor of b_{i,j} in B.
The Laplace expansion is often useful in proofs, as in, for example, allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to Gaussian elimination.
Examples
Consider the matrix
The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:
Laplace expansion along the second column yields the same result:
It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third column is twice the second column, and hence its determinant is zero.
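A recursive implementation of the expansion along the first row is sketched below; the specific 3 × 3 matrix is an illustrative choice consistent with the description above (its first column plus its third column equals twice its second column), not necessarily the one used in the original article.

def det_laplace(m):
    # Determinant by Laplace expansion along the first row.
    n = len(m)
    if n == 1:
        return m[0][0]
    total = 0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in m[1:]]   # delete row 0 and column j
        total += (-1) ** j * m[0][j] * det_laplace(minor)
    return total

# A singular matrix: column 1 + column 3 == 2 * column 2, so the determinant is 0.
b = [[1, 2, 3],
     [4, 5, 6],
     [7, 8, 9]]
print(det_laplace(b))   # 0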
Proof
Suppose is an n × n matrix and For clarity we also label the entries of that compose its minor matrix as
for
Consider the terms in the expansion of that have as a factor. Each has the form
for some permutation with , and a unique and evidently related permutation which selects the same minor entries as . Similarly each choice of determines a corresponding i.e. the correspondence is a bijection between and
Using Cauchy's two-line notation, the explicit relation between and can be written as
wh
|
https://en.wikipedia.org/wiki/Logic%20simulation
|
Logic simulation is the use of simulation software to predict the behavior of digital circuits and hardware description languages. Simulation can be performed at varying degrees of physical abstraction, such as at the transistor level, gate level, register-transfer level (RTL), electronic system-level (ESL), or behavioral level.
Use in verification
Logic simulation may be used as part of the verification process in designing hardware.
Simulations have the advantage of providing a familiar look and feel to the user in that it is constructed from the same language and symbols used in design. By allowing the user to interact directly with the design, simulation is a natural way for the designer to get feedback on their design.
Length of simulation
The level of effort required to debug and then verify the design is proportional to the maturity of the design. That is, early in the design's life, bugs and incorrect behavior are usually found quickly. As the design matures, the simulation will require more time and resources to run, and errors will take progressively longer to be found. This is particularly problematic when simulating components for modern-day systems; every component of the design that changes state within a single clock cycle will require several clock cycles of the host machine to simulate.
A straightforward approach to this issue may be to emulate the circuit on a field-programmable gate array instead. Formal verification can also be explored as an alternative to simulation, although a formal proof is not always possible or convenient.
A prospective way to accelerate logic simulation is using distributed and parallel computations.
To help gauge the thoroughness of a simulation, tools exist for assessing code coverage, functional coverage, finite state machine (FSM) coverage, and many other metrics.
Event simulation versus cycle simulation
Event simulation allows the design to contain simple timing information – the delay needed for a signal to travel from one plac
|
https://en.wikipedia.org/wiki/Functional%20verification
|
Functional verification is the task of verifying that the logic design conforms to specification. Functional verification attempts to answer the question "Does this proposed design do what is intended?" This is complex and takes the majority of time and effort (up to 70% of design and development time) in most large electronic system design projects. Functional verification is a part of more encompassing design verification, which, besides functional verification, considers non-functional aspects like timing, layout and power.
Background
Although the number of transistors has increased exponentially according to Moore's law, the number of engineers and the time taken to produce designs have increased only linearly. As the complexity of transistor designs increases, the number of coding errors also increases. Most of the errors in logic coding come from careless coding (12.7%), miscommunication (11.4%), and microarchitecture challenges (9.3%). Thus, electronic design automation (EDA) tools are produced to keep up with the complexity of transistor design. Languages such as Verilog and VHDL were introduced together with the EDA tools.
Functional verification is very difficult because of the sheer volume of possible test-cases that exist in even a simple design. Frequently there are more than 10^80 possible tests to comprehensively verify a design – a number that is impossible to achieve in a lifetime. This effort is equivalent to program verification, and is NP-hard or even worse – and no solution has been found that works well in all cases. However, it can be attacked by many methods. None of them are perfect, but each can be helpful in certain circumstances:
Logic simulation simulates the logic before it is built.
Simulation acceleration applies special purpose hardware to the logic simulation problem.
Emulation builds a version of system using programmable logic. This is expensive, and still much slower than the real hardware, but orders of magnitude faster than simu
|
https://en.wikipedia.org/wiki/Upper-convected%20time%20derivative
|
In continuum mechanics, including fluid dynamics, an upper-convected time derivative or Oldroyd derivative, named after James G. Oldroyd, is the rate of change of some tensor property of a small parcel of fluid that is written in the coordinate system rotating and stretching with the fluid.
The operator is specified by the following formula:
A^∇ = DA/Dt − (∇v)^T · A − A · (∇v)
where:
A^∇ is the upper-convected time derivative of a tensor field A,
D/Dt is the substantive derivative,
∇v is the tensor of velocity derivatives for the fluid.
The formula can be rewritten as:
A^∇ = ∂A/∂t + v · ∇A − (∇v)^T · A − A · (∇v)
By definition, the upper-convected time derivative of the Finger tensor is always zero.
It can be shown that the upper-convected time derivative of a spacelike vector field is just its Lie derivative by the velocity field of the continuum.
The upper-convected derivative is widely used in polymer rheology for the description of the behavior of a viscoelastic fluid under large deformations.
Examples for the symmetric tensor A
Simple shear
For the case of simple shear:
Thus,
Uniaxial extension of incompressible fluid
In this case a material is stretched in the direction X and compresses in the directions Y and Z, so to keep volume constant.
The gradients of velocity are:
Thus,
See also
Upper-convected Maxwell model
References
Notes
Multivariable calculus
Fluid dynamics
Non-Newtonian fluids
|
https://en.wikipedia.org/wiki/Solar%20panel
|
A solar panel is a device that converts sunlight into electricity by using photovoltaic (PV) cells. PV cells are made of materials that generate electrons when exposed to light. The electrons flow through a circuit and produce direct current (DC) electricity, which can be used to power various devices or be stored in batteries. Solar panels are also known as solar cell panels, solar electric panels, or PV modules.
Solar panels are usually arranged in groups called arrays or systems. A photovoltaic system consists of one or more solar panels, an inverter that converts DC electricity to alternating current (AC) electricity, and sometimes other components such as controllers, meters, and trackers. A photovoltaic system can be used to provide electricity for off-grid applications, such as remote homes or cabins, or to feed electricity into the grid and earn credits or payments from the utility company. This is called a grid-connected photovoltaic system.
Some advantages of solar panels are that they use a renewable and clean source of energy, reduce greenhouse gas emissions, and lower electricity bills. Some disadvantages are that they depend on the availability and intensity of sunlight, require cleaning, and have high initial costs. Solar panels are widely used for residential, commercial, and industrial purposes, as well as for space and transportation applications.
History
In 1839, the ability of some materials to create an electrical charge from light exposure was first observed by the French physicist Edmond Becquerel. Though these initial solar panels were too inefficient for even simple electric devices, they were used as an instrument to measure light.
The observation by Becquerel was not replicated again until 1873, when the English electrical engineer Willoughby Smith discovered that the charge could be caused by light hitting selenium. After this discovery, William Grylls Adams and Richard Evans Day published "The action of light on selenium" in 1876,
|
https://en.wikipedia.org/wiki/Backup%20rotation%20scheme
|
A backup rotation scheme is a system of backing up data to computer media (such as tapes) that minimizes, by re-use, the number of media used. The scheme determines how and when each piece of removable storage is used for a backup job and how long it is retained once it has backup data stored on it. Different techniques have evolved over time to balance data retention and restoration needs with the cost of extra data storage media. Such a scheme can be quite complicated if it takes incremental backups, multiple retention periods, and off-site storage into consideration.
Schemes
First in, first out
A first in, first out (FIFO) backup scheme saves new or modified files onto the "oldest" media in the set, i.e. the media that contain the oldest and thus least useful previously backed-up data. With a daily backup onto a set of 14 media, the backup depth would be 14 days: each day, the oldest medium is inserted when performing the backup. This is the simplest rotation scheme and is usually the first to come to mind.
This scheme has the advantage that it retains the longest possible tail of daily backups. It can be used when archived data is unimportant (or is retained separately from the short-term backup data) and data before the rotation period is irrelevant.
However, this scheme suffers from the possibility of data loss: suppose, an error is introduced into the data, but the problem is not identified until several generations of backups and revisions have taken place. Thus when the error is detected, all the backup files contain the error. It would then be useful to have at least one older version of the data, as it would not have the error.
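A minimal sketch of FIFO media selection in Python (the tape names and the 14-day depth are illustrative only):

from collections import deque

# Fourteen media give a 14-day backup depth; the oldest medium is always reused first.
media = deque(f"tape-{i:02d}" for i in range(14))

def next_backup_medium():
    # Return the medium to overwrite today and mark it as holding the newest backup.
    oldest = media.popleft()   # medium holding the oldest, least useful backup
    media.append(oldest)       # after today's backup it holds the newest data
    return oldest

for day in range(1, 17):
    print(f"day {day:2d}: write backup to {next_backup_medium()}")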
Grandfather-father-son
Grandfather-father-son backup (GFS) is a common rotation scheme for backup media, in which there are three or more backup cycles, such as daily, weekly and monthly. The daily backups are rotated on a 3-month basis using a FIFO system as above. The weekly backups are similarly rotated on
|
https://en.wikipedia.org/wiki/Ply%20%28game%20theory%29
|
In two-or-more-player sequential games, a ply is one turn taken by one of the players. The word is used to clarify what is meant when one might otherwise say "turn".
The word "turn" can be a problem since it means different things in different traditions. For example, in standard chess terminology, one move consists of a turn by each player; therefore a ply in chess is a half-move. Thus, after 20 moves in a chess game, 40 plies have been completed—20 by white and 20 by black. In the game of Go, by contrast, a ply is the normal unit of counting moves; so for example to say that a game is 250 moves long is to imply 250 plies.
In poker with n players, the word "street" is used for a full betting round consisting of n plies (each dealt card may sometimes also be called a "street"). For instance, in heads-up Texas hold'em a street consists of 2 plies, with possible plays being check/raise/call/fold: the first by the player at the big blind, and the second by the dealer, who posts the small blind; and there are 4 streets: preflop, flop, turn, and river, the latter 3 corresponding to community cards. The terms "half-street" and "half-street game" are sometimes used to describe, respectively, a single bet in a heads-up game, and a simplified heads-up poker game where only a single player bets.
The word "ply" used as a synonym for "layer" goes back to the 15th century. Arthur Samuel first used the term in its game-theoretic sense in his seminal paper on machine learning in checkers in 1959, but with a slightly different meaning: the "ply", in Samuel's terminology, is actually the depth of analysis ("Certain expressions were introduced which we will find useful. These are: Ply, defined as the number of moves ahead, where a ply of two consists of one proposed move by the machine and one anticipated reply by the opponent").
In computing, the concept of a ply is important because one ply corresponds to one level of the game tree. The Deep Blue chess computer which d
|
https://en.wikipedia.org/wiki/Virtual%20Object%20System
|
The Virtual Object System (VOS) is a computer software technology for creating distributed object systems. The sites hosting Vobjects are typically linked by a computer network, such as a local area network or the Internet. Vobjects may send messages to other Vobjects over these network links (remotely) or within the same host site (locally) to perform actions and synchronize state. In this way, VOS may also be called an object-oriented remote procedure call system. In addition, Vobjects may have a number of directed relations to other Vobjects, which allows them to form directed graph data structures.
VOS is patent free, and its implementation is Free Software. The primary application focus of VOS is general purpose, multiuser, collaborative 3D virtual environments or virtual reality. The primary designer and author of VOS is Peter Amstutz.
External links
Interreality.org official site
Groupware
Distributed computing architecture
|
https://en.wikipedia.org/wiki/Christos%20Papadimitriou
|
Christos Charilaos Papadimitriou (; born August 16, 1949) is a Greek theoretical computer scientist and the Donovan Family Professor of Computer Science at Columbia University.
Education
Papadimitriou studied at the National Technical University of Athens, where in 1972 he received his Bachelor of Arts degree in electrical engineering. He then pursued graduate studies at Princeton University, where he received his Ph.D. in electrical engineering and computer science in 1976 after completing a doctoral dissertation titled "The complexity of combinatorial optimization problems."
Career
Papadimitriou has taught at Harvard, MIT, the National Technical University of Athens, Stanford, UCSD, University of California, Berkeley and is currently the Donovan Family Professor of Computer Science at Columbia University.
Papadimitriou co-authored a paper on pancake sorting with Bill Gates, then a Harvard undergraduate. Papadimitriou recalled "Two years later, I called to tell him our paper had been accepted to a fine math journal. He sounded eminently disinterested. He had moved to Albuquerque, New Mexico to run a small company writing code for microprocessors, of all things. I remember thinking: 'Such a brilliant kid. What a waste.'" The company was Microsoft.
Papadimitriou co-authored "The Complexity of Computing a Nash Equilibrium" with his students Constantinos Daskalakis and Paul W. Goldberg, for which they received the 2008 Kalai Game Theory and Computer Science Prize from the Game Theory Society for "the best paper at the interface of game theory and computer science", in particular "for its key conceptual and technical contributions"; and the Outstanding Paper Prize from the Society for Industrial and Applied Mathematics.
In 2001, Papadimitriou was inducted as a Fellow of the Association for Computing Machinery and in 2002 he was awarded the Knuth Prize. Also in 2002, he became a member of the U.S. National Academy of Engineering for contributions to complexity theor
|
https://en.wikipedia.org/wiki/Home%20network
|
A home network or home area network (HAN) is a type of computer network that facilitates communication among devices within the close vicinity of a home. Devices capable of participating in this network, for example, smart devices such as network printers and handheld mobile computers, often gain enhanced emergent capabilities through their ability to interact. These additional capabilities can be used to increase the quality of life inside the home in a variety of ways, such as automation of repetitive tasks, increased personal productivity, enhanced home security, and easier access to entertainment.
Origin
IPv4 address exhaustion has forced most Internet service providers to grant only a single WAN-facing IP address for each residential account. Multiple devices within a residence or small office are provisioned with internet access by establishing a local area network (LAN) for the local devices with IP addresses reserved for private networks. A network router is configured with the provider's IP address on the WAN interface, which is shared among all devices in the LAN by network address translation.
Infrastructure devices
Certain devices on a home network are primarily concerned with enabling or supporting the communications of the kinds of end devices home-dwellers more directly interact with. Unlike their data center counterparts, these "networking" devices are compact and passively cooled, aiming to be as hands-off and non-obtrusive as possible:
A gateway establishes physical and data link layer connectivity to a WAN over a service provider's native telecommunications infrastructure. Such devices typically contain a cable, DSL, or optical modem bound to a network interface controller for Ethernet. Routers are often incorporated into these devices for additional convenience.
A router establishes network layer connectivity between a WAN and the home network. It also performs the key function of network address translation that allows independently add
|
https://en.wikipedia.org/wiki/Tunnel%20and%20Reservoir%20Plan
|
The Tunnel and Reservoir Plan (abbreviated TARP and more commonly known as the Deep Tunnel Project or the Chicago Deep Tunnel) is a large civil engineering project that aims to reduce flooding in the metropolitan Chicago area, and to reduce the harmful effects of flushing raw sewage into Lake Michigan by diverting storm water and sewage into temporary holding reservoirs. The megaproject is one of the largest civil engineering projects ever undertaken in terms of scope, cost and timeframe. Commissioned in the mid-1970s, the project is managed by the Metropolitan Water Reclamation District of Greater Chicago. Completion of the system is not anticipated until 2029, but substantial portions of the system have already opened and are currently operational. Across 30 years of construction, over $3 billion has been spent on the project.
History
19th century
The Deep Tunnel Project is the latest in a series of civil engineering projects dating back to 1834. Many of the problems experienced by the city of Chicago are directly related to its low-lying topography and the fact that the city is largely built upon marsh or wet prairie. This, combined with a temperate wet climate and the human development of open land, leads to substantial water runoff. Lake Michigan was ineffective in carrying sewage away from the city, and in the event of a rainstorm, the water pumps that provided drinking water to Chicagoans became contaminated with sewage. Though no epidemics were caused by this system (see Chicago 1885 cholera epidemic myth), it soon became clear that the sewage system needed to be diverted to flow away from Lake Michigan in order to handle an increasing population's sanitation needs.
Between 1864 and 1867, under the leadership of Ellis S. Chesbrough, the city built the two-mile Chicago lake tunnel to a new water intake location farther from the shore. Crews began from the intake location and the shore, tunneling in two shifts a day. Clay and earth were drawn away by mule-d
|
https://en.wikipedia.org/wiki/Centrosymmetry
|
In crystallography, a centrosymmetric point group contains an inversion center as one of its symmetry elements. In such a point group, for every point (x, y, z) in the unit cell there is an indistinguishable point (-x, -y, -z). Such point groups are also said to have inversion symmetry. Point reflection is a similar term used in geometry.
Crystals with an inversion center cannot display certain properties, such as the piezoelectric effect.
The following space groups have inversion symmetry: the triclinic space group 2, the monoclinic 10-15, the orthorhombic 47-74, the tetragonal 83-88 and 123-142, the trigonal 147, 148 and 162-167, the hexagonal 175, 176 and 191-194, the cubic 200-206 and 221-230.
Point groups lacking an inversion center (non-centrosymmetric) can be polar, chiral, both, or neither.
A polar point group is one whose symmetry operations leave more than one common point unmoved. A polar point group has no unique origin because each of those unmoved points can be chosen as one. One or more unique polar axes could be made through two such collinear unmoved points. Polar crystallographic point groups include 1, 2, 3, 4, 6, m, mm2, 3m, 4mm, and 6mm.
A chiral (often also called enantiomorphic) point group is one containing only proper (often called "pure") rotation symmetry. No inversion, reflection, roto-inversion or roto-reflection (i.e., improper rotation) symmetry exists in such point group. Chiral crystallographic point groups include 1, 2, 3, 4, 6, 222, 422, 622, 32, 23, and 432. Chiral molecules such as proteins crystallize in chiral point groups.
The remaining non-centrosymmetric crystallographic point groups 4̄, 4̄2m, 6̄, 6̄m2, and 4̄3m are neither polar nor chiral.
See also
Centrosymmetric matrix
Rule of mutual exclusion
References
Symmetry
|
https://en.wikipedia.org/wiki/Mortgage%20constant
|
Mortgage constant, also called "mortgage capitalization rate", is the capitalization rate for debt. It is usually computed monthly by dividing the monthly payment by the mortgage principal. An annualized mortgage constant can be found by multiplying the monthly constant by 12 or by dividing the annual debt service by the mortgage principal.
A mortgage constant is a rate that appraisers determine for use in the band of investment approach. It is also used in conjunction with the debt-coverage ratio that many commercial bankers use.
The mortgage constant is commonly denoted as Rm. The Rm is higher than the interest rate for an amortizing loan because the Rm includes consideration of the principal as well as the interest. The Rm could be lower than the interest for a negatively amortizing loan.
Formula
Rm = [(i / m) / (1 − (1 + i / m)^(−n))] × m
Where:
i = Interest
n = Total number of months required to pay off the loan.
m = Number of payment months in a year (12).
Example, for a 30-year loan (n = 360) at a 5.5% annual interest rate, as an MS Excel formula:
=(0.055/12)/(1-(1/(POWER(1+(0.055/12),360))))*12
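The same calculation as a short Python sketch (the loan terms are the example values above):

def mortgage_constant(annual_rate, months, payments_per_year=12):
    # Annualized mortgage constant: annual debt service per unit of loan principal.
    r = annual_rate / payments_per_year                   # periodic interest rate
    monthly_payment_per_unit = r / (1 - (1 + r) ** -months)
    return monthly_payment_per_unit * payments_per_year

# 5.5% annual interest, 30-year (360-month) fully amortizing loan.
print(round(mortgage_constant(0.055, 360), 4))   # ≈ 0.0681, i.e. about 6.81% per year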
References
Mortgage industry of the United States
Interest rates
Mathematical finance
|
https://en.wikipedia.org/wiki/Jim%20Kent
|
William James Kent (born February 10, 1960) is an American research scientist and computer programmer. He has been a contributor to genome database projects and the 2003 winner of the Benjamin Franklin Award.
Early life
Kent was born in Hawaii and grew up in San Francisco, California, United States.
Computer animation
Kent began his programming career in 1983 with Island Graphics Inc. where he wrote the Aegis Animator program for the Amiga home computer. This program combined polygon tweening in 3D with simple 2D cel-based animation. In 1985 he founded and ran a software company, Dancing Flame, which adapted the Aegis Animator to the Atari ST, and created Cyber Paint for that machine. Cyber Paint was a 2D animation program that brought together a wide variety of animation and paint functionality and the delta-compressed animation format developed for CAD-3D. The user could move freely between animation frames and paint arbitrarily, or utilize various animation tools for automatic tweening movement across frames. Cyber Paint was one of the first, if not the first, consumer program that enabled the user to paint across time in a compressed digital video format. Later he developed a similar program, the Autodesk Animator for PC compatibles, where the image compression improved to the point it could play off of hard disk, and one could paint using "inks" that performed algorithmic transformations such as smoothing, transparency, and tiled patterns. The Autodesk Animator was used to create artwork for a wide variety of video games.
Involvement with the Human Genome Project
In 2000, he wrote a program, GigAssembler, that allowed the publicly funded Human Genome Project to assemble and publish the first human genome sequence. His efforts were motivated by the research needs of himself and his colleagues, but also out of concern that the data might be made proprietary via patents by Celera Genomics. In their close race with Celera, Kent and the UCSC Professor David
|
https://en.wikipedia.org/wiki/Strike%20and%20dip
|
In geology, strike and dip is a measurement convention used to describe the plane orientation or attitude of a planar geologic feature. A feature's strike is the azimuth of an imagined horizontal line across the plane, and its dip is the angle of inclination (or depression angle) measured downward from horizontal. They are used together to measure and document a structure's characteristics for study or for use on a geologic map. A feature's orientation can also be represented by dip and dip direction, using the azimuth of the dip rather than the strike value. Linear features are similarly measured with trend and plunge, where "trend" is analogous to dip direction and "plunge" is the dip angle.
Strike and dip are measured using a compass and a clinometer. A compass is used to measure the feature's strike by holding the compass horizontally against the feature. A clinometer measures the feature's dip by recording the inclination perpendicular to the strike. These can be done separately, or together using a tool such as a Brunton transit or a Silva compass.
Any planar feature can be described by strike and dip, including sedimentary bedding, fractures, faults, joints, cuestas, igneous dikes and sills, metamorphic foliation and fabric, etc. Observations about a structure's orientation can lead to inferences about certain parts of an area's history, such as movement, deformation, or tectonic activity.
Elements
When measuring or describing the attitude of an inclined feature, two quantities are needed: the angle at which the slope descends, or dip, and the direction of descent, which can be represented by strike or dip direction.
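The two representations are related by a simple conversion, sketched below under the right-hand-rule convention (an assumption here), in which the dip direction lies 90 degrees clockwise from the strike azimuth.

def strike_to_dip_direction(strike_azimuth_deg):
    # Right-hand rule: the bed dips toward the azimuth 90 degrees clockwise from strike.
    return (strike_azimuth_deg + 90) % 360

# A bed striking 030 (NE) dips toward 120 (ESE) under this convention.
print(strike_to_dip_direction(30))   # 120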
Dip
Dip is the inclination of a given feature, and is measured from the steepest angle of descent of a tilted bed or feature relative to a horizontal plane. True dip is always perpendicular to the strike. It is written as a number (between 0° and 90°) indicating the angle in degrees below horizontal. It can be accompanied with the rough direction o
|
https://en.wikipedia.org/wiki/Theory%20%28mathematical%20logic%29
|
In mathematical logic, a theory (also called a formal theory) is a set of sentences in a formal language. In most scenarios a deductive system is first understood from context, after which an element of a deductively closed theory is then called a theorem of the theory. In many deductive systems there is usually a subset that is called "the set of axioms" of the theory , in which case the deductive system is also called an "axiomatic system". By definition, every axiom is automatically a theorem. A first-order theory is a set of first-order sentences (theorems) recursively obtained by the inference rules of the system applied to the set of axioms.
General theories (as expressed in formal language)
When defining theories for foundational purposes, additional care must be taken, as normal set-theoretic language may not be appropriate.
The construction of a theory begins by specifying a definite non-empty conceptual class, the elements of which are called statements. These initial statements are often called the primitive elements or elementary statements of the theory, to distinguish them from other statements that may be derived from them.
A theory is a conceptual class consisting of certain of these elementary statements. The elementary statements that belong to a theory are called its elementary theorems and are said to be true. In this way, a theory can be seen as a way of designating a subset of the elementary statements, one that contains only statements that are true.
This general way of designating a theory stipulates that the truth of any of its elementary statements is not known without reference to the theory. Thus the same elementary statement may be true with respect to one theory but false with respect to another. This is reminiscent of the case in ordinary language where statements such as "He is an honest person" cannot be judged true or false without interpreting who "he" is, and, for that matter, what an "honest person" is under this theory.
Subtheories and extensions
A theory is
|
https://en.wikipedia.org/wiki/ArchiveGrid
|
ArchiveGrid is a collection of over five million archival material descriptions, including MARC records from WorldCat and finding aids harvested from the web. It contains archival collections held by thousands of libraries, museums, historical societies, and archives. Contribution to the system is available to any institution. Most of the contributions are from United States based institutions, but many other countries are represented, including Canada, Australia, and the United Kingdom. ArchiveGrid is associated with OCLC Research and helps to advance their goals of making archival collections and materials easier to find. ArchiveGrid is described as "the ultimate destination for searching through family histories, political papers, and historical records held in archives around the world."
History
Research Libraries Group (RLG) was founded in 1974 by three universities (Columbia, Harvard, and Yale) and The New York Public Library. In 1998, RLG launched the RLG Archival Resources database, which offered online access to the holdings of archival collections. RLG began to redesign the database in 2004 in order to make it more useful for researchers. As a result of this redesign, RLG launched ArchiveGrid in March 2006. As a result of a grant, ArchiveGrid was freely accessible until May 31, 2006.
RLG/OCLC Partnership
In 2006, the RLG and the Online Computer Library Center, Inc. (OCLC) announced the combining of the two organizations. RLG Programs was formed on July 1, 2006 and became part of the OCLC Programs and Research division. ArchiveGrid was offered as an OCLC subscription-based discovery service from 2006 until it was discontinued in 2012. In 2009, RLG Programs became known as RLG Partnership. The OCLC Research Library Partnership replaced the RLG Partnership in 2011. The five-year period of successfully integrating the RLG Partnership into OCLC was completed 30 June 2011. In 2012, ArchiveGrid became a free system, while remaining a part of the
|
https://en.wikipedia.org/wiki/Lead%20shielding
|
Lead shielding refers to the use of lead as a form of radiation protection to shield people or objects from radiation so as to reduce the effective dose. Lead can effectively attenuate certain kinds of radiation because of its high density and high atomic number; principally, it is effective at stopping gamma rays and x-rays.
Operation
Lead's high density is caused by the combination of its high atomic number and the relatively short bond lengths and atomic radius. The high atomic number means that more electrons are needed to maintain a neutral charge and the short bond length and a small atomic radius means that many atoms can be packed into a particular lead structure.
Because of lead's density and large number of electrons, it is well suited to scattering x-rays and gamma rays. These rays are composed of photons, a type of boson, which impart energy to electrons when they come into contact. Without a lead shield, the electrons within a person's body would be affected, which could damage their DNA. When the radiation attempts to pass through lead, its electrons absorb and scatter the energy. Eventually, though, the lead will degrade from the energy to which it is exposed. However, lead is not effective against all types of radiation. High-energy electrons (including beta radiation) incident on lead may create bremsstrahlung radiation, which is potentially more dangerous to tissue than the original radiation. Furthermore, lead is not a particularly effective absorber of neutron radiation.
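Shield effectiveness for a narrow gamma-ray beam is commonly described by exponential attenuation, I = I0 · exp(−μx). The sketch below uses an assumed linear attenuation coefficient, chosen only for illustration (roughly of the order reported for lead at Cs-137 gamma energies), to estimate transmission through a given thickness.

import math

def transmitted_fraction(thickness_cm, mu_per_cm):
    # Fraction of a narrow gamma beam transmitted through a shield: exp(-mu * x).
    return math.exp(-mu_per_cm * thickness_cm)

MU_LEAD = 1.1   # assumed linear attenuation coefficient in 1/cm (illustrative value)
for x in (0.5, 1.0, 2.0, 5.0):
    print(f"{x:4.1f} cm of lead -> {transmitted_fraction(x, MU_LEAD):.3%} transmitted")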
Types
Lead is used for shielding in x-ray machines, nuclear power plants, labs, medical facilities, military equipment, and other places where radiation may be encountered. There is great variety in the types of shielding available both to protect people and to shield equipment and experiments. In gamma-spectroscopy for example, lead castles are constructed to shield the probe from environmental radiation. Personal shielding includes lead aprons (such as the familiar garment used d
|
https://en.wikipedia.org/wiki/NetCDF%20Operators
|
NCO (netCDF Operators) is a suite of programs designed to facilitate manipulation and analysis of self-describing data stored in the netCDF format.
Program Suite
ncap2 netCDF arithmetic processor
ncatted netCDF attribute editor
ncbo netCDF binary operator (includes addition, multiplication and others)
ncclimo netCDF climatology generator
nces netCDF ensemble statistics
ncecat netCDF ensemble concatenator
ncflint netCDF file interpolator
ncks netCDF kitchen sink
ncpdq netCDF permute dimensions quickly, pack data quietly
ncra netCDF record averager
ncrcat netCDF record concatenator
ncremap netCDF remapper
ncrename netCDF renamer
ncwa netCDF weighted averager
References
External links
Meteorological data and networks
|
https://en.wikipedia.org/wiki/Nigoda
|
In Jain cosmology, the Nigoda is a realm in which the lowest forms of invisible life reside in endless numbers, without any hope of release by self-effort. Jain scriptures describe nigodas as microorganisms living in large clusters, having only one sense and a very short life, and said to pervade each and every part of the universe, even the tissues of plants and the flesh of animals. The Nigoda exists in contrast to the Supreme Abode, located at the Siddhashila (top of the universe), where liberated souls exist in omniscient and eternal bliss. According to Jain tradition, it is said that when a human being achieves liberation (Moksha), or when a human is born as a Nigoda due to karma, another soul from the Nigoda is given the potential of self-effort and hope.
Characteristics
The life in Nigoda is that of a sub-microscopic organism possessing only one sense, i.e., of touch.
Notes
References
Jain cosmology
|
https://en.wikipedia.org/wiki/Kelly%20hose
|
A Kelly hose (also known as a mud hose or rotary hose) is a flexible, steel-reinforced, high-pressure hose that connects the standpipe to the kelly (or more specifically to the goose-neck on the swivel above the kelly) and allows free vertical movement of the kelly while facilitating the flow of drilling fluid through the system and down the drill string. The Kelly hose has an inside diameter of 3 to 5 inches.
References
Petroleum engineering
Drilling technology
Hoses
|
https://en.wikipedia.org/wiki/Pseudo-range%20multilateration
|
Pseudo-range multilateration, often simply multilateration (MLAT) when in context, is a technique for determining the position of an unknown point, such as a vehicle, based on measurement of the times of arrival (TOAs) of energy waves traveling between the unknown point and multiple stations at known locations. When the waves are transmitted by the vehicle, MLAT is used for surveillance; when the waves are transmitted by the stations, MLAT is used for navigation (hyperbolic navigation). In either case, the stations' clocks are assumed synchronized but the vehicle's clock is not.
Prior to computing a solution, the common time of transmission (TOT) of the waves is unknown to the receiver(s), either on the vehicle (one receiver, navigation) or at the stations (multiple receivers, surveillance). Consequently, the waves' times of flight (TOFs), the ranges of the vehicle from the stations divided by the wave propagation speed, are also unknown. Each pseudo-range is the corresponding TOA multiplied by the propagation speed, with the same arbitrary constant added (representing the unknown TOT).
In navigation applications, the vehicle is often termed the "user"; in surveillance applications, the vehicle may be termed the "target". For a mathematically exact solution, the ranges must not change during the period the signals are received (between first and last to arrive at a receiver). Thus, for navigation, an exact solution requires a stationary vehicle; however, multilateration is often applied to the navigation of moving vehicles whose speed is much less than the wave propagation speed.
If d is the number of physical dimensions being considered (thus, the number of vehicle coordinates sought) and m is the number of signals received (thus, TOAs measured), it is required that m ≥ d + 1. Then, the fundamental set of measurement equations is:
TOAs (m measurements) = TOFs (d unknown variables embedded in m expressions) + TOT (one unknown variable replicated m times).
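As an illustration of this counting argument, the sketch below (an example constructed here, not part of the article) recovers a 2-D position (d = 2) and the common TOT from m = 4 simulated TOAs by iterative least squares; the station layout, the true position and the use of SciPy are all assumptions of the example:
```python
import numpy as np
from scipy.optimize import least_squares

c = 299792458.0  # wave propagation speed (here taken as the speed of light, m/s)

# Hypothetical station positions: m = 4 stations in d = 2 dimensions (m >= d + 1).
stations = np.array([[0.0, 0.0], [10e3, 0.0], [0.0, 10e3], [10e3, 10e3]])

# Simulate TOAs for a true vehicle position and an unknown time of transmission t0.
true_pos, t0 = np.array([3e3, 7e3]), 1.234e-3
toas = t0 + np.linalg.norm(stations - true_pos, axis=1) / c

def residuals(params):
    # params = (x, y, TOT); residual = measured TOA - (modelled TOF + TOT)
    pos, tot = params[:2], params[2]
    return toas - (tot + np.linalg.norm(stations - pos, axis=1) / c)

# Iterative least-squares solution from a rough initial guess.
solution = least_squares(residuals, x0=[5e3, 5e3, 0.0])
print(solution.x)  # approximately [3000, 7000, 0.001234]
```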
Processing is usually required to extr
|
https://en.wikipedia.org/wiki/Acer%20PICA
|
The M6100 PICA is a system logic chipset designed by Acer Laboratories and introduced in 1993. PICA stands for Performance-enhanced Input-output and CPU Architecture. It was based on the Jazz architecture developed by Microsoft and supported the MIPS Technologies R4000 or R4400 microprocessors. The chipset was designed for computers that run Windows NT and therefore used ARC firmware to boot Windows NT. The chipset consisted of six chips: a CPU and secondary cache controller, a buffer, an I/O cache and bus controller, a memory controller, and two data buffers.
PICA was used by Acer in its Formula 4000 personal workstation, which NEC sold under the OEM name RISCstation Image.
References
"Acer Launches Set For Building R4400 NT Machines". (29 March 1993). Computergram International.
See also
DeskStation Technology
PICA
PICA
MIPS architecture
|
https://en.wikipedia.org/wiki/Roller%20Coaster%20DataBase
|
Roller Coaster DataBase (RCDB) is a roller coaster and amusement park database begun in 1996 by Duane Marden. It has grown to feature statistics and pictures of over 10,000 roller coasters from around the world.
Publications that have mentioned RCDB include The New York Times, Los Angeles Times, Toledo Blade, Orlando Sentinel, Time, Forbes, Mail & Guardian, and Chicago Sun-Times.
History
RCDB was started in 1996 by Duane Marden, a computer programmer from Brookfield, Wisconsin. The website is run from web servers in Marden's basement and at a location in St. Louis.
Content
Each roller coaster entry includes any of the following information for the ride: current amusement park location, type, status (existing, standing but not operating (SBNO), defunct), opening date, make/model, cost, capacity, length, height, drop, number of inversions, speed, duration, maximum vertical angle, trains, and special notes. Entries may also feature reader-contributed photos and/or press releases.
The site also categorizes the rides into special orders, including a list of the tallest coasters, a list of the fastest coasters, a list of the most inversions on a coaster, a list of the parks with the most inversions, etc., each sortable by steel, wooden, or both. Each roller coaster entry links back to a page which lists all of that park's roller coasters, past and present, and includes a brief history and any links to fan web pages saluting the park.
Languages
The site is available in ten languages: English, German, French, Spanish, Dutch, Portuguese, Italian, Swedish, Japanese and Simplified Chinese.
References
External links
Internet properties established in 1996
1996 establishments in Wisconsin
Online databases
Roller coasters
Entertainment databases
|
https://en.wikipedia.org/wiki/Global%20Sea%20Level%20Observing%20System
|
Established in 1985, the Global Sea Level Observing System (GLOSS) is an Intergovernmental Oceanographic Commission (IOC) program whose purpose is to measure sea level globally for long-term climate change studies. The program's purpose has changed since the 2004 Indian Ocean earthquake, and the program now also collects real-time measurements of sea level. The project is upgrading the more than 290 stations it runs so that they can send real-time data via satellite to newly set up national tsunami centres. It is also fitting the stations with solar panels so they can continue to operate even if the mains power supply is interrupted by severe weather. The Global Sea Level Observing System does not compete with Deep-ocean Assessment and Reporting of Tsunamis (DART), as most GLOSS transducers are located close to land masses while DART's transducers are far out in the ocean.
The concept for GLOSS was proposed to the IOC by oceanographers David Pugh and Klaus Wyrtki in order to develop the Permanent Service for Mean Sea Level (PSMSL) data bank. The PSMSL states that "GLOSS provides oversight and coordination for global and regional sea level networks in support of, and with direction from, the oceanographic and climate research communities."
The Global Sea Level Observing System utilizes 290 tide gauge stations across more than 90 countries and territories to achieve global coverage.
The data provided by GLOSS are important for many purposes, including research into sea level change and ocean circulation, coastal protection during events such as storm surges, flood warning and tsunami monitoring, tide tables for port operations, fishing and recreation, and the definition of datums for national or state boundaries.
GLOSS Core Network
The operation and maintenance of the GLOSS Core Network fulfills a range of research and operational requirements for the GLOSS Network. The goal of this network is to be 100% effective. Each gauge that is placed may di
|
https://en.wikipedia.org/wiki/Transistor%20model
|
Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
Models for device design
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator. "Geometry" here does not refer only to readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and are used to inform the process designer about any necessary process improvements. Once the process gets close
|
https://en.wikipedia.org/wiki/Stress%20migration
|
Stress migration is a failure mechanism that often occurs in integrated circuit metallization (aluminum, copper). Voids form as a result of vacancy migration driven by the hydrostatic stress gradient. Large voids may lead to an open circuit or an unacceptable resistance increase that impedes IC performance. Stress migration is often referred to as stress voiding, stress-induced voiding, or SIV.
High temperature processing of copper dual damascene structures leaves the copper with a large tensile stress due to a mismatch in coefficient of thermal expansion of the materials involved. The stress can relax with time through the diffusion of vacancies leading to the formation of voids and ultimately open circuit failures.
References
Semiconductor device fabrication
|
https://en.wikipedia.org/wiki/Melvin%20Hochster
|
Melvin Hochster (born August 2, 1943) is an American mathematician working in commutative algebra. He is currently the Jack E. McLaughlin Distinguished University Professor Emeritus of Mathematics at the University of Michigan.
Education
Hochster attended Stuyvesant High School, where he was captain of the Math Team, and received a B.A. from Harvard University. While at Harvard, he was a Putnam Fellow in 1960. He earned his Ph.D. in 1967 from Princeton University, where he wrote a dissertation under Goro Shimura characterizing the prime spectra of commutative rings.
Career
He held positions at the University of Minnesota and Purdue University before joining the faculty at Michigan in 1977.
Hochster's work is primarily in commutative algebra, especially the study of modules over local rings. He has established classic theorems concerning Cohen–Macaulay rings, invariant theory and homological algebra. For example, the Hochster–Roberts theorem states that the invariant ring of a linearly reductive group acting on a regular ring is Cohen–Macaulay. His best-known work is on the homological conjectures, many of which he established for local rings containing a field, thanks to his proof of the existence of big Cohen–Macaulay modules and his technique of reduction to prime characteristic. His most recent work on tight closure, introduced in 1986 with Craig Huneke, has found unexpected applications throughout commutative algebra and algebraic geometry.
He has had more than 40 doctoral students, and the Association for Women in Mathematics has pointed out his outstanding role in mentoring women students pursuing a career in mathematics. He served as the chair of the department of Mathematics at the University of Michigan from 2008 to 2017.
Awards
Hochster shared the 1980 Cole Prize with Michael Aschbacher, received a Guggenheim Fellowship in 1981, and has been a member of both the National Academy of Sciences and the American Academy of Arts and Sciences since 19
|
https://en.wikipedia.org/wiki/XOR%20gate
|
XOR gate (sometimes EOR, or EXOR and pronounced as Exclusive OR) is a digital logic gate that gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate implements an exclusive or from mathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike; otherwise the output is false. A way to remember XOR is "must have one or the other but not both".
An XOR gate may serve as a "programmable inverter" in which one input determines whether to invert the other input, or to simply pass it along with no change. Hence it functions as an inverter (a NOT gate) which may be activated or deactivated by a switch.
XOR can also be viewed as addition modulo 2. As a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. The gate is also used in subtractors and comparators.
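As a small illustration of XOR as addition modulo 2, the sketch below (constructed for this description, not taken from the article) models a half adder in which XOR produces the sum bit and AND produces the carry bit:
```python
def xor(a: int, b: int) -> int:
    """Exclusive OR of two bits: 1 exactly when the inputs differ."""
    return (a | b) & ~(a & b) & 1

def half_adder(a: int, b: int) -> tuple[int, int]:
    """Half adder: (sum, carry) = (a XOR b, a AND b)."""
    return xor(a, b), a & b

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# 0 0 (0, 0) / 0 1 (1, 0) / 1 0 (1, 0) / 1 1 (0, 1)
```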
The algebraic expressions A ⊕ B, A·B̅ + A̅·B, and (A + B)·(A̅ + B̅) all represent the XOR gate with inputs A and B. The behavior of XOR is summarized in the truth table shown on the right.
Symbols
There are three schematic symbols for XOR gates: the traditional ANSI and DIN symbols and the IEC symbol. In some cases, the DIN symbol is used with ⊕ instead of ≢. For more information see Logic Gate Symbols.
The "=1" on the IEC symbol indicates that the output is activated by only one active input.
The logic symbols ⊕, Jpq, and ⊻ can be used to denote an XOR operation in algebraic expressions.
C-like languages use the caret symbol ^ to denote bitwise XOR. (Note that the caret does not denote logical conjunction (AND) in these languages, despite the similarity of symbol.)
Implementation
The XOR gate is most commonly implemented using MOSFETs circuits. Some of those implementations include:
CMOS
The Complementary metal–oxide–semiconductor (CM
|
https://en.wikipedia.org/wiki/Exhaustion%20by%20compact%20sets
|
In mathematics, especially general topology and analysis, an exhaustion by compact sets of a topological space X is a nested sequence of compact subsets K_1 ⊆ K_2 ⊆ K_3 ⊆ ⋯ of X (i.e. K_i ⊆ K_{i+1}), such that each K_i is contained in the interior of K_{i+1}, i.e. K_i ⊆ int(K_{i+1}) for each i, and X = ⋃_i K_i. A space admitting an exhaustion by compact sets is called exhaustible by compact sets.
For example, consider X = ℝ^n and the sequence of closed balls K_i = {x : |x| ≤ i}.
Occasionally some authors drop the requirement that K_i is in the interior of K_{i+1}, but then the property becomes the same as the space being σ-compact, namely a countable union of compact subsets.
Properties
The following are equivalent for a topological space X:
X is exhaustible by compact sets.
X is σ-compact and weakly locally compact.
X is Lindelöf and weakly locally compact.
(where weakly locally compact means locally compact in the weak sense that each point has a compact neighborhood).
The hemicompact property is intermediate between exhaustible by compact sets and σ-compact. Every space exhaustible by compact sets is hemicompact and every hemicompact space is σ-compact, but the reverse implications do not hold. For example, the Arens-Fort space and the Appert space are hemicompact, but not exhaustible by compact sets (because not weakly locally compact), and the set of rational numbers with the usual topology is σ-compact, but not hemicompact.
Every regular space exhaustible by compact sets is paracompact.
Notes
References
Leon Ehrenpreis, Theory of Distributions for Locally Compact Spaces, American Mathematical Society, 1982. .
Hans Grauert and Reinhold Remmert, Theory of Stein Spaces, Springer Verlag (Classics in Mathematics), 2004. .
External links
Compactness (mathematics)
Mathematical analysis
General topology
|
https://en.wikipedia.org/wiki/TOSEC
|
The Old School Emulation Center (TOSEC) is a retrocomputing initiative founded in February 2000, initially for the renaming and cataloging of software files intended for use in emulators, that later extended its work to the cataloging and preservation of applications, firmware, device drivers, games, operating systems, magazines and magazine cover disks, comic books, product box art, videos of advertisements and training, related TV series and more. The catalogs provide an overview and cryptographic identification of media that allows for automatic integrity checking and renaming of files, checking for the completeness of software collections and more, using management utilities like ClrMamePro or ROMVault.
As the project grew in popularity it started to become a de facto standard for the management of retrocomputing and emulation resources. In 2013 many TOSEC catalogued files started to be included in the Internet Archive after the work quality and attention to detail put into the catalogs was praised by some of their archivists.
TOSEC usually makes two releases per year.
As of release 2023-01-23, TOSEC catalogs span ~195 unique brands and hundreds of unique computing platforms and continues to grow. As of this time the project had identified and cataloged more than 1.2 million different software images and sets (more than half of that for Commodore systems), describing a source set of about 8TB of software and resources.
See also
Digital preservation
MobyGames - Video game cataloging project
References
Bibliography
External links
TOSEC Project Homepage
TOSEC Forum
TOSEC Discord
ClrMamePro Homepage
ROMVault Homepage
Computing culture
Discipline-oriented digital libraries
Online databases
|
https://en.wikipedia.org/wiki/NAND%20logic
|
The NAND Boolean function has the property of functional completeness. This means that any Boolean expression can be re-expressed by an equivalent expression utilizing only NAND operations. For example, the function NOT(x) may be equivalently expressed as NAND(x,x). In the field of digital electronic circuits, this implies that it is possible to implement any Boolean function using just NAND gates.
The mathematical proof for this was published by Henry M. Sheffer in 1913 in the Transactions of the American Mathematical Society (Sheffer 1913). A similar case applies to the NOR function, and this is referred to as NOR logic.
NAND
A NAND gate is an inverted AND gate. It has the following truth table:
In CMOS logic, if both of the A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both of the A and B inputs are low, then neither of the NMOS transistors will conduct, while both of the PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either of the A or B inputs is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate.
Making other gates by using NAND gates
A NAND gate is a universal gate, meaning that any other gate can be represented as a combination of NAND gates.
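A compact way to see this universality (a sketch written for this description, not taken from the article) is to model the gates as Python functions and derive NOT, AND and OR purely from a two-input NAND, mirroring the constructions in the subsections that follow:
```python
def nand(a: int, b: int) -> int:
    """Two-input NAND: 1 unless both inputs are 1."""
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)              # NOT: tie both NAND inputs together

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))        # AND: a NAND followed by a NOT

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))  # OR: invert each input, then NAND (De Morgan)

# Exhaustive truth-table check of the derived gates.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b) and or_(a, b) == (a | b)
assert not_(0) == 1 and not_(1) == 0
```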
NOT
A NOT gate is made by joining the inputs of a NAND gate together. Since a NAND gate is equivalent to an AND gate followed by a NOT gate, joining the inputs of a NAND gate leaves only the NOT gate.
AND
An AND gate is m
|
https://en.wikipedia.org/wiki/XNOR%20gate
|
The XNOR gate (sometimes ENOR, EXNOR, NXOR, XAND and pronounced as Exclusive NOR) is a digital logic gate whose function is the logical complement of the Exclusive OR (XOR) gate. It is equivalent to the logical connective () from mathematical logic, also known as the material biconditional. The two-input version implements logical equality, behaving according to the truth table to the right, and hence the gate is sometimes called an "equivalence gate". A high output (1) results if both of the inputs to the gate are the same. If one but not both inputs are high (1), a low output (0) results.
The algebraic notation used to represent the XNOR operation is S = A ⊙ B. The algebraic expressions A·B + A̅·B̅ and (A + B̅)·(A̅ + B) both represent the XNOR gate with inputs A and B.
Symbols
There are two symbols for XNOR gates: one with distinctive shape and one with rectangular shape and label. Both symbols for the XNOR gate are that of the XOR gate with an added inversion bubble.
Hardware description
XNOR gates are implemented in most TTL and CMOS IC families. The standard 4000 series CMOS IC is the 4077, and the TTL IC is the 74266 (although an open-collector implementation). Both include four independent, two-input XNOR gates. The (now obsolete) 74S135 implemented four two-input XOR/XNOR gates or two three-input XNOR gates.
Both the TTL 74LS implementation, the 74LS266, as well as the CMOS gates (CD4077, 74HC4077 and 74HC266 and so on) are available from most semiconductor manufacturers such as Texas Instruments or NXP, etc. They are usually available in both through-hole DIP and SOIC formats (SOIC-14, SOC-14 or TSSOP-14).
Datasheets are readily available in most datasheet databases and suppliers.
Pinout
Both the 4077 and 74x266 devices (SN74LS266, 74HC266, 74266, etc.) have the same pinout diagram, as follows:
Pinout diagram of the 74HC266N, 74LS266 and CD4077 quad XNOR plastic dual in-line package 14-pin package (PDIP-14) ICs.
Alternatives
If a specific type of gate is not available, a circuit t
|
https://en.wikipedia.org/wiki/Truncated%20trapezohedron
|
In geometry, an n-gonal truncated trapezohedron is a polyhedron formed by an n-gonal trapezohedron with n-gonal pyramids truncated from its two polar-axis vertices. If the polar vertices are completely truncated (diminished), a trapezohedron becomes an antiprism.
The vertices exist as 4 sets of n in four parallel planes, with alternating orientation in the middle creating the pentagons.
The regular dodecahedron is the most common polyhedron in this class, being a Platonic solid, with 12 congruent pentagonal faces.
A truncated trapezohedron has 3 faces meeting at every vertex. This means that the dual polyhedra, the set of gyroelongated dipyramids, have all triangular faces. For example, the icosahedron is the dual of the dodecahedron.
Forms
Triangular truncated trapezohedron (Dürer's solid) – 6 pentagons, 2 triangles, dual gyroelongated triangular dipyramid
Truncated square trapezohedron – 8 pentagons, 2 squares, dual gyroelongated square dipyramid
Truncated pentagonal trapezohedron or regular dodecahedron – 12 pentagonal faces, dual icosahedron
Truncated hexagonal trapezohedron – 12 pentagons, 2 hexagons, dual gyroelongated hexagonal dipyramid
...
Truncated n-gonal trapezohedron – 2n pentagons, 2 n-gons, dual gyroelongated dipyramids
See also
Diminished trapezohedron
External links
Conway Notation for Polyhedra Try: "tndAn", where n=4,5,6... example "t5dA5" is a dodecahedron.
Polyhedra
Truncated tilings
|
https://en.wikipedia.org/wiki/Distributed%20parameter%20system
|
In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.
Linear time-invariant distributed-parameter systems
Abstract evolution equations
Discrete-time
With U, X and Y Hilbert spaces and A ∈ L(X), B ∈ L(U, X), C ∈ L(X, Y) and D ∈ L(U, Y), the following difference equations determine a discrete-time linear time-invariant system:
x(k + 1) = A x(k) + B u(k),
y(k) = C x(k) + D u(k),
with x (the state) a sequence with values in X, u (the input or control) a sequence with values in U and y (the output) a sequence with values in Y.
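For intuition, the following finite-dimensional NumPy sketch iterates these difference equations; the matrices are arbitrary illustrative choices standing in for the abstract operators A, B, C and D, which in the distributed-parameter setting act on infinite-dimensional spaces:
```python
import numpy as np

A = np.array([[0.9, 0.1], [0.0, 0.8]])  # A in L(X)
B = np.array([[0.0], [1.0]])            # B in L(U, X)
C = np.array([[1.0, 0.0]])              # C in L(X, Y)
D = np.array([[0.0]])                   # D in L(U, Y)

x = np.zeros((2, 1))                    # initial state x(0)
for k in range(5):
    u = np.array([[1.0]])               # input u(k): a constant sequence
    y = C @ x + D @ u                   # output equation  y(k) = C x(k) + D u(k)
    x = A @ x + B @ u                   # state update     x(k+1) = A x(k) + B u(k)
    print(k, y.item())
```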
Continuous-time
The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:
ẋ(t) = A x(t) + B u(t),
y(t) = C x(t) + D u(t).
An added complication now however is that to include interesting physical examples such as partial differential equations and delay differential equations into this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators then already allows for the inclusion of many interesting physical examples, but the inclusion of many other interesting physical examples forces unboundedness of B and C as well.
Example: a partial differential equation
The partial differential equation with and given by
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
Example: a delay differential equation
The delay differential equation
fits into th
|
https://en.wikipedia.org/wiki/Service%20Assurance%20Agent
|
IP SLA (Internet Protocol Service Level Agreement) is an active computer network measurement technology that was initially developed by Cisco Systems. IP SLA was previously known as Service Assurance Agent (SAA) or Response Time Reporter (RTR). IP SLA is used to track network performance metrics such as latency, ping response time, and jitter, and it also helps to assure service quality.
Functions
Routers and switches enabled with IP SLA perform periodic network tests or measurements such as
Hypertext Transfer Protocol (HTTP) GET
File Transfer Protocol (FTP) downloads
Domain Name System (DNS) lookups
User Datagram Protocol (UDP) echo, for VoIP jitter and mean opinion score (MOS)
Data-Link Switching (DLSw) (Systems Network Architecture (SNA) tunneling protocol)
Dynamic Host Configuration Protocol (DHCP) lease requests
Transmission Control Protocol (TCP) connect
Internet Control Message Protocol (ICMP) echo (remote ping)
The exact number and types of available measurements depends on the IOS version. IP SLA is very widely used in service provider networks to generate time-based performance data. It is also used together with Simple Network Management Protocol (SNMP) and NetFlow, which generate volume-based data.
Usage considerations
For IP SLA tests, devices with IP SLA support are required. IP SLA is supported on Cisco routers and switches since IOS version 12.1. Other vendors like Juniper Networks or Enterasys Networks support IP SLA on some of their devices.
IP SLA tests and data collection can be configured either via a console (command-line interface) or via SNMP.
When using SNMP, both read and write community strings are needed.
The IP SLA voice quality feature was added starting with IOS version 12.3(4)T. All versions after this, including 12.4 mainline, contain the MOS and ICPIF voice quality calculation for the UDP jitter measurement.
References
Computer networks
|
https://en.wikipedia.org/wiki/NOR%20gate
|
The NOR gate is a digital logic gate that implements logical NOR; it behaves according to the truth table to the right. A HIGH output (1) results if both the inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW output (0) results. NOR is the result of the negation of the OR operator. It can also in some senses be seen as the inverse of an AND gate. NOR is a functionally complete operation: NOR gates can be combined to generate any other logical function. It shares this property with the NAND gate. By contrast, the OR operator is monotonic as it can only change LOW to HIGH but not vice versa.
In most, but not all, circuit implementations, the negation comes for free—including CMOS and TTL. In such logic families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is some forms of the domino logic family.
Symbols
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.
The bubble indicates that the function of the OR gate has been inverted.
Hardware description and pinout
NOR Gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs. The standard, 4000 series, CMOS IC is the 4001, which includes four independent, two-input, NOR gates. The pinout diagram is as follows:
Availability
These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or Texas Instruments. These are usually available in both through-hole DIP and SOIC format. Datasheets are readily available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:
CMOS
4001: Quad 2-input NOR gate
4025: Triple 3-input NOR gate
4002: Dual 4-input NOR gate
4078: Single
|
https://en.wikipedia.org/wiki/Hauppauge%20MediaMVP
|
The Hauppauge MediaMVP is a network media player. It consists of a hardware unit with remote control, along with software for a Windows PC. Out of the box, it is capable of playing video and audio, displaying pictures, and "tuning in" to Internet radio stations. Alternative software is also available to extend its capabilities. It can be used as a front-end for various PVR projects.
The MediaMVP is popular with some PVR enthusiasts because it is inexpensive and relatively easy to modify.
Capabilities
The MediaMVP can stream audio and video content from a host PC running Windows. It can display photos stored on the host PC. It can stream Internet radio via the host PC as well. It can also display live TV with full PVR features when used with SageTV PVR software for Windows or Linux.
The capabilities listed below refer to the official software and firmware supplied by Hauppauge.
Video
The MediaMVP supports the MPEG (MPEG-1 and MPEG-2) video format (and only that format). However, depending on the MediaMVP host software running on the host computer, the host software may be able to seamlessly transcode other video file formats before sending them to the MediaMVP in the MPEG format. The maximum un-transcoded playable video size is SDTV (480i). HDTV mpeg streams (e.g. 720p) need to be transcoded in real-time on the computer to SD format. Note: transcoding video can tax some slower computers.
With a hardware MPEG decoder as part of its PowerPC processor, it renders moving video images more smoothly than many software PVR implementations.
Audio
Supported audio file formats include MP3 and WMA. Playlist formats supported include M3U, PLS, ASX and B4S.
See also Internet radio below.
Photos
Supported image file formats include JPG and GIF.
Slideshows are supported. Listening to music (including streaming Internet radio) during slideshows is supported as well.
Internet radio
Supports streaming Internet radio stations via the host PC.
Other capabilities
Can schedule recordi
|
https://en.wikipedia.org/wiki/Triangular%20function
|
A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window.
Definitions
The most common definition is as a piecewise function: tri(x) = 1 − |x| for |x| < 1, and tri(x) = 0 otherwise.
Equivalently, it may be defined as the convolution of two identical unit rectangular functions: tri(x) = rect(x) ∗ rect(x).
The triangular function can also be represented as the product of the rectangular and absolute value functions: tri(x) = rect(x/2)·(1 − |x|).
Note that some authors instead define the triangle function to have a base of width 1 instead of width 2; in that case it equals 1 − 2|x| for |x| < 1/2 and 0 otherwise.
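A minimal NumPy sketch of the common width-2 definition above (an illustration added here, not part of the article):
```python
import numpy as np

def tri(x):
    """Unit triangular function: height 1, base 2 (nonzero only for |x| < 1)."""
    x = np.asarray(x, dtype=float)
    return np.maximum(1.0 - np.abs(x), 0.0)

print(tri([-1.5, -1.0, -0.5, 0.0, 0.5, 1.0, 1.5]))
# [0.   0.   0.5  1.   0.5  0.   0. ]
```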
In its most general form a triangular function is any linear B-spline:
Whereas the definition at the top is a special case
where , , and .
A linear B-spline is the same as a continuous piecewise linear function , and this general triangle function is useful to formally define as
where for all integer .
The piecewise linear function passes through every point expressed as coordinates with ordered pair , that is,
.
Scaling
For any parameter :
Fourier transform
The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function: the transform of the unit triangular function is sinc²(f), where sinc(f) = sin(πf)/(πf) is the normalized sinc function.
See also
Källén function, also known as triangle function
Tent map
Triangular distribution
Triangle wave, a piecewise linear periodic function
|
https://en.wikipedia.org/wiki/Right-to-left%20script
|
In a right-to-left, top-to-bottom script (commonly shortened to right to left or abbreviated RTL, RL-TB or R2L), writing starts from the right of the page and continues backwards to the left, proceeding from top to bottom for new lines. Arabic, Hebrew, Persian, Urdu, Kashmiri, Pashto, Uighur, Sorani Kurdish, and Sindhi are the most widespread RTL writing systems in modern times.
Right-to-left can also refer to top-to-bottom, right-to-left (TB-RL or vertical) scripts of East Asian tradition, such as Chinese, Japanese, and Korean, though in modern times they are also commonly written horizontally, left to right (with lines going from top to bottom). Books designed for predominantly vertical TBRL text open in the same direction as those for RTL horizontal text: the spine is on the right and pages are numbered from right to left.
These scripts can be contrasted with many common modern writing systems, where writing starts from the left of the page and continues to the right.
The Arabic script is mostly but not exclusively right-to-left; mathematical expressions, numeric dates and numbers bearing units are embedded from left to right.
Uses
Hebrew, Arabic, and Persian are the most widespread RTL writing systems in modern times. As usage of the Arabic script spread, the repertoire of 28 characters used to write the Arabic language was supplemented to accommodate the sounds of many other languages such as Kashmiri, Pashto, etc. While the Hebrew alphabet is used to write the Hebrew language, it is also used to write other Jewish languages such as Yiddish and Judaeo-Spanish.
Syriac and Mandaean (Mandaic) scripts are derived from Aramaic and are written RTL. Samaritan is similar, but developed from Proto-Hebrew rather than Aramaic. Many other ancient and historic scripts derived from Aramaic inherited its right-to-left direction.
Several languages have both Arabic RTL and non-Arabic LTR writing systems. For example, Sindhi is commonly written in Arabic and Devanagari scripts, and a number of others have been used. Kurdish may be written in the Arabic or Latin s
|
https://en.wikipedia.org/wiki/Lithotroph
|
Lithotrophs are a diverse group of organisms using an inorganic substrate (usually of mineral origin) to obtain reducing equivalents for use in biosynthesis (e.g., carbon dioxide fixation) or energy conservation (i.e., ATP production) via aerobic or anaerobic respiration. While lithotrophs in the broader sense include photolithotrophs like plants, chemolithotrophs are exclusively microorganisms; no known macrofauna possesses the ability to use inorganic compounds as electron sources. Macrofauna and lithotrophs can form symbiotic relationships, in which case the lithotrophs are called "prokaryotic symbionts". An example of this is chemolithotrophic bacteria in giant tube worms or plastids, which are organelles within plant cells that may have evolved from photolithotrophic cyanobacteria-like organisms. Chemolithotrophs belong to the domains Bacteria and Archaea. The term "lithotroph" was created from the Greek terms 'lithos' (rock) and 'troph' (consumer), meaning "eaters of rock". Many but not all lithoautotrophs are extremophiles.
The last universal common ancestor of life is thought to be a chemolithotroph (due to its presence in the prokaryotes). Different from a lithotroph is an organotroph, an organism which obtains its reducing agents from the catabolism of organic compounds.
History
The term was suggested in 1946 by Lwoff and collaborators.
Biochemistry
Lithotrophs consume reduced inorganic compounds (electron donors).
Chemolithotrophs
A chemolithotroph is able to use inorganic reduced compounds in its energy-producing reactions. This process involves the oxidation of inorganic compounds coupled to ATP synthesis. The majority of chemolithotrophs are chemolithoautotrophs, able to fix carbon dioxide (CO2) through the Calvin cycle, a metabolic pathway in which CO2 is converted to glucose. This group of organisms includes sulfur oxidizers, nitrifying bacteria, iron oxidizers, and hydrogen oxidizers.
The term "chemolithotrophy" refers to a cell's acquisitio
|
https://en.wikipedia.org/wiki/Organotroph
|
An organotroph is an organism that obtains hydrogen or electrons from organic substrates. This term is used in microbiology to classify and describe organisms based on how they obtain electrons for their respiration processes. Some organotrophs, such as animals and many bacteria, are also heterotrophs. Organotrophs can be either anaerobic or aerobic.
Antonym: lithotroph. Adjective: organotrophic.
History
The term was suggested in 1946 by Lwoff and collaborators.
See also
Autotroph
Chemoorganotroph
Primary nutritional groups
References
Michael Allaby. "organotroph." A Dictionary of Zoology. 1999, Retrieved 2012-03-30 from Encyclopedia.com: http://www.encyclopedia.com/doc/1O8-organotroph.html
The Prokaryotes - A Handbook on the Biology of Bacteria 3rd Ed., Vol 1, CHAPTER 1.4, Prokaryote Characterization and Identification 7, Retrieved from https://www.scribd.com/doc/9724380/1The-Prokaryotes-A-Handbook-on-the-Biology-of-Bacteria-3rd-Ed-Vol-1
Respiration in aquatic ecosystems Paul A. Del Giorgio, Peter J. leB. Williams, Science, 2005, Retrieved 2012-04-24 from https://books.google.com/books?id=pD5RUDW1m7IC&pg=PP1
External links
Hydrogen biology
Molecular biology
Biochemistry
|
https://en.wikipedia.org/wiki/Van%20der%20Waerden%20notation
|
In theoretical physics, Van der Waerden notation refers to the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. This is standard in twistor theory and supersymmetry. It is named after Bartel Leendert van der Waerden.
Dotted indices
Undotted indices (chiral indices)
Spinors with lower undotted indices have a left-handed chirality, and are called chiral indices.
Dotted indices (anti-chiral indices)
Spinors with raised dotted indices, plus an overbar on the symbol (not index), are right-handed, and called anti-chiral indices.
Without the indices, i.e. "index free notation", an overbar is retained on right-handed spinor, since ambiguity arises between chirality when no index is indicated.
Hatted indices
Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if
then a spinor in the chiral basis is represented as
where
In this notation the Dirac adjoint (also called the Dirac conjugate) is
See also
Dirac equation
Infeld–Van der Waerden symbols
Lorentz transformation
Pauli equation
Ricci calculus
Notes
References
Spinors in physics
Spinors
Mathematical notation
|
https://en.wikipedia.org/wiki/Incomplete%20markets
|
In economics, incomplete markets are markets in which there does not exist an Arrow–Debreu security for every possible state of nature. In contrast with complete markets, this shortage of securities will likely restrict individuals from transferring the desired level of wealth among states.
An Arrow security purchased or sold at date t is a contract promising to deliver one unit of income in one of the possible contingencies which can occur at date t + 1. If at each date-event there exists a complete set of such contracts, one for each contingency that can occur at the following date, individuals will trade these contracts in order to insure against future risks, targeting a desirable and budget-feasible level of consumption in each state (i.e. consumption smoothing). In most setups, when these contracts are not available, optimal risk sharing between agents will not be possible. In this scenario, agents (homeowners, workers, firms, investors, etc.) will lack the instruments to insure against future risks such as employment status, health, labor income, and prices, among others.
Markets, securities and market incompleteness
In a competitive market, each agent makes intertemporal choices in a stochastic environment. Their attitudes toward risk, the production possibility set, and the set of available trades determine the equilibrium quantities and prices of assets that are traded. In an "idealized" representation agents are assumed to have costless contractual enforcement and perfect knowledge of future states and their likelihood. With a complete set of state contingent claims (also known as Arrow–Debreu securities) agents can trade these securities to hedge against undesirable or bad outcomes.
When a market is incomplete, it typically fails to make the optimal allocation of assets. That is, the First Welfare Theorem no longer holds. The competitive equilibrium in an Incomplete Market is generally constrained suboptimal. The notion of constrained suboptimality wa
|
https://en.wikipedia.org/wiki/International%20Safe%20Harbor%20Privacy%20Principles
|
The International Safe Harbor Privacy Principles or Safe Harbour Privacy Principles were principles developed between 1998 and 2000 in order to prevent private organizations within the European Union or United States which store customer data from accidentally disclosing or losing personal information. They enabled some US companies to comply with privacy laws protecting European Union and Swiss citizens, and were overturned on October 6, 2015, by the European Court of Justice (ECJ). US companies storing customer data could self-certify that they adhered to 7 principles, to comply with the EU Data Protection Directive and with Swiss requirements. The US Department of Commerce developed privacy frameworks in conjunction with both the European Union and the Federal Data Protection and Information Commissioner of Switzerland.
Within the context of a series of decisions on the adequacy of the protection of personal data transferred to other countries, the European Commission made a decision in 2000 that the United States' principles did comply with the EU Directive – the so-called "Safe Harbour decision". However, after a customer complained that his Facebook data were insufficiently protected, the ECJ declared in October 2015 that the Safe Harbour decision was invalid, leading to further talks being held by the Commission with the US authorities towards "a renewed and sound framework for transatlantic data flows".
The European Commission and the United States agreed to establish a new framework for transatlantic data flows on 2 February 2016, known as the "EU–US Privacy Shield", which was closely followed by the Swiss-US Privacy Shield Framework.
Background history
In 1980, the OECD issued recommendations for protection of personal data in the form of eight principles. These were non-binding and in 1995, the European Union (EU) enacted a more binding form of governance, i.e. legislation, to protect personal data privacy in the form of the Data Protection Directive
|
https://en.wikipedia.org/wiki/Restaurant%20rating
|
Restaurant ratings identify restaurants according to their quality, using notations such as stars or other symbols, or numbers. Stars are a familiar and popular symbol, with scales of one to three or five stars commonly used. Ratings appear in guide books as well as in the media, typically in newspapers, lifestyle magazines and webzines. Websites featuring consumer-written reviews and ratings are increasingly popular, but are far less reliable.
In addition, there are ratings given by public health agencies rating the level of sanitation practiced by an establishment.
Restaurant guides
One of the best known guides is the Michelin series which award one to three stars to restaurants they perceive to be of high culinary merit. One star indicates a "very good restaurant"; two stars indicate a place "worth a detour"; three stars means "exceptional cuisine, worth a special journey".
Several bigger newspapers employ restaurant critics and publish online dining guides for the cities they serve, such as the Irish Independent for Irish restaurants.
List of notable restaurant guides
Europe (original working area)
The Americas
Asia
Internet restaurant review sites have empowered regular people to generate non-expert reviews. This has sparked criticism from restaurant establishments about the non-editorial, non-professional critiques. Those reviews can be falsified or faked.
Rating criteria
The different guides have their own criteria. Not every guide looks behind the scenes or at decorum. Others look particularly sharply at value for money. This is why a restaurant can be missing in one guide while mentioned in another. Because the guides work independently, it is possible to have simultaneous multiple recognitions.
Ratings impact
A top restaurant rating can mean success or failure for a restaurant, particularly when bestowed by an influential source such as Michelin. Still, a good rating is not enough for economic success and many Michelin starred and/or highly ra
|
https://en.wikipedia.org/wiki/Intel%20HEX
|
Intel hexadecimal object file format, Intel hex format or Intellec Hex is a file format that conveys binary information in ASCII text form, making it possible to store on non-binary media such as paper tape, punch cards, etc., to display on text terminals or be printed on line-oriented printers. The format is commonly used for programming microcontrollers, EPROMs, and other types of programmable logic devices and hardware emulators. In a typical application, a compiler or assembler converts a program's source code (such as in C or assembly language) to machine code and outputs it into a HEX file. Some also use it as a container format holding packets of stream data. Common file extensions used for the resulting files are .HEX or .H86. The HEX file is then read by a programmer to write the machine code into a PROM or is transferred to the target system for loading and execution.
History
The Intel hex format was originally designed for Intel's Intellec Microcomputer Development Systems (MDS) in 1973 in order to load and execute programs from paper tape. It was also used to specify memory contents to Intel for ROM production, which previously had to be encoded in the much less efficient BNPF (Begin-Negative-Positive-Finish) format. In 1973, Intel's "software group" consisted only of Bill Byerly and Ken Burget, and Gary Kildall as an external consultant doing business as Microcomputer Applications Associates (MAA) and founding Digital Research in 1974. Beginning in 1975, the format was utilized by Intellec Series II ISIS-II systems supporting diskette drives, with files using the file extension HEX. Many PROM and EPROM programming devices accept this format.
Format
Intel HEX consists of lines of ASCII text that are separated by line feed or carriage return characters or both. Each text line contains uppercase hexadecimal characters that encode multiple binary numbers. The binary numbers may represent data, memory addresses, or other values, depending on their position
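As a hedged sketch of how such a record is structured (start code, byte count, address, record type, data, checksum), the following Python function parses one record and verifies its checksum; the function name is arbitrary and the sample record is a commonly quoted example rather than output of any particular tool:
```python
def parse_ihex_record(line: str) -> dict:
    """Parse a single Intel HEX record such as ':0B0010006164647265737320676170A7'."""
    if not line.startswith(":"):
        raise ValueError("Intel HEX records start with a colon")
    raw = bytes.fromhex(line[1:].strip())
    count = raw[0]                             # number of data bytes
    address = int.from_bytes(raw[1:3], "big")  # 16-bit load address
    rectype = raw[3]                           # record type (00 = data, 01 = end of file, ...)
    data, checksum = raw[4:4 + count], raw[4 + count]
    # The checksum is the two's complement of the low byte of the sum of all preceding bytes.
    if (sum(raw[:4 + count]) + checksum) & 0xFF != 0:
        raise ValueError("checksum mismatch")
    return {"count": count, "address": address, "type": rectype, "data": data}

print(parse_ihex_record(":0B0010006164647265737320676170A7"))
# {'count': 11, 'address': 16, 'type': 0, 'data': b'address gap'}
```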
|
https://en.wikipedia.org/wiki/Rose%20%28topology%29
|
In mathematics, a rose (also known as a bouquet of n circles) is a topological space obtained by gluing together a collection of circles along a single point. The circles of the rose are called petals. Roses are important in algebraic topology, where they are closely related to free groups.
Definition
A rose is a wedge sum of circles. That is, the rose is the quotient space C/S, where C is a disjoint union of circles and S a set consisting of one point from each circle. As a cell complex, a rose has a single vertex, and one edge for each circle. This makes it a simple example of a topological graph.
A rose with n petals can also be obtained by identifying n points on a single circle. The rose with two petals is known as the figure eight.
Relation to free groups
The fundamental group of a rose is free, with one generator for each petal. The universal cover is an infinite tree, which can be identified with the Cayley graph of the free group. (This is a special case of the presentation complex associated to any presentation of a group.)
The intermediate covers of the rose correspond to subgroups of the free group. The observation that any cover of a rose is a graph provides a simple proof that every subgroup of a free group is free (the Nielsen–Schreier theorem).
Because the universal cover of a rose is contractible, the rose is actually an Eilenberg–MacLane space for the associated free group F. This implies that the cohomology groups Hn(F) are trivial for n ≥ 2.
Other properties
Any connected graph is homotopy equivalent to a rose. Specifically, the rose is the quotient space of the graph obtained by collapsing a spanning tree.
A disc with n points removed (or a sphere with n + 1 points removed) deformation retracts onto a rose with n petals. One petal of the rose surrounds each of the removed points.
A torus with one point removed deformation retracts onto a figure eight, namely the union of two generating circles. More generally, a surface
|
https://en.wikipedia.org/wiki/Ensoniq%20ES-5506%20OTTO
|
The Ensoniq ES-5506 "OTTO" is a chip used in implementations of sample-based synthesis. Musical instruments and IBM PC compatible sound cards were the most popular applications.
OTTO is capable of altering the pitch and timbre of a digital recording and is capable of operating with up to 32 channels at once. Each channel can have several parameters altered, such as pitch, volume, waveform, and filtering. The chip is a VLSI device designed to be manufactured on a 1.5 micrometre double-metal CMOS process. It consists of approximately 80,000 transistors. It was part of the fourth generation of Ensoniq audio technology.
Major features
Real-time digital filters
Frequency interpolation
32 independent voices
Loop start and stop positions for each voice (bidirectional and reverse looping)
Motorola 68000 compatibility for asynchronous bus communication
Separate host and sound memory interface
At least 18-bit accuracy
6-channel stereo serial communication port
Programmable clocks for defining a serial protocol
Internal volume multiplication and stereo panning
ADC input for pots and wheels
Hardware support for envelopes
Support for dual OTTO systems
Optional compressed data format for sample data
Up to 16 MHz operation
Implementations
Taito Cybercore/F3 System
Seta SSV System
Ensoniq TS10/TS12 Synthesizers
Ensoniq Soundscape S-2000
Ensoniq Soundscape Elite
Ensoniq SoundscapeDB daughterboard
Gravis Ultrasound Gravis GF1 chip (Ensoniq based)
Westacott Organs DRE (Digital Rank Emulator)
QRS Pianomation Chili and Sonata MIDI players 1998
Boom Theory Corp 0.0 Drum Module Interface
References
Sound chips
Sound cards
|
https://en.wikipedia.org/wiki/Parametric%20stereo
|
Parametric stereo (abbreviated as PS) is an audio compression algorithm used as an audio coding format for digital audio. It is considered an Audio Object Type of MPEG-4 Part 3 (MPEG-4 Audio) that serves to enhance the coding efficiency of low bandwidth stereo audio media. Parametric Stereo digitally codes a stereo audio signal by storing the audio as monaural alongside a small amount of extra information. This extra information (defined as "parametric overhead") describes how the monaural signal will behave across both stereo channels, which allows for the signal to exist in true stereo upon playback.
History
Background
Advanced Audio Coding Low Complexity (AAC LC) combined with Spectral Band Replication (SBR) and Parametric Stereo (PS) was defined as HE-AAC v2. An HE-AAC v1 decoder will only give mono sound when decoding an HE-AAC v2 bitstream. Parametric Stereo performs sparse coding in the spatial domain, somewhat similar to what SBR does in the frequency domain. An AAC HE v2 bitstream is obtained by downmixing the stereo audio to mono at the encoder along with 2–3 kbit/s of side info (the Parametric Stereo information) in order to describe the spatial intensity stereo generation and ambience regeneration at the decoder. By having the Parametric Stereo side info along with the mono audio stream, the decoder (player) can regenerate a faithful spatial approximation of the original stereo panorama at very low bitrates. Because only one audio channel is transmitted, along with the parametric side info, a 24 kbit/s coded audio signal with Parametric Stereo will be substantially improved in quality relative to discrete stereo audio signals encoded with conventional means. The additional bitrate spent on the single mono channel (combined with some PS side info) will substantially improve the perceived quality of the audio compared to a standard stereo stream at similar bitrate. However, this technique is only useful at the lowest bitrates (approx. 16–48 kbit/s and
|
https://en.wikipedia.org/wiki/Mathers%20table
|
The Mathers table of Hebrew and "Chaldee" (Aramaic) letters is a tabular display of the pronunciation, appearance, numerical values, transliteration, names, and symbolism of the twenty-two letters of the Hebrew alphabet appearing in The Kabbalah Unveiled, S.L. MacGregor Mathers' late 19th century English translation of Kabbala Denudata, itself a Latin translation by Christian Knorr von Rosenroth of the Zohar, a primary Kabbalistic text.
This table has been used as a primary reference for a basic understanding of the Hebrew alphabet as it applies to the Kabbalah, generally outside of traditional Jewish mysticism, by many modern Hermeticists and students of the occult, including members of the Hermetic Order of the Golden Dawn and other magical organizations deriving from it. It has been reproduced and adapted in many books published from the early 20th century to the present.
See also
Gematria
Hebrew language
Kabbalah
Mysticism
Notaricon
References
Books that reproduce the Mathers table either substantially or exactly:
External links
The Kabbalah Unveiled at www.sacred-texts.com
Hebrew alphabet
Hermetic Qabalah
Numerology
|
https://en.wikipedia.org/wiki/Integrated%20Encryption%20Scheme
|
Integrated Encryption Scheme (IES) is a hybrid encryption scheme which provides semantic security against an adversary who is able to use chosen-plaintext or chosen-ciphertext attacks. The security of the scheme is based on the computational Diffie–Hellman problem.
Two variants of IES are specified: Discrete Logarithm Integrated Encryption Scheme (DLIES) and Elliptic Curve Integrated Encryption Scheme (ECIES), which is also known as the Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme. These two variants are identical up to the change of an underlying group.
Informal description of DLIES
The following is a brief, informal description and overview of how IES works, using a Discrete Logarithm Integrated Encryption Scheme (DLIES) and focusing on the reader's understanding rather than precise technical details.
Alice learns Bob's public key g^x through a public key infrastructure or some other distribution method. Bob knows his own private key x.
Alice generates a fresh, ephemeral value y, and its associated public value g^y.
Alice then computes a symmetric key k using this information and a key derivation function (KDF) as follows: k = KDF(g^(xy)).
Alice computes her ciphertext c from her actual message m (by symmetric encryption of m) encrypted with the key k (using an authenticated encryption scheme) as follows: c = E(k; m).
Alice transmits (in a single message) both the public ephemeral g^y and the ciphertext c.
Bob, knowing x and g^y, can now compute k = KDF((g^y)^x) = KDF(g^(xy)) and decrypt m from c.
Note that the scheme does not provide Bob with any assurance as to who really sent the message: This scheme does nothing to stop anyone from pretending to be Alice.
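A minimal sketch of this flow in Python is shown below, using X25519 for the Diffie–Hellman step, HKDF as the key derivation function and AES-GCM as the authenticated symmetric cipher; these concrete primitives, and the pyca/cryptography library, are choices made for the example rather than part of the scheme's definition:
```python
import os
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric.x25519 import X25519PrivateKey
from cryptography.hazmat.primitives.kdf.hkdf import HKDF
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

# Bob's long-term key pair; Alice is assumed to already know bob_public.
bob_private = X25519PrivateKey.generate()
bob_public = bob_private.public_key()

# Alice: fresh ephemeral key pair, shared secret, and symmetric key via a KDF.
eph_private = X25519PrivateKey.generate()
eph_public = eph_private.public_key()
shared = eph_private.exchange(bob_public)
key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"ies-demo").derive(shared)

# Alice: authenticated symmetric encryption; she sends (eph_public, nonce, ciphertext).
nonce = os.urandom(12)
ciphertext = AESGCM(key).encrypt(nonce, b"attack at dawn", None)

# Bob: recompute the same key from his private key and Alice's ephemeral public value, then decrypt.
key_bob = HKDF(algorithm=hashes.SHA256(), length=32, salt=None, info=b"ies-demo").derive(
    bob_private.exchange(eph_public))
print(AESGCM(key_bob).decrypt(nonce, ciphertext, None))  # b'attack at dawn'
```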
Formal description of ECIES
Required information
To send an encrypted message to Bob using ECIES, Alice needs the following information:
The cryptography suite to be used, including a key derivation function (e.g., ANSI-X9.63-KDF with SHA-1 option), a message authentication code (e.g., HMAC-SHA-1-160 with 160-bit keys or HMAC-SHA-1
|
https://en.wikipedia.org/wiki/Discontinuous%20linear%20map
|
In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example.
A linear map from a finite-dimensional space is always continuous
Let X and Y be two normed spaces and f a linear map from X to Y. If X is finite-dimensional, choose a basis (e_1, e_2, …, e_n) in X which may be taken to be unit vectors. Then,
f(x) = Σ_{i=1}^{n} x_i f(e_i),
and so by the triangle inequality,
‖f(x)‖ ≤ Σ_{i=1}^{n} |x_i| ‖f(e_i)‖.
Letting
M = sup_i ‖f(e_i)‖,
and using the fact that
Σ_{i=1}^{n} |x_i| ≤ C ‖x‖
for some C > 0, which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds
‖f(x)‖ ≤ C M ‖x‖.
Thus, f is a bounded linear operator and so is continuous. In fact, to see this, simply note that f is linear, and therefore ‖f(x) − f(x′)‖ = ‖f(x − x′)‖ ≤ K ‖x − x′‖ for some universal constant K. Thus for any ε > 0 we can choose δ ≤ ε/K so that f(B(x, δ)) ⊆ B(f(x), ε) (here B(x, δ) and B(f(x), ε) are the normed balls around x and f(x)), which gives continuity.
If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y.
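Before the concrete example in the next section, the following short sketch illustrates how easily an unbounded, hence discontinuous, linear map can be written down on an incomplete normed space; the specific space and functional here are an illustrative choice, not the construction the article itself develops.

Let $c_{00}$ denote the space of finitely supported real sequences with the norm
$\|x\|_\infty = \max_n |x_n|$, and define the linear functional
\[
  T(x) = \sum_{n \ge 1} n \, x_n ,
\]
which is a finite sum for every $x \in c_{00}$. For the unit vectors $e_n$ one has
$\|e_n\|_\infty = 1$ but $|T(e_n)| = n$, so $\sup_{\|x\|_\infty \le 1} |T(x)| = \infty$;
$T$ is unbounded and therefore discontinuous. No appeal to the axiom of choice is needed,
because $c_{00}$ is not complete in this norm.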
A concrete example
Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence of linearly independent vectors which does not have a limit, there is a linear operator such
|
https://en.wikipedia.org/wiki/SWR%20meter
|
The standing wave ratio meter, SWR meter, ISWR meter (current "I" SWR), or VSWR meter (voltage SWR) measures the standing wave ratio (SWR) in a transmission line. The meter indirectly measures the degree of mismatch between a transmission line and its load (usually an antenna). Electronics technicians use it to adjust radio transmitters and their antennas and feedlines to be impedance matched so they work together properly, and evaluate the effectiveness of other impedance matching efforts.
Directional SWR meter
A directional SWR meter measures the magnitude of the forward and reflected waves by sensing each one individually, with directional couplers. A calculation then produces the SWR.
Referring to the above diagram, the transmitter (TX) and antenna (ANT) terminals connect via an internal transmission line. This main line is electromagnetically coupled to two smaller sense lines (directional couplers). These are terminated with resistors at one end and diode rectifiers at the other. Some meters use a printed circuit board with three parallel traces to make the transmission line and two sensing lines. The resistors match the characteristic impedance of the sense lines. The diodes convert the magnitudes of the forward and reverse waves to the terminals FWD and REV, respectively, as DC voltages, which are smoothed by the capacitors. The meter or amplifier (not shown) connected to the FWD and REV terminals acts as the required drain resistor, and determines the dwell-time of the meter reading.
To calculate the SWR, first calculate the reflection coefficient:

Γ = V_rev / V_fwd

(the voltages should include a relative phase factor). Then calculate the SWR:

SWR = (1 + |Γ|) / (1 − |Γ|).
In a passive meter, this is usually indicated on a non-linear scale.
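As a numeric illustration of the two formulas above (the function name and the example readings are illustrative; a real meter would read calibrated voltages at the FWD and REV terminals):

def swr_from_voltages(v_fwd: float, v_rev: float) -> float:
    """Compute VSWR from forward and reflected voltage magnitudes."""
    if v_fwd <= 0:
        raise ValueError("forward voltage must be positive")
    gamma = abs(v_rev) / v_fwd          # magnitude of the reflection coefficient
    if gamma >= 1.0:
        return float("inf")             # total reflection (or a measurement error)
    return (1 + gamma) / (1 - gamma)

# Example: a reflected reading one third of the forward reading gives SWR = 2.0
print(swr_from_voltages(3.0, 1.0))      # -> 2.0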
Radio operators' SWR meters
For decades radio operators have built and used SWR meters as a simple tuning and diagnostic tool. With shielding compromised, a pair of coax or twin line transmission lines, placed close enough, suffer crosstalk. A wave moving in t
|
https://en.wikipedia.org/wiki/Metaman
|
Metaman: The Merging of Humans and Machines into a Global Superorganism is a 1993 book by author Gregory Stock. The title refers to a superorganism comprising humanity and its technology.
While many people have had ideas about a global brain, they have tended to suppose that this can be improved or altered by humans according to their will. Metaman can be seen as a development that directs humanity's will to its own ends, whether it likes it or not, through the operation of market forces. While it is difficult to think of making a life-form based on metals that can mine its own 'food', it is possible to imagine a superorganism that incorporates humans as its "cells" and entices them to sustain it (communalness), just as our cells interwork to sustain us.
External links
Review of Metaman, by Hans Moravec
Review of Metaman, by Patric Hedlund
Systems theory
Cybernetics
Superorganisms
Futurology books
1993 non-fiction books
|
https://en.wikipedia.org/wiki/Phycobiliprotein
|
Phycobiliproteins are water-soluble proteins present in cyanobacteria and certain algae (rhodophytes, cryptomonads, glaucocystophytes). They capture light energy, which is then passed on to chlorophylls during photosynthesis. Phycobiliproteins are formed of a complex between proteins and covalently bound phycobilins that act as chromophores (the light-capturing part). They are the most important constituents of the phycobilisomes.
Major phycobiliproteins
Characteristics
Phycobiliproteins demonstrate superior fluorescent properties compared to small organic fluorophores, especially when high sensitivity or multicolor detection is required:
Broad and high absorption of light suits many light sources
Very intense emission of light: 10-20 times brighter than small organic fluorophores
Relatively large Stokes shift gives low background and allows multicolor detection.
Excitation and emission spectra show little overlap, in contrast to conventional organic dyes.
Can be used in tandem (simultaneous use by FRET) with conventional chromophores (i.e. PE and FITC, or APC and SR101 with the same light source).
Longer fluorescence retention period.
High water solubility
Applications
Phycobiliproteins allow very high detection sensitivity and can be used in various fluorescence-based techniques, such as fluorimetric microplate assays, FISH and multicolor detection.
They are under development for use in artificial photosynthesis, limited by the relatively low conversion efficiency of 4-5%.
References
Photosynthetic pigments
Cyanobacteria proteins
Algae
Bacterial proteins
|
https://en.wikipedia.org/wiki/Lipoteichoic%20acid
|
Lipoteichoic acid (LTA) is a major constituent of the cell wall of gram-positive bacteria. These organisms have an inner (or cytoplasmic) membrane and, external to it, a thick (up to 80 nanometer) peptidoglycan layer. The structure of LTA varies between the different species of Gram-positive bacteria and may contain long chains of ribitol or glycerol phosphate. LTA is anchored to the cell membrane via a diacylglycerol. It acts as a regulator of autolytic wall enzymes (muramidases). It has antigenic properties, being able to stimulate a specific immune response.
LTA may bind to target cells non-specifically through membrane phospholipids, or specifically to CD14 and to Toll-like receptors. Binding to TLR-2 has been shown to induce NF-κB expression (a central transcription factor), elevating the expression of both pro- and anti-apoptotic genes. Its activation also induces mitogen-activated protein kinase (MAPK) activation along with phosphoinositide 3-kinase activation.
Studies
LTA's molecular structure has been found to have the strongest hydrophobic bonds of the entire bacterium.
Said et al. showed that LTA causes an IL-10-dependent inhibition of CD4 T-cell expansion and function by up-regulating PD-1 levels on monocytes which leads to IL-10 production by monocytes after binding of PD-1 by PD-L.
Lipoteichoic acid (LTA) from Gram-positive bacteria exerts different immune effects depending on the bacterial source from which it is isolated. For example, LTA from Enterococcus faecalis is a virulence factor positively correlating to inflammatory damage to teeth during acute infection. On the other hand, a study reported Lacticaseibacillus rhamnosus GG LTA (LGG-LTA) oral administration reduces UVB-induced immunosuppression and skin tumor development in mice. In animal studies, specific bacterial LTA has been correlated with induction of arthritis, nephritis, uveitis, encephalomyelitis, meningeal inflammation, and periodontal lesions, and also triggered cascades resulting in
|
https://en.wikipedia.org/wiki/The%20Sword%20of%20Damocles%20%28virtual%20reality%29
|
The Sword of Damocles was the name for an early virtual reality (VR) head-mounted display and tracking system. It is widely considered to be the first augmented reality HMD system, although Morton Heilig had already created a stereoscopic head-mounted viewing apparatus without head tracking (known as "Stereoscopic-Television Apparatus for Individual Use" or "Telesphere Mask") earlier, patented in 1960.
The Sword of Damocles was created in 1968 by computer scientist Ivan Sutherland with the help of his students Bob Sproull, Quintin Foster, and Danny Cohen. Before he began working toward what he termed "the ultimate display", Ivan Sutherland was already well respected for his accomplishments in computer graphics (see Sketchpad). At MIT's Lincoln Laboratory beginning in 1966, Sutherland and his colleagues performed what are widely believed to be the first experiments with head-mounted displays of different kinds.
Features
The device was primitive both in terms of user interface and realism, and the graphics comprising the virtual environment were simple wireframe rooms. Sutherland's system displayed output from a computer program in the stereoscopic display. The perspective that the software showed the user would depend on the position of the user's gaze – which is why head tracking was necessary. The HMD had to be attached to a mechanical arm suspended from the ceiling of the lab partially due to its weight, and primarily to track head movements via linkages. The formidable appearance of the mechanism inspired its name. While using The Sword of Damocles, a user had to have his or her head securely fastened into the device to perform the experiments. At this time, the various components being tested were not fully integrated with one another.
Development
When Sutherland moved to the University of Utah in the late 1960s, work on integrating the various components into a single HMD system was begun. By the end of the decade, the first fully functional inte
|
https://en.wikipedia.org/wiki/Binet%E2%80%93Cauchy%20identity
|
In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that

(Σ_{i=1}^{n} a_i c_i) (Σ_{j=1}^{n} b_j d_j) = (Σ_{i=1}^{n} a_i d_i) (Σ_{j=1}^{n} b_j c_j) + Σ_{1≤i<j≤n} (a_i b_j − a_j b_i)(c_i d_j − c_j d_i)

for every choice of real or complex numbers (or more generally, elements of a commutative ring).
Setting a_i = c_i and b_i = d_i, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space R^n. The Binet–Cauchy identity is a special case of the Cauchy–Binet formula for matrix determinants.
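The identity is easy to confirm numerically; the following pure-Python check (the helper name and the integer test vectors are arbitrary choices for illustration) evaluates both sides directly:

import random
from itertools import combinations

def binet_cauchy_sides(a, b, c, d):
    """Return (left side, right side) of the Binet-Cauchy identity for equal-length sequences."""
    n = len(a)
    left = sum(a[i] * c[i] for i in range(n)) * sum(b[j] * d[j] for j in range(n))
    right = (sum(a[i] * d[i] for i in range(n)) * sum(b[j] * c[j] for j in range(n))
             + sum((a[i] * b[j] - a[j] * b[i]) * (c[i] * d[j] - c[j] * d[i])
                   for i, j in combinations(range(n), 2)))
    return left, right

random.seed(0)
a, b, c, d = ([random.randint(-5, 5) for _ in range(4)] for _ in range(4))
left, right = binet_cauchy_sides(a, b, c, d)
assert left == right          # the identity holds exactly over the integers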
The Binet–Cauchy identity and exterior algebra
When n = 3, the first and second terms on the right hand side become the squared magnitudes of dot and cross products respectively; in n dimensions these become the magnitudes of the dot and wedge products. We may write it

(a · c)(b · d) = (a · d)(b · c) + (a ∧ b) · (c ∧ d)

where a, b, c, and d are vectors. It may also be written as a formula giving the dot product of two wedge products, as

(a ∧ b) · (c ∧ d) = (a · c)(b · d) − (a · d)(b · c),

which can be written as

(a × b) · (c × d) = (a · c)(b · d) − (a · d)(b · c)

in the n = 3 case.

In the special case a = c and b = d, the formula yields

|a × b|² = |a|² |b|² − (a · b)².

When both a and b are unit vectors, we obtain the usual relation

sin²φ = 1 − cos²φ

where φ is the angle between the vectors.
This is a special case of the inner product on the exterior algebra of a vector space, which is defined on wedge-decomposable elements as the Gram determinant of their components.
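The n = 3 form can likewise be verified directly in a few lines of Python; the vectors below are arbitrary test values chosen for illustration:

def dot(u, v):
    """Euclidean dot product of two 3-vectors."""
    return sum(x * y for x, y in zip(u, v))

def cross(u, v):
    """Cross product of two 3-vectors."""
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

a, b, c, d = (1, 2, 3), (4, 5, 6), (7, 8, 9), (1, 0, 2)
# (a x b) . (c x d) = (a . c)(b . d) - (a . d)(b . c)
assert dot(cross(a, b), cross(c, d)) == dot(a, c) * dot(b, d) - dot(a, d) * dot(b, c)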
Einstein notation
A relationship between the Levi–Civita symbols and the generalized Kronecker delta is
The form of the Binet–Cauchy identity can be written as
Proof
Expanding the last term,
where the second and fourth terms are the same and artificially added to complete the sums as follows:
This completes the proof after factoring out the terms indexed by i.
Generalization
A general form, also known as the Cauchy–Binet formula, states the following:
Suppose A is an m×n matrix and B is an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows of B that have indices from S.
Then the determinant of the matrix prod