source | text
---|---
https://en.wikipedia.org/wiki/Jacobian%20conjecture
|
In mathematics, the Jacobian conjecture is a famous unsolved problem concerning polynomials in several variables. It states that if a polynomial function from an n-dimensional space to itself has Jacobian determinant which is a non-zero constant, then the function has a polynomial inverse. It was first conjectured in 1939 by Ott-Heinrich Keller, and widely publicized by Shreeram Abhyankar, as an example of a difficult question in algebraic geometry that can be understood using little beyond a knowledge of calculus.
The Jacobian conjecture is notorious for the large number of attempted proofs that turned out to contain subtle errors. As of 2018, there are no plausible claims to have proved it. Even the two-variable case has resisted all efforts. There are currently no known compelling reasons for believing the conjecture to be true, and according to van den Essen there are some suspicions that the conjecture is in fact false for large numbers of variables (though there is equally no compelling evidence to support these suspicions). The Jacobian conjecture is number 16 in Stephen Smale's 1998 list of Mathematical Problems for the Next Century.
The Jacobian determinant
Let N > 1 be a fixed integer and consider polynomials f1, ..., fN in variables X1, ..., XN with coefficients in a field k. Then we define a vector-valued function F: kN → kN by setting:
F(X1, ..., XN) = (f1(X1, ...,XN),..., fN(X1,...,XN)).
Any map F: kN → kN arising in this way is called a polynomial mapping.
The Jacobian determinant of F, denoted by JF, is defined as the determinant of the N × N Jacobian matrix consisting of the partial derivatives of fi with respect to Xj:
JF = det(∂fi/∂Xj), for 1 ≤ i, j ≤ N;
then JF is itself a polynomial function of the N variables X1, ..., XN.
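As an illustration (not part of the original article), the Jacobian determinant of a concrete polynomial map can be checked with a computer algebra system; the map below is a made-up example chosen because it has an obvious polynomial inverse.

```python
# Sketch: verify that a sample polynomial map has constant Jacobian determinant.
# The map F(x, y) = (x + y**3, y) is a hypothetical example, not from the article.
import sympy as sp

x, y = sp.symbols('x y')
f1, f2 = x + y**3, y                      # polynomial components of F
J = sp.Matrix([[sp.diff(f1, x), sp.diff(f1, y)],
               [sp.diff(f2, x), sp.diff(f2, y)]])
print(J.det())                            # prints 1, a non-zero constant

# Consistent with the conjecture, F has the polynomial inverse
# G(u, v) = (u - v**3, v).
```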
Formulation of the conjecture
It follows from the multivariable chain rule that if F has a polynomial inverse function G: kN → kN, then JF has a polynomial reciprocal, so is a nonzero constant. The Jacobian conjecture is the following partial converse: if k has characteristic zero and JF is a non-zero constant, then F has a polynomial inverse.
|
https://en.wikipedia.org/wiki/Schmutzdecke
|
Schmutzdecke (German, "dirt cover" or dirty skin, sometimes wrongly spelled schmutzedecke) is a hypogeal biological layer formed on the surface of a slow sand filter. The schmutzdecke is the layer that provides the effective purification in potable water treatment, the underlying sand providing the support medium for this biological treatment layer.
The composition of any particular schmutzdecke varies, but will typically consist of a gelatinous biofilm matrix of bacteria, fungi, protozoa, rotifera and a range of aquatic insect larvae. As a schmutzdecke ages, more algae tend to develop, and larger aquatic organisms may be present including some bryozoa, snails and annelid worms.
|
https://en.wikipedia.org/wiki/Yamaha%20YM2413
|
The YM2413, a.k.a. OPLL, is a cost-reduced FM synthesis sound chip manufactured by Yamaha Corporation and based on their YM3812 (OPL2).
To make the chip cheaper to manufacture, many of the internal registers were removed. The result of this is that the YM2413 can only play one user-defined instrument at a time; the other 15 instrument settings are hard-coded and cannot be altered by the user. There were also some other cost-cutting modifications: the number of waveforms was reduced to two, additive mode was removed along with the 6-bit carrier volume control (channels instead have 15 levels of volume), and the channels are not mixed using an adder; instead, the chip's built-in DAC uses time-division multiplexing to play short segments of each channel in sequence, which was also done in the YM2612 much later.
Applications
The YM2413 was used in:
the FM Sound Unit add-on for the Sega Mark III, sold exclusively in Japan, which improved the sound quality of all compatible games. The Japanese model of the Sega Master System came with this add-on built in;
an arcade board design produced by SNK and Alpha Denshi in the late 1980s for a number of their games, including Time Soldiers, Sky Soldiers, and Gang Wars;
the Atari Games Rampart arcade game;
the Yamaha PSS-170 and PSS-270 keyboards in 1986;
the Yamaha SHS-10 shoulder keyboard in 1987, and the Yamaha PSS-140 and Yamaha SHS-200 in 1988;
the Yamaha PSR-6 keyboard in 1988;
several sound enhancement cartridges for MSX computers. It is also built into select MSX2 and MSX2+ systems, and all MSX Turbo R machines, as part of the MSX-Music standard; and
JTES Japanese teletext receivers.
Variants and clones
Yamaha YM2420 (OPLL2) is a variant with slightly changed registers (intentionally undocumented to avoid hardware piracy), used in Yamaha's own home keyboards. It has the same pinout and built-in FM patches as the YM2413, but several registers have parts of the bit order reversed.
Yamaha DS1001 (Konami VRC7 MMC) co
|
https://en.wikipedia.org/wiki/Heine%E2%80%93Cantor%20theorem
|
In mathematics, the Heine–Cantor theorem, named after Eduard Heine and Georg Cantor, states that if f: M → N is a continuous function between two metric spaces M and N, and M is compact, then f is uniformly continuous. An important special case is that every continuous function from a closed bounded interval to the real numbers is uniformly continuous.
Proof
Suppose that M and N are two metric spaces with metrics dM and dN, respectively. Suppose further that a function f: M → N is continuous and M is compact. We want to show that f is uniformly continuous, that is, for every positive real number ε there exists a positive real number δ such that for all points x, y in the function domain M, dM(x, y) < δ implies that dN(f(x), f(y)) < ε.
Consider some positive real number ε. By continuity, for any point x in the domain M, there exists some positive real number δx such that dN(f(x), f(y)) < ε/2 when dM(x, y) < δx, i.e., the fact that y is within δx of x implies that f(y) is within ε/2 of f(x).
Let Ux be the open δx/2-neighborhood of x, i.e. the set Ux = {y in M : dM(x, y) < δx/2}.
Since each point x is contained in its own Ux, we find that the collection {Ux : x in M} is an open cover of M. Since M is compact, this cover has a finite subcover Ux1, Ux2, ..., Uxn where x1, x2, ..., xn are points in M. Each of these open sets has an associated radius δxi/2. Let us now define δ = min(δx1/2, ..., δxn/2), i.e. the minimum radius of these open sets. Since we have a finite number of positive radii, this minimum δ is well-defined and positive. We now show that this δ works for the definition of uniform continuity.
Suppose that dM(x, y) < δ for any two x, y in M. Since the sets Uxi form an open (sub)cover of our space M, we know that x must lie within one of them, say Uxi. Then we have that dM(xi, x) < δxi/2. The triangle inequality then implies that dM(xi, y) ≤ dM(xi, x) + dM(x, y) < δxi/2 + δ ≤ δxi/2 + δxi/2 = δxi,
implying that x and y are both less than δxi away from xi. By definition of δxi, this implies that dN(f(xi), f(x)) and dN(f(xi), f(y)) are both less than ε/2. Applying the triangle inequality then yields the desired dN(f(x), f(y)) ≤ dN(f(x), f(xi)) + dN(f(xi), f(y)) < ε.
For an alternative proof in the case of , a closed interval, see the article Non-standard calculus.
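For reference, the statement can be written symbolically as follows (a restatement of the prose above, not additional material from the article):

```latex
% Heine–Cantor theorem: f : M -> N continuous and M compact imply f uniformly continuous.
\forall \varepsilon > 0 \;\exists \delta > 0 \;\forall x, y \in M :
\quad d_M(x, y) < \delta \;\Longrightarrow\; d_N\bigl(f(x), f(y)\bigr) < \varepsilon .
```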
See also
Cauchy-continuous function
|
https://en.wikipedia.org/wiki/Flexible%20organic%20light-emitting%20diode
|
A flexible organic light-emitting diode (FOLED) is a type of organic light-emitting diode (OLED) incorporating a flexible plastic substrate on which the electroluminescent organic semiconductor is deposited. This enables the device to be bent or rolled while still operating. Currently the focus of research in industrial and academic groups, flexible OLEDs form one method of fabricating a rollable display.
Technical details and applications
An OLED emits light due to the electroluminescence of thin films of organic semiconductors approximately 100 nm thick. Regular OLEDs are usually fabricated on a glass substrate, but by replacing glass with a flexible plastic such as polyethylene terephthalate (PET) among others, OLEDs can be made both bendable and lightweight.
Such materials may not be suitable for comparable devices based on inorganic semiconductors due to the need for lattice matching and the high temperature fabrication procedure involved.
In contrast, flexible OLED devices can be fabricated by deposition of the organic layer onto the substrate using a method derived from inkjet printing, allowing the inexpensive and roll-to-roll fabrication of printed electronics.
Flexible OLEDs may be used in the production of rollable displays, electronic paper, or bendable displays which can be integrated into clothing, wallpaper or other curved surfaces. Prototype displays have been exhibited by companies such as Sony, which are capable of being rolled around the width of a pencil.
Disadvantages
Both the flexible substrate itself and the process of bending the device introduce stress into the materials. There may be residual stress from the deposition of layers onto a flexible substrate and thermal stresses due to the different coefficients of thermal expansion of materials in the device, in addition to the external stress from the bending of the device.
Stress introduced into the organic layers may lower the efficiency or brightness of the device as it is deformed
|
https://en.wikipedia.org/wiki/Fialka
|
In cryptography, Fialka (M-125) is the name of a Cold War-era Soviet cipher machine. A rotor machine, the device uses 10 rotors, each with 30 contacts along with mechanical pins to control stepping. It also makes use of a punched card mechanism. Fialka means "violet" in Russian. Information regarding the machine was quite scarce until c. 2005 because the device had been kept secret.
Fialka contains a five-level paper tape reader on the right hand side at the front of the machine, and a paper tape punch and tape printing mechanism on top. The punched-card input for keying the machine is located on the left hand side. The Fialka requires 24 volt DC power and comes with a separate power supply that accepts power at 100 to 250 VAC, 50–400 Hz by means of an external selector switch.
The machine's rotors are labelled with Cyrillic, requiring 30 points on the rotors; this is in contrast to many comparable Western machines with 26-contact rotors, corresponding to the Latin alphabet. The keyboard, at least in the examples of East German origin, had both Cyrillic and Latin markings. There are at least two versions known to exist, the M-125-MN and the M-125-3MN. The M-125-MN had a typewheel that could handle Latin and Cyrillic letters. The M-125-3MN had separate typewheels for Latin and Cyrillic. The M-125-3MN had three modes, single shift letters, double shift with letters and symbols, and digits only, for use with code books and to superencrypt numeric ciphers.
Encryption mechanism
The Fialka rotor assembly has 10 rotors mounted on an axle and a 30 by 30 commutator (Kc 30x30). The commutator consists of two sets of 30 contact strips set at right angles to each other. A punched card is placed between the two sets of contacts via a door on the left hand side of the unit. Each punched card has 30 holes, with exactly one hole per row and column pair, and thereby specifies a permutation of the 30 rotor contact lines. This feature is comparable to the plug board on the Enigma
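A minimal sketch (not from the article) of how such a punched card can be modeled in software: a card with exactly one hole per row and column is equivalent to a permutation of the 30 contact lines. The card generated below is a made-up example, not an actual Fialka key card.

```python
# Sketch: model a Fialka-style punched card as a permutation of 30 contact lines.
import random

N_CONTACTS = 30

def random_card(rng: random.Random) -> list[int]:
    """Return a permutation: card[row] = column of the single hole in that row."""
    perm = list(range(N_CONTACTS))
    rng.shuffle(perm)
    return perm

def route(card: list[int], line: int) -> int:
    """A signal entering on `line` leaves on the contact selected by the card's hole."""
    return card[line]

rng = random.Random(0)
card = random_card(rng)
assert sorted(card) == list(range(N_CONTACTS))   # exactly one hole per row and column
print(route(card, 0), route(card, 29))
```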
|
https://en.wikipedia.org/wiki/Percy%20Ludgate
|
Percy Edwin Ludgate (2 August 1883 – 16 October 1922) was an Irish amateur scientist who designed the second analytical engine (general-purpose Turing-complete computer) in history.
Life
Ludgate was born on 2 August 1883 in Skibbereen, County Cork, to Michael Ludgate and Mary McMahon. In the 1901 census, he is listed as Civil Servant National Education (Boy Copyist) in Dublin. In the 1911 census, he is also in Dublin, as a Commercial Clerk (Corn Merchant). He studied accountancy at Rathmines College of Commerce, earning a gold medal based on the results of his final examinations in 1917. At some date before or after then, he joined Kevans & Son, accountants.
Work on analytical engine
It seems that Ludgate worked as a clerk for an unknown corn merchant in Dublin and pursued his interest in calculating machines at night. Charles Babbage in 1843 and Ludgate in 1909 designed the only two mechanical analytical engines before the electromechanical analytical engine of Leonardo Torres Quevedo of 1920 and its few successors, and the six first-generation electronic analytical engines of 1949.
Working alone, Ludgate designed an analytical engine while unaware of Babbage's designs, although he later went on to write about Babbage's machine. Ludgate's engine used multiplication as its base mechanism (unlike Babbage's, which used addition). It incorporated the first multiplier-accumulator, and was the first to exploit a multiplier-accumulator to perform division, using multiplication seeded by a reciprocal computed via a convergent series.
Ludgate's engine also used a mechanism similar to slide rules, but employing his unique discrete Logarithmic Indexes (now known as Irish logarithms), and provided a very novel memory using concentric cylinders, storing numbers as displacements of rods in shuttles. His design featured several other novel features, including for program control (e.g., preemption and subroutines – or microcode, depending on viewpoint). The design is so different
|
https://en.wikipedia.org/wiki/Paul%20Milgrom
|
Paul Robert Milgrom (born April 20, 1948) is an American economist. He is the Shirley and Leonard Ely Professor of Humanities and Sciences at the Stanford University School of Humanities and Sciences, a position he has held since 1987. He is a professor in the Stanford School of Engineering as well and a Senior Fellow at the Stanford Institute for Economic Research. Milgrom is an expert in game theory, specifically auction theory and pricing strategies. He is the winner of the 2020 Nobel Memorial Prize in Economic Sciences, together with Robert B. Wilson, "for improvements to auction theory and inventions of new auction formats".
He is the co-creator of the no-trade theorem with Nancy Stokey. He is the co-founder of several companies, the most recent of which, Auctionomics, provides software and services for commercial auctions and exchanges.
Milgrom and his thesis advisor Wilson designed the auction protocol the FCC uses to determine which phone company gets what cellular frequencies. Milgrom also led the team that designed the broadcast incentive auction between 2016 and 2017, which was a two-sided auction to reallocate radio frequencies from TV broadcast to wireless broadband uses.
Early life and education
Paul Milgrom was born in Detroit, Michigan, April 20, 1948, the second of four sons to Jewish parents Abraham Isaac Milgrom and Anne Lillian Finkelstein. His family moved to Oak Park, Michigan, and Milgrom attended the Dewey Elementary School and then Oak Park High School.
Milgrom graduated from the University of Michigan in 1970 with an AB in mathematics. He worked as an actuary for several years in San Francisco at the Metropolitan Insurance Company and then at the Nelson and Warren consultancy in Columbus, Ohio. Milgrom became a Fellow of the Society of Actuaries in 1974. In 1975, Milgrom enrolled for graduate studies at Stanford University and earned an MS in statistics in 1978 and a PhD in business in 1979.
Academic career
Milgrom assumed a teaching
|
https://en.wikipedia.org/wiki/American%20Regions%20Mathematics%20League
|
The American Regions Mathematics League (ARML) is an annual national high school mathematics team competition held simultaneously at four locations in the United States: the University of Iowa, Penn State, the University of Nevada, Reno, and the University of Alabama in Huntsville. Past sites have included San Jose State University, Rutgers University, UNLV, Duke University, and the University of Georgia.
Teams consist of 15 members and usually represent a large geographic region (such as a state) or a large population center (such as a major city). Some schools also field teams. The competition is held in June, on the first Saturday after Memorial Day.
In 2022, 120 teams competed with about 1800 students.
ARML problems cover a wide variety of mathematical topics including algebra, geometry, number theory, combinatorics, probability, and inequalities. Calculus is not required to successfully complete any problem, but it may facilitate solving the problem more quickly or efficiently. While part of the competition is short-answer based, there is a cooperative team round, and a proof-based power question (also completed as a team). ARML problems are harder than most high school mathematics competitions.
The contest is sponsored by D. E. Shaw & Co. Contest supporters are the American Mathematical Society, Mu Alpha Theta (the National Mathematics Honor Society for High School and Two-Year College students), Star League, Penguin Books, and Princeton University Press.
Competition format
The competition consists of four formal events:
A team round, where the entire team has 20 minutes to solve 10 problems. Each problem is worth 5 points, for a possible total of 50 points
A power question, where the entire team has one hour to solve a multiple-part (usually ten) question requiring explanations and proofs. This is usually an unusual, unique, or invented topic so students are forced to deal with complex new mathematical ideas. Each problem is weighted for a possible
|
https://en.wikipedia.org/wiki/Multiplicative%20group%20of%20integers%20modulo%20n
|
In modular arithmetic, the integers coprime (relatively prime) to n from the set {0, 1, ..., n − 1} of n non-negative integers form a group under multiplication modulo n, called the multiplicative group of integers modulo n. Equivalently, the elements of this group can be thought of as the congruence classes, also known as residues modulo n, that are coprime to n.
Hence another name is the group of primitive residue classes modulo n.
In the theory of rings, a branch of abstract algebra, it is described as the group of units of the ring of integers modulo n. Here units refers to elements with a multiplicative inverse, which, in this ring, are exactly those coprime to n.
This quotient group, usually denoted (Z/nZ)^×, is fundamental in number theory. It is used in cryptography, integer factorization, and primality testing. It is an abelian, finite group whose order is given by Euler's totient function: |(Z/nZ)^×| = φ(n). For prime n the group is cyclic, and in general the structure is easy to describe, but no simple general formula for finding generators is known.
Group axioms
It is a straightforward exercise to show that, under multiplication, the set of congruence classes modulo n that are coprime to n satisfy the axioms for an abelian group.
Indeed, a is coprime to n if and only if gcd(a, n) = 1. Integers in the same congruence class satisfy a ≡ b (mod n), hence one is coprime to n if and only if the other is. Thus the notion of congruence classes modulo n that are coprime to n is well-defined.
Since gcd(a, n) = 1 and gcd(b, n) = 1 implies gcd(ab, n) = 1, the set of classes coprime to n is closed under multiplication.
Integer multiplication respects the congruence classes, that is, a ≡ a' (mod n) and b ≡ b' (mod n) implies ab ≡ a'b' (mod n).
This implies that the multiplication is associative, commutative, and that the class of 1 is the unique multiplicative identity.
Finally, given a, the multiplicative inverse of a modulo n is an integer x satisfying ax ≡ 1 (mod n).
It exists precisely when a is coprime to n, because in that case gcd(a, n) = 1 and by Bézout's lemma there are integers x and y satisfying ax + ny = 1. Notice that the equation im
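As a quick illustration (not part of the article), the group of units modulo n and the modular inverses can be computed directly in Python; n = 12 is an arbitrary choice.

```python
# Sketch: the multiplicative group of integers modulo n, here with n = 12.
from math import gcd

n = 12
units = [a for a in range(n) if gcd(a, n) == 1]
print(units)                    # [1, 5, 7, 11] -> order phi(12) = 4

# Closure: the product of two units is again a unit.
assert all((a * b) % n in units for a in units for b in units)

# Inverses: pow(a, -1, n) uses the extended Euclidean algorithm internally.
for a in units:
    inv = pow(a, -1, n)
    assert (a * inv) % n == 1
    print(f"{a}^-1 mod {n} = {inv}")
```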
|
https://en.wikipedia.org/wiki/Degree%20%28angle%29
|
A degree (in full, a degree of arc, arc degree, or arcdegree), usually denoted by ° (the degree symbol), is a measurement of a plane angle in which one full rotation is 360 degrees.
It is not an SI unit—the SI unit of angular measure is the radian—but it is mentioned in the SI brochure as an accepted unit. Because a full rotation equals 2π radians, one degree is equivalent to π/180 radians.
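A trivial numerical check of this relationship (an added example, not part of the article):

```python
# Sketch: degree/radian conversion, 1 degree = pi/180 radians.
import math

print(math.radians(1))          # 0.017453292519943295 == pi / 180
print(math.radians(360))        # 6.283185307179586 == 2 * pi
print(math.degrees(math.pi))    # 180.0
```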
History
The original motivation for choosing the degree as a unit of rotations and angles is unknown. One theory states that it is related to the fact that 360 is approximately the number of days in a year. Ancient astronomers noticed that the sun, which follows the ecliptic path over the course of the year, seems to advance in its path by approximately one degree each day. Some ancient calendars, such as the Persian calendar and the Babylonian calendar, used 360 days for a year. The use of a calendar with 360 days may be related to the use of sexagesimal numbers.
Another theory is that the Babylonians subdivided the circle using the angle of an equilateral triangle as the basic unit, and further subdivided the latter into 60 parts following their sexagesimal numeric system. The earliest trigonometry, used by the Babylonian astronomers and their Greek successors, was based on chords of a circle. A chord of length equal to the radius made a natural base quantity. One sixtieth of this, using their standard sexagesimal divisions, was a degree.
Aristarchus of Samos and Hipparchus seem to have been among the first Greek scientists to exploit Babylonian astronomical knowledge and techniques systematically. Timocharis, Aristarchus, Aristillus, Archimedes, and Hipparchus were the first Greeks known to divide the circle in 360 degrees of 60 arc minutes. Eratosthenes used a simpler sexagesimal system dividing a circle into 60 parts.
Another motivation for choosing the number 360 may have been that it is readily divisible: 360 has 24 divisors, making it one of only 7 numbers such th
|
https://en.wikipedia.org/wiki/Fujitsu%20VP2000
|
The VP2000 was the second series of vector supercomputers from Fujitsu. Announced in December 1988, they replaced Fujitsu's earlier FACOM VP Model E Series. The VP2000 was succeeded in 1995 by the VPP300, a massively parallel supercomputer with up to 256 vector processors.
The VP2000 was similar in many ways to their earlier designs, and in turn to the Cray-1, using a register-based vector processor for performance. For additional performance the vector units supported a special multiply-and-add instruction that could retire two results per clock cycle. This instruction "chain" is particularly common in many supercomputer applications.
Another difference is that the main scalar units of the processor ran at half the speed of the vector unit. According to Amdahl's Law computers tend to run at the speed of their slowest unit, and in this case unless the program spent most of its time in the vector units, the slower scalar performance would make it 1/2 the performance of a Cray-1 at the same speed. The reason for this seemingly odd "feature" is unclear.
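A rough illustration of that argument (the numbers below are illustrative, not from the article): if a fraction f of the run time is vectorizable and the scalar unit runs at half the reference machine's speed, the overall throughput relative to that reference machine is as follows.

```python
# Sketch: effect of a half-speed scalar unit, in the spirit of Amdahl's law.
# 'f' is the fraction of work done in the (full-speed) vector unit; the rest
# runs on a scalar unit that is 2x slower than the reference machine's.
def relative_performance(f: float, scalar_slowdown: float = 2.0) -> float:
    return 1.0 / (f + (1.0 - f) * scalar_slowdown)

for f in (0.5, 0.9, 0.99):
    print(f, round(relative_performance(f), 3))
# f=0.5  -> 0.667  (two-thirds of the reference machine)
# f=0.9  -> 0.909
# f=0.99 -> 0.990
```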
One of the major complaints about the earlier VP series was their limited memory bandwidth—while the machines themselves had excellent performance in the processors, they were often starved for data. For the VP2000 series this was addressed by adding a second load/store unit to the scalar units, doubling memory bandwidth.
Several versions of the machines were sold at different price points. The low-end VP2100 ran at an 8 ns cycle time and delivered only 0.5 GFLOPS (about 4-8 times the performance of a Cray), while the VP2200 and VP2400 decreased the cycle time to 4 ns and delivered between 1.25 and 2.5 GFLOPS peak. The high-end VP2600 ran at 3.2 ns and delivered 5 GFLOPS. All of the models came in the /10 versions with a single scalar processor, or the /20 with a second, while the 2200 and 2400 also came in a /40 configuration with four. Due to the additional load/store units, adding additional scalar units improved
|
https://en.wikipedia.org/wiki/Fujitsu%20VP
|
The Fujitsu FACOM VP is a series of vector supercomputers designed, manufactured, and marketed by Fujitsu. Announced in July 1982, the FACOM VP were the first of the three initial Japanese commercial supercomputers, followed by the Hitachi HITAC S-810 in August 1982 and the NEC SX-2 in April 1983.
Context in the supercomputer market
The FACOM VP were sold until they were replaced by the VP2000 family in 1990. Developed with funding from the Ministry of International Trade and Industry, the FACOM VP was part of an effort designed to wrest control of the supercomputer market from the collection of small US-based companies like Cray Research. The FACOM VP was marketed in Japan by Fujitsu, where the majority of installations were located. Amdahl marketed the systems in the US and Siemens in Europe. The ending of the cold war during this period made the market for supercomputers dry up almost overnight, and the Japanese firms decided that their mass-production capabilities were better spent elsewhere.
Development
Fujitsu had built a prototype vector co-processor known as the F230-75, which was attached to its own mainframe machines at the Japanese Atomic Energy Commission and the National Aerospace Laboratory in 1977. The processor was similar in most ways to the famed Cray-1, but did not have vector chaining capabilities and was therefore somewhat slower. Nevertheless, the machines were rather inexpensive, and during the late 1970s supercomputers were seen as a source of national pride, so an effort started to commercialize the design by combining it with a scalar processor to create an all-in-one design.
The result was the VP-100 and VP-200, announced in July 1982. These two models differed primarily in clock rates. Lower-end models were spun off as the VP-30 and VP-50. In 1986 a two-pipeline version was released as the VP-400. The next year the entire series was updated with the addition of a new vector unit that supported a multiply-and-add unit that coul
|
https://en.wikipedia.org/wiki/Opacifier
|
An opacifier is a substance added to a material in order to make the ensuing system opaque. An example of a chemical opacifier is titanium dioxide (TiO2), which is used as an opacifier in paints, in paper, and in plastics. It has a very high refractive index (2.7 for the rutile modification and 2.55 for the anatase modification), and optimum refraction is obtained with crystals of about 225 nanometers. Impurities in the crystal alter the optical properties. It is also used to opacify ceramic glazes and milk glass; bone ash is also used.
Opacifiers must have a refractive index (RI) substantially different from the system. Conversely, clarity may be achieved in a system by choosing components with very similar refractive indices.
Glasses
Ancient milk glasses used crystals of calcium antimonate, formed in the melt from calcium present in the glass and an antimony additive. Opaque yellow glasses contained crystals of lead antimonate; the mineral bindheimite may have been used as the additive. Under oxidizing conditions, lead also forms incompletely dissolved lead pyroantimonate (Pb2Sb2O7). From the 2nd century BC, tin oxide appears in use as an opacifier, likely in the form of the mineral cassiterite. Opaque yellow can also be produced as lead stannate; the color is paler than that of lead antimonate. Later, calcium and sodium phosphates came into use; bone ash contains a high proportion of calcium phosphate. Calcium fluoride was also used, especially in China.
For dental ceramics, several approaches are in use. Spodumene or mica crystals can be precipitated. Fluorides of aluminium, calcium, barium, and magnesium can be used with suitable heat treatment. Tin oxide can be used, but zirconia and titania give better results; for titania, the appropriate resulting particle size ranges from submicron to 20 μm. Another desirable opacifier is zinc oxide.
Opacifiers must also form small particles in the system. Opacifiers are generally inert.
X-ray opacifiers
In context of x-rays, opacifiers are additives with high abso
|
https://en.wikipedia.org/wiki/Evans%20%26%20Sutherland%20ES-1
|
The ES-1 was Evans & Sutherland's abortive attempt to enter the supercomputer market. It was aimed at technical and scientific users who would normally buy a machine like a Cray-1 but did not need that level of power or throughput for graphics-heavy workloads. It was about to be released just as the market was drying up in the post-Cold War military wind-down; only a handful were built and only two were sold.
Background
Jean-Yves Leclerc was a computer designer who was unable to find funding in Europe for a high-performance server design. In 1985 he visited Dave Evans, his former PhD adviser, looking for advice. After some discussion, Leclerc eventually convinced Evans that since most of their customers were running E&S graphics hardware on Cray Research machines and other supercomputers, it would make sense for E&S to offer its own low-cost platform instead.
Eventually a new Evans & Sutherland Computer Division, or ESCD, was set up in 1986 to work on the design. Unlike the rest of E&S's operations, which were headquartered in Salt Lake City, Utah, it was felt that the computer design would need to be in the "heart of things" in Silicon Valley, and the new division was set up in Mountain View, California.
Basic design
Instead of batch mode number crunching, the design would be tailored specifically to interactive use. This would include a built-in graphics engine and 2 GB of RAM, running BSD Unix 4.2. The machine would offer performance on par with contemporary Cray and ETA Systems.
8 × 8 crossbar
The basic idea of Leclerc's system was to use an 8×8 crossbar switch to connect eight custom CMOS CPUs together at high speed. An extra channel on the crossbar allowed it to be connected to another crossbar, forming a single 16-processor unit. The units were 16-sized (instead of 8) in order to fully utilize a 16-bank high-speed memory that had been designed along with the rest of the system. Since memory was logically organized on the "far side" of the crossbars, the memory controller
|
https://en.wikipedia.org/wiki/List%20of%20interactive%20geometry%20software
|
Interactive geometry software (IGS) or dynamic geometry environments (DGEs) are computer programs which allow one to create and then manipulate geometric constructions, primarily in plane geometry. In most IGS, one starts construction by putting a few points and using them to define new objects such as lines, circles or other points. After some construction is done, one can move the points one started with and see how the construction changes.
History
The earliest IGS was the Geometric Supposer, which was developed in the early 1980s. This was soon followed by Cabri in 1986 and The Geometer's Sketchpad.
Comparison
There are three main types of computer environments for studying school geometry: supposers, dynamic geometry environments (DGEs) and Logo-based programs. Most are DGEs: software that allows the user to manipulate ("drag") the geometric object into different shapes or positions. The main example of a supposer is the Geometric Supposer, which does not have draggable objects, but allows students to study pre-defined shapes. Nearly all of the following programs are DGEs. For a related, comparative physical example of these algorithms, see Lenart Sphere.
License and platform
The following table provides a first comparison of the different software according to their license and platform.
3D Software
General features
The following table provides a more detailed comparison:
Macros
Features related to macro constructions: (TODO)
Loci
Loci features related to IGS: (TODO)
Proof
We detail here the proof related features. (TODO)
Measurements and calculation
Measurement and calculation features related to IGS: (TODO)
Graphics export formats
Object attributes
2D programs
C.a.R.
C.a.R. is a free GPL analog of The Geometer's Sketchpad (GSP), written in Java.
Cabri
Cabri was developed by the French school of mathematics education in Grenoble (Laborde, 1993)
CaRMetal
CaRMetal is a free GPL software written in Java. Derived from C.
|
https://en.wikipedia.org/wiki/Elongated%20triangular%20cupola
|
In geometry, the elongated triangular cupola is one of the Johnson solids (J18). As the name suggests, it can be constructed by elongating a triangular cupola (J3) by attaching a hexagonal prism to its base.
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
Dual polyhedron
The dual of the elongated triangular cupola has 15 faces: 6 isosceles triangles, 3 rhombi, and 6 quadrilaterals.
Related polyhedra and honeycombs
The elongated triangular cupola can form a tessellation of space with tetrahedra and square pyramids.
|
https://en.wikipedia.org/wiki/Gyroelongated%20triangular%20cupola
|
In geometry, the gyroelongated triangular cupola is one of the Johnson solids (J22). It can be constructed by attaching a hexagonal antiprism to the base of a triangular cupola (J3). This is called "gyroelongation", which means that an antiprism is joined to the base of a solid, or between the bases of more than one solid.
The gyroelongated triangular cupola can also be seen as a gyroelongated triangular bicupola (J44) with one triangular cupola removed. Like all cupolae, the base polygon has twice as many sides as the top (in this case, the bottom polygon is a hexagon because the top is a triangle).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
Dual polyhedron
The dual of the gyroelongated triangular cupola has 15 faces: 6 kites, 3 rhombi, and 6 pentagons.
|
https://en.wikipedia.org/wiki/Triangular%20orthobicupola
|
In geometry, the triangular orthobicupola is one of the Johnson solids (J27). As the name suggests, it can be constructed by attaching two triangular cupolas (J3) along their bases. It has an equal number of squares and triangles at each vertex; however, it is not vertex-transitive. It is also called an anticuboctahedron, twisted cuboctahedron or disheptahedron. It is also a canonical polyhedron.
The triangular orthobicupola is the first in an infinite set of orthobicupolae.
Relation to cuboctahedra
The triangular orthobicupola has a superficial resemblance to the cuboctahedron, which would be known as the triangular gyrobicupola in the nomenclature of Johnson solids — the difference is that the two triangular cupolas which make up the triangular orthobicupola are joined so that pairs of matching sides abut (hence, "ortho"); the cuboctahedron is joined so that triangles abut squares and vice versa. Given a triangular orthobicupola, a 60-degree rotation of one cupola before the joining yields a cuboctahedron. Hence, another name for the triangular orthobicupola is the anticuboctahedron.
The elongated triangular orthobicupola (J35), which is constructed by elongating this solid, has a (different) special relationship with the rhombicuboctahedron.
The dual of the triangular orthobicupola is the trapezo-rhombic dodecahedron. It has 6 rhombic and 6 trapezoidal faces, and is similar to the rhombic dodecahedron.
Formulae
The following formulae for volume, surface area, and circumradius can be used if all faces are regular, with edge length a: V = (5√2/3)a³ ≈ 2.357a³ and A = (6 + 2√3)a² ≈ 9.464a².
The circumradius of a triangular orthobicupola is the same as the edge length (C = a).
Related polyhedra and honeycombs
The rectified cubic honeycomb can be dissected and rebuilt as a space-filling lattice of
triangular orthobicupolae and square pyramids.
|
https://en.wikipedia.org/wiki/Pentagonal%20orthocupolarotunda
|
In geometry, the pentagonal orthocupolarotunda is one of the Johnson solids (J32). As the name suggests, it can be constructed by joining a pentagonal cupola (J5) and a pentagonal rotunda (J6) along their decagonal bases, matching the pentagonal faces. A 36-degree rotation of one of the halves before the joining yields a pentagonal gyrocupolarotunda (J33).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
|
https://en.wikipedia.org/wiki/Pentagonal%20gyrocupolarotunda
|
In geometry, the pentagonal gyrocupolarotunda is one of the Johnson solids (J33). Like the pentagonal orthocupolarotunda (J32), it can be constructed by joining a pentagonal cupola (J5) and a pentagonal rotunda (J6) along their decagonal bases. The difference is that in this solid, the two halves are rotated 36 degrees with respect to one another.
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
|
https://en.wikipedia.org/wiki/Elongated%20triangular%20orthobicupola
|
In geometry, the elongated triangular orthobicupola or cantellated triangular prism is one of the Johnson solids (J35). As the name suggests, it can be constructed by elongating a triangular orthobicupola (J27) by inserting a hexagonal prism between its two halves. The resulting solid is superficially similar to the rhombicuboctahedron (one of the Archimedean solids), with the difference that it has threefold rotational symmetry about its axis instead of fourfold symmetry.
Volume
The volume of J35 can be calculated as follows, taking the edge length to be 1:
J35 consists of 2 cupolae and a hexagonal prism.
The two cupolae make up 1 cuboctahedron = 8 tetrahedra + 6 half-octahedra.
1 octahedron = 4 tetrahedra, so in total we have the equivalent of 20 tetrahedra.
What is the volume of a tetrahedron?
Construct a tetrahedron having vertices in common with alternate vertices of a cube (of side 1/√2, if the tetrahedron has unit edges). The 4 triangular pyramids left if the tetrahedron is removed from the cube form half an octahedron = 2 tetrahedra. So
V(tetrahedron) = (1/3) V(cube) = (1/3)(1/√2)³ = √2/12.
The hexagonal prism is more straightforward. The unit-edge hexagon has area 3√3/2, so
V(prism) = 3√3/2.
Finally
V(J35) = 20 · √2/12 + 3√3/2 = 5√2/3 + 3√3/2;
numerical value: V(J35) ≈ 4.9551a³ for edge length a.
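A quick numerical cross-check of the result above (an added sketch, not part of the article):

```python
# Sketch: numerical check of V(J35) = 5*sqrt(2)/3 + 3*sqrt(3)/2 for unit edge length.
from math import sqrt

v_tetrahedron = sqrt(2) / 12            # unit-edge regular tetrahedron
v_cuboctahedron = 20 * v_tetrahedron    # 8 tetrahedra + 6 half-octahedra
v_hexagonal_prism = 3 * sqrt(3) / 2     # unit-edge regular hexagon, height 1

v_j35 = v_cuboctahedron + v_hexagonal_prism
print(v_j35)                            # ~4.9551
assert abs(v_j35 - (5 * sqrt(2) / 3 + 3 * sqrt(3) / 2)) < 1e-12
```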
Related polyhedra and honeycombs
The elongated triangular orthobicupola forms space-filling honeycombs with tetrahedra and square pyramids.
|
https://en.wikipedia.org/wiki/Elongated%20triangular%20gyrobicupola
|
In geometry, the elongated triangular gyrobicupola is one of the Johnson solids (J36). As the name suggests, it can be constructed by elongating a "triangular gyrobicupola," or cuboctahedron, by inserting a hexagonal prism between its two halves, which are congruent triangular cupolae (J3). Rotating one of the cupolae through 60 degrees before the elongation yields the elongated triangular orthobicupola (J35).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a: V = (5√2/3 + 3√3/2)a³ ≈ 4.955a³ and A = (12 + 2√3)a² ≈ 15.464a².
Related polyhedra and honeycombs
The elongated triangular gyrobicupola forms space-filling honeycombs with tetrahedra and square pyramids.
|
https://en.wikipedia.org/wiki/Gyroelongated%20triangular%20bicupola
|
In geometry, the gyroelongated triangular bicupola is one of the Johnson solids (J44). As the name suggests, it can be constructed by gyroelongating a triangular bicupola (either the triangular orthobicupola, J27, or the cuboctahedron) by inserting a hexagonal antiprism between its congruent halves.
The gyroelongated triangular bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the right. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom square would be connected to a square face above it and to the left. The two chiral forms of J44 are not considered different Johnson solids.
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
|
https://en.wikipedia.org/wiki/Return%20receipt
|
In email, a return receipt is an acknowledgment by the recipient's email client to the sender of receipt of an email message. What acknowledgment, if any, is sent by the recipient to the sender is dependent on the email software of the recipient.
Two notification services are available for email: delivery status notifications (DSNs) and message disposition notifications (MDNs). Whether such an acknowledgment of receipt is sent depends on the configuration of the recipient's email software.
Delivery status notifications
DSN is both a service that may optionally be provided by Message Transfer Agents (MTAs) using the Simple Mail Transfer Protocol (SMTP), and a message format used to return indications of message delivery to the sender of the message. Specifically, the DSN SMTP service is used to request that indications of successful delivery or delivery failure (in the DSN format) be returned. Issuance of a DSN upon delivery failure is the default behavior, whereas issuance of a DSN upon successful delivery requires a specific request from the sender.
However, for various reasons, it is possible for a message to be delivered, and a DSN is returned to the sender indicating successful delivery, but the message subsequently fails to be seen by the recipient or even made available to them.
The DSN SMTP extension, message format, and associated delivery status codes are specified in RFCs 3461 through 3464 and 6522.
Message disposition notifications
MDNs provide a notification of the "disposition" of a message - indicating, for example, whether it is read by a recipient, discarded before being read, etc. However, for privacy reasons, and also for backward compatibility, requests for MDNs are entirely advisory in nature - i.e. recipients are free to ignore such requests. The format and usage of MDNs are specified in RFC 3798.
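A minimal sketch of how a sender might request both notification types in Python (an added example; the header and option names come from the RFCs cited above, while the server name and addresses are placeholders):

```python
# Sketch: requesting delivery/read notifications for an outgoing message.
# Server name and addresses are placeholders, not real infrastructure.
import smtplib
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.org"
msg["To"] = "recipient@example.net"
msg["Subject"] = "Report"
# MDN request (RFC 3798): purely advisory, the recipient's client may ignore it.
msg["Disposition-Notification-To"] = "sender@example.org"
msg.set_content("Please confirm receipt.")

with smtplib.SMTP("mail.example.org") as smtp:
    # DSN request (RFC 3461): only honored if the MTA advertises the DSN extension.
    smtp.send_message(msg, rcpt_options=["NOTIFY=SUCCESS,FAILURE"])
```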
A description of how multiple Mail User Agents (MUAs) should handle the generation of MDNs in an Internet Message Access Protocol (IMAP4) environm
|
https://en.wikipedia.org/wiki/Laws%20of%20robotics
|
Laws of robotics are any set of laws, rules, or principles, which are intended as a fundamental framework to underpin the behavior of robots designed to have a degree of autonomy. Robots of this degree of complexity do not yet exist, but they have been widely anticipated in science fiction, films and are a topic of active research and development in the fields of robotics and artificial intelligence.
The best known set of laws are those written by Isaac Asimov in the 1940s, or based upon them, but other sets of laws have been proposed by researchers in the decades since then.
Isaac Asimov's "Three Laws of Robotics"
The best known set of laws are Isaac Asimov's "Three Laws of Robotics". These were introduced in his 1942 short story "Runaround", although they were foreshadowed in a few earlier stories. The Three Laws are:
A robot may not injure a human being or, through inaction, allow a human being to come to harm.
A robot must obey the orders given to it by human beings except where such orders would conflict with the First Law.
A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
In The Evitable Conflict the machines generalize the First Law to mean:
No machine may harm humanity; or, through inaction, allow humanity to come to harm.
This was refined at the end of Foundation and Earth, where a zeroth law was introduced, with the original three suitably rewritten as subordinate to it: a robot may not harm humanity, or, by inaction, allow humanity to come to harm.
Adaptations and extensions exist based upon this framework. As of 2021 they remain a "fictional device".
EPSRC / AHRC principles of robotics
In 2011, the Engineering and Physical Sciences Research Council (EPSRC) and the Arts and Humanities Research Council (AHRC) of United Kingdom jointly published a set of five ethical "principles for designers, builders and users of robots" in the real world, along with seven "high-level messages" intended to be conveyed, based on a September 2010 research workshop:
Robots should not be de
|
https://en.wikipedia.org/wiki/Enriched%20flour
|
Enriched flour is flour with specific nutrients returned to it that were lost during preparation. These restored nutrients include iron and B vitamins (folic acid, riboflavin, niacin, and thiamine). Calcium may also be supplemented. The purpose of enriching flour is to replenish the nutrients in the flour to match the nutritional status of the unrefined product. This differentiates enrichment from fortification, which is the process of introducing new nutrients to a food.
According to the Global Fortification Data Exchange, 79 countries have made fortification or enrichment of wheat or maize flour mandatory.
History
White flour was adopted in many cultures during the late Middle Ages because it was thought to be healthier than dark flours. As white flour was more expensive, it became a fashionable indicator of perceived social status and tended to be consumed mostly by the richer classes. Another factor was that mold and fungus in the grains, which led to several diseases, were significantly reduced in the processing that resulted in white flour.
In the 1920s, however, Benjamin R. Jacobs began to document the loss of essential nutrients through this processing of cereals and grains and to demonstrate a method by which the end products could be enriched with some of the lost nutrients. These nutrients promote good health and help to prevent some diseases. Enrichment was not possible until 1936, when the synthesis of thiamine was elucidated.
The international effort to start enriching flour was launched during the 1940s as a means to improve the health of the wartime populations of the United Kingdom and United States while food was being rationed and alternative sources of the nutrients were scarce. The decision to choose flour for enrichment was based on its commonality in the diets of those wartime populations, ranging from the rich to the poor. These wartime campaigns resulted in 40% of flour being enriched by 1942. In February 1942, the U.S
|
https://en.wikipedia.org/wiki/CellML
|
CellML is an XML-based markup language for describing mathematical models. Although it could theoretically describe any mathematical model, it was originally created with the Physiome Project in mind, and is hence used primarily to describe models relevant to the field of biology. This is reflected in its name CellML, although this is simply a name, not an abbreviation. CellML is growing in popularity as a portable description format for computational models, and groups throughout the world are using CellML for modelling or developing software tools based on CellML. CellML is similar to the Systems Biology Markup Language (SBML) but provides greater scope for model modularity and reuse, and is not specific to descriptions of biochemistry.
History
The CellML language grew from a need to share models of cardiac cell dynamics among researchers at a number of sites across the world. The original working group formed in 1998 consisted of David Bullivant, Warren Hedley, and Poul Nielsen; all three were at that time members of the Department of Engineering Science at the University of Auckland. The language was an application of the XML specification developed by the World Wide Web Consortium – the decision to use XML was based on late 1998 recommendations from Warren Hedley and André (David) Nickerson. Existing XML-based languages were leveraged to describe the mathematics (content MathML), metadata (RDF), and links between resources (XLink). The CellML working group first became aware of the SBML effort in late 2000, when Warren Hedley attended the 2nd workshop on Software Platforms for Systems Biology in Tokyo.
The working group collaborated with a number of researchers at Physiome Sciences Inc. (particularly Melanie Nelson, Scott Lett, Mark Grehlinger, Prasad Ramakrishna, Jeremy Rice, Adam Muzikant, and Kam-Chuen Jim) to draft the initial CellML 1.0 specification, which was published on the 11th of August 2001. This first draft was followed by specifications for CellML Metad
|
https://en.wikipedia.org/wiki/Formal%20proof
|
In logic and mathematics, a formal proof or derivation is a finite sequence of sentences (called well-formed formulas in the case of a formal language), each of which is an axiom, an assumption, or follows from the preceding sentences in the sequence by a rule of inference. It differs from a natural language argument in that it is rigorous, unambiguous and mechanically verifiable. If the set of assumptions is empty, then the last sentence in a formal proof is called a theorem of the formal system. The notion of theorem is not in general effective; therefore, there may be no method by which we can always find a proof of a given sentence or determine that none exists. The concepts of Fitch-style proof, sequent calculus and natural deduction are generalizations of the concept of proof.
The theorem is a syntactic consequence of all the well-formed formulas preceding it in the proof. For a well-formed formula to qualify as part of a proof, it must be the result of applying a rule of the deductive apparatus (of some formal system) to the previous well-formed formulas in the proof sequence.
Formal proofs are often constructed with the help of computers in interactive theorem proving (e.g., through the use of a proof checker or an automated theorem prover). Significantly, these proofs can be checked automatically, also by computer. Checking formal proofs is usually simple, while the problem of finding proofs (automated theorem proving) is usually computationally intractable and/or only semi-decidable, depending upon the formal system in use.
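To make the definition concrete, here is a toy sketch (an added illustration, not from the article) of a checker for a Hilbert-style system in which every line must be an axiom, an assumption, or follow from earlier lines by modus ponens:

```python
# Sketch: a toy checker for Hilbert-style proofs.  A formula is either an atom
# (a string) or an implication encoded as the tuple ('->', p, q).  Each proof
# line must be an axiom, an assumption, or follow by modus ponens.
def follows_by_modus_ponens(formula, earlier):
    """True if some earlier pair p and ('->', p, formula) licenses this line."""
    return any(
        isinstance(imp, tuple) and imp[0] == '->' and imp[1] == p and imp[2] == formula
        for p in earlier for imp in earlier
    )

def is_formal_proof(lines, axioms, assumptions):
    for i, formula in enumerate(lines):
        earlier = lines[:i]
        if formula in axioms or formula in assumptions:
            continue
        if follows_by_modus_ponens(formula, earlier):
            continue
        return False            # this line is not justified
    return True

# Example: from assumption A and axiom A -> B, the sequence [A, A -> B, B] proves B.
A, B = 'A', 'B'
proof = [A, ('->', A, B), B]
print(is_formal_proof(proof, axioms={('->', A, B)}, assumptions={A}))  # True
```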
Background
Formal language
A formal language is a set of finite sequences of symbols. Such a language can be defined without reference to any meanings of any of its expressions; it can exist before any interpretation is assigned to it – that is, before it has any meaning. Formal proofs are expressed in some formal languages.
Formal grammar
A formal grammar (also called formation rules) is a precise description of the well-formed for
|
https://en.wikipedia.org/wiki/Direct%20digital%20synthesis
|
Direct digital synthesis (DDS) is a method employed by frequency synthesizers used for creating arbitrary waveforms from a single, fixed-frequency reference clock. DDS is used in applications such as signal generation, local oscillators in communication systems, function generators, mixers, modulators, sound synthesizers and as part of a digital phase-locked loop.
Overview
A basic Direct Digital Synthesizer consists of a frequency reference (often a crystal or SAW oscillator), a numerically controlled oscillator (NCO) and a digital-to-analog converter (DAC) as shown in Figure 1.
The reference oscillator provides a stable time base for the system and determines the frequency accuracy of the DDS. It provides the clock to the NCO, which produces at its output a discrete-time, quantized version of the desired output waveform (often a sinusoid) whose period is controlled by the digital word contained in the Frequency Control Register. The sampled, digital waveform is converted to an analog waveform by the DAC. The output reconstruction filter rejects the spectral replicas produced by the zero-order hold inherent in the analog conversion process.
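A behavioral sketch of the NCO portion (an added simplification; the register widths and names are arbitrary choices): a phase accumulator is advanced by a frequency tuning word on each clock, and its top bits index a sine lookup table, so that Fout = FTW × Fclk / 2^N.

```python
# Sketch: behavioral model of a DDS numerically controlled oscillator.
# A 32-bit phase accumulator is advanced by a frequency tuning word (FTW)
# each reference-clock cycle; the top bits address a sine lookup table.
import math

ACC_BITS = 32
LUT_BITS = 10
LUT = [math.sin(2 * math.pi * i / 2**LUT_BITS) for i in range(2**LUT_BITS)]

def dds_samples(f_out: float, f_clk: float, n_samples: int):
    ftw = round(f_out / f_clk * 2**ACC_BITS)        # frequency control word
    phase = 0
    for _ in range(n_samples):
        yield LUT[phase >> (ACC_BITS - LUT_BITS)]   # sample sent to the DAC
        phase = (phase + ftw) & (2**ACC_BITS - 1)   # accumulator wraps -> periodic output

# Example: a 1 kHz tone from a 1 MHz reference clock.
samples = list(dds_samples(1_000, 1_000_000, 8))
print([round(s, 3) for s in samples])
```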
Performance
A DDS has many advantages over its analog counterpart, the phase-locked loop (PLL), including much better frequency agility, improved phase noise, and precise control of the output phase across frequency switching transitions. Disadvantages include spurious responses mainly due to truncation effects in the NCO, crossing spurs resulting from high order (>1) Nyquist images, and a higher noise floor at large frequency offsets due mainly to the digital-to-analog converter.
Because a DDS is a sampled system, in addition to the desired waveform at output frequency Fout, Nyquist images are also generated (the primary image is at Fclk-Fout, where Fclk is the reference clock frequency). In order to reject these undesired images, a DDS is generally used in conjunction with an analog reconstruction lowpass filter as shown
|
https://en.wikipedia.org/wiki/Netgear
|
Netgear, Inc. is an American computer networking company based in San Jose, California, with offices in about 22 other countries. It produces networking hardware for consumers, businesses, and service providers. The company operates in three business segments: retail, commercial, and as a service provider.
Netgear's products cover a variety of widely used technologies such as wireless (Wi-Fi, LTE and 5G), Ethernet and powerline, with a focus on reliability and ease-of-use. The products include wired and wireless devices for broadband access and network connectivity, and are available in multiple configurations to address the needs of the end-users in each geographic region and sector in which the company's products are sold.
As of 2020, Netgear products are sold in approximately 24,000 retail locations around the globe, and through approximately 19,000 value-added resellers, as well as multiple major cable, mobile and wireline service providers around the world.
History
Netgear was founded by Patrick Lo in 1996. Lo graduated from Brown University with a B.S. degree in electrical engineering. Prior to founding Netgear, Lo was a manager at Hewlett-Packard. Netgear received initial funding from Bay Networks.
The company was listed on the NASDAQ stock exchange in 2003.
Product range
Netgear's focus is primarily on the networking market, with products for home and business use, as well as pro-gaming, including wired and wireless technology.
Netgear also offers a wide range of Wi-Fi range extenders.
ProSAFE switches
Netgear markets network products for the business sector, most notably the ProSAFE switch range. Netgear provides limited lifetime warranties for ProSAFE products for as long as the original buyer owns the product. The range currently focuses on the multimedia segment and business products.
Network appliances
Netgear also markets network appliances for the business sector, including managed switches and wired and wireless VPN firewalls. In 2016, Netgear released
|
https://en.wikipedia.org/wiki/Bipolar%20coordinates
|
Bipolar coordinates are a two-dimensional orthogonal coordinate system based on the Apollonian circles. Confusingly, the same term is also sometimes used for two-center bipolar coordinates. There is also a third system, based on two poles (biangular coordinates).
The term "bipolar" is further used on occasion to describe other curves having two singular points (foci), such as ellipses, hyperbolas, and Cassini ovals. However, the term bipolar coordinates is reserved for the coordinates described here, and never used for systems associated with those other curves, such as elliptic coordinates.
Definition
The system is based on two foci F1 and F2. Referring to the figure at right, the σ-coordinate of a point P equals the angle F1 P F2, and the τ-coordinate equals the natural logarithm of the ratio of the distances d1 and d2 to the foci: τ = ln(d1/d2).
If, in the Cartesian system, the foci are taken to lie at F1 = (−a, 0) and F2 = (a, 0), the coordinates of the point P are x = a sinh τ / (cosh τ − cos σ), y = a sin σ / (cosh τ − cos σ).
The coordinate τ ranges from −∞ (for points close to F1) to ∞ (for points close to F2). The coordinate σ is only defined modulo 2π, and is best taken to range from −π to π, by taking it as the negative of the acute angle F1 P F2 if P is in the lower half plane.
Proof that coordinate system is orthogonal
The equations for x and y can be combined to give x + iy = a i cot((σ + iτ)/2), or equivalently x + iy = a coth((τ − iσ)/2).
This equation shows that σ and τ are the real and imaginary parts of an analytic function of x+iy (with logarithmic branch points at the foci), which in turn proves (by appeal to the general theory of conformal mapping) (the Cauchy-Riemann equations) that these particular curves of σ and τ intersect at right angles, i.e., it is an orthogonal coordinate system.
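A small numerical sketch (an added example) converting bipolar coordinates to Cartesian ones and checking the defining properties; the focal half-distance a and the test point are arbitrary:

```python
# Sketch: bipolar (sigma, tau) -> Cartesian (x, y), with foci F1=(-a,0), F2=(a,0).
import math

def bipolar_to_cartesian(sigma: float, tau: float, a: float = 1.0):
    d = math.cosh(tau) - math.cos(sigma)
    return a * math.sinh(tau) / d, a * math.sin(sigma) / d

a = 1.0
sigma, tau = 0.7, 0.3                      # arbitrary test values
x, y = bipolar_to_cartesian(sigma, tau, a)

d1 = math.hypot(x + a, y)                  # distance to F1
d2 = math.hypot(x - a, y)                  # distance to F2
print(math.log(d1 / d2), tau)              # both ~0.3: tau = ln(d1/d2)

# sigma is the angle F1-P-F2 at the point P
angle = math.acos(((-a - x) * (a - x) + y * y) / (d1 * d2))
print(angle, sigma)                        # both ~0.7
```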
Curves of constant σ and τ
The curves of constant σ correspond to non-concentric circles x² + (y − a cot σ)² = a²/sin²σ
that intersect at the two foci. The centers of the constant-σ circles lie on the y-axis at y = a cot σ, with radius a/sin σ. Circles of positive σ are centered above the x-axis, whereas those of negative σ lie below the axis. As the magnitude |σ|
|
https://en.wikipedia.org/wiki/PC12%20minicomputer
|
PC12 by Artronix was a minicomputer built with 7400-series TTL technology and ferrite core memory. Computers were manufactured at the Artronix facility in suburban St. Louis, Missouri.
The instruction set architecture was adapted from the LINC; the only significant change was to expand addressable memory to 4K, which required the addition of an origin register. It was an accumulator machine with 12-bit addresses to manipulate 12-bit data. Later versions included "origin registers" that were used to extend the addressability of memory. Arithmetic was one's complement.
For mass storage it had a LINCtape dual unit. It also used a Tektronix screen with tube memory and an ADC/DAC to capture and display images. There was an optional plotter to draw the results. To speed up the calculations it had a separate floating point unit that interfaced like any other peripheral.
It ran an operating system LAP6-PC with support for assembly language and Fortran programming and usually came with end user software for Radiation Treatment Planning (RTP), for use by a radiation therapist or radiation oncologist, and Hospital Patient Records. Software for implant dosimetry was available for the PC12. With extended hardware it became a multiuser system running MUMPS.
Later additions included an 8" floppy disk and a hard disk of larger capacity.
The PC12 initially controlled the Artronix brain scanner (computed axial tomography), but this was for prototyping.
The PC12 was also the core of an ultrasound system and a gamma camera system.
The PC12 was eventually superseded by the "Modulex" system built by Artronix around the 16-bit Lockheed SUE processor, around 1976. The PC12 continued in production, but was phased out over time.
Sites which used the Artronix PC12 included the Lutheran Hospital Cancer Center in Moline, Illinois, where it was used to store the medical records of patients undergoing treatment for cancer. A 1974 paper describes the use of a PC-12 as a frontend to an
|
https://en.wikipedia.org/wiki/Hamiltonian%20system
|
A Hamiltonian system is a dynamical system governed by Hamilton's equations. In physics, this dynamical system describes the evolution of a physical system such as a planetary system or an electron in an electromagnetic field. These systems can be studied in both Hamiltonian mechanics and dynamical systems theory.
Overview
Informally, a Hamiltonian system is a mathematical formalism developed by Hamilton to describe the evolution equations of a physical system. The advantage of this description is that it gives important insights into the dynamics, even if the initial value problem cannot be solved analytically. One example is the planetary movement of three bodies: while there is no closed-form solution to the general problem, Poincaré showed for the first time that it exhibits deterministic chaos.
Formally, a Hamiltonian system is a dynamical system characterised by the scalar function H(q, p, t), also known as the Hamiltonian. The state of the system, r, is described by the generalized coordinates p and q, corresponding to generalized momentum and position respectively. Both p and q are real-valued vectors with the same dimension N. Thus, the state is completely described by the 2N-dimensional vector
r = (q, p),
and the evolution equations are given by Hamilton's equations:
dq/dt = ∂H/∂p, dp/dt = −∂H/∂q.
The trajectory r(t) is the solution of the initial value problem defined by Hamilton's equations and the initial condition r(0) = r0.
Time-independent Hamiltonian systems
If the Hamiltonian is not explicitly time-dependent, i.e. if H(q, p, t) = H(q, p), then the Hamiltonian does not vary with time at all:
dH/dt = 0,
and thus the Hamiltonian is a constant of motion, whose constant equals the total energy of the system: H = E. Examples of such systems are the undamped pendulum, the harmonic oscillator, and dynamical billiards.
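As a numerical illustration of this conservation law, the sketch below integrates a harmonic oscillator with unit mass and unit spring constant (an assumed concrete choice, so that H(q, p) = (p² + q²)/2) using the semi-implicit Euler scheme; because the scheme is symplectic, the computed energy stays close to its initial value rather than drifting.

def hamiltonian(q, p):
    # Total energy of the assumed oscillator: kinetic plus potential part.
    return 0.5 * (p * p + q * q)

q, p = 1.0, 0.0            # initial condition r(0) = (q0, p0)
dt, steps = 1e-3, 100_000

for _ in range(steps):
    p -= dt * q            # dp/dt = -dH/dq = -q
    q += dt * p            # dq/dt =  dH/dp =  p  (using the updated p: symplectic Euler)

print(hamiltonian(1.0, 0.0), hamiltonian(q, p))   # both values stay close to 0.5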
Example
An example of a time-independent Hamiltonian system is the harmonic oscillator. Consider the system defined by the position q and the momentum p. Then the Hamiltonian is given by
H = p²/(2m) + kq²/2.
The Hamiltonian of this system does not depend on time an
|
https://en.wikipedia.org/wiki/Thermal%20interface%20material
|
A thermal interface material (often abbreviated as TIM) is any material that is inserted between two components in order to enhance the thermal coupling between them. A common use is heat dissipation, in which the TIM is inserted between a heat-producing device (e.g. an integrated circuit) and a heat-dissipating device (e.g. a heat sink). At each interface, a thermal resistance exists and impedes heat dissipation. In addition, the electronic performance and device lifetime can degrade dramatically under continuous overheating and large thermal stress at the interfaces. Therefore, there have been intensive efforts to develop improved TIMs, with the aim of minimizing the thermal boundary resistance between layers and enhancing thermal management performance, as well as tackling application requirements such as low thermal stress between materials of different thermal expansion coefficients, low elastic modulus or viscosity, flexibility, and reusability. Popularly used categories of TIMs include:
Thermal paste: Mostly used in the electronics industry, thermal pastes provide a very thin bond line and therefore a very small thermal resistance. They have no mechanical strength (other than the surface tension of the paste and the resulting adhesive effect) and require an external mechanical fixation mechanism. Because they do not cure, thermal pastes are typically only used where the material can be contained, or in thin applications where the viscosity of the paste will allow it to stay in position during use.
Thermal adhesive: As with thermal pastes, thermal adhesives provide a very thin bond line, but provide additional mechanical strength to the bond after curing. While curing TIMs like thermal adhesives may be used outside of a semiconductor package, often they are used inside a thermal package, as their curing properties can improve reliability over different thermal stresses. Thermal adhesives come in both single-part formulations as well as two-part formulat
|
https://en.wikipedia.org/wiki/Apache%20Flex
|
Apache Flex, formerly Adobe Flex, is a software development kit (SDK) for the development and deployment of cross-platform rich web applications based on the Adobe Flash platform. Initially developed by Macromedia and then acquired by Adobe Systems, Adobe donated Flex to the Apache Software Foundation in 2011 and it was promoted to a top-level project in December 2012.
The Flex 3 SDK was released under the MPL-1.1 license in 2008. Consequently, Flex applications can be developed using standard Integrated development environments (IDEs), such as IntelliJ IDEA, Eclipse, the free and open source IDE FlashDevelop, as well as the proprietary Adobe Flash Builder.
In 2014, the Apache Software Foundation started a new project called FlexJS to cross-compile ActionScript 3 to JavaScript to enable it to run on browsers that do not support Adobe Flash Player and on devices that do not support the Adobe AIR runtime. In 2017, FlexJS was renamed to Apache Royale. The Apache Software Foundation describes the current iteration of Apache Royale as an open-source frontend technology that allows a developer to code in ActionScript 3 and MXML and target web, mobile devices and desktop devices on Apache Cordova all at once. Apache Royale is currently in beta development stage.
Overview
Flex uses MXML to define UI layout and other non-visual static aspects, ActionScript to address dynamic aspects and as code-behind, and requires Adobe AIR or Flash Player at runtime to run the application.
Versions
Macromedia Flex 1.0 and 1.5
Macromedia targeted the enterprise application development market with its initial releases of Flex 1.0 and 1.5. The company offered the technology at a price around US$15,000 per CPU. Required for deployment, the Java EE application server compiled MXML and ActionScript on-the-fly into Flash applications (binary SWF files). Each server license included 5 licenses for the Flex Builder IDE.
Adobe Flex 2
Adobe significantly changed the licensing model for the
|
https://en.wikipedia.org/wiki/Downregulation%20and%20upregulation
|
In biochemistry, in the biological context of organisms' regulation of gene expression and production of gene products, downregulation is the process by which a cell decreases the production and quantities of its cellular components, such as RNA and proteins, in response to an external stimulus. The complementary process that involves increase in quantities of cellular components is called upregulation.
An example of downregulation is the cellular decrease in the expression of a specific receptor in response to its increased activation by a molecule, such as a hormone or neurotransmitter, which reduces the cell's sensitivity to the molecule. This is an example of a locally acting (negative feedback) mechanism.
An example of upregulation is the response of liver cells exposed to such xenobiotic molecules as dioxin. In this situation, the cells increase their production of cytochrome P450 enzymes, which in turn increases degradation of these dioxin molecules.
Downregulation or upregulation of an RNA or protein may also arise by an epigenetic alteration. Such an epigenetic alteration can cause expression of the RNA or protein to no longer respond to an external stimulus. This occurs, for instance, during drug addiction or progression to cancer.
Downregulation and upregulation of receptors
All living cells have the ability to receive and process signals that originate outside their membranes, which they do by means of proteins called receptors, often located at the cell's surface imbedded in the plasma membrane. When such signals interact with a receptor, they effectively direct the cell to do something, such as dividing, dying, or allowing substances to be created, or to enter or exit the cell. A cell's ability to respond to a chemical message depends on the presence of receptors tuned to that message. The more receptors a cell has that are tuned to the message, the more the cell will respond to it.
Receptors are created, or expressed, from instructions in the D
|
https://en.wikipedia.org/wiki/Metropolitan%20Area%20Outer%20Underground%20Discharge%20Channel
|
The is an underground water infrastructure project in Kasukabe, Saitama, Japan. It is the world's largest underground flood water diversion facility, built to mitigate overflowing of the city's major waterways and rivers during rain and typhoon seasons. It is located between Showa and Kasukabe in Saitama prefecture, on the outskirts of the city of Tokyo in the Greater Tokyo Area.
Work on the project started in 1992 and was completed by early 2006. It consists of five concrete containment silos with heights of and diameters of , connected by of tunnels, beneath the surface, as well as a large water tank with a height of , with a length of , with a width of , and with fifty-nine massive pillars connected to seventy-eight pumps that can pump up to of water into the Edo River per second.
See also
Tunnel and Reservoir Plan (in Chicago)
Basilica Cistern (in Istanbul)
Underground Construction
Stormwater
Sewerage
References
External links
(including photos)
Aqueducts in Japan
Buildings and structures in Saitama Prefecture
Flood control projects
Flood control in Japan
Geography of Saitama Prefecture
Macro-engineering
Science and technology in Japan
Tourist attractions in Saitama Prefecture
Water tunnels
Drainage tunnels
Tunnels in Japan
|
https://en.wikipedia.org/wiki/Ecocomposition
|
Ecocomposition is a way of looking at literacy using concepts from ecology. It is a postprocess theory of writing instruction that tries to account for factors beyond hierarchically defined goals within social settings; however, it does not dismiss these goals. Rather, it incorporates them within an ecological view that extends the range of factors affecting the writing process beyond the social to include aspects such as "place" and "nature." Its main motto, then, is "Writing Takes Place" (also the title of one of Sidney I. Dobrin's articles on ecocomposition).
The theory for ecocomposition dates back to Marilyn Cooper's 1986 essay "The Ecology of Writing" and Richard Coe's "Eco-Logic for the Composition Classroom" (1975). More recently, Dobrin and Weisser (2002) have assembled a more detailed theory of ecocomposition, placing it in relation to ecofeminism, ecocriticism, and environmental ethics. Other scholars (e.g., Reynolds, 2004) have shown its close proximity to social geography. According to ecofeminist scholar Greta Gaard (2001), "at its most inclusive, ecocomposition has the potential to address social issues such as feminism, environmental ethics, multiculturalism, politics, and economics, all by examining matters of form and style, audience and argumentation, and reliable sources and documentation" (p. 163).
Ecocomposition is one area of scholarly study discussed at the Conference on College Composition and Communication (CCCC), a national forum for writing instructors and scholars. As an educational endeavor, it is linked most closely with progressive education (Dewey, 1915), critical education (Giroux, 1987), and place-based education (Sobel, 2004).
Ecocomposition asks what effects a place has (or different places have) on the writing process. In what ways is our identity influenced by place, and what bearing does this have on our writing? What sets of relationships help us define our place—including the relationship
|
https://en.wikipedia.org/wiki/PowerDNS
|
PowerDNS is a DNS server program, written in C++ and licensed under the GPL. It runs on most Unix derivatives. PowerDNS features a large number of different backends ranging from simple BIND style zonefiles to relational databases and load balancing/failover algorithms. A DNS recursor is provided as a separate program.
History
PowerDNS development began in 1999 and was originally a commercial proprietary product. In November 2002, the source code was made public under the open-source GPL v2 license.
Features
PowerDNS Authoritative Server (pdns_server) consists of a single core, and multiple dynamically loadable backends that run multi-threaded. The core handles all packet processing and DNS intelligence, while one or more backends deliver DNS records using arbitrary storage methods.
Zone transfers and update notifications are supported, and the processes can run unprivileged and chrooted. Various caches are maintained to speed up query processing. Run-time control is available through the pdns_control command, which allows reloading of separate zones, purging of caches, sending of zone notifications, and dumping of statistics in Multi Router Traffic Grapher / rrdtool format. Real-time information can also be obtained through the optional built-in web server.
There are many independent projects to create management interfaces for PowerDNS.
DNSSEC
The PowerDNS Authoritative Server supports DNSSEC as of version 3.0. While pre-signed zones can be served, it is also possible to perform online signing and key management. This has the upside of being relatively easy, but the downside that the cryptographic keying material is present on the servers themselves (which is also true of any HTTPS server when not used with an HSM, for example).
Recursor
PowerDNS Recursor (pdns_recursor) is a resolving DNS server that runs as a separate process.
This part of PowerDNS uses a combination of native threads and user-space threads, through the use of Boost and the MTasker library, which is a simple coop
|
https://en.wikipedia.org/wiki/Degree%20symbol
|
The degree symbol or degree sign, °, is a glyph or symbol that is used, among other things, to represent degrees of arc (e.g. in geographic coordinate systems), hours (in the medical field), degrees of temperature or alcohol proof. The symbol consists of a small superscript circle.
History
The word degree is equivalent to Latin gradus which, since the medieval period, could refer to any stage in a graded system of ranks or steps. The number of the rank in question was indicated by ordinal numbers, in abbreviation with the ordinal indicator (a superscript o).
Use of "degree" specifically for the degrees of arc, used in conjunction with Arabic numerals, became common in the 16th century, but this was without the use of an ordinal marker or degree symbol.
Similarly, the introduction of the temperature scales with degrees in the 18th century was at first without such symbols, but with the word "gradus" spelled out. Use of the degree symbol was introduced for temperature in the later 18th century and became widespread in the early 19th century.
Antoine Lavoisier in his "Opuscles physiques et chymiques" (1774) used the ordinal indicator with Arabic numerals – for example, when he wrote in the introduction:
(p. vi)
(... a series of experiments [...] firstly, on the existence of that same elastic fluid [...])
The 1° is to be read as meaning "in the first place", followed by 2° ("in the second place"), etc. In the same work, when Lavoisier gives a temperature, he spells out the word "degree" explicitly, for example (p. 194): ("a temperature of 16 to 17 degrees of the thermometer").
An early use of the degree symbol proper is that by Henry Cavendish in 1776 for degrees of the Fahrenheit scale.
The degree symbol for degrees of temperature appears to have been transferred to the use for degrees of arc early in the 19th century. An early textbook using this notation is Charles Hutton, "A Course of Mathematics" vol. 1 (1836), page 383.
An earlier convention is found in Conrad M
|
https://en.wikipedia.org/wiki/Norton%20Utilities
|
Norton Utilities is a utility software suite designed to help analyze, configure, optimize and maintain a computer. The latest version of the original series, Norton Utilities 16 for Windows XP/Vista/7/8, was released on 26 October 2012.
Peter Norton published the first version for DOS, The Norton Utilities, Release 1, in 1982. Release 2 came out about a year later, subsequent to the first hard drives for the IBM PC line. Peter Norton's company was sold to Symantec (now known as Gen Digital) in 1990 and Peter Norton himself no longer has any connection to the brand or company.
Norton Utilities for DOS and Windows 3.1
Version 1.0
The initial 1982 release supports DOS 1.x and features the UNERASE utility. This allows files to be undeleted by restoring the first letter of the directory entry (a workaround of the FAT file system used in DOS). The UNERASE utility was what launched NU on its path to success. Quoting Peter Norton, "Why did The Norton Utilities become such popular software? Well, industry wisdom has it that software becomes standard either by providing superior capabilities or by solving problems that were previously unsolvable. In 1982, when I sat down at my PC to write Unerase, I was solving a common problem to which there was no readily available solution."
Fourteen programs are included on three floppy disks, at a list price of $80:
UnErase, recovers erased files
FileFix, repairs damaged files
DiskLook, complete floppy disk displays and maps
SecMod, easy changes to floppy disk sectors
FileHide, interactive hidden file control
BatHide, automatic hidden file control
TimeMark, displays date, time, elapsed time
ScrAtr, sets DOS to work in any colors
Reverse, work in black on white
Clear, clears the screen for clarity
FileSort, keeps floppy disk files by date or name
DiskOpt, speeds floppy disk access
Beep, causes the PC speaker to beep
Print, prints files
Version 2.0
The main feature of this DOS 2.x compatible version is FILEFIND, u
|
https://en.wikipedia.org/wiki/Conditional%20convergence
|
In mathematics, a series or integral is said to be conditionally convergent if it converges, but it does not converge absolutely.
Definition
More precisely, a series of real numbers a_1 + a_2 + a_3 + ... is said to converge conditionally if
lim (m → ∞) of (a_1 + a_2 + ... + a_m)
exists (as a finite real number, i.e. not ∞ or −∞), but
|a_1| + |a_2| + |a_3| + ... = ∞.
A classic example is the alternating harmonic series given by
1 − 1/2 + 1/3 − 1/4 + 1/5 − ...,
which converges to ln(2), but is not absolutely convergent (see Harmonic series).
Bernhard Riemann proved that a conditionally convergent series may be rearranged to converge to any value at all, including ∞ or −∞; see Riemann series theorem. The Lévy–Steinitz theorem identifies the set of values to which a series of terms in Rn can converge.
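The following sketch illustrates both statements numerically: the partial sums of the alternating harmonic series approach ln 2, and a greedy rearrangement of exactly the same terms can be steered toward an arbitrary target (1.5 is used here purely as an example).

import math

def alternating_harmonic_partial_sum(m):
    # 1 - 1/2 + 1/3 - 1/4 + ... up to m terms
    return sum((-1) ** (n + 1) / n for n in range(1, m + 1))

print(alternating_harmonic_partial_sum(10**6), math.log(2))   # nearly equal

def rearranged_sum(target, terms=10**6):
    # Greedy rearrangement: add unused positive terms 1/(2k-1) while below the
    # target, otherwise subtract unused negative terms 1/(2k).
    total, pos, neg = 0.0, 1, 2
    for _ in range(terms):
        if total <= target:
            total += 1.0 / pos
            pos += 2
        else:
            total -= 1.0 / neg
            neg += 2
    return total

print(rearranged_sum(1.5))   # close to 1.5, using exactly the same terms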
A typical conditionally convergent integral is that of sin(x²) on the non-negative real axis (see Fresnel integral).
See also
Absolute convergence
Unconditional convergence
References
Walter Rudin, Principles of Mathematical Analysis (McGraw-Hill: New York, 1964).
Mathematical series
Integral calculus
Convergence (mathematics)
Summability theory
|
https://en.wikipedia.org/wiki/Kuhn%20poker
|
Kuhn poker is a simplified form of poker developed by Harold W. Kuhn as a simple model zero-sum two-player imperfect-information game, amenable to a complete game-theoretic analysis. In Kuhn poker, the deck includes only three playing cards, for example a King, a Queen, and a Jack. One card is dealt to each player, and the players may then place bets similarly to standard poker. If both players bet or both players pass, the player with the higher card wins; otherwise, the betting player wins.
Game description
In conventional poker terms, a game of Kuhn poker proceeds as follows:
Each player antes 1.
Each player is dealt one of the three cards, and the third is put aside unseen.
Player one can check or bet 1.
If player one checks then player two can check or bet 1.
If player two checks there is a showdown for the pot of 2 (i.e. the higher card wins 1 from the other player).
If player two bets then player one can fold or call.
If player one folds then player two takes the pot of 3 (i.e. winning 1 from player 1).
If player one calls there is a showdown for the pot of 4 (i.e. the higher card wins 2 from the other player).
If player one bets then player two can fold or call.
If player two folds then player one takes the pot of 3 (i.e. winning 1 from player 2).
If player two calls there is a showdown for the pot of 4 (i.e. the higher card wins 2 from the other player).
Optimal strategy
The game has a mixed-strategy Nash equilibrium; when both players play equilibrium strategies, the first player should expect to lose at a rate of −1/18 per hand (as the game is zero-sum, the second player should expect to win at a rate of +1/18). There is no pure-strategy equilibrium.
Kuhn demonstrated there are infinitely many equilibrium strategies for the first player, forming a continuum governed by a single parameter. In one possible formulation, player one freely chooses the probability with which he will bet when having a Jack (otherwise he checks; if the other player bets, he should always fo
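The sketch below computes the exact expected payoff to the first player by enumerating the six possible deals under one commonly cited equilibrium parameterization (the specific betting and calling probabilities encoded here are an assumption, not taken from the text above); for every admissible value of the free parameter it returns −1/18.

from fractions import Fraction

# Assumed strategies: player one bets a Jack with probability a (0 <= a <= 1/3),
# a King with probability 3a, always checks a Queen; facing a bet after checking,
# player one always calls with a King, calls with a Queen with probability a + 1/3,
# and always folds a Jack. Player two, facing a bet, always calls with a King,
# calls with a Queen with probability 1/3 and folds a Jack; facing a check, it
# always bets a King, bluffs a Jack with probability 1/3 and checks a Queen.

J, Q, K = 0, 1, 2

def p1_bet(card, a):
    return {J: a, Q: Fraction(0), K: 3 * a}[card]

def p1_call(card, a):              # player one facing a bet after checking
    return {J: Fraction(0), Q: a + Fraction(1, 3), K: Fraction(1)}[card]

def p2_call(card):                 # player two facing a bet
    return {J: Fraction(0), Q: Fraction(1, 3), K: Fraction(1)}[card]

def p2_bet(card):                  # player two acting after a check
    return {J: Fraction(1, 3), Q: Fraction(0), K: Fraction(1)}[card]

def value(a):
    # Exact expected payoff to player one, averaged over the six equally likely deals.
    total = Fraction(0)
    deals = [(c1, c2) for c1 in (J, Q, K) for c2 in (J, Q, K) if c1 != c2]
    for c1, c2 in deals:
        win = 1 if c1 > c2 else -1
        # Branch: player one bets (win 1 on a fold, +/-2 at showdown).
        ev_bet = p2_call(c2) * 2 * win + (1 - p2_call(c2)) * 1
        # Branch: player one checks.
        bb = p2_bet(c2)
        ev_after_p2_bet = p1_call(c1, a) * 2 * win + (1 - p1_call(c1, a)) * (-1)
        ev_check = bb * ev_after_p2_bet + (1 - bb) * win
        total += p1_bet(c1, a) * ev_bet + (1 - p1_bet(c1, a)) * ev_check
    return total / len(deals)

for a in (Fraction(0), Fraction(1, 6), Fraction(1, 3)):
    print(a, value(a))             # prints -1/18 for every a in [0, 1/3]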
|
https://en.wikipedia.org/wiki/Anomaly-based%20intrusion%20detection%20system
|
An anomaly-based intrusion detection system is an intrusion detection system for detecting both network and computer intrusions and misuse by monitoring system activity and classifying it as either normal or anomalous. The classification is based on heuristics or rules, rather than patterns or signatures, and attempts to detect any type of misuse that falls out of normal system operation. This is as opposed to signature-based systems, which can only detect attacks for which a signature has previously been created.
In order to positively identify attack traffic, the system must be taught to recognize normal system activity. The two phases of a majority of anomaly detection systems consist of the training phase (where a profile of normal behaviors is built) and the testing phase (where current traffic is compared with the profile created in the training phase). Anomalies are detected in several ways, most often with artificial intelligence type techniques. Systems using artificial neural networks have been used to great effect. Another method is to define what normal usage of the system comprises using a strict mathematical model, and flag any deviation from this as an attack. This is known as strict anomaly detection. Other techniques used to detect anomalies include data mining methods, grammar-based methods, and artificial immune systems.
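A minimal sketch of the two-phase idea, assuming a single numeric traffic feature (say, requests per minute) and a simple statistical profile; the sample values and the three-standard-deviation threshold are illustrative choices, not a prescription.

import math

def train(normal_samples):
    # Training phase: build a profile of "normal" behavior (mean and standard deviation).
    n = len(normal_samples)
    mean = sum(normal_samples) / n
    var = sum((x - mean) ** 2 for x in normal_samples) / n
    return mean, math.sqrt(var)

def is_anomalous(value, profile, threshold=3.0):
    # Testing phase: flag observations far outside the learned profile.
    mean, std = profile
    return abs(value - mean) > threshold * std

profile = train([52, 48, 50, 47, 53, 49, 51, 50])   # observed normal activity
print(is_anomalous(51, profile))    # False: within the learned profile
print(is_anomalous(500, profile))   # True: flagged as anomalous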
Network-based anomalous intrusion detection systems often provide a second line of defense to detect anomalous traffic at the physical and network layers after it has passed through a firewall or other security appliance on the border of a network. Host-based anomalous intrusion detection systems are one of the last layers of defense and reside on computer end points. They allow for fine-tuned, granular protection of end points at the application level.
Anomaly-based Intrusion Detection at both the network and host levels have a few shortcomings; namely a high false-positive rate and the ability to be fooled by a correctly delivere
|
https://en.wikipedia.org/wiki/POWER1
|
The POWER1 is a multi-chip CPU developed and fabricated by IBM that implemented the POWER instruction set architecture (ISA). It was originally known as the RISC System/6000 CPU or, when in an abbreviated form, the RS/6000 CPU, before introduction of successors required the original name to be replaced with one that used the same naming scheme (POWERn) as its successors in order to differentiate it from the newer designs.
History
The POWER1 was introduced in 1990, with the introduction of the IBM RS/6000 POWERserver servers and POWERstation workstations, which featured the POWER1 clocked at 20, 25 or 30 MHz. The POWER1 received two upgrades, one in 1991, with the introduction of the POWER1+, and one in 1992, with the introduction of the POWER1++. These upgraded versions were clocked higher than the original POWER1, made possible by improved semiconductor processes. The POWER1+ was clocked slightly higher than the original POWER1, at frequencies of 25, 33 and 41 MHz, while the POWER1++ took the microarchitecture to its highest frequencies of 25, 33, 41.6, 45, 50 and 62.5 MHz. In September 1993, the POWER1 and its variants were succeeded by the POWER2 (known briefly as the "RIOS2"), an evolution of the POWER1 microarchitecture.
The direct derivatives of the POWER1 are the RISC Single Chip (RSC), a feature-reduced single-chip variant for entry-level RS/6000 systems, and the RAD6000, a radiation-hardened variant of the RSC for space applications. An indirect derivative of the POWER1 is the PowerPC 601, a feature-reduced variant of the RSC intended for consumer applications.
The POWER1 is notable as it represented a number of firsts for IBM and computing in general. It was IBM's first RISC processor intended for high-end applications (the ROMP was considered a commercial failure and was not used in high-end workstations); it was the first to implement the then-new POWER instruction set architecture; and it was IBM's first successful RISC processor. For computing firsts, the POWER1
|
https://en.wikipedia.org/wiki/Penetration%20test
|
A penetration test, colloquially known as a pentest or ethical hacking, is an authorized simulated cyberattack on a computer system, performed to evaluate the security of the system; this is not to be confused with a vulnerability assessment. The test is performed to identify weaknesses (or vulnerabilities), including the potential for unauthorized parties to gain access to the system's features and data, as well as strengths, enabling a full risk assessment to be completed.
The process typically identifies the target systems and a particular goal, then reviews available information and undertakes various means to attain that goal. A penetration test target may be a white box (about which background and system information are provided in advance to the tester) or a black box (about which only basic information other than the company name is provided). A gray box penetration test is a combination of the two (where limited knowledge of the target is shared with the auditor). A penetration test can help identify a system's vulnerabilities to attack and estimate how vulnerable it is.
Security issues that the penetration test uncovers should be reported to the system owner. Penetration test reports may also assess potential impacts to the organization and suggest countermeasures to reduce the risk.
The UK National Cyber Security Centre describes penetration testing as: "A method for gaining assurance in the security of an IT system by attempting to breach some or all of that system's security, using the same tools and techniques as an adversary might."
The goals of a penetration test vary depending on the type of approved activity for any given engagement, with the primary goal focused on finding vulnerabilities that could be exploited by a nefarious actor, and informing the client of those vulnerabilities along with recommended mitigation strategies.
Penetration tests are a component of a full security audit. For example, the Payment Card Industry Data Security Sta
|
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20gyrocupolarotunda
|
In geometry, the elongated pentagonal gyrocupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal gyrocupolarotunda () by inserting a decagonal prism between its halves. Rotating either the pentagonal cupola () or the pentagonal rotunda () through 36 degrees before inserting the prism yields an elongated pentagonal orthocupolarotunda ().
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20orthocupolarotunda
|
In geometry, the elongated pentagonal orthocupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by elongating a pentagonal orthocupolarotunda () by inserting a decagonal prism between its halves. Rotating either the cupola or the rotunda through 36 degrees before inserting the prism yields an elongated pentagonal gyrocupolarotunda ().
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20cupolarotunda
|
In geometry, the gyroelongated pentagonal cupolarotunda is one of the Johnson solids (). As the name suggests, it can be constructed by gyroelongating a pentagonal cupolarotunda ( or ) by inserting a decagonal antiprism between its two halves.
The gyroelongated pentagonal cupolarotunda is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each pentagonal face on the bottom half of the figure is connected by a path of two triangular faces to a square face above it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom pentagon would be connected to a square face above it and to the right. The two chiral forms of are not considered different Johnson solids.
Area and Volume
With edge length a, the surface area is
and the volume is
External links
Johnson solids
Chiral polyhedra
|
https://en.wikipedia.org/wiki/Augmented%20truncated%20tetrahedron
|
In geometry, the augmented truncated tetrahedron is one of the Johnson solids (). It is created by attaching a triangular cupola () to one hexagonal face of a truncated tetrahedron.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Comodule
|
In mathematics, a comodule or corepresentation is a concept dual to a module. The definition of a comodule over a coalgebra is formed by dualizing the definition of a module over an associative algebra.
Formal definition
Let K be a field, and C be a coalgebra over K. A (right) comodule over C is a K-vector space M together with a linear map
ρ: M → M ⊗ C
such that
(id_M ⊗ Δ) ∘ ρ = (ρ ⊗ id_C) ∘ ρ and (id_M ⊗ ε) ∘ ρ = id_M,
where Δ is the comultiplication for C, and ε is the counit.
Note that in the second rule we have identified M ⊗ K with M.
Examples
A coalgebra is a comodule over itself.
If M is a finite-dimensional module over a finite-dimensional K-algebra A, then the set of linear functions from A to K forms a coalgebra, and the set of linear functions from M to K forms a comodule over that coalgebra.
A graded vector space V can be made into a comodule. Let I be the index set for the graded vector space, and let C_I be the vector space with basis e_i for i in I. We turn C_I into a coalgebra and V into a C_I-comodule, as follows:
Let the comultiplication on C_I be given by Δ(e_i) = e_i ⊗ e_i.
Let the counit on C_I be given by ε(e_i) = 1.
Let the map ρ on V be given by ρ(v) = Σ v_i ⊗ e_i, where v_i is the i-th homogeneous piece of v.
In algebraic topology
One important result in algebraic topology is the fact that homology over the dual Steenrod algebra forms a comodule. This comes from the fact that the Steenrod algebra has a canonical action on the cohomology of a space. When we dualize to the dual Steenrod algebra, this gives the homology a comodule structure. This result extends to other cohomology theories as well, such as complex cobordism, and is instrumental in computing its cohomology ring. The main reason for considering the comodule structure on homology instead of the module structure on cohomology lies in the fact that the dual Steenrod algebra is a commutative ring, and the setting of commutative algebra provides more tools for studying its structure.
Rational comodule
If M is a (right) comodule over the coalgebra C, then M is a (left) module over the dual algebra C∗, but the converse is not true in general: a
|
https://en.wikipedia.org/wiki/Midparent
|
The midparent value is defined as the average of the father's trait value and a scaled version of the mother's. This value can be used to analyze a data set without heeding sex effects. Studying quantitative traits in heritability studies may be complicated by sex differences observed for the trait.
Well-known examples include studies of stature height, whose midparent value hmp is given by:
hmp = (hf + 1.08 hm) / 2,
where hf and hm are, respectively, the father's and mother's heights.
The coefficient 1.08 serves as a scaling factor. After the 1.08 scaling, the mean of the mother's height is the same as that of the father's, and the variance is closer to the father's; in this way, sex difference can be ignored.
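A tiny worked example of the formula, with illustrative heights in centimetres:

def midparent_height(father_cm, mother_cm, scale=1.08):
    # Average of the father's height and the mother's height scaled by 1.08.
    return (father_cm + scale * mother_cm) / 2

print(midparent_height(180.0, 165.0))   # 179.1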
References
Genetics
|
https://en.wikipedia.org/wiki/Multivision%20%28television%20technology%29
|
MultiVision was one of the earliest implementations of PIP (picture-in-picture) television available for purchase by users, pioneered by engineer George Schnurle III and sold by the San Jose, California-based company Multivision Products Inc.
The original MultiVision model was a box that measured by and was high. It required a VCR to operate and used its own tuner and the VCR to display two television channels. The television antenna was plugged into the MultiVision unit, which was then plugged into the television receiver's antenna input. The program selected on the MultiVision tuner was displayed in a small window inserted into the main TV picture at a position selected by the user. It also functioned as a switching device to connect additional peripherals (such as a laserdisc player) and offered audio outputs to connect external speakers and provide stereo sound. For monaural broadcasts and VHS tapes, the device could provide synthesized stereo audio.
The MultiVision 3.1 model was an unusually shaped device, similar in size to the original, that lacked any form of controls on the device itself. It used its own two tuners and/or a VCR and/or other devices to display two video sources at once. The tunerless MultiVision 1.1 model looked virtually identical to the 3.1 except in rear view, and featured 4 composite, plus left and right audio input sets, plus switchable external audio and video processor loops. Both provided composite and left and right audio outputs for TV input.
On the 1.1 and 3.1 models, the audio could be set in sync to either the main source or the PIP or selected independently. The 1.1 model's remote had 12 color-coded buttons, 4 each for the main picture, PIP picture, and audio, and like the 3.1's remote included other buttons for swapping main and inset picture, PIP on/off, PIP size, PIP position, audio sync on or off, mute, and more. Their remotes featured angled output ends, which facilitated accurate button selection whilst reclined.
R
|
https://en.wikipedia.org/wiki/Superposition%20principle
|
The superposition principle, also known as superposition property, states that, for all linear systems, the net response caused by two or more stimuli is the sum of the responses that would have been caused by each stimulus individually. Thus, if input A produces response X and input B produces response Y, then input (A + B) produces response (X + Y).
A function that satisfies the superposition principle is called a linear function. Superposition can be defined by two simpler properties: additivity
F(x1 + x2) = F(x1) + F(x2)
and homogeneity
F(a x) = a F(x)
for scalar a.
This principle has many applications in physics and engineering because many physical systems can be modeled as linear systems. For example, a beam can be modeled as a linear system where the input stimulus is the load on the beam and the output response is the deflection of the beam. The importance of linear systems is that they are easier to analyze mathematically; there is a large body of mathematical techniques, frequency domain linear transform methods such as Fourier and Laplace transforms, and linear operator theory, that are applicable. Because physical systems are generally only approximately linear, the superposition principle is only an approximation of the true physical behavior.
The superposition principle applies to any linear system, including algebraic equations, linear differential equations, and systems of equations of those forms. The stimuli and responses could be numbers, functions, vectors, vector fields, time-varying signals, or any other object that satisfies certain axioms. Note that when vectors or vector fields are involved, a superposition is interpreted as a vector sum. If the superposition holds, then it automatically also holds for all linear operations applied on these functions (due to definition), such as gradients, differentials or integrals (if they exist).
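The sketch below checks both defining properties for a simple linear system, a three-point moving-average filter; the filter and the input signals are illustrative choices.

def moving_average(signal, width=3):
    # Linear system: each output sample is the mean of the current and previous samples.
    padded = [0.0] * (width - 1) + list(signal)
    return [sum(padded[i:i + width]) / width for i in range(len(signal))]

a = [1.0, 2.0, 3.0, 4.0]
b = [0.5, -1.0, 0.0, 2.0]
k = 2.5

# Additivity: response to (a + b) equals response to a plus response to b.
lhs = moving_average([x + y for x, y in zip(a, b)])
rhs = [x + y for x, y in zip(moving_average(a), moving_average(b))]
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))   # True

# Homogeneity: response to (k * a) equals k times the response to a.
lhs = moving_average([k * x for x in a])
rhs = [k * x for x in moving_average(a)]
print(all(abs(x - y) < 1e-12 for x, y in zip(lhs, rhs)))   # True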
Relation to Fourier analysis and similar methods
By writing a very general stimulus (in a linear system) as the superposition of stimuli of a s
|
https://en.wikipedia.org/wiki/Road%20Fighter
|
is a racing arcade video game developed by Konami and released in 1984, and it was the first racing game from the company. The goal is to reach the finish line within the stages without running out of time, hitting other cars or running out of fuel (which is refilled by hitting a special type of car). The game spawned a spiritual successor, Konami GT (1986), and two sequels, Midnight Run: Road Fighter 2 (1995) and Winding Heat (1996). A Japan-only sequel was also released 14 years later, Road Fighters (2010).
Gameplay
The first two levels contain four courses, ranging from grassy plains to an over-water bridge to a seashore, mountains and finally a forest area; the arcade version contained six stages. The player controls a red Chevrolet Corvette: pressing the B button accelerates the car to around 224 km/h, while the A button increases the speed to around 400 km/h. The player has a limited amount of fuel points (equal to about 100 seconds) and can earn more by touching special multi-colored cars. If the player collides with any other car or slips on occasionally appearing patches of oil, the car will spin out and, if not corrected, may crash into the side barriers, causing a loss of five to six fuel points. The NES and Famicom versions have a total of six types of opponents: one yellow, one red, three blue and one truck. Yellow cars travel along a straight line and occur in large numbers. Red cars are less likely to appear, but they will change the lane they are travelling in once to get in the way of the player. Blue cars are the game's main "enemies"; they vary in the way they change lanes and attempt to hit the player. Trucks travel in a straight line, but colliding with them instantly destroys the player's car. Konami Man makes a cameo appearance, flying by the side of the road, if the player progresses to a certain point in the level without crashing (not included on course two in the NES and Famicom versions).
Ports
The game was later released for the MSX home computer syste
|
https://en.wikipedia.org/wiki/Pulse%20generator
|
A pulse generator is either an electronic circuit or a piece of electronic test equipment used to generate rectangular pulses. Pulse generators are used primarily for working with digital circuits; related function generators are used primarily for analog circuits.
Bench pulse generators
Simple bench pulse generators usually allow control of the pulse repetition rate (frequency), pulse width, delay with respect to an internal or external trigger and the high- and low-voltage levels of the pulses. More sophisticated pulse generators may allow control over the rise time and fall time of the pulses. Pulse generators are available for generating output pulses having widths (duration) ranging from minutes to under 1 picosecond.
Pulse generators are generally voltage sources, with true current pulse generators being available only from a few suppliers.
Pulse generators may use digital techniques, analog techniques, or a combination of both techniques to form the output pulses. For example, the pulse repetition rate and duration may be digitally controlled but the pulse amplitude and rise and fall times may be determined by analog circuitry in the output stage of the pulse generator. With correct adjustment, pulse generators can also produce a 50% duty cycle square wave. Pulse generators are generally single-channel, providing one frequency, delay, width and output.
Optical pulse generators
Light pulse generators are the optical equivalent to electrical pulse generators with rep rate, delay, width and amplitude control. The output in this case is light, typically from a LED or laser diode.
Multiple-channels
A new family of pulse generators can produce multiple channels of independent widths and delays and independent outputs and polarities. Often called digital delay/pulse generators, the newest designs even offer differing repetition rates with each channel. These digital delay generators are useful in synchronizing, delaying, gating and triggering multiple devi
|
https://en.wikipedia.org/wiki/Round%20number
|
A round number is an integer that ends with one or more "0"s (zero-digit) in a given base. So, 590 is rounder than 592, but 590 is less round than 600. In both technical and informal language, a round number is often interpreted to stand for a value or values near to the nominal value expressed. For instance, a round number such as 600 might be used to refer to a value whose magnitude is actually 592, because the actual value is more cumbersome to express exactly. Likewise, a round number may refer to a range of values near the nominal value that expresses imprecision about a quantity. Thus, a value reported as 600 might actually represent any value near 600, possibly as low as 550 or as high as 650, all of which would round to 600.
In decimal notation, a number ending in the digit "5" is also considered more round than one ending in another non-zero digit (but less round than any which ends with "0"). For example, the number 25 tends to be seen as more round than 24. Thus someone might say, upon turning 45, that their age is more round than when they turn 44 or 46. These notions of roundness are also often applied to non-integer numbers; so, in any given base, 2.3 is rounder than 2.297, because 2.3 can be written as 2.300. Thus, a number with fewer digits which are not trailing "0"s is considered to be rounder than others of the same or greater precision.
Numbers can also be considered "round" in numbering systems other than decimal (base 10). For example, the number 1024 would not be considered round in decimal, but the same number ends with a zero in several other numbering systems including binary (base 2: 10000000000), octal (base 8: 2000), and hexadecimal (base 16: 400). The previous discussion about the digit "5" generalizes to the digit representing b/2 for base-b notation, if b is even.
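A short sketch making the base-dependence concrete: counting trailing zero digits of 1024 in several bases.

def trailing_zeros(n, base):
    # Count how many trailing zero digits n has when written in the given base,
    # i.e. how many times n is divisible by the base.
    count = 0
    while n != 0 and n % base == 0:
        n //= base
        count += 1
    return count

for base in (10, 2, 8, 16):
    print(base, trailing_zeros(1024, base))
# 1024 has no trailing zeros in base 10, but ten in base 2 (10000000000),
# three in base 8 (2000) and two in base 16 (400).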
Psychology and sociology
Psychologically, round numbers form waypoints in pricing and negotiation. So, starting salaries are usually round numbers. Prices are often pitc
|
https://en.wikipedia.org/wiki/KeyKOS
|
KeyKOS is a persistent, pure capability-based operating system for the IBM S/370 mainframe computers. It allows emulating the environments of VM, MVS, and Portable Operating System Interface (POSIX). It is a predecessor of the Extremely Reliable Operating System (EROS), and its successor operating systems, CapROS, and Coyotos. KeyKOS is a nanokernel-based operating system.
In the mid-1970s, development of KeyKOS began at Tymshare, Inc., under the name GNOSIS. In 1984, McDonnell Douglas (MD) bought Tymshare. A year later MD spun off Key Logic, which bought GNOSIS and renamed it KeyKOS.
References
External links
, Norman Hardy
GNOSIS: A Prototype Operating System for the 1990s, a 1979 paper, Tymshare Inc.
KeyKOS - A Secure, High-Performance Environment for S/370, a 1988 paper, Key Logic, Inc.
Nanokernels
Capability systems
Microkernel-based operating systems
|
https://en.wikipedia.org/wiki/Homotopy%20sphere
|
In algebraic topology, a branch of mathematics, a homotopy sphere is an n-manifold that is homotopy equivalent to the n-sphere. It thus has the same homotopy groups and the same homology groups as the n-sphere, and so every homotopy sphere is necessarily a homology sphere.
The topological generalized Poincaré conjecture is that any n-dimensional homotopy sphere is homeomorphic to the n-sphere; it was solved by Stephen Smale in dimensions five and higher, by Michael Freedman in dimension 4, and for dimension 3 (the original Poincaré conjecture) by Grigori Perelman in 2005.
The resolution of the smooth Poincaré conjecture in dimensions 5 and larger implies that homotopy spheres in those dimensions are precisely exotic spheres. It is still an open question whether or not there are non-trivial smooth homotopy spheres in dimension 4.
See also
Homology sphere
Homotopy groups of spheres
Poincaré conjecture
References
External links
Homotopy theory
Topological spaces
|
https://en.wikipedia.org/wiki/Voigt%20profile
|
The Voigt profile (named after Woldemar Voigt) is a probability distribution given by a convolution of a Cauchy-Lorentz distribution and a Gaussian distribution. It is often used in analyzing data from spectroscopy or diffraction.
Definition
Without loss of generality, we can consider only centered profiles, which peak at zero. The Voigt profile is then
V(x; σ, γ) = ∫ G(x′; σ) L(x − x′; γ) dx′   (the integral taken over all x′),
where x is the shift from the line center, G(x; σ) is the centered Gaussian profile:
G(x; σ) = exp(−x² / (2σ²)) / (σ √(2π)),
and L(x; γ) is the centered Lorentzian profile:
L(x; γ) = γ / (π (x² + γ²)).
The defining integral can be evaluated as:
V(x; σ, γ) = Re[w(z)] / (σ √(2π)),
where Re[w(z)] is the real part of the Faddeeva function evaluated for
z = (x + iγ) / (σ √2).
In the limiting cases of σ = 0 and γ = 0, V(x; σ, γ) simplifies to L(x; γ) and G(x; σ), respectively.
History and applications
In spectroscopy, a Voigt profile results from the convolution of two broadening mechanisms, one of which alone would produce a Gaussian profile (usually, as a result of the Doppler broadening), and the other would produce a Lorentzian profile. Voigt profiles are common in many branches of spectroscopy and diffraction. Due to the expense of computing the Faddeeva function, the Voigt profile is sometimes approximated using a pseudo-Voigt profile.
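A minimal sketch of the direct evaluation through the Faddeeva function, using SciPy's wofz routine; the line-width parameters and the numerical normalization check are illustrative choices.

import numpy as np
from scipy.special import wofz   # Faddeeva function w(z)

def voigt(x, sigma, gamma):
    # Centered Voigt profile: Re[w(z)] / (sigma * sqrt(2*pi)) with z = (x + i*gamma) / (sigma*sqrt(2)).
    z = (x + 1j * gamma) / (sigma * np.sqrt(2.0))
    return np.real(wofz(z)) / (sigma * np.sqrt(2.0 * np.pi))

# Numerical normalization check: the area should be close to 1
# (the slowly decaying Lorentzian tails account for the small deficit).
x = np.linspace(-200.0, 200.0, 400001)
area = np.sum(voigt(x, sigma=1.0, gamma=0.5)) * (x[1] - x[0])
print(area)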
Properties
The Voigt profile is normalized:
∫ V(x; σ, γ) dx = 1,
since it is a convolution of normalized profiles. The Lorentzian profile has no moments (other than the zeroth), and so the moment-generating function for the Cauchy distribution is not defined. It follows that the Voigt profile will not have a moment-generating function either, but the characteristic function for the Cauchy distribution is well defined, as is the characteristic function for the normal distribution. The characteristic function for the (centered) Voigt profile will then be the product of the two:
φ(t; σ, γ) = exp(−σ²t²/2 − γ|t|).
Since normal distributions and Cauchy distributions are stable distributions, they are each closed under convolution (up to change of scale), and it follows that the Voigt distributions are also closed under convolution.
Cumulative distribution function
Using the above definition for z ,
|
https://en.wikipedia.org/wiki/Relational%20operator
|
In computer science, a relational operator is a programming language construct or operator that tests or defines some kind of relation between two entities. These include numerical equality (e.g., 5 = 5) and inequalities (e.g., 4 ≥ 3).
In programming languages that include a distinct boolean data type in their type system, like Pascal, Ada, or Java, these operators usually evaluate to true or false, depending on whether the conditional relationship between the two operands holds or not. In languages such as C, relational operators return the integers 0 or 1, where 0 stands for false and any non-zero value stands for true.
An expression created using a relational operator forms what is termed a relational expression or a condition. Relational operators can be seen as special cases of logical predicates.
Equality
Usage
Equality is used in many programming language constructs and data types. It is used to test if an element already exists in a set, or to access a value through a key. It is used in switch statements to dispatch the control flow to the correct branch, and during the unification process in logic programming.
One possible meaning of equality is that "if a equals b, then either a or b can be used interchangeably in any context without noticing any difference." But this statement does not necessarily hold, particularly when taking into account mutability together with content equality.
Location equality vs. content equality
Sometimes, particularly in object-oriented programming, the comparison raises questions of data types and inheritance, equality, and identity. It is often necessary to distinguish between:
two different objects of the same type, e.g., two hands
two objects being equal but distinct, e.g., two $10 banknotes
two objects being equal but having different representation, e.g., a $1 bill and a $1 coin
two different references to the same object, e.g., two nicknames for the same person
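A small sketch of these distinctions in Python, where == tests content equality (as defined by the class) and "is" tests whether two references point to the same object; the Banknote class is a hypothetical illustration.

class Banknote:
    def __init__(self, value, serial):
        self.value = value
        self.serial = serial

    def __eq__(self, other):
        # Content equality: two banknotes are "equal" if they have the same face value.
        return isinstance(other, Banknote) and self.value == other.value

a = Banknote(10, "A123")
b = Banknote(10, "B456")
c = a                       # a second reference to the same object

print(a == b)   # True: equal but distinct objects (content equality)
print(a is b)   # False: different objects in memory (location equality)
print(a is c)   # True: two references to the same object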
In many modern programming languages, objects and data struct
|
https://en.wikipedia.org/wiki/Personal%20software%20process
|
The Personal Software Process (PSP) is a structured software development process that is designed to help software engineers better understand and improve their performance by bringing discipline to the way they develop software and tracking their predicted and actual development of the code. It clearly shows developers how to manage the quality of their products, how to make a sound plan, and how to make commitments. It also offers them the data to justify their plans. They can evaluate their work and suggest improvement direction by analyzing and reviewing development time, defects, and size data. The PSP was created by Watts Humphrey to apply the underlying principles of the Software Engineering Institute's (SEI) Capability Maturity Model (CMM) to the software development practices of a single developer. It claims to give software engineers the process skills necessary to work on a team software process (TSP) team.
"Personal Software Process" and "PSP" are registered service marks of the Carnegie Mellon University.
Objectives
The PSP aims to provide software engineers with disciplined methods for improving personal software development processes. The PSP helps software engineers to:
Improve their estimating and planning skills.
Make commitments they can keep.
Manage the quality of their projects.
Reduce the number of defects in their work.
PSP structure
PSP training follows an evolutionary improvement approach: an engineer learning to integrate the PSP into his or her process begins at the first level – PSP0 – and progresses in process maturity to the final level – PSP2.1. Each Level has detailed scripts, checklists and templates to guide the engineer through required steps and helps the engineer improve their own personal software process. Humphrey encourages proficient engineers to customize these scripts and templates as they gain an understanding of their own strengths and weaknesses.
Process
The input to PSP is the requirements; requirements docume
|
https://en.wikipedia.org/wiki/MySQL%20Cluster
|
MySQL Cluster is a technology providing shared-nothing clustering and auto-sharding for the MySQL database management system. It is designed to provide high availability and high throughput with low latency, while allowing for near linear scalability. MySQL Cluster is implemented through the NDB or NDBCLUSTER storage engine for MySQL ("NDB" stands for Network Database).
Architecture
MySQL Cluster is designed around a distributed, multi-master ACID compliant architecture with no single point of failure. MySQL Cluster uses automatic sharding (partitioning) to scale out read and write operations on commodity hardware and can be accessed via SQL and non-SQL (NoSQL) APIs.
Replication
Internally MySQL Cluster uses synchronous replication through a two-phase commit mechanism in order to guarantee that data is written to multiple nodes upon committing the data. (This is in contrast to what is usually referred to as "MySQL Replication", which is asynchronous.) Two copies (known as replicas) of the data are required to guarantee availability. MySQL Cluster automatically creates “node groups” from the number of replicas and data nodes specified by the user. Updates are synchronously replicated between members of the node group to protect against data loss and support fast failover between nodes.
It is also possible to replicate asynchronously between clusters; this is sometimes referred to as "MySQL Cluster Replication" or "geographical replication". This is typically used to replicate clusters between data centers for disaster recovery or to reduce the effects of network latency by locating data physically closer to a set of users. Unlike standard MySQL replication, MySQL Cluster's geographic replication uses optimistic concurrency control and the concept of Epochs to provide a mechanism for conflict detection and resolution, enabling active/active clustering between data centers.
Starting with MySQL Cluster 7.2, synchronous replication between data centers was supported
|
https://en.wikipedia.org/wiki/Virtual%20herbarium
|
In botany, a virtual herbarium is a herbarium in a digitized form. That is, it concerns a collection of digital images of preserved plants or plant parts. Virtual herbaria often are established to improve availability of specimens to a wider audience. However, there are digital herbaria that are not suitable for internet access because of the high resolution of scans and resulting large file sizes (several hundred megabytes per file). Additional information about each specimen, such as the location, the collector, and the botanical name are attached to every specimen. Frequently, further details such as related species and growth requirements are mentioned.
Specimen imaging
The standard hardware used for herbarium specimen imaging is the "HerbScan" scanner. It is an inverted flat-bed scanner which raises the specimen up to the scanning surface. This technology was developed because it is standard practice to never turn a herbarium specimen upside-down. Alternatively, some herbaria employ a flat-bed book scanner or a copy stand to achieve the same effect.
A small color chart and a ruler must be included on a herbarium sheet when it is imaged. The JSTOR Plant Science requires that the ruler bears the herbarium name and logo, and that a ColorChecker chart is used for any specimens to be contributed to the Global Plants Initiative (GPI).
Uses
Virtual herbaria are established in part to increase the longevity of specimens. Major herbaria participate in international loan programs, where a researcher can request specimens to be shipped in for study. This shipping contributes to the wear and tear of specimens. If, however, digital images are available, images of the specimens can be sent electronically. These images may be a sufficient substitute for the specimens themselves, or alternatively, the researcher can use the images to "preview" the specimens, to decide which ones should be sent out for further study. This process cuts down on the shipping, and thus the wear and
|
https://en.wikipedia.org/wiki/AMD%20K8
|
The AMD K8 Hammer, also code-named SledgeHammer, is a computer processor microarchitecture designed by AMD as the successor to the AMD K7 Athlon microarchitecture. The K8 was the first implementation of the AMD64 64-bit extension to the x86 instruction set architecture.
Features
Processors
Processors based on the K8 core include:
Athlon 64 - first 64-bit consumer desktop
Athlon 64 X2 - first dual-core ('X2') desktop
Athlon X2 - later model dual-core desktop with '64' omitted
Athlon 64 FX - enthusiast desktop (multipliers unlocked)
Sempron - low-end, low-cost desktop
Opteron - server market
Turion 64 - mobile computing market
Turion 64 X2 - dual-core mobile processor
The K8 core is very similar to the K7. The most radical change is the integration of the AMD64 instructions and an on-chip memory controller. The memory controller drastically reduces memory latency and is largely responsible for most of the performance gains from K7 to K8.
Nomenclature
It is perceived by the PC community that after the use of the codename K8 for the Athlon 64 processor family, AMD no longer uses K-nomenclatures (which originally stood for Kryptonite) since no K-nomenclature naming convention beyond K8 has appeared in official AMD documents and press releases after the beginning of 2005. AMD now refers to the codename K8 processors as the Family 0Fh processors. 10h and 0Fh refer to the main result of the CPUID x86 processor instruction. In hexadecimal numbering, 0F(h) (where the h represents hexadecimal numbering) equals the decimal number 15, and 10(h) equals the decimal number 16. (The "K10h" form that sometimes pops up is an improper hybrid of the "K" code and Family identifier number.)
See also
List of AMD Athlon 64 processors - desktop
List of AMD Athlon X2 processors - desktop
List of AMD Sempron processors - low end
List of AMD Opteron processors - server
List of AMD Turion processors - mobile
AMD K9
AMD 10h
Jim Keller (engineer)
References
K08
AMD mic
|
https://en.wikipedia.org/wiki/Constructed%20wetland
|
A constructed wetland is an artificial wetland to treat sewage, greywater, stormwater runoff or industrial wastewater. It may also be designed for land reclamation after mining, or as a mitigation step for natural areas lost to land development. Constructed wetlands are engineered systems that use the natural functions of vegetation, soil, and organisms to provide secondary treatment to wastewater. The design of the constructed wetland has to be adjusted according to the type of wastewater to be treated. Constructed wetlands have been used in both centralized and decentralized wastewater systems. Primary treatment is recommended when there is a large amount of suspended solids or soluble organic matter (measured as biochemical oxygen demand and chemical oxygen demand).
Similar to natural wetlands, constructed wetlands also act as a biofilter and/or can remove a range of pollutants (such as organic matter, nutrients, pathogens, heavy metals) from the water. Constructed wetlands are designed to remove water pollutants such as suspended solids, organic matter and nutrients (nitrogen and phosphorus). All types of pathogens (i.e., bacteria, viruses, protozoans and helminths) are expected to be removed to some extent in a constructed wetland. Subsurface wetlands provide greater pathogen removal than surface wetlands.
There are two main types of constructed wetlands: subsurface flow and surface flow. The planted vegetation plays an important role in contaminant removal. The filter bed, consisting usually of sand and gravel, has an equally important role to play. Some constructed wetlands may also serve as a habitat for native and migratory wildlife, although that is not their main purpose. Subsurface flow constructed wetlands are designed to have either horizontal flow or vertical flow of water through the gravel and sand bed. Vertical flow systems have a smaller space requirement than horizontal flow systems.
Terminology
Many terms are used to denote constructed wetl
|
https://en.wikipedia.org/wiki/Level-set%20method
|
Level-set methods (LSM) are a conceptual framework for using level sets as a tool for numerical analysis of surfaces and shapes. The advantage of the level-set model is that one can perform numerical computations involving curves and surfaces on a fixed Cartesian grid without having to parameterize these objects (this is called the Eulerian approach). Also, the level-set method makes it very easy to follow shapes that change topology, for example, when a shape splits in two, develops holes, or the reverse of these operations. All these make the level-set method a great tool for modeling time-varying objects, like inflation of an airbag, or a drop of oil floating in water.
The figure on the right illustrates several important ideas about the level-set method. In the upper-left corner we see a shape; that is, a bounded region with a well-behaved boundary. Below it, the red surface is the graph of a level set function φ determining this shape, and the flat blue region represents the xy plane. The boundary of the shape is then the zero-level set of φ, while the shape itself is the set of points in the plane for which φ is positive (interior of the shape) or zero (at the boundary).
In the top row we see the shape changing its topology by splitting in two. It would be quite hard to describe this transformation numerically by parameterizing the boundary of the shape and following its evolution. One would need an algorithm able to detect the moment the shape splits in two, and then construct parameterizations for the two newly obtained curves. On the other hand, if we look at the bottom row, we see that the level set function merely translated downward. This is an example of when it can be much easier to work with a shape through its level-set function than with the shape directly, where using the shape directly would need to consider and handle all the possible deformations the shape might undergo.
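The following short Python sketch illustrates this implicit, Eulerian representation on a fixed grid (an added example, not part of the standard presentation; the grid size and disc radii are arbitrary choices): the shape is stored as the region where a level-set function is positive or zero, so a union of two discs, and any later splitting or merging, needs no re-parameterization.

    import numpy as np

    # fixed Cartesian grid on which the level-set function is sampled
    x, y = np.meshgrid(np.linspace(-2, 2, 200), np.linspace(-2, 2, 200))

    def disc(cx, cy, r):
        # positive inside the disc, zero on its boundary, negative outside
        return r - np.sqrt((x - cx) ** 2 + (y - cy) ** 2)

    # one level-set function representing the union of two discs (pointwise maximum);
    # pulling the centres apart would make the zero level set split into two curves
    # without any special handling
    phi = np.maximum(disc(-0.6, 0.0, 0.5), disc(0.6, 0.0, 0.5))
    shape = phi >= 0                      # the shape: points where phi is positive or zero
    boundary = np.abs(phi) < 0.02         # a thin band around the zero level set
    print(shape.sum(), boundary.sum())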
Thus, in two dimensions, the level-set method amounts to representing a
|
https://en.wikipedia.org/wiki/Carrier%20current
|
Carrier current transmission, originally called wired wireless, employs guided low-power radio-frequency signals, which are transmitted along electrical conductors. The transmissions are picked up by receivers that are either connected to the conductors, or a short distance from them. Carrier current transmission is used to send audio and telemetry to selected locations, and also for low-power broadcasting that covers a small geographical area, such as a college campus. The most common form of carrier current uses longwave or medium wave AM radio signals that are sent through existing electrical wiring, although other conductors can be used, such as telephone lines.
Technology
Carrier current generally uses low-power transmissions. In cases where the signals are being carried over electrical wires, special preparations must be made for distant transmissions, as the signals cannot pass through standard utility transformers. Signals can bridge transformers if the utility company has installed high-pass filters, which typically has already been done when carrier current-based data systems are in operation. Signals can also be impressed onto the neutral leg of the three-phase electric power system, a practice known as "neutral loading", in order to reduce or eliminate mains hum (60 hertz in North American installations), and to extend effective transmission line distance.
For a broadcasting installation, a typical carrier current transmitter has an output in the range 5 to 30 watts. However, electrical wiring is a very inefficient antenna, and this results in a transmitted effective radiated power of less than one watt, and the distance over which signals can be picked up is usually less than 60 meters (200 feet) from the wires. Transmission sound quality can be good, although it sometimes includes the low-frequency mains hum interference produced by the alternating current. However, not all listeners notice this hum, nor is it reproduced well by all receivers.
Exte
|
https://en.wikipedia.org/wiki/Client-side%20prediction
|
Client-side prediction is a network programming technique used in video games intended to conceal negative effects of high latency connections. The technique attempts to make the player's input feel more instantaneous while governing the player's actions on a remote server.
The process of client-side prediction refers to having the client locally react to user input before the server has acknowledged the input and updated the game state. So, instead of the client only sending control input to the server and waiting for an updated game state in return, the client also, in parallel with this, predicts the game state locally, and gives the user feedback without awaiting an updated game state from the server.
Client-side prediction reduces latency problems, since there no longer will be a delay between input and client-side visual feedback due to network ping times. However, it also introduces a desynchronization of the client and server game states, which needs to be handled to keep the game playable. Usually, the desync is corrected when the client receives the updated game state, but as instantaneous correction would lead to "snapping", there are usually some "smoothing" algorithms involved. For example, one common smoothing algorithm would be to check each visible object's client-side location to see if it is within some error epsilon of its server-side location. If not, the client-side's information is updated to the server-side directly (snapped because of too much desynchronization). However, if the client-side location is not too far, a new position between the client-side and server-side is interpolated; this position is set to be within some small step delta from the client-side location, which is generally judged to be "small enough" to be unintrusive to the user.
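A minimal Python sketch of this smoothing step is shown below; the epsilon and delta values, the Vec2 helper and the function name are illustrative assumptions rather than part of any particular engine.

    import math
    from dataclasses import dataclass

    @dataclass
    class Vec2:
        x: float
        y: float
        def dist(self, other):
            return math.hypot(self.x - other.x, self.y - other.y)

    EPSILON = 5.0   # beyond this error, snap straight to the server position
    DELTA = 0.5     # otherwise move at most this far per update toward it

    def smooth_correction(client_pos: Vec2, server_pos: Vec2) -> Vec2:
        error = client_pos.dist(server_pos)
        if error > EPSILON:
            return server_pos            # too much desynchronization: snap
        if error <= DELTA:
            return server_pos            # already close enough: converge fully
        # interpolate a small step from the client-side toward the server-side position
        t = DELTA / error
        return Vec2(client_pos.x + (server_pos.x - client_pos.x) * t,
                    client_pos.y + (server_pos.y - client_pos.y) * t)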
Another solution to the desynchronization issue, commonly used in conjunction with client-side prediction, is called server reconciliation. The client includes a sequence number in every input
|
https://en.wikipedia.org/wiki/Gluing%20axiom
|
In mathematics, the gluing axiom is introduced to define what a sheaf F on a topological space X must satisfy, given that it is a presheaf, which is by definition a contravariant functor
F: O(X) → C
to a category C which initially one takes to be the category of sets. Here O(X) is the partial order of open sets of X ordered by inclusion maps; and considered as a category in the standard way, with a unique morphism
U → V
if U is a subset of V, and none otherwise.
As phrased in the sheaf article, there is a certain axiom that F must satisfy, for any open cover of an open set of X. For example, given open sets U and V with union U ∪ V and intersection U ∩ V, the required condition is that
F(U ∪ V) is the subset of F(U) × F(V) with equal image in F(U ∩ V).
In less formal language, a section s of F over U ∪ V is equally well given by a pair of sections: s′ on U and s″ on V respectively, which 'agree' in the sense that s′ and s″ have a common image in F(U ∩ V) under the respective restriction maps
F(U) → F(U ∩ V)
and
F(V) → F(U ∩ V).
The first major hurdle in sheaf theory is to see that this gluing or patching axiom is a correct abstraction from the usual idea in geometric situations. For example, a vector field is a section of a tangent bundle on a smooth manifold; this says that a vector field on the union of two open sets is (no more and no less than) vector fields on the two sets that agree where they overlap.
Given this basic understanding, there are further issues in the theory, and some will be addressed here. A different direction is that of the Grothendieck topology, and yet another is the logical status of 'local existence' (see Kripke–Joyal semantics).
Removing restrictions on C
To rephrase this definition in a way that will work in any category that has sufficient structure, we note that we can write the objects and morphisms involved in the definition above in a diagram which we will call (G), for "gluing":
F(U) → ∏_i F(U_i) ⇉ ∏_{i,j} F(U_i ∩ U_j)
Here the first map is the product of the restriction maps
res: F(U) → F(U_i)
and each pair of arrows represents the two restrictions
res: F(U_i) → F(U_i ∩ U_j)
and
res: F(U_j) → F(U_i ∩ U_j).
It is worthwhile to note that these maps exhau
|
https://en.wikipedia.org/wiki/Linearizability
|
In concurrent programming, an operation (or set of operations) is linearizable if it consists of an ordered list of invocation and response events, that may be extended by adding response events such that:
The extended list can be re-expressed as a sequential history (is serializable).
That sequential history is a subset of the original unextended list.
Informally, this means that the unmodified list of events is linearizable if and only if its invocations were serializable, but some of the responses of the serial schedule have yet to return.
In a concurrent system, processes can access a shared object at the same time. Because multiple processes are accessing a single object, a situation may arise in which while one process is accessing the object, another process changes its contents. Making a system linearizable is one solution to this problem. In a linearizable system, although operations overlap on a shared object, each operation appears to take place instantaneously. Linearizability is a strong correctness condition, which constrains what outputs are possible when an object is accessed by multiple processes concurrently. It is a safety property which ensures that operations do not complete unexpectedly or unpredictably. If a system is linearizable it allows a programmer to reason about the system.
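As an informal illustration (a sketch assuming Python threads and a mutual-exclusion lock; the class name is invented for the example), the counter below is linearizable: each increment appears to take effect instantaneously inside the locked critical section, so no updates are lost even when operations from several threads overlap.

    import threading

    class LinearizableCounter:
        def __init__(self):
            self._value = 0
            self._lock = threading.Lock()

        def increment(self) -> int:
            # the locked read-modify-write is the linearization point:
            # the whole operation appears to happen at a single instant
            with self._lock:
                self._value += 1
                return self._value

    counter = LinearizableCounter()
    threads = [threading.Thread(target=lambda: [counter.increment() for _ in range(1000)])
               for _ in range(4)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    assert counter.increment() == 4001   # no lost updates: all 4000 increments took effect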
History of linearizability
Linearizability was first introduced as a consistency model by Herlihy and Wing in 1987. It encompassed more restrictive definitions of atomic, such as "an atomic operation is one which cannot be (or is not) interrupted by concurrent operations", which are usually vague about when an operation is considered to begin and end.
An atomic object can be understood immediately and completely from its sequential definition, as a set of operations run in parallel which always appear to occur one after the other; no inconsistencies may emerge. Specifically, linearizability guarantees that the invariants of a system are observed and preserve
|
https://en.wikipedia.org/wiki/PRAM%20consistency
|
PRAM consistency (pipelined random access memory) is also known as FIFO consistency.
All processes see memory writes from one process in the order they were issued from the process.
Writes from different processes may be seen in a different order on different processes. Only the write order needs to be consistent, thus the name pipelined.
PRAM consistency is easy to implement. In effect it says that there are no guarantees about the order in which different processes see writes, except that two or more writes from a single source must arrive in order, as though they were in a pipeline.
P1: W(x)1
P2:         R(x)1   W(x)2
P3:                         R(x)1   R(x)2
P4:                         R(x)2   R(x)1
                    Time ---->
Fig: A valid sequence of events for PRAM consistency.
The above sequence is not valid for causal consistency, because W(x)1 and W(x)2 are causally related (P2 read the value written by W(x)1 before issuing W(x)2), so different processes must read them in the same order.
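The following small Python sketch (an added illustration; the write streams and the per-writer queues are modelling assumptions) shows the essence of PRAM consistency: writes from a single process are delivered through a FIFO queue, so every reader sees them in issue order, while writes from different processes may be merged in different orders by different readers.

    import random
    from collections import deque

    # per-process write streams: P2 issues two writes, so their order must be preserved
    writes = {"P1": ["W(x)1"], "P2": ["W(x)2", "W(x)3"]}

    def one_possible_view(seed):
        """One order in which a reader may observe the writes: the per-writer
        queues are FIFO, but they may be interleaved arbitrarily."""
        queues = {p: deque(ws) for p, ws in writes.items()}
        rng, view = random.Random(seed), []
        while any(queues.values()):
            p = rng.choice([p for p, q in queues.items() if q])
            view.append(queues[p].popleft())    # always take the head: FIFO per writer
        return view

    views = {tuple(one_possible_view(s)) for s in range(20)}
    # readers may disagree on how P1's and P2's writes interleave, but every view
    # keeps W(x)2 before W(x)3 because both were issued by P2
    assert all(v.index("W(x)2") < v.index("W(x)3") for v in views)
    print(sorted(views))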
References
Consistency models
|
https://en.wikipedia.org/wiki/Atari%20Portfolio
|
The Atari Portfolio (Atari PC Folio) is an IBM PC-compatible palmtop PC, released by Atari Corporation in June 1989, making it the world's first palmtop computer.
History
DIP Research Ltd., based in Guildford, Surrey, UK, released a product in the UK called the DIP Pocket PC in 1989. Soon after its release, DIP licensed this product to Atari for sale as the Portfolio in the UK and US. In Italy, Spain and Germany, it was originally marketed as PC Folio instead. DIP officially stood for "Distributed Information Processing", although secretly it actually stood for "David, Ian and Peter", the three founding members of the company, who were former employees of Psion. The original founder of the company (first called "Crushproof Software") was Ian H. S. Cullimore, and the other two were David Frodsham and Peter Baldwin. Cullimore was involved in designing the early Organiser products at Psion before the DIP Pocket PC project. The Portfolio's technological successor was the Sharp PC-3000/3100, also developed by DIP. DIP Research was later acquired by Phoenix Technologies in 1994.
Technology
The Portfolio uses an Intel 80C88 CPU running at 4.9152 MHz and runs "DIP Operating System 2.11" (DIP DOS), an operating system mostly compatible with MS-DOS 2.11, but with some DOS 2.xx functionality lacking and some internal data structures more compatible with DOS 3.xx. It has 128 KB of RAM and 256 KB of ROM which contains the OS and built-in applications. The on-board RAM is divided between system memory and local storage (the C: drive). The LCD is monochrome without backlight and has 240 × 64 pixels or 40 characters × 8 lines.
The sound is handled by a small Dual-Tone Multi-Frequency speaker capable of outputting tones between 622 and 2,489Hz, the same range as touch tone telephones, so users could not only use the address book app to store phone numbers, but actually speed dial them too by holding the device up to a telephone handset.
Power is supplied by three AA size removable alkaline b
|
https://en.wikipedia.org/wiki/Practical%20number
|
In number theory, a practical number or panarithmic number is a positive integer such that all smaller positive integers can be represented as sums of distinct divisors of . For example, 12 is a practical number because all the numbers from 1 to 11 can be expressed as sums of its divisors 1, 2, 3, 4, and 6: as well as these divisors themselves, we have 5 = 3 + 2, 7 = 6 + 1, 8 = 6 + 2, 9 = 6 + 3, 10 = 6 + 3 + 1, and 11 = 6 + 3 + 2.
The sequence of practical numbers begins 1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32, ...
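A brute-force Python check of the definition (an added illustration, not an efficient algorithm) reproduces the start of this sequence:

    def divisors(n):
        return [d for d in range(1, n + 1) if n % d == 0]

    def is_practical(n):
        """True if every 1 <= m < n is a sum of distinct divisors of n."""
        sums = {0}
        for d in divisors(n):            # build all attainable subset sums of the divisors
            sums |= {s + d for s in sums}
        return all(m in sums for m in range(1, n))

    print([n for n in range(1, 33) if is_practical(n)])
    # [1, 2, 4, 6, 8, 12, 16, 18, 20, 24, 28, 30, 32]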
Practical numbers were used by Fibonacci in his Liber Abaci (1202) in connection with the problem of representing rational numbers as Egyptian fractions. Fibonacci does not formally define practical numbers, but he gives a table of Egyptian fraction expansions for fractions with practical denominators.
The name "practical number" is due to . He noted that "the subdivisions of money, weights, and measures involve numbers like 4, 12, 16, 20 and 28 which are usually supposed to be so inconvenient as to deserve replacement by powers of 10." His partial classification of these numbers was completed by and . This characterization makes it possible to determine whether a number is practical by examining its prime factorization. Every even perfect number and every power of two is also a practical number.
Practical numbers have also been shown to be analogous with prime numbers in many of their properties.
Characterization of practical numbers
The original characterisation by Srinivasan stated that a practical number cannot be a deficient number, that is one of which the sum of all divisors (including 1 and itself) is less than twice the number, unless the deficiency is one. If the ordered set of all divisors of the practical number n is d_1 < d_2 < ... < d_j with d_1 = 1 and d_j = n, then Srinivasan's statement can be expressed by the inequality
2n ≤ 1 + d_1 + d_2 + ... + d_j.
In other words, the ordered sequence of all divisors of a practical number has to be a complete sub-sequence.
This partial characterization was extended and completed by and who showed that it is s
|
https://en.wikipedia.org/wiki/Oracle%20ZFS
|
Oracle ZFS is Oracle's proprietary implementation of the ZFS file system and logical volume manager for Oracle Solaris. ZFS is a registered trademark belonging to Oracle.
History
Solaris 10
In update 2 and later, ZFS is part of Sun's own Solaris 10 operating system and is thus available on both SPARC and x86-based systems.
Solaris 11
After Oracle's Solaris 11 Express release, the OS/Net consolidation (the main OS code) was made proprietary and closed-source, and further ZFS upgrades and implementations inside Solaris (such as encryption) are not compatible with other non-proprietary implementations which use previous versions of ZFS.
When creating a new ZFS pool, to retain the ability to access the pool from other non-proprietary Solaris-based distributions, it is recommended to upgrade to Solaris 11 Express from OpenSolaris (snv_134b), and thereby stay at ZFS version 28.
Future development
On September 2, 2017, Simon Phipps reported that Oracle had laid off virtually all of its Solaris core development staff, interpreting it as a sign that Oracle no longer intends to support future development of the platform.
Version history
References
External links
Compression file systems
Disk file systems
Formerly free software
Oracle software
RAID
Volume manager
|
https://en.wikipedia.org/wiki/Rabdology
|
In 1617 a treatise in Latin titled Rabdologiæ and written by John Napier was published in Edinburgh. Printed three years after his treatise on the discovery of logarithms and in the same year as his death, it describes three devices to aid arithmetic calculations.
The devices themselves do not use logarithms; rather, they are tools to reduce multiplication and division of natural numbers to simple addition and subtraction operations.
The first device, which by then was already popularly used and known as Napier's bones, was a set of rods inscribed with the multiplication table. Napier coined the word rabdology (from Greek ῥάβδος [rhabdos], rod and λόγoς [logos] calculation or reckoning) to describe this technique. The rods were used to multiply, divide and even find the square roots and cube roots of numbers.
The second device was a promptuary (Latin promptuarium meaning storehouse) and consisted of a large set of strips that could multiply multidigit numbers more easily than the bones. In combination with a table of reciprocals, it could also divide numbers.
The third device used a checkerboard like grid and counters moving on the board to perform binary arithmetic. Napier termed this technique location arithmetic from the way in which the locations of the counters on the board represented and computed numbers. Once a number is converted into a binary form, simple movements of counters on the grid could multiply, divide and even find square roots of numbers.
Of these devices, Napier's bones were the most popular and widely known. In fact, part of his motivation to publish the treatise was to establish credit for his invention of the technique. The bones were easy to manufacture and simple to use, and several variations on them were published and used for many years.
The promptuary was never widely used, perhaps because it was more complex to manufacture, and it took nearly as much time to lay out the strips to find the product of numbers as to find the answer w
|
https://en.wikipedia.org/wiki/Stein%20factorization
|
In algebraic geometry, the Stein factorization, introduced by Karl Stein for the case of complex spaces, states that a proper morphism can be factorized as a composition of a finite mapping and a proper morphism with connected fibers. Roughly speaking, Stein factorization contracts the connected components of the fibers of a mapping to points.
Statement
One version for schemes states the following:
Let X be a scheme, S a locally noetherian scheme and f: X → S a proper morphism. Then one can write
f = g ∘ f'
where g: S' → S is a finite morphism and f': X → S' is a proper morphism so that f'_* O_X = O_{S'}.
The existence of this decomposition itself is not difficult. See below. But, by Zariski's connectedness theorem, the last part in the above says that the fiber f'^(-1)(s') is connected for any s' ∈ S'. It follows:
Corollary: For any s ∈ S, the set of connected components of the fiber f^(-1)(s) is in bijection with the set of points in the fiber g^(-1)(s).
Proof
Set:
S' = Spec_S f_* O_X
where Spec_S is the relative Spec. The construction gives the natural map g: S' → S, which is finite since f_* O_X is coherent and f is proper. The morphism f factors through g and one gets f': X → S', which is proper. By construction, f'_* O_X = O_{S'}. One then uses the theorem on formal functions to show that the last equality implies f' has connected fibers. (This part is sometimes referred to as Zariski's connectedness theorem.)
See also
Contraction morphism
References
Algebraic geometry
|
https://en.wikipedia.org/wiki/Spherical%203-manifold
|
In mathematics, a spherical 3-manifold M is a 3-manifold of the form
M = S³/Γ
where Γ is a finite subgroup of SO(4) acting freely by rotations on the 3-sphere S³. All such manifolds are prime, orientable, and closed. Spherical 3-manifolds are sometimes called elliptic 3-manifolds or Clifford-Klein manifolds.
Properties
A spherical 3-manifold has a finite fundamental group isomorphic to Γ itself. The elliptization conjecture, proved by Grigori Perelman, states that conversely all compact 3-manifolds with finite fundamental group are spherical manifolds.
The fundamental group is either cyclic, or is a central extension of a dihedral, tetrahedral, octahedral, or icosahedral group by a cyclic group of even order. This divides the set of such manifolds into 5 classes, described in the following sections.
The spherical manifolds are exactly the manifolds with spherical geometry, one of the 8 geometries of Thurston's geometrization conjecture.
Cyclic case (lens spaces)
The manifolds with Γ cyclic are precisely the 3-dimensional lens spaces. A lens space is not determined by its fundamental group (there are non-homeomorphic lens spaces with isomorphic fundamental groups); but any other spherical manifold is.
Three-dimensional lens spaces L(p; q) arise as quotients of S³ ⊂ C² by
the action of the group that is generated by elements of the form
(z_1, z_2) ↦ (e^(2πi/p) z_1, e^(2πiq/p) z_2)
where gcd(p, q) = 1. Such a lens space L(p; q) has fundamental group Z/pZ for all q, so spaces with different p are not homotopy equivalent.
Moreover, classifications up to homeomorphism and homotopy equivalence are known, as follows. The three-dimensional spaces L(p; q_1) and L(p; q_2)
are:
homotopy equivalent if and only if q_1 q_2 ≡ ± n^2 (mod p) for some natural number n
homeomorphic if and only if q_1 ≡ ± q_2^(±1) (mod p)
In particular, the lens spaces L(7,1) and L(7,2) give examples of two 3-manifolds that are homotopy equivalent but not homeomorphic.
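These two criteria can be checked mechanically; the short Python sketch below (an added illustration, assuming p and q coprime and the classification criteria stated above) confirms the L(7,1) and L(7,2) example.

    def homotopy_equivalent(p, q1, q2):
        # L(p;q1) and L(p;q2) are homotopy equivalent iff q1*q2 = +/- n^2 (mod p) for some n
        return any((q1 * q2 - n * n) % p == 0 or (q1 * q2 + n * n) % p == 0
                   for n in range(p))

    def homeomorphic(p, q1, q2):
        # L(p;q1) and L(p;q2) are homeomorphic iff q1 = +/- q2^(+/-1) (mod p)
        q2_inv = pow(q2, -1, p)          # modular inverse; requires gcd(q2, p) = 1
        return any(x % p == 0 for x in (q1 - q2, q1 + q2, q1 - q2_inv, q1 + q2_inv))

    print(homotopy_equivalent(7, 1, 2))  # True:  1*2 = 2 = 3^2 (mod 7)
    print(homeomorphic(7, 1, 2))         # False: L(7,1) and L(7,2) are not homeomorphic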
The lens space L(1,0) is the 3-sphere, and the lens space L(2,1) is 3-dimensional real projective space.
Lens spaces can be represented as Seifert fiber spaces in many ways, usually as fiber spac
|
https://en.wikipedia.org/wiki/Abramowitz%20and%20Stegun
|
Abramowitz and Stegun (AS) is the informal name of a 1964 mathematical reference work edited by Milton Abramowitz and Irene Stegun of the United States National Bureau of Standards (NBS), now the National Institute of Standards and Technology (NIST). Its full title is Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables. A digital successor to the Handbook was released as the "Digital Library of Mathematical Functions" (DLMF) on 11 May 2010, along with a printed version, the NIST Handbook of Mathematical Functions, published by Cambridge University Press.
Overview
Since it was first published in 1964, the 1046 page Handbook has been one of the most comprehensive sources of information on special functions, containing definitions, identities, approximations, plots, and tables of values of numerous functions used in virtually all fields of applied mathematics. The notation used in the Handbook is the de facto standard for much of applied mathematics today.
At the time of its publication, the Handbook was an essential resource for practitioners. Nowadays, computer algebra systems have replaced the function tables, but the Handbook remains an important reference source. The foreword discusses a meeting in 1954 in which it was agreed that "the advent of high-speed computing equipment changed the task of table making but definitely did not remove the need for tables".
The chapters are:
Mathematical Constants
Physical Constants and Conversion Factors
Elementary Analytical Methods
Elementary Transcendental Functions
Exponential Integral and Related Functions
Gamma Function and Related Functions
Error Function and Fresnel Integrals
Legendre Functions
Bessel Functions of Integral Order
Bessel Functions of Fractional Order
Integrals of Bessel Functions
Struve Functions and Related Functions
Confluent Hypergeometric Functions
Coulomb Wave Functions
Hypergeometric Functions
Jacobian Elliptic Functions and Theta Functions
Elliptic
|
https://en.wikipedia.org/wiki/Residual%20block%20termination
|
In cryptography, residual block termination is a variation of cipher block chaining mode (CBC) that does not require any padding. It does this by effectively changing to cipher feedback mode for one block. The cost is increased complexity.
Encryption procedure
If the plaintext length N is not a multiple of the block size L:
Encrypt the ⌊N/L⌋ full blocks of plaintext using the cipher block chaining mode;
Encrypt the last full encrypted block again;
XOR the remaining bits of the plaintext with leftmost bits of the re-encrypted block.
Decryption procedure
Decrypt the ⌊N/L⌋ full encrypted blocks using the Cipher Block Chaining mode;
Encrypt the last full encrypted block;
XOR the remaining bits of the ciphertext with leftmost bits of the re-encrypted block.
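A minimal Python sketch of both procedures is given below, using AES from the third-party cryptography package as the block cipher; the choice of AES, the helper names and the sample message are assumptions of the example, since the scheme works with any block cipher.

    import os
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    BLOCK = 16  # AES block size in bytes

    def _encrypt_block(key, block):
        enc = Cipher(algorithms.AES(key), modes.ECB()).encryptor()
        return enc.update(block) + enc.finalize()

    def _decrypt_block(key, block):
        dec = Cipher(algorithms.AES(key), modes.ECB()).decryptor()
        return dec.update(block) + dec.finalize()

    def _xor(a, b):
        return bytes(x ^ y for x, y in zip(a, b))

    def rbt_encrypt(key, iv, plaintext):
        full, rest = divmod(len(plaintext), BLOCK)
        out, prev = [], iv
        for i in range(full):                      # ordinary CBC over the full blocks
            prev = _encrypt_block(key, _xor(plaintext[i * BLOCK:(i + 1) * BLOCK], prev))
            out.append(prev)
        if rest:                                   # residual block termination
            keystream = _encrypt_block(key, prev)  # re-encrypt last ciphertext block (or the IV)
            out.append(_xor(plaintext[full * BLOCK:], keystream[:rest]))
        return b"".join(out)

    def rbt_decrypt(key, iv, ciphertext):
        full, rest = divmod(len(ciphertext), BLOCK)
        out, prev = [], iv
        for i in range(full):                      # ordinary CBC decryption
            block = ciphertext[i * BLOCK:(i + 1) * BLOCK]
            out.append(_xor(_decrypt_block(key, block), prev))
            prev = block
        if rest:                                   # note: encrypt (not decrypt) the last block
            keystream = _encrypt_block(key, prev)
            out.append(_xor(ciphertext[full * BLOCK:], keystream[:rest]))
        return b"".join(out)

    key, iv = os.urandom(16), os.urandom(16)
    msg = b"a message whose length is not a multiple of the block size"
    assert rbt_decrypt(key, iv, rbt_encrypt(key, iv, msg)) == msg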
Short message
For messages shorter than one block, residual block termination can use an encrypted initialization vector instead of the previously encrypted block.
Cryptographic algorithms
|
https://en.wikipedia.org/wiki/ChorusOS
|
ChorusOS is a microkernel real-time operating system designed as a message passing computing model. ChorusOS began as the Chorus distributed real-time operating system research project at the French Institute for Research in Computer Science and Automation (INRIA) in 1979. During the 1980s, Chorus was one of two earliest microkernels (the other being Mach) and was developed commercially by startup company Chorus Systèmes SA. Over time, development effort shifted away from distribution aspects to real-time for embedded systems.
In 1997, Sun Microsystems acquired Chorus Systèmes for its microkernel technology, which went toward the new JavaOS. Sun (and henceforth Oracle) no longer supports ChorusOS. The founders of Chorus Systèmes started a new company called Jaluna in August 2002. Jaluna then became VirtualLogix, which was then acquired by Red Bend in September 2010. VirtualLogix designed embedded systems using Linux and ChorusOS (which they named VirtualLogix C5). C5 was described by them as a carrier grade operating system, and was actively maintained by them.
The latest source tree of ChorusOS, an evolution of version 5.0, was released as open-source software by Sun and is available at the Sun Download Center. The Jaluna project has completed these sources and published them online. Jaluna-1 is described there as a real-time Portable Operating System Interface (RT-POSIX) layer based on FreeBSD 4.1, and the CDE cross-platform software development environment. ChorusOS is supported by popular Secure Socket Layer and Transport Layer Security (SSL/TLS) libraries such as wolfSSL.
See also
JavaOS
References
Distributed operating systems
French inventions
Microkernel-based operating systems
Microkernels
Real-time operating systems
Sun Microsystems software
|
https://en.wikipedia.org/wiki/Micro%20pitting
|
Micro pitting is a fatigue failure of the surface of a material commonly seen in rolling bearings and gears.
It is also known as grey staining, micro spalling or frosting.
Pitting and micropitting
The difference between pitting corrosion and micropitting is the size of the pits after surface fatigue. Pits formed by micropitting are approximately 10-20 μm in depth, and micropitted metal often has a frosted or gray appearance. Normal pitting creates larger and more visible pits. Micropits originate from local contact between asperities caused by improper lubrication.
Causes
In a normal bearing the surfaces are separated by a layer of oil; this is known as elastohydrodynamic (EHD) lubrication. If the thickness of the EHD film is of the same order of magnitude as the surface roughness, the surface asperities are able to interact and cause micro pitting. A thin EHD film may be caused by excess load or temperature, a lower oil viscosity than is required, low speed, or water in the oil. Water in the oil can make micro pitting worse by causing hydrogen embrittlement of the surface. Micro pitting occurs only under poor EHD lubrication conditions.
A surface with a deep scratch might break exactly at the scratch if stress is applied. One can imagine that the surface roughness is a composite of many very small scratches, so high surface roughness decreases the stability of heavily stressed parts. To get a good overview of the surface, an areal scan (surface metrology) gives more information than a measurement along a single profile (profilometer). To quantify the surface roughness, ISO 25178 can be used.
See also
Pitting corrosion
Corrosion
Materials degradation
|
https://en.wikipedia.org/wiki/American%20Institute%20of%20Electrical%20Engineers
|
The American Institute of Electrical Engineers (AIEE) was a United States-based organization of electrical engineers that existed from 1884 through 1962. On January 1, 1963, it merged with the Institute of Radio Engineers (IRE) to form the Institute of Electrical and Electronics Engineers (IEEE).
History
The 1884 founders of the American Institute of Electrical Engineers (AIEE) included some of the most prominent inventors and innovators in the then new field of electrical engineering, among them Nikola Tesla, Thomas Alva Edison, Elihu Thomson, Edwin J. Houston, and Edward Weston. The purpose of the AIEE was stated "to promote the Arts and Sciences connected with the production and utilization of electricity and the welfare of those employed in these Industries: by means of social intercourse, the reading and discussion of professional papers and the circulation by means of publication among members and associates of information thus obtained." The first president of AIEE was Norvin Green, president of the Western Union Telegraph Company. Other notable AIEE presidents were Alexander Graham Bell (1891–1892), Charles Proteus Steinmetz (1901–1902), Bion J. Arnold (1903-1904), Schuyler S. Wheeler (1905–1906), Dugald C. Jackson (1910–1911), Ralph D. Mershon (1912–1913), Michael I. Pupin (1925–1926), and Titus G. LeClair (1950–1951).
The first technical meeting of the AIEE was held during the International Electrical Exhibition of 1884 in Philadelphia, Pennsylvania on October 7–8, at the Franklin Institute. After several years of operating primarily in New York City, the AIEE authorized local sections in 1902. These were first formed in the United States in Chicago and Ithaca, New York in 1902, and then in other countries. The first section outside the United States, established in 1903, was in Toronto, Canada. AIEE's regional structure was soon complemented by a technical structure. The first technical committee of AIEE, the High Voltage Transmission Committee,
|
https://en.wikipedia.org/wiki/Elevator%20algorithm
|
The elevator algorithm, or SCAN, is a disk-scheduling algorithm to determine the motion of the disk's arm and head in servicing read and write requests.
This algorithm is named after the behavior of a building elevator, where the elevator continues to travel in its current direction (up or down) until empty, stopping only to let individuals off or to pick up new individuals heading in the same direction.
From an implementation perspective, the drive maintains a buffer of pending read/write requests, along with the associated cylinder number of the request, in which lower cylinder numbers generally indicate that the cylinder is closer to the spindle, and higher numbers indicate the cylinder is farther away.
Description
When a new request arrives while the drive is idle, the initial arm/head movement will be in the direction of the cylinder where the data is stored, either in or out. As additional requests arrive, requests are serviced only in the current direction of arm movement until the arm reaches the edge of the disk. When this happens, the direction of the arm reverses, and the requests that were remaining in the opposite direction are serviced, and so on.
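A short Python sketch of the service order produced by this behaviour is shown below (an added illustration; the request queue and head position are arbitrary example values, and arrival of new requests during the sweep is ignored).

    def scan(requests, head, direction="up"):
        """Return the order in which pending requests are serviced under SCAN."""
        pending = sorted(set(requests))
        lower = [c for c in pending if c < head]    # cylinders behind the head
        upper = [c for c in pending if c >= head]   # cylinders ahead of the head
        if direction == "up":
            # sweep toward higher cylinders first, then reverse and sweep back down
            return upper + list(reversed(lower))
        return list(reversed(lower)) + upper

    order = scan([98, 183, 37, 122, 14, 124, 65, 67], head=53, direction="up")
    print(order)   # [65, 67, 98, 122, 124, 183, 37, 14]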
Variations
One variation of this method ensures all requests are serviced in only one direction, that is, once the head has arrived at the outer edge of the disk, it returns to the beginning and services the new requests in this one direction only (or vice versa). This is known as the "Circular Elevator Algorithm" or C-SCAN. Although the time of the return seek is wasted, this results in more equal performance for all head positions, as the expected distance from the head is always half the maximum distance, unlike in the standard elevator algorithm where cylinders in the middle will be serviced as much as twice as often as the innermost or outermost cylinders.
Other variations include:
FSCAN
LOOK
C-LOOK
N-Step-SCAN
Example
The following is an example of how to calculate average disk seek ti
|
https://en.wikipedia.org/wiki/Biquaternion
|
In abstract algebra, the biquaternions are the numbers w + x i + y j + z k, where w, x, y, and z are complex numbers, or variants thereof, and the elements of {1, i, j, k} multiply as in the quaternion group and commute with their coefficients. There are three types of biquaternions corresponding to complex numbers and the variations thereof:
Biquaternions when the coefficients are complex numbers.
Split-biquaternions when the coefficients are split-complex numbers.
Dual quaternions when the coefficients are dual numbers.
This article is about the ordinary biquaternions named by William Rowan Hamilton in 1844. Some of the more prominent proponents of these biquaternions include Alexander Macfarlane, Arthur W. Conway, Ludwik Silberstein, and Cornelius Lanczos. As developed below, the unit quasi-sphere of the biquaternions provides a representation of the Lorentz group, which is the foundation of special relativity.
The algebra of biquaternions can be considered as a tensor product C ⊗ H (taken over the reals), where C is the field of complex numbers and H is the division algebra of (real) quaternions. In other words, the biquaternions are just the complexification of the quaternions. Viewed as a complex algebra, the biquaternions are isomorphic to the algebra of 2 × 2 complex matrices M_2(C). They are also isomorphic to several Clifford algebras, including the Pauli algebra Cl_{3,0}(R) and the even part of the spacetime algebra.
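The isomorphism with 2 × 2 complex matrices can be checked numerically; the short Python sketch below uses one common identification of i, j, k with the Pauli matrices multiplied by the complex unit -1j (the particular matrices chosen are an assumption of this illustration).

    import numpy as np

    I2 = np.eye(2, dtype=complex)
    i = np.array([[0, -1j], [-1j, 0]])                 # -1j * sigma_x
    j = np.array([[0, -1], [1, 0]], dtype=complex)     # -1j * sigma_y
    k = np.array([[-1j, 0], [0, 1j]])                  # -1j * sigma_z

    # the quaternion relations i^2 = j^2 = k^2 = ijk = -1 hold for these matrices
    for m in (i @ i, j @ j, k @ k, i @ j @ k):
        assert np.allclose(m, -I2)

    def biquaternion(w, x, y, z):
        # w + x i + y j + z k with complex coefficients w, x, y, z
        return w * I2 + x * i + y * j + z * k

    q = biquaternion(1 + 2j, 0.5j, 3, 1 - 1j)          # an arbitrary biquaternion as a matrix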
Definition
Let {1, i, j, k} be the basis for the (real) quaternions H, and let w, x, y, z be complex numbers; then
q = w + x i + y j + z k
is a biquaternion. To distinguish square roots of minus one in the biquaternions, Hamilton and Arthur W. Conway used the convention of representing the square root of minus one in the scalar field C by h to avoid confusion with the i in the quaternion group. Commutativity of the scalar field with the quaternion group is assumed:
h i = i h,   h j = j h,   h k = k h.
Hamilton introduced the terms bivector, biconjugate, bitensor, and biversor to extend notions used with real quaternions.
Hamilton's primary exposition on biquaternions came in 1853 in his Lectures on
|
https://en.wikipedia.org/wiki/Zeller%27s%20congruence
|
Zeller's congruence is an algorithm devised by Christian Zeller in the 19th century to calculate the day of the week for any Julian or Gregorian calendar date. It can be considered to be based on the conversion between Julian day and the calendar date.
Formula
For the Gregorian calendar, Zeller's congruence is
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + ⌊J/4⌋ + 5J) mod 7,
for the Julian calendar it is
h = (q + ⌊13(m + 1)/5⌋ + K + ⌊K/4⌋ + 5 + 6J) mod 7,
where
h is the day of the week (0 = Saturday, 1 = Sunday, 2 = Monday, ..., 6 = Friday)
q is the day of the month
m is the month (3 = March, 4 = April, 5 = May, ..., 14 = February)
K is the year of the century (year mod 100).
J is the zero-based century (actually ⌊year/100⌋). For example, the zero-based centuries for 1995 and 2000 are 19 and 20 respectively (not to be confused with the common ordinal century enumeration which indicates 20th for both cases).
⌊...⌋ is the floor function or integer part
mod is the modulo operation or remainder after division
In this algorithm January and February are counted as months 13 and 14 of the previous year. E.g. if it is 2 February 2010, the algorithm counts the date as the second day of the fourteenth month of 2009 (02/14/2009 in DD/MM/YYYY format)
For an ISO week date Day-of-Week d (1 = Monday to 7 = Sunday), use
d = ((h + 5) mod 7) + 1
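A direct Python transcription of these formulas (an added illustration; the function names are arbitrary) is:

    def zeller(day, month, year, julian=False):
        """Day of the week: 0 = Saturday, 1 = Sunday, ..., 6 = Friday."""
        q, m = day, month
        if m < 3:              # January and February are counted as months 13 and 14
            m += 12            # of the previous year
            year -= 1
        K, J = year % 100, year // 100
        if julian:
            return (q + (13 * (m + 1)) // 5 + K + K // 4 + 5 + 6 * J) % 7
        return (q + (13 * (m + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

    def iso_weekday(day, month, year):
        """ISO day of the week: 1 = Monday, ..., 7 = Sunday."""
        return (zeller(day, month, year) + 5) % 7 + 1

    assert zeller(1, 1, 2000) == 0       # 1 January 2000 was a Saturday
    assert iso_weekday(1, 1, 2000) == 6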
Analysis
These formulas are based on the observation that the day of the week progresses in a predictable manner based upon each subpart of that date. Each term within the formula is used to calculate the offset needed to obtain the correct day of the week.
For the Gregorian calendar, the various parts of this formula can therefore be understood as follows:
q represents the progression of the day of the week based on the day of the month, since each successive day results in an additional offset of 1 in the day of the week.
K represents the progression of the day of the week based on the year. Assuming that each year is 365 days long, the same date on each succeeding year will be offset by a value of 365 mod 7 = 1.
Since there are 366 days in each leap year, this needs to be accounted for by adding anot
|
https://en.wikipedia.org/wiki/Location%20arithmetic
|
Location arithmetic (Latin arithmeticae localis) is an additive (non-positional) binary numeral system, which John Napier explored as a computation technique in his treatise Rabdology (1617), both symbolically and on a chessboard-like grid.
Napier's terminology, derived from using the positions of counters on the board to represent numbers, is potentially misleading because the numbering system is, in fact, non-positional in current vocabulary.
During Napier's time, most of the computations were made on boards with tally-marks or jetons. So, unlike how it may be seen by the modern reader, his goal was not to use moves of counters on a board to multiply, divide and find square roots, but rather to find a way to compute symbolically with pen and paper.
However, when reproduced on the board, this new technique did not require mental trial-and-error computations nor complex carry memorization (unlike base 10 computations). He was so pleased by his discovery that he said in his preface:
Location numerals
Binary notation had not yet been standardized, so Napier used what he called location numerals to represent binary numbers. Napier's system uses sign-value notation to represent numbers; it uses successive letters from the Latin alphabet to represent successive powers of two: a = 2^0 = 1, b = 2^1 = 2, c = 2^2 = 4, d = 2^3 = 8, e = 2^4 = 16 and so on.
To represent a given number as a location numeral, that number is expressed as a sum of powers of two and then each power of two is replaced by its corresponding digit (letter). For example, when converting from a decimal numeral:
87 = 1 + 2 + 4 + 16 + 64 = 2^0 + 2^1 + 2^2 + 2^4 + 2^6 = abceg
Using the reverse process, a location numeral can be converted to another numeral system. For example, when converting to a decimal numeral:
abdgkl = 2^0 + 2^1 + 2^3 + 2^6 + 2^10 + 2^11 = 1 + 2 + 8 + 64 + 1024 + 2048 = 3147
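Both conversions are easy to express in code; the small Python sketch below (an added illustration limited to the letters a-z) reproduces the two examples above.

    import string

    # a, b, c, ... stand for 1, 2, 4, ... (successive powers of two)
    LETTERS = string.ascii_lowercase

    def to_location(n):
        """Write a positive integer as an (abbreviated) location numeral."""
        return "".join(letter for i, letter in enumerate(LETTERS) if n & (1 << i))

    def from_location(numeral):
        """Sum the powers of two named by the letters (repeated letters simply add)."""
        return sum(1 << (ord(ch) - ord("a")) for ch in numeral)

    assert to_location(87) == "abceg"
    assert from_location("abdgkl") == 3147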
Napier showed multiple methods of converting numbers in and out of his numeral system. These methods are similar t
|
https://en.wikipedia.org/wiki/Carnauba%20wax
|
Carnauba (; ), also called Brazil wax and palm wax, is a wax of the leaves of the carnauba palm Copernicia prunifera (synonym: Copernicia cerifera), a plant native to and grown only in the northeastern Brazilian states of Ceará, Piauí, Paraíba, Pernambuco, Rio Grande do Norte, Maranhão and Bahia. It is known as the "Queen of Waxes". In its pure state, it is usually available in the form of hard yellow-brown flakes. It is obtained by collecting and drying the leaves, beating them to loosen the wax, then refining and bleaching it.
As a food additive, its E number is E903.
Composition
Carnauba consists mostly of aliphatic esters (40 wt%), diesters of 4-hydroxycinnamic acid (21.0 wt%), ω-hydroxycarboxylic acids (13.0 wt%), and fatty alcohols (12 wt%). The compounds are predominantly derived from acids and alcohols in the C26-C30 range. It is distinctive for its high content of diesters and its methoxycinnamic acid.
It is sold in grades of T1, T3 and T4 according to its purity level, which is accomplished by filtration, centrifugation and bleaching.
Properties
Because it creates a glossy finish, carnauba wax is used in automobile waxes, shoe polishes, dental floss, food products (such as sweets), polishes for musical instruments, and floor and furniture waxes and polishes, especially when mixed with beeswax and turpentine. It is commonly used for paper coatings in the United States. In its purest form, it was often used on speedboat hulls in the early 1960s to enhance speed and handling in saltwater. It is also used in some surfboard waxes, possibly in combination with coconut oil.
Because of its hypoallergenic and emollient properties as well as its gloss, carnauba wax is used as a thickener in cosmetics such as lipstick, eyeliner, mascara, eye shadow, foundation, deodorant, and skincare and sun care preparations. It is also used to make cutler's resin.
It is the finish of choice for most briar tobacco smoking pipes, as it produces a high gloss when buffed that
|
https://en.wikipedia.org/wiki/Monogastric
|
A monogastric organism has a simple single-chambered stomach (one stomach). Examples of monogastric herbivores are horses and rabbits. Examples of monogastric omnivores include humans, pigs, hamsters and rats. Furthermore, there are monogastric carnivores such as cats. A monogastric organism is comparable to ruminant organisms (which have a four-chambered complex stomach), such as cattle, goats, or sheep. Herbivores with monogastric digestion can digest cellulose in their diets by way of symbiotic gut bacteria. However, their ability to extract energy from cellulose digestion is less efficient than in ruminants.
Herbivores digest cellulose by microbial fermentation. Monogastric herbivores which can digest cellulose nearly as well as ruminants are called hindgut fermenters, while ruminants are called foregut fermenters. These are subdivided into two groups based on the relative size of various digestive organs in relationship to the rest of the system: colonic fermenters tend to be larger species such as horses and rhinos, and cecal fermenters are smaller animals such as rabbits and rodents. Great apes derive significant amounts of phytanic acid from the hindgut fermentation of plant materials.
Monogastrics cannot digest the fiber molecule cellulose as efficiently as ruminants, though the ability to digest cellulose varies amongst species.
A monogastric digestive system works as soon as the food enters the mouth. Saliva moistens the food and begins the digestive process. (Note that horses have no (or negligible amounts of) amylase in their saliva). After being swallowed, the food passes from the esophagus into the stomach, where stomach acid and enzymes help to break down the food. Once food leaves the stomach and enters the small intestine, the pancreas secretes enzymes and alkali to neutralize the stomach acid.
References
Digestive system
Biology terminology
|
https://en.wikipedia.org/wiki/Type%20genus
|
In biological taxonomy, the type genus is the genus which defines a biological family and the root of the family name.
Zoological nomenclature
According to the International Code of Zoological Nomenclature, "The name-bearing type of a nominal family-group taxon is a nominal genus called the 'type genus'; the family-group name is based upon that of the type genus."
Any family-group name must have a type genus (and any genus-group name must have a type species, but any species-group name may, but need not, have one or more type specimens). The type genus for a family-group name is also the genus that provided the stem to which was added the ending -idae (for families).
Example: The family name Formicidae has as its type genus the genus Formica Linnaeus, 1758.
Botanical nomenclature
In botanical nomenclature, the phrase "type genus" is used, unofficially, as a term of convenience. In the ICN this phrase has no status. The code uses type specimens for ranks up to family, and types are optional for higher ranks. The Code does not refer to the genus containing that type as a "type genus".
Example: "Poa is the type genus of the family Poaceae and of the order Poales" is another way of saying that the names Poaceae and Poales are based on the generic name Poa.
Bacteriological nomenclature
The 2008 Revision of the Bacteriological Code states, "The nomenclatural type […] of a taxon above genus, up to and including order, is the legitimate name of the included genus on whose name the name of the relevant taxon is based. One taxon of each category must include the type genus. The names of the taxa which include the type genus must be formed by the addition of the appropriate suffix to the stem of the name of the type genus[…]." In 2019, it was proposed that all ranks above genus should use the genus category as the nomenclatural type. This proposal was subsequently adopted for the rank of phylum.
Example: Pseudomonas is the type genus of the family Pseudomonadaceae, t
|
https://en.wikipedia.org/wiki/Code%20Reading
|
Code Reading is a 2003 software development book written by Diomidis Spinellis.
The book is directed to programmers who want to improve their code reading abilities.
It discusses specific techniques for reading code written by others and outlines common programming concepts.
The code examples used in the book are taken from real-life software and use C to illustrate basic concepts. Excerpts from prominent open-source code systems like the
Apache Web server,
the hsqldb Java relational database engine,
the NetBSD Unix distribution,
the Perl language,
the Tomcat application server,
and the X Window System are presented.
The book inaugurated Addison-Wesley's Effective Software Development Series, edited by Scott Meyers,
and received the 2004 Software Development Productivity Award in the “Technical Books” category.
It has been translated into Chinese, Greek, Japanese, Korean, Polish, and Russian.
See also
Code review
External links
Book home page
Software engineering books
Addison-Wesley books
|
https://en.wikipedia.org/wiki/Primary%20decomposition
|
In mathematics, the Lasker–Noether theorem states that every Noetherian ring is a Lasker ring, which means that every ideal can be decomposed as an intersection, called primary decomposition, of finitely many primary ideals (which are related to, but not quite the same as, powers of prime ideals). The theorem was first proven by Emanuel Lasker (1905) for the special case of polynomial rings and convergent power series rings, and was proven in its full generality by Emmy Noether (1921).
The Lasker–Noether theorem is an extension of the fundamental theorem of arithmetic, and more generally the fundamental theorem of finitely generated abelian groups to all Noetherian rings. The theorem plays an important role in algebraic geometry, by asserting that every algebraic set may be uniquely decomposed into a finite union of irreducible components.
It has a straightforward extension to modules stating that every submodule of a finitely generated module over a Noetherian ring is a finite intersection of primary submodules. This contains the case for rings as a special case, considering the ring as a module over itself, so that ideals are submodules. This also generalizes the primary decomposition form of the structure theorem for finitely generated modules over a principal ideal domain, and for the special case of polynomial rings over a field, it generalizes the decomposition of an algebraic set into a finite union of (irreducible) varieties.
The first algorithm for computing primary decompositions for polynomial rings over a field of characteristic 0 was published by Noether's student Grete Hermann. The decomposition does not hold in general for non-commutative Noetherian rings. Noether gave an example of a non-commutative Noetherian ring with a right ideal that is not an intersection of primary ideals.
Primary decomposition of an ideal
Let R be a Noetherian commutative ring. An ideal Q of R is called primary if it is a proper ideal and for each pair of elements x and y in R such that xy is in Q, either x or some power of y is in Q;
|
https://en.wikipedia.org/wiki/Australian%20Atomic%20Energy%20Commission
|
The Australian Atomic Energy Commission (AAEC) was a statutory body of the Australian government.
It was established in 1952, replacing the Atomic Energy Policy Committee. In 1981 parts of the Commission were split off to become part of CSIRO, the remainder continuing until 1987, when it was replaced by the Australian Nuclear Science and Technology Organisation (ANSTO). The Commission head office was in the heritage-listed house Cliffbrook in Coogee, Sydney, New South Wales, while its main facilities were at the Atomic Energy Research Establishment at Lucas Heights, to the south of Sydney, established in 1958.
Highlights of the Commission's history included:
Major roles in the establishment of the IAEA and the system of international safeguards.
The construction of the HIFAR and MOATA research reactors at Lucas Heights.
The selection of the preferred tender for the construction of the proposed Jervis Bay Nuclear Power Plant.
The Ranger Uranium Mine joint venture.
Other significant facilities constructed by the Commission at Lucas Heights included a 3MeV Van de Graaff particle accelerator, installed in 1964 to provide proton beams and now upgraded to become ANTARES, a smaller 1.3MeV betatron, and radioisotope production and remote handling facilities associated with HIFAR reactor.
Significant research work included:
Radiochemistry.
Neutron diffraction.
Sodium coolant systems.
Use of beryllium as a neutron moderator.
Movement of spheres in a closed-packed lattice.
Gas centrifuge development.
Health physics.
Environmental science.
Development of synroc.
Molecular laser isotope separation and support of laser development for atomic vapor laser isotope separation.
References
Defunct Commonwealth Government agencies of Australia
Nuclear organizations
Scientific organisations based in Australia
Nuclear energy in Australia
Nuclear technology in Australia
1952 establishments in Australia
1987 disestablishments in Australia
Lucas Heights, New South Wales
Government ag
|
https://en.wikipedia.org/wiki/Plating
|
Plating is a finishing process in which a metal is deposited on a surface. Plating has been done for hundreds of years; it is also critical for modern technology. Plating is used to decorate objects, for corrosion inhibition, to improve solderability, to harden, to improve wearability, to reduce friction, to improve paint adhesion, to alter conductivity, to improve IR reflectivity, for radiation shielding, and for other purposes. Jewelry typically uses plating to give a silver or gold finish.
Thin-film deposition has plated objects as small as an atom; therefore, plating finds uses in nanotechnology.
There are several plating methods, and many variations. In one method, a solid surface is covered with a metal sheet, and then heat and pressure are applied to fuse them (a version of this is Sheffield plate). Other plating techniques include electroplating, vapor deposition under vacuum and sputter deposition. Recently, plating often refers to using liquids. Metallizing refers to coating metal on non-metallic objects.
Electroplating
In electroplating, an ionic metal is supplied with electrons to form a non-ionic coating on a substrate. A common system involves a chemical solution with the ionic form of the metal, an anode (positively charged) which may consist of the metal being plated (a soluble anode) or an insoluble anode (usually carbon, platinum, titanium, lead, or steel), and finally, a cathode (negatively charged) where electrons are supplied to produce a film of non-ionic metal.
Electroless deposition
Electroless deposition, also known as chemical or auto-catalytic plating, is a non-galvanic plating method that involves several simultaneous reactions in an aqueous solution, which occur without the use of external electrical power. The reaction is accomplished when hydrogen is released by a reducing agent, normally sodium hypophosphite (Note: the hydrogen leaves as a hydride ion) or thiourea, and oxidized, thus producing a negative charge on the surface of
|
https://en.wikipedia.org/wiki/Three-body%20problem
|
In physics and classical mechanics, the three-body problem is the problem of taking the initial positions and velocities (or momenta) of three point masses and solving for their subsequent motion according to Newton's laws of motion and Newton's law of universal gravitation. The three-body problem is a special case of the -body problem. Unlike two-body problems, no general closed-form solution exists, as the resulting dynamical system is chaotic for most initial conditions, and numerical methods are generally required.
Historically, the first specific three-body problem to receive extended study was the one involving the Moon, Earth, and the Sun. In an extended modern sense, a three-body problem is any problem in classical mechanics or quantum mechanics that models the motion of three particles.
Mathematical description
The mathematical statement of the three-body problem can be given in terms of the Newtonian equations of motion for the vector positions r_i = (x_i, y_i, z_i) of three gravitationally interacting bodies with masses m_i:
d^2 r_1/dt^2 = -G m_2 (r_1 - r_2)/|r_1 - r_2|^3 - G m_3 (r_1 - r_3)/|r_1 - r_3|^3
d^2 r_2/dt^2 = -G m_3 (r_2 - r_3)/|r_2 - r_3|^3 - G m_1 (r_2 - r_1)/|r_2 - r_1|^3
d^2 r_3/dt^2 = -G m_1 (r_3 - r_1)/|r_3 - r_1|^3 - G m_2 (r_3 - r_2)/|r_3 - r_2|^3
where G is the gravitational constant. This is a set of nine second-order differential equations. The problem can also be stated equivalently in the Hamiltonian formalism, in which case it is described by a set of 18 first-order differential equations, one for each component of the positions r_i and momenta p_i:
dr_i/dt = ∂H/∂p_i,   dp_i/dt = -∂H/∂r_i,
where H is the Hamiltonian:
H = |p_1|^2/(2m_1) + |p_2|^2/(2m_2) + |p_3|^2/(2m_3) - G m_1 m_2/|r_1 - r_2| - G m_2 m_3/|r_2 - r_3| - G m_3 m_1/|r_3 - r_1|.
In this case H is simply the total energy of the system, gravitational plus kinetic.
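The equations above are straightforward to integrate numerically; the Python sketch below (an added illustration using SciPy, with G = 1 and arbitrary masses and initial conditions) advances the nine second-order equations as 18 first-order ones.

    import numpy as np
    from scipy.integrate import solve_ivp

    G = 1.0
    m = np.array([1.0, 1.0, 1.0])           # illustrative masses

    def rhs(t, y):
        r = y[:9].reshape(3, 3)             # positions r_1, r_2, r_3
        v = y[9:].reshape(3, 3)             # velocities
        a = np.zeros_like(r)
        for i in range(3):
            for j in range(3):
                if i != j:                  # pairwise Newtonian gravity
                    d = r[i] - r[j]
                    a[i] -= G * m[j] * d / np.linalg.norm(d) ** 3
        return np.concatenate([v.ravel(), a.ravel()])

    # arbitrary initial positions (a rough triangle) and small velocities
    r0 = np.array([[1.0, 0.0, 0.0], [-0.5, 0.87, 0.0], [-0.5, -0.87, 0.0]])
    v0 = np.array([[0.0, 0.3, 0.0], [-0.26, -0.15, 0.0], [0.26, -0.15, 0.0]])
    y0 = np.concatenate([r0.ravel(), v0.ravel()])

    sol = solve_ivp(rhs, (0.0, 10.0), y0, rtol=1e-9, atol=1e-9)
    print(sol.y[:9, -1].reshape(3, 3))      # positions of the three bodies at t = 10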
Restricted three-body problem
In the restricted three-body problem, a body of negligible mass (the "planetoid") moves under the influence of two massive bodies. Because the planetoid's mass is negligible, the force it exerts on the two massive bodies may be neglected, and the system can therefore be described in terms of a two-body motion. With respect to a rotating reference frame, the two co-orbiting bodies are stationary, and the third can be stationary as well at the Lagrangian points, or move around them, for instance on a
|
https://en.wikipedia.org/wiki/Proximate%20and%20ultimate%20causation
|
A proximate cause is an event which is closest to, or immediately responsible for causing, some observed result. This exists in contrast to a higher-level ultimate cause (or distal cause) which is usually thought of as the "real" reason something occurred.
The concept is used in many fields of research and analysis, including data science and ethology.
Example: Why did the ship sink?
Proximate cause: Because it was holed beneath the waterline, water entered the hull and the ship became denser than the water which supported it, so it could not stay afloat.
Ultimate cause: Because the ship hit a rock which tore open the hole in the ship's hull.
In most situations, an ultimate cause may itself be a proximate cause in comparison to a further ultimate cause. Hence we can continue the above example as follows:
Example: Why did the ship hit the rock?
Proximate cause: Because the ship failed to change course to avoid it.
Ultimate cause: Because the ship was under autopilot and the autopilot's data was inaccurate.
(even stronger): Because the shipwrights made mistakes in the ship's construction.
(stronger yet): Because the scheduling of labor at the shipyard allows for very little rest.
(in absurdum): Because the shipyard's owners have very small profit margins in an ever-shrinking market.
In biology
Ultimate causation explains traits in terms of evolutionary forces acting on them.
Example: female animals often display preferences among male display traits, such as song. An ultimate explanation based on sexual selection states that females who display preferences have more vigorous or more attractive male offspring.
Proximate causation explains biological function in terms of immediate physiological or environmental factors.
Example: a female animal chooses to mate with a particular male during a mate choice trial. A possible proximate explanation states that one male produced a more intense signal, leading to elevated hormone levels in the female producin
|
https://en.wikipedia.org/wiki/Stratus%20VOS
|
Stratus VOS (Virtual Operating System) is a proprietary operating system running on Stratus Technologies fault-tolerant computer systems. VOS is available on Stratus's ftServer and Continuum platforms. VOS customers use it to support high-volume transaction processing applications which require continuous availability. VOS is notable for being one of the few operating systems which run on fully lockstepped hardware.
During the 1980s, an IBM version of Stratus VOS existed and was called the System/88 Operating System.
History
VOS was designed from its inception as a high-security transaction-processing environment tailored to fault-tolerant hardware. It incorporates much of the design experience that came out of the MIT/Bell-Laboratories/General-Electric (later Honeywell) Multics project.
In 1984, Stratus added a UNIX System V implementation called Unix System Facilities (USF) to VOS, integrating Unix and VOS at the kernel level.
In recent years, Stratus has added POSIX-compliance, and many open source packages can run on VOS.
Like competing proprietary operating systems, VOS saw its market share shrink steadily through the 1990s and early 2000s.
Development
Programming for VOS
VOS provides compilers for PL/I, COBOL, Pascal, FORTRAN, C (with the VOS C and GCC compilers), and C++ (also GCC). Each of these programming languages can make VOS system calls (e.g. s$seq_read to read a record from a file), and has extensions to support varying-length strings in PL/I style. Developers typically code in their favourite VOS text editor, or offline, before compiling on the system; there are no VOS IDE applications.
In its history, Stratus has offered hardware platforms based on the Motorola 68000 microprocessor family ("FT" and "XA" series), the Intel i860 microprocessor family ("XA/R" series), the HP PA-RISC processor family ("Continuum" series), and the Intel Xeon x86 processor family ("V Series"). All versions of VOS offer compilers targeted at the native instructi
|
https://en.wikipedia.org/wiki/Shannon%27s%20source%20coding%20theorem
|
In information theory, Shannon's source coding theorem (or noiseless coding theorem) establishes the statistical limits to possible data compression for data whose source is an independent identically-distributed random variable, and the operational meaning of the Shannon entropy.
Named after Claude Shannon, the source coding theorem shows that, in the limit, as the length of a stream of independent and identically-distributed random variable (i.i.d.) data tends to infinity, it is impossible to compress such data such that the code rate (average number of bits per symbol) is less than the Shannon entropy of the source, without it being virtually certain that information will be lost. However it is possible to get the code rate arbitrarily close to the Shannon entropy, with negligible probability of loss.
The source coding theorem for symbol codes places an upper and a lower bound on the minimal possible expected length of codewords as a function of the entropy of the input word (which is viewed as a random variable) and of the size of the target alphabet.
Note that, for data that exhibits more dependencies (whose source is not an i.i.d. random variable), the Kolmogorov complexity, which quantifies the minimal description length of an object, is more suitable to describe the limits of data compression. Shannon entropy takes into account only frequency regularities while Kolmogorov complexity takes into account all algorithmic regularities, so in general the latter is smaller. On the other hand, if an object is generated by a random process in such a way that it has only frequency regularities, entropy is close to complexity with high probability (Shen et al. 2017).
Statements
Source coding is a mapping from (a sequence of) symbols from an information source to a sequence of alphabet symbols (usually bits) such that the source symbols can be exactly recovered from the binary bits (lossless source coding) or recovered within some distortion (lossy source coding).
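As a hedged illustration of the symbol-code bound discussed above (expected codeword length at least the entropy, and achievable to within one bit of it), the sketch below computes the Shannon entropy of a source distribution and compares it with the expected length of a Huffman code built for that distribution. The alphabet and probabilities are made-up example values, not taken from the article.

```python
# Minimal sketch: Shannon entropy vs. expected length of a Huffman symbol code.
# The source distribution below is an assumed example.
import heapq
import math

probs = {"a": 0.5, "b": 0.25, "c": 0.15, "d": 0.10}  # assumed i.i.d. source

# Shannon entropy in bits per symbol: H = -sum p * log2(p).
entropy = -sum(p * math.log2(p) for p in probs.values())

# Build a binary Huffman tree with a min-heap of (probability, tiebreak, node).
heap = [(p, i, {"symbol": s}) for i, (s, p) in enumerate(probs.items())]
heapq.heapify(heap)
counter = len(heap)
while len(heap) > 1:
    p1, _, n1 = heapq.heappop(heap)
    p2, _, n2 = heapq.heappop(heap)
    heapq.heappush(heap, (p1 + p2, counter, {"left": n1, "right": n2}))
    counter += 1

def code_lengths(node, depth=0):
    """Walk the Huffman tree and return {symbol: codeword length in bits}."""
    if "symbol" in node:
        return {node["symbol"]: max(depth, 1)}  # single-symbol edge case
    out = {}
    out.update(code_lengths(node["left"], depth + 1))
    out.update(code_lengths(node["right"], depth + 1))
    return out

lengths = code_lengths(heap[0][2])
expected_len = sum(probs[s] * lengths[s] for s in probs)

print(f"entropy          H = {entropy:.3f} bits/symbol")
print(f"Huffman expected L = {expected_len:.3f} bits/symbol (H <= L < H + 1)")
```

For this assumed distribution the entropy is about 1.74 bits/symbol and the Huffman code's expected length is 1.75 bits/symbol, consistent with the stated bounds.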
|
https://en.wikipedia.org/wiki/List%20of%20Linux%20audio%20software
|
The following is an incomplete list of Linux audio software.
Audio players
GStreamer-based
Amarok is a free music player for Linux and other Unix-like operating systems. Multiple backends are supported (xine, helix and NMM).
Banshee is a free audio player for Linux which uses the GStreamer multimedia framework to play, encode, and decode Ogg Vorbis, MP3, and other formats (a minimal GStreamer playback sketch appears after this list). Banshee supports playing and importing audio CDs and playing and synchronizing music with iPods, and it supports the Audioscrobbler API.
Clementine is a cross-platform, open-source, Qt-based audio player written in C++. It can play Internet radio streams, manage some media devices and playlists, and supports visualizations and the Audioscrobbler API. It was made as a spin-off of Amarok 1.4 and remains a rougher version of that program.
Exaile is a free software audio player for Unix-like operating systems that aims to be functionally similar to KDE’s Amarok. Unlike Amarok, Exaile is a Python program and uses the GTK toolkit.
Guayadeque Music Player is a free and open-source audio player written in C++ using the wxWidgets toolkit.
Muine is an audio player for the GNOME desktop environment. Muine is written in C# using Mono and Gtk#. The default backend is GStreamer framework but Muine can also use xine libraries.
Quod Libet is a GTK based audio player, written in Python, using GStreamer or Xine as back ends. Its distinguishing features are a rigorous approach to tagging (making it especially popular with classical music fans) and a flexible approach to music library management. It supports regular expression and Boolean algebra-based searches, and is stated to perform efficiently with music libraries of tens of thousands of tracks.
Rhythmbox is an audio player inspired by Apple iTunes.
Songbird is a cross-platform, open-source media player and web browser. It is built using code from the Firefox web browser. The graphical user interface (GUI) is very similar to Apple iTunes, and it can sync with Apple iPods. Like
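The GStreamer-based players listed above delegate decoding and audio output to the GStreamer framework itself. As a rough illustration of what that backend provides, the sketch below (referenced from the Banshee entry) plays a single file through GStreamer's standard playbin element using its Python bindings; the file path is an assumed placeholder.

```python
# Minimal sketch: playing one file through GStreamer's playbin element,
# the framework the players above build on. The file path is a placeholder.
import gi
gi.require_version("Gst", "1.0")
from gi.repository import Gst

Gst.init(None)

player = Gst.ElementFactory.make("playbin", "player")  # high-level playback element
player.set_property("uri", "file:///tmp/example.ogg")  # assumed example path
player.set_state(Gst.State.PLAYING)

# Block until the stream ends or an error is reported on the pipeline bus.
bus = player.get_bus()
bus.timed_pop_filtered(Gst.CLOCK_TIME_NONE,
                       Gst.MessageType.EOS | Gst.MessageType.ERROR)
player.set_state(Gst.State.NULL)
```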
|