https://en.wikipedia.org/wiki/Initial%20ramdisk
|
In Linux systems, initrd (initial ramdisk) is a scheme for loading a temporary root file system into memory, to be used as part of the Linux startup process. initrd and initramfs (from INITial RAM File System) refer to two different methods of achieving this. Both are commonly used to make preparations before the real root file system can be mounted.
Rationale
Many Linux distributions ship a single, generic Linux kernel image, one that the distribution's developers create specifically to boot on a wide variety of hardware. The device drivers for this generic kernel image are included as loadable kernel modules, because statically compiling many drivers into one kernel makes the kernel image much larger, perhaps too large to boot on computers with limited memory, and in some cases causes boot-time crashes or other problems due to probing for nonexistent or conflicting hardware. A statically compiled kernel also leaves drivers that are never used sitting in kernel memory, while the modular approach raises the problem of detecting and loading the modules necessary to mount the root file system at boot time, or, for that matter, of deducing where or what the root file system is.
To further complicate matters, the root file system may be on a software RAID volume, LVM, NFS (on diskless workstations), or on an encrypted partition. All of these require special preparations to mount.
Another complication is kernel support for hibernation, which suspends the computer to disk by dumping an image of the entire contents of memory to a swap partition or a regular file, then powering off. On next boot, this image has to be made accessible before it can be loaded back into memory.
To avoid having to hardcode handling for so many special cases into the kernel, an initial boot stage with a temporary root file-system – now dubbed early user space – is used. This root file-system can contain user-space helpers which do the hardware detection, module loading and device discovery necessary to get the real root file system mounted.
|
https://en.wikipedia.org/wiki/N-player%20game
|
In game theory, an n-player game is a game which is well defined for any number of players. This is usually used in contrast to standard 2-player games that are only specified for two players. In defining n-player games, game theorists usually provide a definition that allows for any (finite) number of players. The limiting case of $n \to \infty$ is the subject of mean field game theory.
Changing games from 2-player games to n-player games entails some concerns. For instance, the Prisoner's dilemma is a 2-player game. One might define an n-player Prisoner's Dilemma in which a single defection results in everyone else getting the sucker's payoff, as in the sketch below. Alternatively, it might take a certain amount of defection before the cooperators receive the sucker's payoff. (One example of an n-player Prisoner's Dilemma is the Diner's dilemma.)
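A minimal sketch of an n-player payoff rule of the first kind; the numeric payoffs (3, 0, 5) are illustrative assumptions, not fixed by the article:

```python
# One possible n-player Prisoner's Dilemma payoff: a single defection gives
# every cooperator the sucker's payoff (payoff values are hypothetical).

def payoff(defects: bool, other_defectors: int, n: int) -> float:
    cooperators_among_others = (n - 1) - other_defectors
    if defects:
        return 5.0 * cooperators_among_others  # gain from each cooperator
    # a cooperator gets 3 only if nobody defects; otherwise the sucker's payoff
    return 3.0 if other_defectors == 0 else 0.0

n = 4
print(payoff(False, 0, n))  # 3.0  everyone cooperates
print(payoff(False, 1, n))  # 0.0  sucker's payoff after a single defection
print(payoff(True, 0, n))   # 15.0 lone defector exploits three cooperators
```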
References
Game theory game classes
|
https://en.wikipedia.org/wiki/Unscrupulous%20diner%27s%20dilemma
|
In game theory, the unscrupulous diner's dilemma (or just diner's dilemma) is an n-player prisoner's dilemma. The situation imagined is that several people go out to eat, and before ordering, they agree to split the cost equally between them. Each diner must now choose whether to order the costly or cheap dish. It is presupposed that the costlier dish is better than the cheaper, but not by enough to warrant paying the difference when eating alone. Each diner reasons that, by ordering the costlier dish, the extra cost to their own bill will be small, and thus the better dinner is worth the money. However, all diners having reasoned thus, they each end up paying for the costlier dish, which, by assumption, is worse for them than if they had all ordered the cheaper one.
Formal definition and equilibrium analysis
Let a represent the joy of eating the expensive meal, b the joy of eating the cheap meal, k the cost of the expensive meal, l the cost of the cheap meal, and n the number of players. From the description above we have the following ordering: $k > l$ and $a > b$, but $a - k < b - l$ (the expensive meal is not worth its full extra cost to a diner paying alone). Also, in order to make the game sufficiently similar to the Prisoner's dilemma, we presume that one would prefer to order the expensive meal given that the others will help defray the cost: $a - \frac{k}{n} > b - \frac{l}{n}$.
Consider an arbitrary set of strategies by a player's opponents. Let the total cost of the other players' meals be x. The cost to the player of ordering the cheap meal is $\frac{l + x}{n}$ and the cost of ordering the expensive meal is $\frac{k + x}{n}$. So the utilities are $a - \frac{k + x}{n}$ for the expensive meal and $b - \frac{l + x}{n}$ for the cheaper meal. By the assumption $a - \frac{k}{n} > b - \frac{l}{n}$ (the shared term $\frac{x}{n}$ cancels), the utility of ordering the expensive meal is higher. Remember that the choice of opponents' strategies was arbitrary and that the situation is symmetric. This proves that ordering the expensive meal is strictly dominant and thus the unique Nash equilibrium.
If everyone orders the expensive meal, all of the diners pay k and the utility of every player is $a - k$. On the other hand, if all the individuals had ordered the cheap meal, the utility of every player would have been $b - l$, which by the ordering above is strictly higher.
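A minimal numeric sketch of the argument; the values are illustrative and chosen to satisfy the orderings above:

```python
# Illustrative parameters satisfying k > l, a > b, a - k < b - l,
# and a - k/n > b - l/n.
a, b = 10.0, 8.0   # joy of the expensive / cheap meal
k, l = 6.0, 2.0    # cost of the expensive / cheap meal
n = 4              # number of diners

def utility(own_cost, others_cost, joy):
    return joy - (own_cost + others_cost) / n

for m in range(n):  # m = number of the other n-1 diners ordering expensive
    x = m * k + (n - 1 - m) * l
    print(m, round(utility(l, x, b), 2), round(utility(k, x, a), 2))
# The expensive column is higher in every row (strict dominance), yet
# all-expensive gives a - k = 4.0 while all-cheap would give b - l = 6.0.
```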
|
https://en.wikipedia.org/wiki/List%20of%20Xbox%20games%20compatible%20with%20Xbox%20360
|
The Xbox 360 gaming console received updates from Microsoft from its launch in 2005 until November 2007 that enabled it to play select games from its predecessor, the Xbox. The Xbox 360 launched with backward compatibility, with the number of supported Xbox games varying by region. Microsoft continued to update the list of Xbox games that were compatible with the Xbox 360 until November 2007, when the list was finalized. Microsoft later launched the Xbox Originals program on December 7, 2007, through which select backward compatible Xbox games could be purchased digitally on Xbox 360 consoles; the program ended less than two years later, in June 2009. The following is a list of all backward compatible games on Xbox 360 under this functionality.
History
At its launch in November 2005, the Xbox 360 did not possess hardware-based backward compatibility with Xbox games, due to the different hardware and architecture used in the Xbox and Xbox 360. Instead, backward compatibility was achieved using software emulation. When the Xbox 360 launched, 212 Xbox games were supported in North America and 156 in Europe. The Japanese market had the fewest titles supported at launch, with only 12 games. Microsoft's final update to the list of backward compatible titles was in November 2007, bringing the final total to 461 Xbox games.
Using the backward compatibility feature on the Xbox 360 requires a hard drive. Updates to the list were provided by Microsoft as part of regular software updates via the Internet, by ordering a disc by mail from the official website, or by downloading the update from the official website and burning it to a CD or DVD. Subscribers to Official Xbox Magazine also received updates to the backward compatibility list on the demo discs included with the magazine.
Supported original Xbox games each run with an emulation profile that has been recompiled for that specific game, with the emulation profiles stored on the console's hard drive.
|
https://en.wikipedia.org/wiki/Lamb%E2%80%93Oseen%20vortex
|
In fluid dynamics, the Lamb–Oseen vortex models a line vortex that decays due to viscosity. This vortex is named after Horace Lamb and Carl Wilhelm Oseen.
Mathematical description
Oseen looked for a solution of the Navier–Stokes equations in cylindrical coordinates $(r, \theta, z)$ with velocity components of the form
$v_r = 0, \qquad v_\theta = \frac{\Gamma}{2\pi r}\, g(r, t), \qquad v_z = 0,$
where $\Gamma$ is the circulation of the vortex core. The Navier–Stokes equations then reduce to
$\frac{\partial g}{\partial t} = \nu \left( \frac{\partial^2 g}{\partial r^2} - \frac{1}{r} \frac{\partial g}{\partial r} \right),$
which, subject to the conditions that it is regular at $r = 0$ and becomes unity as $r \to \infty$, leads to
$g(r, t) = 1 - \mathrm{e}^{-r^2/(4\nu t)},$
where $\nu$ is the kinematic viscosity of the fluid. At $t = 0$, we have a potential vortex with concentrated vorticity at the $z$ axis, and this vorticity diffuses away as time passes.
The only non-zero vorticity component is in the $z$ direction, given by
$\omega_z(r, t) = \frac{\Gamma}{4\pi\nu t}\, \mathrm{e}^{-r^2/(4\nu t)}.$
The pressure field simply ensures the vortex rotates in the circumferential direction, providing the centripetal force:
$\frac{\partial p}{\partial r} = \rho\, \frac{v_\theta^2}{r},$
where ρ is the constant density.
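A short numeric sketch of the profile above; parameter values are illustrative (water-like kinematic viscosity assumed), SI units:

```python
import numpy as np

Gamma = 1.0    # circulation, m^2/s (illustrative)
nu = 1.0e-6    # kinematic viscosity, m^2/s

def v_theta(r, t):
    """Azimuthal velocity of the Lamb-Oseen vortex at radius r, time t."""
    return Gamma / (2 * np.pi * r) * (1 - np.exp(-r**2 / (4 * nu * t)))

r = np.array([0.001, 0.01, 0.1])
for t in (1.0, 100.0, 10000.0):
    print(t, np.round(v_theta(r, t), 4))
# Far from the axis the profile matches the potential vortex Gamma/(2*pi*r);
# near the axis it is nearly rigid-body rotation, and the viscous core
# radius grows diffusively, like sqrt(4*nu*t).
```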
Generalized Oseen vortex
The generalized Oseen vortex may be obtained by looking for solutions of the form
that leads to the equation
Self-similar solution exists for the coordinate , provided , where is a constant, in which case . The solution for may be written according to Rott (1958) as
where is an arbitrary constant. For , the classical Lamb–Oseen vortex is recovered. The case corresponds to the axisymmetric stagnation point flow, where is a constant. When , , a Burgers vortex is obtained. For arbitrary , the solution becomes , where is an arbitrary constant. As , the Burgers vortex is recovered.
See also
The Rankine vortex and Kaufmann (Scully) vortex are common simplified approximations for a viscous vortex.
References
Vortices
Equations of fluid dynamics
|
https://en.wikipedia.org/wiki/Batchelor%20vortex
|
In fluid dynamics, Batchelor vortices, first described by George Batchelor in a 1964 article, have been found useful in analyses of airplane vortex wake hazard problems.
The model
The Batchelor vortex is an approximate solution to the Navier–Stokes equations obtained using a boundary layer approximation. The physical reasoning behind this approximation is the assumption that the axial gradient of the flow field of interest is of much smaller magnitude than the radial gradient.
The axial, radial and azimuthal velocity components of the vortex are denoted , and respectively and can be represented in cylindrical coordinates as follows:
The parameters in the above equations are
, the free-stream axial velocity,
, the velocity scale (used for nondimensionalization),
, the length scale (used for nondimensionalization),
, a measure of the core size, with initial core size and representing viscosity,
, the swirl strength, given as a ratio between the maximum tangential velocity and the core velocity.
Note that the radial component of the velocity is zero and that the axial and azimuthal components depend only on .
We now write the system above in dimensionless form by scaling time by a factor . Using the same symbols for the dimensionless variables, the Batchelor vortex can be expressed in terms of the dimensionless variables as
where denotes the free stream axial velocity and is the Reynolds number.
If one lets the axial velocity scale vanish and considers an infinitely large swirl number, then the Batchelor vortex simplifies to the Lamb–Oseen vortex for the azimuthal velocity:
$V_\theta(r) = \frac{\Gamma}{2\pi r}\left(1 - \mathrm{e}^{-r^2/R^2}\right),$
where $\Gamma$ is the circulation.
References
External links
Continuous spectra of the Batchelor vortex (Authored by Xueri Mao and Spencer Sherwin and published by Imperial College London)
Equations of fluid dynamics
Vortices
Fluid dynamics
|
https://en.wikipedia.org/wiki/Kaufmann%20vortex
|
The Kaufmann vortex, also known as the Scully model, is a mathematical model for a vortex taking account of viscosity. It uses an algebraic velocity profile. This vortex is not a solution of the Navier–Stokes equations.
Kaufmann and Scully's model for the velocity in the Θ direction is
$V_\theta(r) = \frac{\Gamma}{2\pi}\, \frac{r}{r_c^2 + r^2},$
where $\Gamma$ is the circulation and $r_c$ the core radius.
The model was suggested by W. Kaufmann in 1962, and later by Scully and Sullivan in 1972 at the Massachusetts Institute of Technology.
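A sketch comparing the algebraic profile with the Lamb–Oseen profile at a matched core radius; the values are illustrative, and α ≈ 1.25643 is the constant that places the Lamb–Oseen velocity peak at r = r_c:

```python
import numpy as np

Gamma, r_c = 1.0, 0.1  # illustrative circulation and core radius

def v_kaufmann(r):
    return Gamma / (2 * np.pi) * r / (r_c**2 + r**2)

def v_lamb_oseen(r, alpha=1.25643):
    return Gamma / (2 * np.pi * r) * (1 - np.exp(-alpha * (r / r_c)**2))

for r in (0.05, 0.1, 0.5, 2.0):
    print(f"r={r}: Kaufmann={v_kaufmann(r):.4f}  Lamb-Oseen={v_lamb_oseen(r):.4f}")
# Both approach the potential vortex Gamma/(2*pi*r) far from the core; the
# algebraic profile is cheaper to evaluate but is not a Navier-Stokes solution.
```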
See also
Rankine vortex – a simpler, but more crude, approximation for a vortex.
Lamb–Oseen vortex – the exact solution for a free vortex decaying due to viscosity.
References
Equations of fluid dynamics
Vortices
|
https://en.wikipedia.org/wiki/Nintendo%20Power%20%28cartridge%29
|
Nintendo Power was a video game distribution service for the Super Famicom and Game Boy, operated by Nintendo, that ran exclusively in Japan from late 1996 until February 2007. The service allowed users to download Super Famicom or Game Boy titles onto a special flash memory cartridge for a lower price than that of a pre-written ROM cartridge.
At its 1996 launch, the service initially offered only Super Famicom titles. Game Boy titles began being offered on March 1, 2000. The service was ultimately discontinued on February 28, 2007.
History
Background
During the market lifespan of the Famicom, Nintendo developed the Disk System, a floppy disk drive peripheral with expanded RAM which allowed players to use re-writable disk media called "disk cards" at Disk Writer kiosks. The system was relatively popular but suffered from issues of limited capacity. However, Nintendo did see a market for an economical re-writable medium due to the popularity of the Disk System.
Nintendo's first dynamic flash storage subsystem for the Super Famicom was the Satellaview, a peripheral released in 1995 that facilitated the delivery of a set of unique Super Famicom games via the St.GIGA satellite network.
Release
The Super Famicom version of Nintendo Power was released in late 1996.
The Game Boy Nintendo Power was originally planned to launch on November 1, 1999; however, due to the 1999 Jiji earthquake disrupting production in Taiwan, it was delayed until March 1, 2000.
Nintendo Power was discontinued in February 2007, with kiosks being removed from stores.
Usage
While the service was on the market, the user would first purchase the cartridge, then bring it to a store featuring a Nintendo Power kiosk. The user would select games to be copied to the cartridge, and the store provided a printed copy of the manual. Game prices varied, with older games being relatively cheap, and newer games and Nintendo Power exclusives being more expensive.
The proprietary medium made illicit duplication much more difficult.
|
https://en.wikipedia.org/wiki/Dielectric%20breakdown%20model
|
Dielectric breakdown model (DBM) is a macroscopic mathematical model combining the diffusion-limited aggregation model with an electric field. It was developed by Niemeyer, Pietronero, and Wiesmann in 1984. It describes the patterns of dielectric breakdown of solids, liquids, and even gases, explaining the formation of the branching, self-similar Lichtenberg figures.
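The growth rule can be sketched in a few lines: repeatedly solve Laplace's equation around the conducting cluster, then add a neighbouring site with probability proportional to the local potential raised to a power η. The grid size, relaxation scheme, and parameter values below are illustrative choices, not taken from the original paper:

```python
import random

N, ETA, STEPS, SWEEPS = 41, 1.0, 100, 50
phi = [[1.0] * N for _ in range(N)]     # potential: 1 at the far boundary
cluster = {(N // 2, N // 2)}            # seed electrode held at potential 0
phi[N // 2][N // 2] = 0.0

def neighbours(x, y):
    return [(x + dx, y + dy) for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1))
            if 0 < x + dx < N - 1 and 0 < y + dy < N - 1]

for _ in range(STEPS):
    # relax Laplace's equation; cluster sites stay 0, the boundary stays 1
    for _ in range(SWEEPS):
        for x in range(1, N - 1):
            for y in range(1, N - 1):
                if (x, y) not in cluster:
                    phi[x][y] = 0.25 * (phi[x + 1][y] + phi[x - 1][y]
                                        + phi[x][y + 1] + phi[x][y - 1])
    # grow into an empty neighbour with probability proportional to phi**ETA
    cand = list({p for c in cluster for p in neighbours(*c)} - cluster)
    weights = [phi[x][y] ** ETA for (x, y) in cand]
    x, y = random.choices(cand, weights=weights)[0]
    cluster.add((x, y))
    phi[x][y] = 0.0

print("\n".join("".join("#" if (x, y) in cluster else "." for y in range(N))
                for x in range(N)))   # crude rendering of the branching pattern
```

Larger η favours growth along the strongest field and yields sparser, more directional branches; η = 0 reduces to uniform (Eden-like) growth.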
See also
Eden growth model
Lichtenberg figure
Diffusion-limited aggregation
References
External links
Dielectric Breakdown Model
Electricity
Mathematical modeling
Electrical breakdown
|
https://en.wikipedia.org/wiki/Symmetric%20product%20of%20an%20algebraic%20curve
|
In mathematics, the n-fold symmetric product of an algebraic curve C is the quotient space of the n-fold cartesian product
C × C × ... × C
or Cn by the group action of the symmetric group Sn on n letters permuting the factors. It exists as a smooth algebraic variety denoted by ΣnC. If C is a compact Riemann surface, ΣnC is therefore a complex manifold. Its interest in relation to the classical geometry of curves is that its points correspond to effective divisors on C of degree n, that is, formal sums of points with non-negative integer coefficients.
For C the projective line (say the Riemann sphere ∪ {∞} ≈ S2), its nth symmetric product ΣnC can be identified with complex projective space of dimension n.
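One standard way to see this identification, sketched here using the usual correspondence between point sets and binary forms (not spelled out in the article itself):

```latex
% An unordered n-tuple of points of the projective line is the zero set of a
% binary form of degree n, determined up to scale by its coefficients:
\[
  \{x_1,\dots,x_n\} \subset \mathbf{P}^1,\quad x_i = [\alpha_i : \beta_i]
  \;\longleftrightarrow\;
  \prod_{i=1}^{n} (\beta_i X - \alpha_i Y)
  \;=\; a_0 X^n + a_1 X^{n-1} Y + \cdots + a_n Y^n,
\]
\[
  (a_0 : a_1 : \cdots : a_n) \;\in\; \mathbf{P}^n .
\]
```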
If C has genus g ≥ 1 then the ΣnC are closely related to the Jacobian variety J of C. More precisely, for n taking values up to g they form a sequence of approximations to J from below: their images in J under addition on J (see theta-divisor) have dimension n and fill up J, with some identifications caused by special divisors.
For g = n we have ΣgC actually birationally equivalent to J; the Jacobian is a blowing down of the symmetric product. That means that at the level of function fields it is possible to construct J by taking linearly disjoint copies of the function field of C, and within their compositum taking the fixed subfield of the symmetric group. This is the source of André Weil's technique of constructing J as an abstract variety from 'birational data'. Other ways of constructing J, for example as a Picard variety, are preferred now but this does mean that for any rational function F on C
F(x1) + ... + F(xg)
makes sense as a rational function on J, for the xi staying away from the poles of F.
For n > g the mapping from ΣnC to J by addition fibers it over J; when n is large enough (around twice g) this becomes a projective space bundle (the Picard bundle). It has been studied in detail, for example by Kempf and Mukai.
Betti numbers and the E
|
https://en.wikipedia.org/wiki/Linearly%20disjoint
|
In mathematics, algebras A, B over a field k inside some field extension $\Omega$ of k are said to be linearly disjoint over k if the following equivalent conditions are met:
(i) The map $A \otimes_k B \to AB$ induced by $(x, y) \mapsto xy$ is injective.
(ii) Any k-basis of A remains linearly independent over B.
(iii) If $\{u_i\}$, $\{v_j\}$ are k-bases for A, B, then the products $\{u_i v_j\}$ are linearly independent over k.
Note that, since every subalgebra of $\Omega$ is a domain, (i) implies that $A \otimes_k B$ is a domain (in particular reduced). Conversely, if A and B are fields, either A or B is an algebraic extension of k, and $A \otimes_k B$ is a domain, then it is a field and A and B are linearly disjoint. However, there are examples where $A \otimes_k B$ is a domain but A and B are not linearly disjoint: for example, A = B = k(t), the field of rational functions over k.
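A concrete instance of criterion (iii), with k = Q inside Ω = C (an illustrative example, not from the article):

```latex
% Take A = \mathbf{Q}(\sqrt{2}) and B = \mathbf{Q}(\sqrt{3}) over
% k = \mathbf{Q}. The products of the k-bases \{1, \sqrt{2}\} and
% \{1, \sqrt{3}\} are
\[
  \{\, 1,\ \sqrt{2},\ \sqrt{3},\ \sqrt{6} \,\},
\]
% which are linearly independent over \mathbf{Q}; hence A and B are linearly
% disjoint and A \otimes_{\mathbf{Q}} B \cong \mathbf{Q}(\sqrt{2}, \sqrt{3}).
% By contrast, for A = B = k(t) the basis products t^i \cdot t^j coincide for
% many pairs (i, j), so (iii) fails even though the tensor product is a domain.
```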
One also has: A, B are linearly disjoint over k if and only if the subfields of $\Omega$ generated by A, resp. B, are linearly disjoint over k. (cf. Tensor product of fields)
Suppose A, B are linearly disjoint over k. If $A' \subseteq A$, $B' \subseteq B$ are subalgebras, then $A'$ and $B'$ are linearly disjoint over k. Conversely, if any finitely generated subalgebras of algebras A, B are linearly disjoint, then A, B are linearly disjoint (since the condition involves only finite sets of elements).
See also
Tensor product of fields
References
P.M. Cohn (2003). Basic algebra
Algebra
|
https://en.wikipedia.org/wiki/IMLAC
|
IMLAC Corporation was an American electronics company in Needham, Massachusetts, that manufactured graphical display systems, mainly the PDS-1 and PDS-4, in the 1970s.
The PDS-1 debuted in 1970. It was the first low-cost commercial realization of Ivan Sutherland's Sketchpad system of a highly interactive computer graphics display with motion. Selling for $8,300 before options, its price was equivalent to the cost of four Volkswagen Beetles. The PDS-1 was functionally similar to the much bigger IBM 2250, which cost 30 times more. It was a significant step forward towards computer workstations and modern displays.
The PDS-1 consisted of a CRT monitor, keyboard, light pen, and a control panel on a small desk with most electronic logic in the desk pedestal. The electronics included a simple 16-bit minicomputer, 8-16 kilobytes of magnetic-core memory, and a display processor for driving CRT beam movements.
IMLAC is not an acronym but is the name of a poet-philosopher from Samuel Johnson's novel, The History of Rasselas, Prince of Abissinia.
Timeline of products
1968: Imlac founded. Their business plan was interactive graphics terminals for stock exchange traders, which did not happen.
1970: PDS-1 introduced for the general graphics market.
1972: PDS-1D introduced. It was similar to the PDS-1 with improved circuits and backplane.
1973: PDS-1G introduced.
1974: PDS-4 introduced. It ran twice as fast and displayed twice as much text or graphics without flicker. Its display processor supported instantaneous interactive magnification with clipping. It had an optional floating point add-on.
1977: A total of about 700 PDS-4 systems had been sold in the US. They were built upon order rather than being mass-produced.
1978: Dynagraphic 3250 introduced. It was designed to be used mainly by a proprietary Fortran-coded graphics library running on larger computers, without customer programming inside the terminal.
????: Dynagraphic 6220 introduced.
1979: Imlac Corporat
|
https://en.wikipedia.org/wiki/E%20Ink
|
E Ink (electronic ink) is a brand of electronic paper (e-paper) display technology commercialized by the E Ink Corporation, which was co-founded in 1997 by MIT undergraduates JD Albert and Barrett Comiskey, MIT Media Lab professor Joseph Jacobson, Jerome Rubin and Russ Wilcox.
It is available in grayscale and color and is used in mobile devices such as e-readers, digital signage, smartwatches, mobile phones, electronic shelf labels and architecture panels.
History
Background
The notion of a low-power paper-like display had existed since the 1970s, originally conceived by researchers at Xerox PARC, but had never been realized. While a post-doctoral student at Stanford University, physicist Joseph Jacobson envisioned a multi-page book with content that could be changed at the push of a button and required little power to use.
Neil Gershenfeld recruited Jacobson for the MIT Media Lab in 1995, after hearing Jacobson's ideas for an electronic book. Jacobson, in turn, recruited MIT undergrads Barrett Comiskey, a math major, and J.D. Albert, a mechanical engineering major, to create the display technology required to realize his vision.
Product development
The initial approach was to create tiny spheres which were half white and half black, and which, depending on the electric charge, would rotate such that the white side or the black side would be visible on the display. Albert and Comiskey were told this approach was impossible by most experienced chemists and materials scientists and had trouble creating these perfectly half-white, half-black spheres; during his experiments, Albert accidentally created some all-white spheres.
Comiskey experimented with charging and encapsulating those all-white particles in microcapsules mixed in with a dark dye. The result was a system of microcapsules that could be applied to a surface and could then be charged independently to create black and white images. A first patent was filed by MIT for the microencapsulated electrophore
|
https://en.wikipedia.org/wiki/Glossary%20of%20arithmetic%20and%20diophantine%20geometry
|
This is a glossary of arithmetic and diophantine geometry in mathematics, areas growing out of the traditional study of Diophantine equations to encompass large parts of number theory and algebraic geometry. Much of the theory is in the form of proposed conjectures, which can be related at various levels of generality.
Diophantine geometry in general is the study of algebraic varieties V over fields K that are finitely generated over their prime fields—including as of special interest number fields and finite fields—and over local fields. Of those, only the complex numbers are algebraically closed; over any other K the existence of points of V with coordinates in K is something to be proved and studied as an extra topic, even knowing the geometry of V.
Arithmetic geometry can be more generally defined as the study of schemes of finite type over the spectrum of the ring of integers. Arithmetic geometry has also been defined as the application of the techniques of algebraic geometry to problems in number theory.
A
B
C
D
E
F
G
H
I
K
L
M
N
O
Q
R
S
T
U
V
W
See also
Arithmetic topology
Arithmetic dynamics
References
Further reading
Dino Lorenzini (1996), An Invitation to Arithmetic Geometry, AMS Bookstore.
Diophantine geometry
Algebraic geometry
Geometry
Wikipedia glossaries using description lists
|
https://en.wikipedia.org/wiki/Height%20function
|
A height function is a function that quantifies the complexity of mathematical objects. In Diophantine geometry, height functions quantify the size of solutions to Diophantine equations and are typically functions from a set of points on algebraic varieties (or a set of algebraic varieties) to the real numbers.
For instance, the classical or naive height over the rational numbers is typically defined to be the maximum of the numerators and denominators of the coordinates written in lowest terms (e.g. 7 for the coordinates (3/7, 1/2)), but on a logarithmic scale.
Significance
Height functions allow mathematicians to count objects, such as rational points, that are otherwise infinite in quantity. For instance, the set of rational numbers of naive height (the maximum of the numerator and denominator when expressed in lowest terms) below any given constant is finite despite the set of rational numbers being infinite. In this sense, height functions can be used to prove asymptotic results such as Baker's theorem in transcendental number theory, which was proved by Alan Baker.
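A small brute-force sketch of this finiteness, using the multiplicative height before taking logarithms:

```python
# Enumerate all rationals p/q in lowest terms with naive height
# max(|p|, q) <= H: the set is finite for every H, though Q is infinite.
from math import gcd

def rationals_up_to_height(H):
    for q in range(1, H + 1):
        for p in range(-H, H + 1):
            if gcd(abs(p), q) == 1:
                yield (p, q)

for H in (1, 2, 3, 5, 10):
    print(H, sum(1 for _ in rationals_up_to_height(H)))
# Each count is finite and grows on the order of H^2.
```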
In other cases, height functions can distinguish some objects based on their complexity. For instance, the subspace theorem, proved by Wolfgang Schmidt, demonstrates that points of small height (i.e. small complexity) in projective space lie in a finite number of hyperplanes; it generalizes Siegel's theorem on integral points and the solution of the S-unit equation.
Height functions were crucial to the proofs of the Mordell–Weil theorem and Faltings's theorem, by André Weil and Gerd Faltings respectively. Several outstanding unsolved problems about the heights of rational points on algebraic varieties, such as the Manin conjecture and Vojta's conjecture, have far-reaching implications for problems in Diophantine approximation, Diophantine equations, arithmetic geometry, and mathematical logic.
History
An early form of height function was proposed by Giambattista Benedetti (c. 1563), who argued that the consonance of a musical interval could be measured by the product of its numerator and denominator (in lowest terms).
|
https://en.wikipedia.org/wiki/D54%20%28protocol%29
|
D54 is an analogue lighting communications protocol used to control stage lighting. It was developed by Strand Lighting in the late 1970s and was originally designed to handle 384 channels. Though more advanced digital protocols such as DMX512 now exist, D54 was widely used in larger venues, such as London's West End theatres, which had Strand Lighting dimming installations, and it was popular amongst technicians because all the channel levels can be "seen" on an oscilloscope. D54 is still supported on legacy equipment such as the Strand 500 series consoles alongside DMX512; generally a protocol converter is now used to convert DMX512 down to the native D54.
History
One of the significant problems in controlling dimmers is getting the control signal from a lighting control unit to the dimmer units. For many years this was achieved by providing a dedicated wire from the control unit to each dimmer (analogue control), where the voltage present on the wire was varied by the control unit to set the output level of the dimmer. In about 1976, to deal with the bulky cable requirements of analogue control, Strand's R&D group in the UK developed an analogue multiplexing control system designated D54 (D54 is the internal standards number, which became the accepted name). It was originally developed for use on the Strand Galaxy (1980) and Strand Gemini (1984) control desks.
Although a claimed expansion capability of 768 dimmers was documented, early receivers used simple hardware counters that rolled over before reaching 768, effectively preventing commercial exploitation. The refresh period would also have been slow on such a long dimmer update cycle. Instead, multiple D54 streams were supported by some later consoles.
D54 was developed in the United Kingdom at approximately the same time as AMX192 (another analogue multiplexing protocol) was developed in the United States, and the two protocols remained almost exclusively in those countries.
Protocol
|
https://en.wikipedia.org/wiki/Repeated%20game
|
In game theory, a repeated game is an extensive form game that consists of a number of repetitions of some base game (called a stage game). The stage game is usually one of the well-studied 2-person games. Repeated games capture the idea that a player will have to take into account the impact of their current action on the future actions of other players; this impact is sometimes called their reputation. Single stage game or single shot game are names for non-repeated games.
For a real-life example of a repeated game, consider two gas stations that are adjacent to one another. They compete by publicly posting prices, and have the same constant marginal cost c (the wholesale price of gasoline). Assume that when they both charge p = 10, their joint profit is maximized, resulting in a high profit for both. Despite this being the best outcome for them, each is motivated to deviate: by modestly lowering its price, either station can steal nearly all of its competitor's customers and thereby almost double its revenue. The only price with no such profitable deviation is p = c, at which profit is zero. In other words, in the pricing-competition game, the unique Nash equilibrium is inefficient (for the gas stations): both charge p = c. This is the rule rather than the exception: in a stage game, the Nash equilibrium is the only result an agent can consistently obtain in an interaction, and it is usually inefficient for the agents, because each is concerned only with its own interests and ignores the benefits or costs its actions bring to competitors. On the other hand, real gas stations make a profit even with a competitor next door. One of the most crucial reasons is that their interaction is not one-off. This situation is captured by repeated games, in which the two gas stations compete on price (the stage game) across an indefinite time range t = 0, 1, 2, ....
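A minimal numeric sketch of why repetition changes the analysis; the profit numbers and the grim-trigger reversion to zero profit are illustrative assumptions, not from the article:

```python
# Collusion at p = 10 yields pi_c per period; undercutting yields pi_d once,
# then zero profit forever (reversion to the p = c equilibrium). With
# discount factor delta, collusion is sustainable when
# pi_c / (1 - delta) >= pi_d.

pi_c = 10.0   # per-period collusive profit (hypothetical)
pi_d = 19.0   # one-shot gain from undercutting: nearly double

def collusion_sustainable(delta: float) -> bool:
    cooperate_forever = pi_c / (1 - delta)   # geometric series of pi_c
    return cooperate_forever >= pi_d

for delta in (0.3, 0.5, 0.7, 0.9):
    print(delta, collusion_sustainable(delta))
# Patient players (large delta) sustain the collusive price; impatient ones defect.
```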
Finitely vs infinitely repeated games
Repeated games ma
|
https://en.wikipedia.org/wiki/GIOVE
|
GIOVE, or Galileo In-Orbit Validation Element, is the name for two satellites built for the European Space Agency (ESA) to test technology in orbit for the Galileo positioning system.
The name was chosen as a tribute to Galileo Galilei, who discovered the first four natural satellites of Jupiter, and later discovered that they could be used as a universal clock to obtain the longitude of a point on the Earth's surface.
The GIOVE satellites are operated by the GIOVE Mission (GIOVE-M) segment in the frame of the risk mitigation for the In Orbit Validation (IOV) of the Galileo positioning system.
Purpose
These validation satellites were previously known as the Galileo System Testbed (GSTB) version 2 (GSTB-V2). In 2004 the Galileo System Test Bed Version 1 (GSTB-V1) project validated the on-ground algorithms for Orbit Determination and Time Synchronization (OD&TS). This project, led by ESA and European Satellite Navigation Industries, has provided industry with fundamental knowledge to develop the mission segment of the Galileo positioning system.
GIOVE satellites transmitted multifrequency ranging signals equivalent to the signals of future Galileo: L1BC, L1A, E6BC, E6A, E5a, E5b. The main purpose of the GIOVE mission was to test and validate the reception and performance of novel code modulations designed for Galileo including new signals based on the use of the BOC (Binary Offset Carrier) technique, in particular the high-performance E5AltBOC signal.
Satellites
GIOVE-A
Previously known as GSTB-V2/A, this satellite was constructed by Surrey Satellite Technology Ltd (SSTL).
Its mission's main goal was to claim the frequencies allocated to Galileo by the ITU. It has two independently developed Galileo signal generation chains and also tests the design of two on-board rubidium atomic clocks and the orbital characteristics of the intermediate circular orbit for future satellites.
GIOVE-A is the first spacecraft whose design is based upon SSTL's new Geostationa
|
https://en.wikipedia.org/wiki/Euclid%27s%20theorem
|
Euclid's theorem is a fundamental statement in number theory that asserts that there are infinitely many prime numbers. It was first proved by Euclid in his work Elements. There are several proofs of the theorem.
Euclid's proof
Euclid offered a proof published in his work Elements (Book IX, Proposition 20), which is paraphrased here.
Consider any finite list of prime numbers p1, p2, ..., pn. It will be shown that at least one additional prime number not in this list exists. Let P be the product of all the prime numbers in the list: P = p1p2...pn. Let q = P + 1. Then q is either prime or not:
If q is prime, then there is at least one more prime that is not in the list, namely, q itself.
If q is not prime, then some prime factor p divides q. If this factor p were in our list, then it would divide P (since P is the product of every number in the list); but p also divides P + 1 = q, as just stated. If p divides P and also q, then p must also divide the difference of the two numbers, which is (P + 1) − P or just 1. Since no prime number divides 1, p cannot be in the list. This means that at least one more prime number exists beyond those in the list.
This proves that for every finite list of prime numbers there is a prime number not in the list. In the original work, as Euclid had no way of writing an arbitrary list of primes, he used a method that he frequently applied, that is, the method of generalizable example. Namely, he picks just three primes and using the general method outlined above, proves that he can always find an additional prime. Euclid presumably assumes that his readers are convinced that a similar proof will work, no matter how many primes are originally picked.
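A worked instance of the construction; sympy's factorint is used for convenience (any integer factorisation routine would do):

```python
from sympy import factorint

for primes in ([2, 3, 5], [2, 7], [2, 3, 5, 7, 11, 13]):
    P = 1
    for p in primes:
        P *= p
    q = P + 1
    print(primes, "->", q, "=", factorint(q))
# [2, 3, 5]            -> 31 = {31: 1}        q itself is a new prime
# [2, 7]               -> 15 = {3: 1, 5: 1}   q is composite; 3 and 5 are new
# [2, 3, 5, 7, 11, 13] -> 30031 = {59: 1, 509: 1}
```

In every case at least one prime factor of q lies outside the starting list, exactly as the proof guarantees.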
Euclid is often erroneously reported to have proved this result by contradiction beginning with the assumption that the finite set initially considered contains all prime numbers, though it is actually a proof by cases, a direct proof method. The philosopher Torkel Franzén, in a book on lo
|
https://en.wikipedia.org/wiki/Motorola%20Mobility
|
Motorola Mobility LLC, marketed as Motorola, is an American consumer electronics manufacturer primarily producing smartphones and other mobile devices running Android. Headquartered at Merchandise Mart in Chicago, Illinois, it is a subsidiary of the Chinese multinational technology company Lenovo.
Motorola Mobility was formed on January 4, 2011, after a split of Motorola, Inc. into two separate companies, with Motorola Mobility assuming the company's consumer-oriented product lines (including its mobile phone business, as well as its cable modems and pay television set-top boxes), while Motorola Solutions assumed the company's enterprise-oriented product lines.
In May 2012 Google acquired Motorola Mobility for US$12.5 billion; the main intent of the purchase was to gain Motorola Mobility's patent portfolio, in order to protect other Android vendors from litigation. Under Google, Motorola Mobility increased its focus on the entry-level smartphone market, and under the Google ATAP division, began development on Project Ara—a platform for modular smartphones with interchangeable components. Shortly after the purchase, Google sold Motorola Mobility's cable modem and set-top box business to Arris Group.
Google's ownership of the company was short-lived. In January 2014, Google announced that it would sell Motorola Mobility to Lenovo for $6.91 billion. The sale, which excluded ATAP and all but 2,000 of Motorola Mobility's patents, was completed on October 30, 2014. Lenovo disclosed an intent to use Motorola Mobility as a way to expand into the United States smartphone market. In August 2015, Lenovo's existing smartphone division was subsumed by Motorola Mobility.
History
On January 4, 2011, Motorola, Inc. was split into two publicly traded companies; Motorola Solutions took on the company's enterprise-oriented business units, while the remaining consumer division was taken on by Motorola Mobility. Motorola Mobility originally consisted of the mobile devices business,
|
https://en.wikipedia.org/wiki/Subframe
|
A subframe is a structural component of a vehicle, such as an automobile or an aircraft, that uses a discrete, separate structure within a larger body-on-frame or unit body to carry certain components, such as the engine, drivetrain, or suspension. The subframe is bolted and/or welded to the vehicle. When bolted, it is sometimes equipped with rubber bushings or springs to dampen vibration.
The principal purposes of using a subframe are to spread high chassis loads over a wide area of the relatively thin sheet metal of a monocoque body shell, and to isolate vibration and harshness from the rest of the body. For example, in an automobile with its powertrain contained in a subframe, forces generated by the engine and transmission can be damped enough that they will not disturb passengers. As a natural development from a car with a full chassis, separate front and rear subframes are used in modern vehicles to reduce overall weight and cost. In addition, a subframe yields benefits to production, in that subassemblies can be made which can be introduced to the main bodyshell when required on an automated line.
There are generally three basic forms of the subframe.
A simple "axle" type which usually carries the lower control arms and steering rack.
A perimeter frame which carries the above components but in addition supports the engine.
A perimeter frame which carries the above components but in addition supports the engine, transmission and possibly full suspension. (As used on front wheel drive cars)
A subframe is usually made of pressed steel panels that are much thicker than bodyshell panels, welded or spot-welded together. Hydroformed tubes may also be used.
The revolutionary monocoque transverse engined front wheel drive 1959 Austin Mini, that set the template for modern front wheel drive cars, used front and rear subframes to provide accurate road wheel control while using a stiff lightweight body. The 1961 Jaguar E-type or XKE used a tub
|
https://en.wikipedia.org/wiki/Separation%20of%20duties
|
Separation of duties (SoD), also known as segregation of duties, is the concept of having more than one person required to complete a task. It is an administrative control used by organisations to prevent fraud, sabotage, theft, misuse of information, and other security compromises. In the political realm, it is known as the separation of powers, as can be seen in democracies where the government is separated into three independent branches: a legislature, an executive, and a judiciary.
General description
Separation of duties is a key concept of internal controls. Increased protection from fraud and errors must be balanced with the increased cost/effort required.
In essence, SoD implements an appropriate level of checks and balances upon the activities of individuals. R. A. Botha and J. H. P. Eloff in the IBM Systems Journal describe SoD as follows.
Separation of duty, as a security principle, has as its primary objective the prevention of fraud and errors. This objective is achieved by disseminating the tasks and associated privileges for a specific business process among multiple users. This principle is demonstrated in the traditional example of separation of duty found in the requirement of two signatures on a cheque.
Actual job titles and organizational structure may vary greatly from one organization to another, depending on the size and nature of the business. Accordingly, rank or hierarchy are less important than the skillset and capabilities of the individuals involved. With the concept of SoD, business critical duties can be categorized into four types of functions: authorization, custody, record keeping, and reconciliation. In a perfect system, no one person should handle more than one type of function.
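As an illustration of that categorization, a minimal sketch of flagging users who hold more than one of the four function types; the data model and names are hypothetical:

```python
# Flag separation-of-duties violations: no one person should hold more
# than one of the four function types (hypothetical assignments).

DUTY_TYPES = {"authorization", "custody", "record_keeping", "reconciliation"}

assignments = {
    "alice": {"authorization"},
    "bob": {"custody", "record_keeping"},      # violation: two duty types
    "carol": {"reconciliation"},
}

def sod_violations(assignments):
    return {user: duties for user, duties in assignments.items()
            if len(duties & DUTY_TYPES) > 1}

print(sod_violations(assignments))  # {'bob': {'custody', 'record_keeping'}}
```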
Principles
In principle, several approaches are viable as partially or entirely different paradigms:
sequential separation (two signatures principle)
individual separation (four eyes principle)
spatial separation (separate action in separ
|
https://en.wikipedia.org/wiki/Pose%20%28computer%20vision%29
|
In the fields of computing and computer vision, pose (or spatial pose) represents the position and orientation of an object, usually in three dimensions. Poses are often stored internally as transformation matrices. The term “pose” is largely synonymous with the term “transform”, but a transform may often include scale, whereas pose does not.
In computer vision, the pose of an object is often estimated from camera input by the process of pose estimation. This information can then be used, for example, to allow a robot to manipulate an object or to avoid moving into the object based on its perceived position and orientation in the environment. Other applications include skeletal action recognition.
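As a concrete illustration of the matrix representation mentioned above, a minimal NumPy sketch of a pose stored as a 4×4 homogeneous transform (the rotation angle and translation are arbitrary example values):

```python
import numpy as np

theta = np.deg2rad(30.0)                  # example: 30 degree yaw
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
t = np.array([0.5, 0.0, 2.0])             # example translation, metres

T = np.eye(4)                             # the pose: rotation + translation,
T[:3, :3] = R                             # no scale
T[:3, 3] = t

p_object = np.array([1.0, 0.0, 0.0, 1.0])  # homogeneous point in object frame
p_world = T @ p_object                      # same point in the world frame
print(p_world[:3])
# Composing poses is matrix multiplication; the inverse pose has rotation
# block R.T and translation -R.T @ t.
```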
Pose estimation
The specific task of determining the pose of an object in an image (or stereo images, image sequence) is referred to as pose estimation. Pose estimation problems can be solved in different ways depending on the image sensor configuration, and choice of methodology. Three classes of methodologies can be distinguished:
Analytic or geometric methods: These assume that the image sensor (camera) is calibrated and that the mapping from 3D points in the scene to 2D points in the image is known. If the geometry of the object is also known, the projected image of the object on the camera image is a well-known function of the object's pose. Once a set of control points on the object, typically corners or other feature points, has been identified, it is then possible to solve the pose transformation from a set of equations which relate the 3D coordinates of the points with their 2D image coordinates. Algorithms that determine the pose of a point cloud with respect to another point cloud are known as point set registration algorithms, if the correspondences between points are not already known.
Genetic algorithm methods: If the pose of an object does not have to be computed in real-time a genetic algorithm may be used. This approach is robust especially when
|
https://en.wikipedia.org/wiki/DOCSIS%20Set-top%20Gateway
|
DOCSIS Set-top Gateway (or DSG) is a specification describing how out-of-band data is delivered to a cable set-top box. Cable set-top boxes need a reliable source of out of band data for information such as program guides, channel lineups, and updated code images.
Features
DSG is an extension of the DOCSIS protocol governing cable modems, and applies to all versions of DOCSIS.
The principal features of DSG are:
One-way operation
The original DOCSIS protocol supports only two-way connectivity. A cable modem that is unable to acquire an upstream channel will give up and resume scanning for new channels. Likewise, persistent upstream errors will cause a cable modem to "reinitialize its MAC" and scan for new downstream channels. This behavior is appropriate for traditional cable modems, but not for cable set-top boxes: a set-top box still needs to acquire its out-of-band data even if the upstream channel is impaired.
The DSG specification introduced one-way (downstream-only) modes of operation. When upstream errors occur, the set-top enters a downstream-only state, periodically attempting to reacquire the upstream channel.
Defining how to recognize the correct downstream channel
Set-top out of band data is generally present only on certain downstream channels. The set-top needs a way to distinguish a valid downstream (containing the set-top's data) from an invalid one used only by standalone cable modems.
The DSG specification defines a special downstream keep-alive message so that the set-top can recognize an appropriate downstream channel.
Creating an out-of-band directory
The Advanced Mode of the DSG Specification introduces a special MAC Management message called the Downstream Channel Descriptor (DCD). The DCD provides a directory identifying the MAC and IP parameters associated with the out of band data streams.
Each data consumer is assigned a special Client Identifier that names the out of band data stream in the DCD.
SNMP MIBs
The DSG Spec
|
https://en.wikipedia.org/wiki/Focus-plus-context%20screen
|
A focus-plus-context screen is a specialized type of display device that consists of one or more high-resolution "focus" displays embedded into a larger low-resolution "context" display. Image content is displayed across all display regions, such that the scaling of the image is preserved, while its resolution varies across the display regions.
The original focus-plus-context screen prototype consisted of an 18"/45 cm LCD screen embedded in a 5'/150 cm front-projected screen. Alternative designs have been proposed that achieve the mixed-resolution effect by combining two or more projectors with different focal lengths.
While the high-resolution area of the original prototype was located at a fixed location, follow-up projects have obtained a movable focus area by using a Tablet PC.
Patrick Baudisch is the inventor of focus-plus-context screens (2000, while at Xerox PARC).
Advantages
Allows users to leverage their foveal and their peripheral vision
Cheaper to manufacture than a display that is high-resolution across the entire display surface
Displays entirety and details of large images in a single view. Unlike approaches that combine entirety and details in software (fisheye views), focus-plus-context screens do not introduce distortion.
Disadvantages
In existing implementations, the focus display is either fixed in place, or moving it is physically demanding
References
Notes
Yudhijit Bhattacharjee. In a Seamless Image, the Great and Small. In The New York Times, Thursday, March 14, 2002.
External links
Focus-plus-context screens homepage
User interfaces
Computing output devices
Display technology
User interface techniques
|
https://en.wikipedia.org/wiki/Schur%20polynomial
|
In mathematics, Schur polynomials, named after Issai Schur, are certain symmetric polynomials in n variables, indexed by partitions, that generalize the elementary symmetric polynomials and the complete homogeneous symmetric polynomials. In representation theory they are the characters of polynomial irreducible representations of the general linear groups. The Schur polynomials form a linear basis for the space of all symmetric polynomials. Any product of Schur polynomials can be written as a linear combination of Schur polynomials with non-negative integral coefficients; the values of these coefficients are given combinatorially by the Littlewood–Richardson rule. More generally, skew Schur polynomials are associated with pairs of partitions and have similar properties to Schur polynomials.
Definition (Jacobi's bialternant formula)
Schur polynomials are indexed by integer partitions. Given a partition $\lambda = (\lambda_1, \lambda_2, \dots, \lambda_n)$,
where $\lambda_1 \geq \lambda_2 \geq \cdots \geq \lambda_n$, and each $\lambda_j$ is a non-negative integer, the functions
$a_{(\lambda_1 + n - 1,\, \lambda_2 + n - 2,\, \dots,\, \lambda_n + 0)}(x_1, x_2, \dots, x_n) = \det \left[ x_i^{\lambda_j + n - j} \right]_{1 \le i, j \le n}$
are alternating polynomials by properties of the determinant. A polynomial is alternating if it changes sign under any transposition of the variables.
Since they are alternating, they are all divisible by the Vandermonde determinant
$a_{(n-1,\, n-2,\, \dots,\, 0)}(x_1, x_2, \dots, x_n) = \prod_{1 \le i < j \le n} (x_i - x_j).$
The Schur polynomials are defined as the ratio
$s_\lambda(x_1, \dots, x_n) = \frac{a_{(\lambda_1 + n - 1,\, \dots,\, \lambda_n + 0)}(x_1, \dots, x_n)}{a_{(n-1,\, \dots,\, 0)}(x_1, \dots, x_n)}.$
This is known as the bialternant formula of Jacobi. It is a special case of the Weyl character formula.
This is a symmetric function because the numerator and denominator are both alternating, and a polynomial since all alternating polynomials are divisible by the Vandermonde determinant.
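A small sympy sketch of the bialternant formula for λ = (2, 1) in two variables:

```python
from sympy import symbols, Matrix, cancel, expand

x1, x2 = symbols("x1 x2")
lam = (2, 1)                  # the partition lambda
delta = (1, 0)                # staircase (n-1, ..., 0) for n = 2
exps = [l + d for l, d in zip(lam, delta)]    # exponents (3, 1)

numerator = Matrix([[x1**e, x2**e] for e in exps]).det()
vandermonde = Matrix([[x1, x2], [1, 1]]).det()   # x1 - x2

s = expand(cancel(numerator / vandermonde))
print(s)   # x1**2*x2 + x1*x2**2, i.e. s_{(2,1)} = x1*x2*(x1 + x2)
```

The division is exact, as the text explains: the alternating numerator is always divisible by the Vandermonde determinant.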
Properties
The degree-$d$ Schur polynomials in $n$ variables are a linear basis for the space of homogeneous degree-$d$ symmetric polynomials in $n$ variables.
For a partition $\lambda$, the Schur polynomial is a sum of monomials,
$s_\lambda(x_1, \dots, x_n) = \sum_{T} x^T = \sum_{T} x_1^{t_1} \cdots x_n^{t_n},$
where the summation is over all semistandard Young tableaux $T$ of shape $\lambda$. The exponents $t_1, \dots, t_n$ give the weight of $T$; in other words, each $t_i$ counts the occurrences of the number $i$ in $T$. This can be shown to be equivalent to the definition from
|
https://en.wikipedia.org/wiki/Copper%20indium%20gallium%20selenide
|
Copper indium gallium (di)selenide (CIGS) is a I-III-VI2 semiconductor material composed of copper, indium, gallium, and selenium. The material is a solid solution of copper indium selenide (often abbreviated "CIS") and copper gallium selenide. It has a chemical formula of CuIn1−xGaxSe2, where the value of x can vary from 0 (pure copper indium selenide) to 1 (pure copper gallium selenide). CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure, and a bandgap varying continuously with x from about 1.0 eV (for copper indium selenide) to about 1.7 eV (for copper gallium selenide).
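For a rough feel of the composition dependence, a sketch that linearly interpolates between the endpoint bandgaps quoted above; real CIGS alloys exhibit some bowing, so this is only a first approximation:

```python
def cigs_bandgap_ev(x: float) -> float:
    """Approximate bandgap of CuIn(1-x)Ga(x)Se2 in eV (linear interpolation
    between ~1.0 eV at x = 0 and ~1.7 eV at x = 1; bowing ignored)."""
    if not 0.0 <= x <= 1.0:
        raise ValueError("Ga fraction x must lie in [0, 1]")
    return 1.0 + 0.7 * x

for x in (0.0, 0.3, 0.7, 1.0):
    print(f"x = {x}: Eg ~ {cigs_bandgap_ev(x):.2f} eV")
```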
Structure
CIGS is a tetrahedrally bonded semiconductor, with the chalcopyrite crystal structure. Upon heating it transforms to the zincblende form and the transition temperature decreases from 1045 °C for x = 0 to 805 °C for x = 1.
Applications
It is best known as the material for CIGS solar cells, a thin-film technology used in the photovoltaic industry. In this role, CIGS has the advantage that it can be deposited on flexible substrate materials, producing highly flexible, lightweight solar panels. Improvements in efficiency have made CIGS an established technology among alternative cell materials.
See also
Copper indium gallium selenide solar cells
CZTS
List of CIGS companies
References
Semiconductor materials
Copper(I) compounds
Indium compounds
Gallium compounds
Selenides
Renewable energy
Dichalcogenides
|
https://en.wikipedia.org/wiki/Quantum%20heterostructure
|
A quantum heterostructure is a heterostructure in a substrate (usually a semiconductor material) where size restricts the movement of the charge carriers, forcing them into quantum confinement. This leads to the formation of a set of discrete energy levels at which the carriers can exist. Quantum heterostructures have a sharper density of states than structures of more conventional sizes.
Quantum heterostructures are important for fabrication of short-wavelength light-emitting diodes and diode lasers, and for other optoelectronic applications, e.g. high-efficiency photovoltaic cells.
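The discrete levels can be illustrated with the textbook infinite square well, $E_n = n^2 \pi^2 \hbar^2 / (2 m L^2)$; the GaAs-like effective mass below is an assumption for illustration, not a value from the article:

```python
import math

HBAR = 1.054571817e-34      # reduced Planck constant, J*s
M_E = 9.1093837015e-31      # electron rest mass, kg
m_eff = 0.067 * M_E         # GaAs conduction-band effective mass (assumption)

def well_level_ev(n: int, L: float) -> float:
    """Energy of level n in an infinite well of width L (metres), in eV."""
    E = (n * math.pi * HBAR / L) ** 2 / (2 * m_eff)
    return E / 1.602176634e-19

for L in (20e-9, 10e-9, 5e-9):   # narrower well -> wider level spacing
    print(L, [round(well_level_ev(n, L), 4) for n in (1, 2, 3)])
```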
Examples of quantum heterostructures confining the carriers in quasi-two, -one and -zero dimensions are:
Quantum wells
Quantum wires
Quantum dots
References
See also
http://www.ecse.rpi.edu/~schubert/Light-Emitting-Diodes-dot-org/chap04/chap04.htm
Kitaev's periodic table
Quantum electronics
Nanomaterials
Semiconductor structures
|
https://en.wikipedia.org/wiki/Nightingale%20floor
|
Nightingale floors (uguisubari) are floors that make a chirping sound when walked upon. These floors were used in the hallways of some temples and palaces, the most famous example being Nijō Castle in Kyoto, Japan. Dry boards naturally creak under pressure, but these floors were built in such a way that the flooring nails rub against a jacket or clamp, causing chirping noises. It is unclear whether the design was intentional. It seems that, at least initially, the effect arose by chance: an information sign in Nijō Castle states that "The singing sound is not actually intentional, stemming rather from the movement of nails against clumps in the floor caused by wear and tear over the years". Legend has it that the squeaking floors were used as a security device, assuring that no one could sneak through the corridors undetected.
The English name "nightingale" refers to the Japanese bush warbler, or uguisu, which is a common songbird in Japan.
Etymology
Uguisu refers to the Japanese bush warbler. The latter segment comes from haru (張る), which can be used to mean "to lay/board (flooring)", as in the expression yukaita wo haru (床板を張る), meaning "to board a/the floor". The verb haru becomes nominalized as hari and voiced through rendaku to become bari. In this form it refers to the method of boarding, as in other words like herinbōnbari (ヘリンボーン張り), which refers to flooring laid in a herringbone pattern. As such, uguisubari means "warbler boarding".
Construction
The floors were made from dried boards. Upside-down V-shaped joints move within the boards when pressure is applied.
Examples
The following locations incorporate nightingale floors:
Nijō Castle, Kyoto
Chion-in, Kyoto
Eikan-dō Zenrin-ji, Kyoto
Daikaku-ji, Kyoto
Modern influences and related topics
Melody Road in Hokkaido, Wakayama, and Gunma
Singing Road in Anyang, Gyeonggi, South Korea
Civic Musical Road in Lancaster, California
Across the Nightingale Floor, 2002 novel by Lian Hearn
Notes
References
A-Z Animals. "Uguisi" under "Animals". (2008). access
|
https://en.wikipedia.org/wiki/Semimodule
|
In mathematics, a semimodule over a semiring R is like a module over a ring except that it is only a commutative monoid rather than an abelian group.
Definition
Formally, a left R-semimodule consists of an additively written commutative monoid M and a map from $R \times M$ to M, written $(r, m) \mapsto rm$, satisfying the following axioms:
1. $r(m + m') = rm + rm'$
2. $(r + r')m = rm + r'm$
3. $(rr')m = r(r'm)$
4. $1m = m$
5. $0_R\, m = 0_M = r\, 0_M$
A right R-semimodule can be defined similarly. For modules over a ring, the last axiom follows from the others. This is not the case with semimodules.
Examples
If R is a ring, then any R-module is an R-semimodule. Conversely, it follows from the second, fourth, and last axioms that $(-1)m$ is an additive inverse of $m$ for all $m \in M$, so any semimodule over a ring is in fact a module.
Any semiring is a left and right semimodule over itself in the same way that a ring is a left and right module over itself. Every commutative monoid is uniquely an $\mathbb{N}$-semimodule in the same way that an abelian group is a $\mathbb{Z}$-module.
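A small sketch of the last observation: for a commutative monoid, scalar multiplication by a natural number is just iterated addition. The monoid of pairs of naturals under componentwise addition is an illustrative choice:

```python
def scalar_mul(n, m):
    """n . m = m + m + ... + m (n times); 0 . m is the identity (0, 0)."""
    result = (0, 0)
    for _ in range(n):
        result = (result[0] + m[0], result[1] + m[1])
    return result

m, r, s = (2, 5), 2, 3
assert scalar_mul(r + s, m) == tuple(
    a + b for a, b in zip(scalar_mul(r, m), scalar_mul(s, m)))  # (r+s)m = rm+sm
assert scalar_mul(r * s, m) == scalar_mul(r, scalar_mul(s, m))  # (rs)m = r(sm)
assert scalar_mul(1, m) == m and scalar_mul(0, m) == (0, 0)     # 1m = m, 0m = 0
print(scalar_mul(3, m))  # (6, 15)
```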
References
Algebraic structures
Module theory
|
https://en.wikipedia.org/wiki/Pesticide%20residue
|
Pesticide residue refers to the pesticides that may remain on or in food after they are applied to food crops. The maximum allowable levels of these residues in foods are often stipulated by regulatory bodies in many countries. Regulations such as pre-harvest intervals also often prevent harvest of crop or livestock products if recently treated in order to allow residue concentrations to decrease over time to safe levels before harvest. Exposure of the general population to these residues most commonly occurs through consumption of treated food sources, or being in close contact to areas treated with pesticides such as farms or lawns.
Many of these chemical residues, especially derivatives of chlorinated pesticides, exhibit bioaccumulation which could build up to harmful levels in the body as well as in the environment. Persistent chemicals can be magnified through the food chain and have been detected in products ranging from meat, poultry, and fish, to vegetable oils, nuts, and various fruits and vegetables.
Definition
A pesticide is a substance or a mixture of substances used for killing pests: organisms dangerous to cultivated plants or to animals. The term applies to various pesticides such as insecticide, fungicide, herbicide and nematocide. Applications of pesticides to crops and animals may leave residues in or on food when it is consumed, and those specified derivatives are considered to be of toxicological significance.
Background
Since the post-World War II era, chemical pesticides have become the most important form of pest control. There are two categories of pesticides: first-generation pesticides and second-generation pesticides. The first-generation pesticides, which were used prior to 1940, consisted of compounds such as arsenic, mercury, and lead. These were soon abandoned because they were highly toxic and ineffective. The second-generation pesticides were composed of synthetic organic compounds. The growth in these pesticides accelerated in late
|
https://en.wikipedia.org/wiki/Permeameter
|
The permeameter is an instrument for rapidly measuring the electromagnetic permeability of samples of iron or steel with sufficient accuracy for many commercial purposes. The name was first applied by Silvanus P. Thompson to an apparatus devised by himself in 1890, which indicates the mechanical force required to detach one end of the sample, arranged as the core of a straight electromagnet, from an iron yoke of special form; when this force is known, the permeability can be easily calculated.
References
Measuring instruments
www.glossary.oilfield.slb.com/en/Terms/p/permeameter.aspx
|
https://en.wikipedia.org/wiki/Mosser%20Glass
|
Mosser Glass is a company making handmade glass, founded in Cambridge, Ohio, in 1970 by Thomas R. Mosser. The company is operated by his oldest son, Tim Mosser. The Mosser family got their start in the business at the Cambridge Glass Company.
The company offers tours of its facilities.
References
External links
Mosser Glass Company website
Glassmaking companies of the United States
Manufacturing companies based in Ohio
American companies established in 1970
Manufacturing companies established in 1970
1969 establishments in Ohio
Tourist attractions in Ohio
|
https://en.wikipedia.org/wiki/Hecke%20character
|
In number theory, a Hecke character is a generalisation of a Dirichlet character, introduced by Erich Hecke to construct a class of
L-functions larger than Dirichlet L-functions, and a natural setting for the Dedekind zeta-functions and certain others which have functional equations analogous to that of the Riemann zeta-function.
A name sometimes used for Hecke character is the German term Größencharakter (often written Grössencharakter, Grossencharacter, etc.).
Definition using ideles
A Hecke character is a character of the idele class group of a number field or global function field. It corresponds uniquely to a character of the idele group which is trivial on principal ideles, via composition with the projection map.
This definition depends on the definition of a character, which varies slightly between authors: It may be defined as a homomorphism to the non-zero complex numbers (also called a "quasicharacter"), or as a homomorphism to the unit circle in C ("unitary"). Any quasicharacter (of the idele class group) can be written uniquely as a unitary character times a real power of the norm, so there is no big difference between the two definitions.
The conductor of a Hecke character χ is the largest ideal m such that χ is a Hecke character mod m. Here we say that χ is a Hecke character mod m if χ (considered as a character on the idele group) is trivial on the group of finite ideles whose every v-adic component lies in 1 + mOv.
Definition using ideals
The original definition of a Hecke character, going back to Hecke, was in terms of a character on fractional ideals. For a number field K, let m = mf m∞ be a K-modulus, with mf, the "finite part", being an integral ideal of K and m∞, the "infinite part", being a (formal) product of real places of K. Let Im denote the group of fractional ideals of K relatively prime to mf, and let Pm denote the subgroup of principal fractional ideals (a) where a is near 1 at each place of m in accordance with the multiplicit
|
https://en.wikipedia.org/wiki/Irving%20Kaplansky
|
Irving Kaplansky (March 22, 1917 – June 25, 2006) was a mathematician, college professor, author, and amateur musician.
Biography
Kaplansky, or "Kap" as his friends and colleagues called him, was born in Toronto, Ontario, Canada, to Polish-Jewish immigrants; his father worked as a tailor, and his mother ran a grocery and, eventually, a chain of bakeries. He attended Harbord Collegiate Institute, receiving the Prince of Wales Scholarship as a teenager. He attended the University of Toronto as an undergraduate and finished first in his class for three consecutive years. In his senior year, he competed in the first William Lowell Putnam Mathematical Competition, becoming one of the first five recipients of the Putnam Fellowship, which paid for graduate studies at Harvard University. Administered by the Mathematical Association of America, the competition is widely considered to be the most difficult mathematics examination in the world, and "its difficulty is such that the median score is often zero or one (out of 120) despite being attempted by students specializing in mathematics."
After receiving his Ph.D. from Harvard in 1941 as Saunders Mac Lane's first student, he remained at Harvard as a Benjamin Peirce Instructor. In 1944 he moved with Mac Lane to Columbia University for one year to work with the Applied Mathematics Panel on World War II projects: "miscellaneous studies in mathematics applied to warfare analysis with emphasis upon aerial gunnery, studies of fire control equipment, and rocketry and toss bombing". He was a member of the Institute for Advanced Study and attended the 1946 Princeton University Bicentennial.
He was professor of mathematics at the University of Chicago from 1945 to 1984, and Chair of the department from 1962 to 1967. In 1968, Kaplansky was presented an honorary doctoral degree from Queen's University with the university noting "we honour as a Canadian whose clarity of lectures, elegance of writing, and profundi
|
https://en.wikipedia.org/wiki/RecLOH
|
RecLOH is a term in genetics that is an abbreviation for "Recombinant Loss of Heterozygosity".
This is a type of mutation that occurs in DNA by recombination. From a pair of equivalent ("homologous") but slightly different (heterozygous) genes, a pair of identical genes results. Unlike chromosomal crossover, the exchange of genetic code between the chromosomes is non-reciprocal, so genetic information is lost.
For Y chromosome
In genetic genealogy, the term is used particularly concerning similar seeming events in Y chromosome DNA. This type of mutation happens within one chromosome, and does not involve a reciprocal transfer. Rather, one homologous segment "writes over" the other. The mechanism is presumed to be different from RecLOH events in autosomal chromosomes, since the target is the very same chromosome instead of the homologous one.
During the mutation one of these copies overwrites the other. Thus the differences between the two are lost. Because differences are lost, heterozygosity is lost.
Recombination on the Y chromosome takes place not only during meiosis, but at virtually every mitosis when the Y chromosome condenses, because it does not require pairing between chromosomes. The recombination frequency even exceeds the frame-shift mutation frequency (slipped-strand mispairing) of (average fast) Y-STRs; however, many recombination products may lead to infertile germ cells and "daughter out".
Recombination events (RecLOH) can be observed if YSTR databases are searched for twin alleles at 3 or more duplicated markers on the same palindrome (hairpin).
For example, DYS459, DYS464 and DYS724 (CDY) are located on the same palindrome, P1. A high proportion of 9-9, 15-15-17-17, 36-36 combinations and similar twin allelic patterns will be found. PCR typing technologies have been developed (e.g. DYS464X) that are able to verify that there really are, most frequently, two alleles of each, so we can be sure that there is no gene deletion
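A minimal sketch of the kind of database scan described above; the marker names come from the text, but the haplotype data structure, example values and threshold are illustrative assumptions:

# Sketch: flag Y-STR haplotypes whose duplicated markers on one palindrome
# all show "twin" (identical) allele pairs, a possible signature of RecLOH.
PALINDROME_P1 = ["DYS459", "DYS464", "DYS724"]  # duplicated markers from the text

def is_twin(alleles):
    # A marker shows a "twin" pattern if its repeat counts occur in identical
    # pairs, e.g. (9, 9) or (15, 15, 17, 17).
    counts = sorted(alleles)
    pairs = zip(counts[::2], counts[1::2])
    return len(counts) % 2 == 0 and all(a == b for a, b in pairs)

def possible_recloh(haplotype, markers=PALINDROME_P1, min_markers=3):
    # Twin alleles at 3 or more duplicated markers on the same palindrome
    # is the pattern the text says such database searches look for.
    twins = [m for m in markers if m in haplotype and is_twin(haplotype[m])]
    return len(twins) >= min_markers

example = {"DYS459": (9, 9), "DYS464": (15, 15, 17, 17), "DYS724": (36, 36)}
print(possible_recloh(example))  # True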
|
https://en.wikipedia.org/wiki/LocationFree%20Player
|
Sony's LocationFree is the marketing name for a group of products and technologies for timeshifting and placeshifting streaming video. The LocationFree Player is an Internet-based multifunctional device used to stream live television broadcasts (including digital cable and satellite), DVDs and DVR content over a home network or the Internet. It is in essence a remote video streaming server product (similar to the Slingbox). It was first announced by Sony in Q1 2004 and launched early in Q4 2004 alongside a co-branded wireless tablet TV. The last LocationFree product was the LF-V30 released in 2007.
The LocationFree base station connects to a home network via a wired Ethernet cable, or for newer models, via a wireless connection. Up to three attached video sources can stream content through the network to local content provision devices or across the internet to remote devices. A remote user can connect to the internet at a wireless hotspot or any other internet connection anywhere in the world and receive streamed content. Content may only be streamed to one computer at a time. In addition, the original LocationFree Player software contained a license for only one client computer. Additional (paid) licenses were required to connect to the base station for up to a total of four clients.
On November 29, 2007, Sony modified its LocationFree Player policy to provide free access to the latest LocationFree Player LFA-PC30 software for Windows XP/Vista. In addition, the software no longer requires a unique serial number in order to pair it with a LocationFree base station. In December 2007, Sony dropped the $30 license fee for the LocationFree client. However, the software still requires registration to Sony's servers after 30 days.
Clients
The player (server) can stream content to the following (client) devices:
Windows or Mac computer - requires additional software
Mobile/cellular phones - coming later in 2007
Pocket PCs running Windows Mobile
Smartphones/tablets ru
|
https://en.wikipedia.org/wiki/DISLIN
|
DISLIN is a high-level plotting library developed by Helmut Michels at the Max Planck Institute for Solar System Research in Göttingen, Germany. Helmut Michels has worked as a mathematician and Unix system manager at the computer center of the institute. He retired in April 2020 and
founded the company Dislin Software.
The DISLIN library contains routines and functions for displaying data as curves, bar graphs, pie charts, 3D-colour plots, surfaces, contours and maps. Several output formats are supported such as X11, VGA, PostScript, PDF, CGM, WMF, EMF, SVG, HPGL, PNG, BMP, PPM, GIF and TIFF.
DISLIN is available for the programming languages Fortran 77, Fortran 90/95 and C. Plotting extensions for the languages Perl, Python, Java, Julia, Ruby, Go and Tcl are also supported for most operating systems. There is a third-party package to use DISLIN from IDL.
History
The first version 1.0 was released in December 1986. The current version of DISLIN is 11.5, released in March 2022.
Since version 11.3 the DISLIN software is free for non-commercial and commercial use.
References
External links
1986 software
3D graphics software
Cross-platform software
Graphics libraries
Max Planck Society
Plotting software
|
https://en.wikipedia.org/wiki/RootkitRevealer
|
RootkitRevealer is a proprietary freeware tool for rootkit detection on Microsoft Windows by Bryce Cogswell and Mark Russinovich. It runs on Windows XP and Windows Server 2003 (32-bit-versions only). Its output lists Windows Registry and file system API discrepancies that may indicate the presence of a rootkit. It is the same tool that triggered the Sony BMG copy protection rootkit scandal.
RootkitRevealer is no longer being developed.
See also
Sysinternals
Process Explorer
Process Monitor
ProcDump
References
Microsoft software
Computer security software
Windows security software
Windows-only freeware
Rootkit detection software
2006 software
|
https://en.wikipedia.org/wiki/Memory%20refresh
|
Memory refresh is the process of periodically reading information from an area of computer memory and immediately rewriting the read information to the same area without modification, for the purpose of preserving the information. Memory refresh is a background maintenance process required during the operation of semiconductor dynamic random-access memory (DRAM), the most widely used type of computer memory, and in fact is the defining characteristic of this class of memory.
In a DRAM chip, each bit of memory data is stored as the presence or absence of an electric charge on a small capacitor on the chip. As time passes, the charges in the memory cells leak away, so without being refreshed the stored data would eventually be lost. To prevent this, external circuitry periodically reads each cell and rewrites it, restoring the charge on the capacitor to its original level. Each memory refresh cycle refreshes a succeeding area of memory cells, thus repeatedly refreshing all the cells in a consecutive cycle. This process is conducted automatically in the background by the memory circuitry and is transparent to the user. While a refresh cycle is occurring the memory is not available for normal read and write operations, but in modern memory this "overhead" time is not large enough to significantly slow down memory operation.
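A back-of-envelope sketch of the refresh scheduling arithmetic: if every row must be refreshed within the retention time, spreading the refresh commands evenly fixes their interval. The figures below are typical values assumed for illustration, not taken from the text:

# Assumed figures: 64 ms retention, 8192 rows covered by the refresh counter.
retention_ms = 64
rows = 8192
interval_us = retention_ms * 1000 / rows   # evenly distributed refresh commands
print(f"one refresh command every {interval_us:.1f} microseconds")  # ~7.8 us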
Electronic memory that does not require refreshing is available, called static random-access memory (SRAM). SRAM circuits require more area on a chip, because an SRAM memory cell requires four to six transistors, compared to a single transistor and a capacitor for DRAM. As a result, data density is much lower in SRAM chips than in DRAM, and SRAM has higher price per bit. Therefore, DRAM is used for the main memory in computers, video game consoles, graphics cards and applications requiring large capacities and low cost. The need for memory refresh makes DRAM timing and circuits significantly more complicated than SRAM circuits, but the density a
|
https://en.wikipedia.org/wiki/Large%20set%20%28Ramsey%20theory%29
|
In Ramsey theory, a set S of natural numbers is considered to be a large set if and only if Van der Waerden's theorem can be generalized to assert the existence of arithmetic progressions with common difference in S. That is, S is large if and only if every finite partition of the natural numbers has a cell containing arbitrarily long arithmetic progressions having common differences in S.
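In symbols, the definition reads as follows; this is a routine formalization of the sentence above, not notation from the original article:

S \subseteq \mathbb{N} \text{ is large} \iff
\forall r\ \forall c\colon \mathbb{N}\to\{1,\dots,r\}\ \forall \ell\
\exists a,\ d \in S:\ c(a) = c(a+d) = \cdots = c(a+\ell d).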
Examples
The natural numbers are large. This is precisely the assertion of Van der Waerden's theorem.
The even numbers are large.
Properties
Necessary conditions for largeness include:
If S is large, for any natural number n, S must contain at least one multiple (equivalently, infinitely many multiples) of n.
If S = {s1, s2, s3, ...} is large, it cannot be the case that sk ≥ 3sk−1 for all k ≥ 2.
Two sufficient conditions are:
If S contains n-cubes for arbitrarily large n, then S is large.
If S = {p(1), p(2), p(3), ...}, where p is a polynomial with p(0) = 0 and positive leading coefficient, then S is large.
The first sufficient condition implies that if S is a thick set, then S is large.
Other facts about large sets include:
If S is large and F is finite, then S – F is large.
kℕ = {k, 2k, 3k, ...} is large for any positive integer k.
If S is large, then any superset of S is also large.
If S is large, then for any positive integer k, kS = {ks : s ∈ S} is large.
2-large and k-large sets
A set is k-large, for a natural number k > 0, when it meets the conditions for largeness when the restatement of van der Waerden's theorem is concerned only with k-colorings. Every set is either large or k-large for some maximal k. This follows from two important, albeit trivially true, facts:
k-largeness implies (k-1)-largeness for k>1
k-largeness for all k implies largeness.
It is unknown whether there are 2-large sets that are not also large sets. Brown, Graham, and Landman (1999) conjecture that no such sets exist.
See also
Partition of a set
Further reading
External links
Mathworld: van der Waerden's Theorem
Basic concepts in set theory
Ramsey theory
Theorems in discrete mathematics
|
https://en.wikipedia.org/wiki/Large%20set%20%28combinatorics%29
|
In combinatorial mathematics, a large set of positive integers S = {s1, s2, s3, ...} is one such that the infinite sum of the reciprocals 1/s1 + 1/s2 + 1/s3 + ⋯ diverges. A small set is any subset of the positive integers that is not large; that is, one whose sum of reciprocals converges.
Large sets appear in the Müntz–Szász theorem and in the Erdős conjecture on arithmetic progressions.
Examples
Every finite subset of the positive integers is small.
The set of all positive integers is a large set; this statement is equivalent to the divergence of the harmonic series. More generally, any arithmetic progression (i.e., a set of all integers of the form an + b with a ≥ 1, b ≥ 1 and n = 0, 1, 2, 3, ...) is a large set.
The set of square numbers is small (see Basel problem). So is the set of cube numbers, the set of 4th powers, and so on. More generally, the set of positive integer values of any polynomial of degree 2 or larger forms a small set.
The set {1, 2, 4, 8, ...} of powers of 2 is a small set, and so is any geometric progression (i.e., a set of numbers of the form a·b^n with a ≥ 1, b ≥ 2 and n = 0, 1, 2, 3, ...).
The set of prime numbers is large. The set of twin primes is small (see Brun's constant).
The set of prime powers which are not prime (i.e., all numbers of the form p^n with n ≥ 2 and p prime) is small although the primes are large. This property is frequently used in analytic number theory. More generally, the set of perfect powers is small; even the set of powerful numbers is small.
The set of numbers whose expansions in a given base exclude a given digit is small. For example, the set
of integers whose decimal expansion does not include the digit 7 is small. Such series are called Kempner series.
Any set whose upper asymptotic density is nonzero, is large.
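The distinction can be illustrated numerically. The following sketch, written here for illustration with an arbitrary cutoff, compares partial sums of reciprocals for a small set (the squares) and a large set (the primes); partial sums only suggest, not prove, convergence or divergence:

# Compare partial sums of reciprocals: squares (small set) vs. primes (large set).
def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

N = 10_000
squares_sum = sum(1 / (k * k) for k in range(1, N + 1))          # approaches pi^2/6 ~ 1.6449
primes_sum = sum(1 / p for p in range(2, N + 1) if is_prime(p))  # grows like log log N

print(f"sum of 1/k^2 up to {N}: {squares_sum:.4f}")
print(f"sum of 1/p  up to {N}: {primes_sum:.4f}")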
Properties
Every subset of a small set is small.
The union of finitely many small sets is small, because the sum of two convergent series is a convergent series. (In set theoretic terminology, the small sets for
|
https://en.wikipedia.org/wiki/Card%20catalog%20%28cryptology%29
|
The card catalog, or "catalog of characteristics," in cryptography, was a system designed by Polish Cipher Bureau mathematician-cryptologist Marian Rejewski, and first completed about 1935 or 1936, to facilitate decrypting German Enigma ciphers.
History
The Polish Cipher Bureau used the theory of permutations to start breaking the Enigma cipher in late 1932. The Bureau recognized that the Enigma machine's doubled-key (see Grill (cryptology)) permutations formed cycles, and those cycles could be used to break the cipher. With German cipher keys provided by a French spy, the Bureau was able to reverse engineer the Enigma and start reading German messages. At the time, the Germans were using only 6 steckers, and the Polish grill method was feasible. On 1 August 1936, the Germans started using 8 steckers, and that change made the grill method less feasible. The Bureau needed an improved method to break the German cipher.
Although the steckers would change which letters were in a doubled-key's cycle, the steckers would not change the number of cycles or the length of those cycles. Steckers could be ignored. Ignoring the mid-key turnovers, the Enigma machine had only 26³ = 17,576 distinct settings of the three rotors, and the three rotors could be arranged in the machine in only 3! = 6 ways. That meant there were only 6 × 17,576 = 105,456 likely doubled-key permutations. The Bureau set about determining and cataloging the characteristic of each of those likely permutations. Each letter of the key could be one of the partition number p(13) = 101 possible values, and the 3 letters of the key meant there were 101³ = 1,030,301 possible keys. On average, a key would find one setting of the rotors, but it might find several possible settings.
The Polish cryptanalyst could then collect enough traffic to determine all the cycles in a daily key. That usually took about 60 messages. The result might be:
He would use the lengths of the cycles (13²; 10², 3²; 10², 2², 1²) to look up the wheel order (II I III) and starting rotor positions in the car
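The "characteristic" used for the lookup is the multiset of cycle lengths of a permutation. A minimal sketch of how one could compute it; the dict encoding of the permutation is an assumption for illustration:

# Compute the cycle-length "characteristic" of a permutation,
# encoded here as a dict mapping each letter to its image.
def cycle_lengths(perm):
    seen, lengths = set(), []
    for start in perm:
        if start in seen:
            continue
        # Walk the cycle containing `start`.
        length, x = 0, start
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        lengths.append(length)
    return sorted(lengths, reverse=True)

# Toy 6-letter permutation: (a b c)(d e)(f)
perm = {"a": "b", "b": "c", "c": "a", "d": "e", "e": "d", "f": "f"}
print(cycle_lengths(perm))  # [3, 2, 1]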
|
https://en.wikipedia.org/wiki/Swimlane
|
A swimlane (as in swimlane diagram) is an element used in process flow diagrams, or flowcharts, that visually distinguishes job sharing and responsibilities for sub-processes of a business process. Swimlanes may be arranged either horizontally or vertically.
Attributes of a swimlane
The swimlane flowchart differs from other flowcharts in that processes and decisions are grouped visually by placing them in lanes. Parallel lines divide the chart into lanes, with one lane for each person, group or sub process. Lanes are labelled to show how the chart is organized.
In the accompanying example, the vertical direction represents the sequence of events in the overall process, while the horizontal divisions depict what sub-process is performing that step. Arrows between the lanes represent how information or material is passed between the sub processes.
Alternately, the flow can be rotated so that the sequence reads horizontally from left to right, with the roles involved being shown at the left edge. This can be easier to read and design, since computer screens are typically wider than they are tall, which gives an improved view of the flow.
Use of standard symbols enables clear linkage to be shown between related flow charts when charting flows with complex relationships.
Usage
When used to diagram a business process that involves more than one department, swimlanes often serve to clarify not only the steps and who is responsible for each one, but also how delays, mistakes or cheating are most likely to occur.
Many process modeling methodologies utilize the concept of swimlanes, as a mechanism to organize activities into separate visual categories in order to illustrate different functional capabilities or responsibilities (organisational roles). Swimlanes are used in Business Process Modeling Notation (BPMN) and Unified Modeling Language activity diagram modeling methodologies.
Alternative terms
The swimlane was first introduced to computer-based process modeling by iGrafx
|
https://en.wikipedia.org/wiki/Tracking%20system
|
A tracking system, also known as a locating system, is used for the observing of persons or objects on the move and supplying a timely ordered sequence of location data for further processing.
Applications
A myriad of tracking systems exists. Some are 'lag time' indicators, that is, the data is collected after an item has passed a point, for example a bar code, choke point or gate. Others are 'real-time' or 'near real-time', like Global Positioning System (GPS) tracking, depending on how often the data is refreshed. There are bar-code systems which require items to be scanned, and automatic identification (RFID auto-id) systems. For the most part, the tracking worlds are composed of discrete hardware and software systems for different applications. That is, bar-code systems are separate from Electronic Product Code (EPC) systems, and GPS systems are separate from active real-time locating systems (RTLS); for example, a passive RFID system would be used in a warehouse to scan the boxes as they are loaded on a truck, while the truck itself is tracked on a different system using GPS with its own features and software. The major technology "silos" in the supply chain are:
Distribution/warehousing/manufacturing
Indoors assets are tracked repetitively reading e.g. a barcode, any passive and active RFID and feeding read data into Work in Progress models (WIP) or Warehouse Management Systems (WMS) or ERP software. The readers required per choke point are meshed auto-ID or hand-held ID applications.
However tracking could also be capable of providing monitoring data without binding to a fixed location by using a cooperative tracking capability, e.g. an RTLS.
Yard management
Outdoors, mobile assets of high value are tracked by choke point, 802.11, Received Signal Strength Indication (RSSI), Time Difference of Arrival (TDOA), active RFID or GPS yard management, feeding into either third-party yard management software from the provider or an existing system. Yard Management Systems (YMS) couple
|
https://en.wikipedia.org/wiki/Virtual%20acoustic%20space
|
Virtual acoustic space (VAS), also known as virtual auditory space, is a technique in which sounds presented over headphones appear to originate from any desired direction in space. The illusion of a virtual sound source outside the listener's head is created.
Sound localization cues generate an externalized percept
When one listens to sounds over headphones (in what is known as the "closed field") the sound source appears to arise from the center of the head. On the other hand, under normal, so-called free-field, listening conditions sounds are perceived as being externalized. The direction of a sound in space (see sound localization) is determined by the brain when it analyses the interaction of incoming sound with the head and external ears. A sound arising to one side reaches the near ear before the far ear (creating an interaural time difference, ITD), and will also be louder at the near ear (creating an interaural level difference, ILD – also known as interaural intensity difference, IID). These binaural cues allow sounds to be lateralized. Although conventional stereo headphone signals make use of ILDs (not ITDs), the sound is not perceived as being externalized.
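As a toy illustration of lateralization from ITD and ILD cues (not of full VAS externalization, which needs the spectral filtering described below), one channel can be delayed and attenuated relative to the other; the sample rate, delay and gain values here are illustrative assumptions:

import numpy as np

# Lateralize a mono tone to the right using a crude ITD (interaural time
# difference) and ILD (interaural level difference).
fs = 44100                       # sample rate, Hz
t = np.arange(int(0.5 * fs)) / fs
mono = np.sin(2 * np.pi * 500 * t)        # 500 Hz tone, 0.5 s

itd_samples = int(0.0006 * fs)   # ~0.6 ms ITD: the far (left) ear hears it later
ild_gain = 0.5                   # the far ear also hears it quieter

right = mono
left = np.concatenate([np.zeros(itd_samples), mono[:-itd_samples]]) * ild_gain
stereo = np.stack([left, right], axis=1)  # over headphones, the tone shifts right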
The perception of an externalized sound source is due to the frequency and direction-dependent filtering of the pinna which makes up the external ear structure. Unlike ILDs and ITDs, these spectral localization cues are generated monaurally. The same sound presented from different directions will produce at the eardrum a different pattern of peaks and notches across frequency. The pattern of these monaural spectral cues is different for different listeners. Spectral cues are vital for making elevation judgments and distinguishing if a sound arose from in front or behind the listener. They are also vital for creating the illusion of an externalized sound source. Since only ILDs are present in stereo recordings, the lack of spectral cues means that the sound is not perceived as being externalized. The easi
|
https://en.wikipedia.org/wiki/Continental%20Circus
|
Continental Circus is a racing simulation arcade game, created and manufactured by Taito in 1987. In 1989, ports for the Amiga, Amstrad CPC, Atari ST, Commodore 64, MSX and ZX Spectrum were published by Virgin Games.
The arcade version of this game comes in both upright and sit-down models, both of which feature shutter-type 3D glasses hanging above the player's head. According to Computer and Video Games in 1988, it was "the world's first three dimensional racing simulation." The home conversions of Continental Circus lack the full-on 3D and special glasses of the arcade version.
Circus is a common term for racing in France and Japan, likely stemming from the Latin term for a racecourse.
In 2005 the game was made available for the PlayStation 2, Xbox, and PC as part of Taito Legends.
Gameplay
The in-game vehicle is the 1987 Camel-sponsored Honda/Lotus 99T Formula One car as driven by Ayrton Senna and Satoru Nakajima. Due to licensing reasons, sponsor names such as "Camel" or "DeLonghi" are intentionally misspelled to prevent copyright infringement under Japanese law.
The player must successfully qualify in eight different races to win. At the beginning, the player must take 80th place or better to advance. As the player advances, so does the worst possible position to qualify. If the player fails to qualify or if the timer runs out, the game is over. The player does, however, have the option to continue, but if the player fails to qualify in the final race, the game is automatically over, and the player cannot continue.
Hazards
As in the real F1 races, the car is susceptible to damage from contact with another car. Once a player hits a car or a piece of the trackside scenery, they will be called into the pits. If they let the car smoke too long, it will catch fire, and the message "IMPENDING EXPLOSION" will appear. Either way, if they fail to make it back or hit another car, then they will crash or explode, costing several seconds.
Also, if the car r
|
https://en.wikipedia.org/wiki/Grill%20%28cryptology%29
|
The grill method (), in cryptology, was a method used chiefly early on, before the advent of the cyclometer, by the mathematician-cryptologists of the Polish Cipher Bureau (Biuro Szyfrów) in decrypting German Enigma machine ciphers. The Enigma rotor cipher machine changes plaintext characters into cipher text using a different permutation for each character, and so implements a polyalphabetic substitution cipher.
Background
The German navy started using Enigma machines in 1926; it was called Funkschlüssel C ("Radio cipher C"). By 15 July 1928, the German Army (Reichswehr) had introduced their own version of the Enigma—the Enigma G; a revised Enigma I (with plugboard) appeared in June 1930. The Enigma I used by the German military in the 1930s was a 3-rotor machine. Initially, there were only three rotors labeled I, II, and III, but they could be arranged in any order when placed in the machine. Rejewski identified the rotor permutations by , , and ; the encipherment produced by the rotors altered as each character was encrypted. The rightmost permutation () changed with each character. In addition, there was a plugboard that did some additional scrambling.
The number of possible different rotor wirings is:
26! = 403,291,461,126,605,635,584,000,000
The number of possible different reflector wirings is:
25 × 23 × 21 × ⋯ × 3 × 1 = 7,905,853,580,625
A perhaps more intuitive way of arriving at this figure is to consider that 1 letter can be wired to any of 25. That leaves 24 letters to connect. The next chosen letter can connect to any of 23. And so on.
The number of possible different plugboard wirings (for six cables) is:
26! / (14! × 6! × 2^6) = 100,391,791,500
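These counts are easy to reproduce; a short sketch:

from math import factorial, prod

rotor_wirings = factorial(26)               # 26! possible rotor wirings
reflector_wirings = prod(range(25, 0, -2))  # 25 * 23 * ... * 1 pairings of letters
# Six plugboard cables: choose the 12 plugged letters and pair them up.
plugboard_6 = factorial(26) // (factorial(14) * factorial(6) * 2**6)

print(rotor_wirings)      # 403291461126605635584000000
print(reflector_wirings)  # 7905853580625
print(plugboard_6)        # 100391791500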
To encrypt or decrypt, the operator made the following machine key settings:
the rotor order (Walzenlage)
the ring settings (Ringstellung)
the plugboard connections (Steckerverbindung)
an initial rotor position (Grundstellung)
In the early 1930s, the Germans distributed a secret monthly list of all the daily machine settings. The Germans knew that it would be foolish to encrypt the day's traffic using the same key, so each me
|
https://en.wikipedia.org/wiki/Robin%20Gandy
|
Robin Oliver Gandy (22 September 1919 – 20 November 1995) was a British mathematician and logician. He was a friend, student, and associate of Alan Turing, having been supervised by Turing during his PhD at the University of Cambridge, where they worked together.
Education and early life
Robin Gandy was born in the village of Rotherfield Peppard, Oxfordshire, England. A great-great-grandson of the architect and artist Joseph Gandy (1771–1843), he was the son of Thomas Hall Gandy (1876–1948), a general practitioner, and Ida Caroline née Hony (1885–1977), a social worker and later an author. His brother was the diplomat Christopher Gandy and his sister was the physician Gillian Gandy.
Educated at Abbotsholme School in Derbyshire, Gandy took two years of the Mathematical Tripos, at King's College, Cambridge, before enlisting for military service in 1940. During World War II he worked on radio intercept equipment at Hanslope Park, where Alan Turing was working on a speech encipherment project, and he became one of Turing's lifelong friends and associates. In 1946, he completed Part III of the Mathematical Tripos, then began studying for a PhD under Turing's supervision. He completed his thesis, On axiomatic systems in mathematics and theories in Physics, in 1952. He was a member of the Cambridge Apostles.
Career and research
Gandy held positions at the University of Leicester, the University of Leeds, and the University of Manchester. He was a visiting associate professor at Stanford University from 1966 to 1967, and held a similar position at University of California, Los Angeles in 1968. In 1969, he moved to Wolfson College, Oxford, where he became Reader in Mathematical Logic.
Gandy is known for his work in recursion theory. His contributions include the Spector–Gandy theorem, the Gandy Stage Comparison theorem, and the Gandy Selection theorem. He also made a significant contribution to the understanding of the Church–Turing thesis, and his generalisation of the
|
https://en.wikipedia.org/wiki/Intercast
|
Intercast was a short-lived technology developed in 1996 by Intel for broadcasting information such as web pages and computer software, along with a single television channel. It required a compatible TV tuner card installed in a personal computer and a decoding program called Intel Intercast Viewer. The data for Intercast was embedded in the Vertical Blanking Interval (VBI) of the video signal carrying the Intercast-enabled program, at a maximum of 10.5 kilobytes per second in 10 of the 45 lines of the VBI.
With Intercast, a computer user could watch the TV broadcast in one window of the Intercast Viewer, while being able to view HTML web pages in another window. Users were also able to download software transmitted via Intercast as well. Most often the web pages received were relevant to the television program being broadcast, such as extra information relating to a television program, or extra news headlines and weather forecasts during a newscast. Intercast can be seen as a more modern version of teletext.
The Intercast Viewer software was bundled with several TV tuner cards at the time, such as the Hauppauge Win-TV card. Also at the time of Intercast's introduction, Compaq offered some models of computers with built-in TV tuners installed with the Intercast Viewer software.
Upon its debut, Intercast was used by several TV networks, such as NBC, CNN, The Weather Channel, and MTV Networks.
On June 25, 1996, Intel and NBC announced an arrangement which enabled users to watch coverage of the 1996 Summer Olympics and other programming from NBC News.
Intel discontinued support for Intercast a couple of years later.
NBC's series Homicide: Life on the Street was a show that was Intercast-enabled.
References
External links
Archived copy of Intercast's web site from archive.org
Article about Intercast, NBC, and the 1996 Summer Olympics
Businessweek article
Microsoft press release regarding Intercast and Windows 98
Intercast dying of neglect
Television technology
M
|
https://en.wikipedia.org/wiki/Anagama%20kiln
|
The anagama kiln (Japanese Kanji: 穴窯/ Hiragana: あながま) is an ancient type of pottery kiln brought to Japan from China via Korea in the 5th century. It is a version of the climbing dragon kiln of south China, whose further development was also copied, for example in breaking up the firing space into a series of chambers in the noborigama kiln.
An anagama (a Japanese term meaning "cave kiln") consists of a firing chamber with a firebox at one end and a flue at the other. Although the term "firebox" is used to describe the space for the fire, there is no physical structure separating the stoking space from the pottery space. The term anagama describes single-chamber kilns built in a sloping tunnel shape. In fact, ancient kilns were sometimes built by digging tunnels into banks of clay.
The anagama is fueled with firewood, in contrast to the electric or gas-fueled kilns commonly used by most modern potters. A continuous supply of fuel is needed for firing, as wood thrown into the hot kiln is consumed very rapidly. Stoking occurs round the clock until a variety of variables are achieved including the way the fired pots look inside the kiln, the temperatures reached and sustained, the amount of ash applied, the wetness of the walls and the pots, etc.
Burning wood not only produces heat of up to 1400°C (2,500 °F), it also produces fly ash and volatile salts. Wood ash settles on the pieces during the firing, and the complex interaction between flame, ash, and the minerals of the clay body forms a natural ash glaze. This glaze may show great variation in color, texture, and thickness, ranging from smooth and glossy to rough and sharp. The placement of pieces within the kiln distinctly affects the pottery's appearance, as pieces closer to the firebox may receive heavy coats of ash, or even be immersed in embers, while others deeper in the kiln may only be softly touched by ash effects. Other factors that depend on positioning include temperature and oxidation/reduct
|
https://en.wikipedia.org/wiki/Discount%20points
|
Discount points, also called mortgage points or simply points, are a form of pre-paid interest available in the United States when arranging a mortgage. One point equals one percent of the loan amount. By charging a borrower points, a lender effectively increases the yield on the loan above the amount of the stated interest rate. Borrowers can offer to pay a lender points as a method to reduce the interest rate on the loan, thus obtaining a lower monthly payment in exchange for this up-front payment. For each point purchased, the loan rate is typically reduced by anywhere from 1/8% (0.125%) to 1/4% (0.25%).
The trade-off is usually evaluated against a break-even point: the number of months it takes for the accumulated monthly savings from the lower rate to equal the up-front cost of the points. Selling the property or refinancing prior to this break-even point will result in a net financial loss for the buyer, while keeping the loan for longer than this break-even point will result in a net financial savings for the buyer. Accordingly, if the intention is to sell the property or refinance before the break-even point, paying points will cost more than just paying the higher interest rate.
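A sketch of the break-even arithmetic; the loan amount, rates and payments below are made-up numbers for illustration:

# Break-even months for paying discount points (illustrative figures).
loan = 300_000
points = 2                                 # each point costs 1% of the loan amount
cost_of_points = loan * points / 100       # $6,000 up front

monthly_payment_without = 1_996.53         # e.g. 30-year loan at 7.00% (assumed)
monthly_payment_with = 1_896.20            # same loan at 6.50% after 2 points (assumed)

monthly_savings = monthly_payment_without - monthly_payment_with
break_even_months = cost_of_points / monthly_savings
print(f"break even after {break_even_months:.0f} months")  # ~60 months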
Points may also be purchased to reduce the monthly payment for the purpose of qualifying for a loan. Loan qualification based on monthly income versus the monthly loan payment may sometimes only be achievable by reducing the monthly payment through the purchasing of points to buy down the interest rate, thereby reducing the monthly loan payment.
Discount points are not the same as an origination fee, mortgage arrangement fee or broker fee. Discount points are always used to buy down the interest rate, while an origination fee is sometimes a fee the lender charges for the loan and sometimes just another name for buying down the interest rate. The origination fee and discount points are both items listed under lender charges on the HUD-1 Settlement Statement.
The difference in savings over the life of the loan can make paying points a benefit to the borrower. Any significant changes in fees should be re-disclosed in the final good faith estimate (GFE).
Also directly related to points is the c
|
https://en.wikipedia.org/wiki/Redfield%20ratio
|
The Redfield ratio or Redfield stoichiometry is the consistent atomic ratio of carbon, nitrogen and phosphorus found in marine phytoplankton and throughout the deep oceans.
The term is named for American oceanographer Alfred C. Redfield who in 1934 first described the relatively consistent ratio of nutrients in marine biomass samples collected across several voyages on board the research vessel Atlantis, and empirically found the ratio to be C:N:P = 106:16:1. While deviations from the canonical 106:16:1 ratio have been found depending on phytoplankton species and the study area, the Redfield ratio has remained an important reference to oceanographers studying nutrient limitation. A 2014 paper summarizing a large data set of nutrient measurements across all major ocean regions spanning from 1970 to 2010 reported the global median C:N:P to be 163:22:1.
Discovery
For his 1934 paper, Alfred Redfield analyzed nitrate and phosphate data for the Atlantic, Indian, Pacific oceans and Barents Sea. As a Harvard physiologist, Redfield participated in several voyages on board the research vessel Atlantis, analyzing data for C, N, and P content in marine plankton, and referenced data collected by other researchers as early as 1898.
Redfield's analysis of the empirical data led him to discover that across and within the three oceans and the Barents Sea, seawater had an N:P atomic ratio near 20:1 (later corrected to 16:1), which was very similar to the average N:P of phytoplankton.
To explain this phenomenon, Redfield initially proposed two mutually non-exclusive mechanisms:
I) The N:P in plankton tends towards the N:P composition of seawater. Specifically, phytoplankton species with different N and P requirements compete within the same medium and come to reflect the nutrient composition of the seawater.
II) An equilibrium between seawater and planktonic nutrient pools is maintained through biotic feedback mechanisms. Redfield proposed a thermostat-like scenario in which the a
|
https://en.wikipedia.org/wiki/Generation%20gap%20%28pattern%29
|
Generation gap is a software design pattern documented by John Vlissides that treats automatically generated code differently than code that was written by a developer. Modifications should not be made to generated code, as they would be overwritten if the code generation process was ever re-run, such as during recompilation. Vlissides proposed creating a subclass of the generated code which contains the desired modification. This might be considered an example of the template method pattern.
Modern languages
Modern byte-code languages like Java were in their early stages when Vlissides developed his ideas. In a language like Java or C#, this pattern may be followed by generating an interface, which is a completely abstract class. The developer would then hand-modify a concrete implementation of the generated interface.
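A minimal sketch of the pattern, with class names invented for illustration: the generated file is never edited, and customizations live in a handwritten subclass, so regenerating the code is safe:

# --- invoice_base.py: GENERATED FILE, do not edit; overwritten on regeneration ---
class InvoiceBase:
    def total(self, items):
        return sum(price for _, price in items)

    def tax(self, items):
        return self.total(items) * self.tax_rate()

    def tax_rate(self):            # hook intended for subclass override
        return 0.0

# --- invoice.py: handwritten subclass holding all local modifications ---
class Invoice(InvoiceBase):
    def tax_rate(self):            # customization survives regeneration
        return 0.19

print(Invoice().tax([("widget", 100.0)]))  # 19.0

The overridden tax_rate() hook is also what makes this an instance of the template method pattern mentioned above.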
References
Software design patterns
|
https://en.wikipedia.org/wiki/BID/60
|
BID/60, also called Singlet, was a British encryption machine. It was used by the British intelligence services from around 1949 or 1950 onwards. The system is a rotor machine, and would appear to have used 10 rotors. There are some apparent similarities between this machine and the US / NATO KL-7 device.
In 2005, a Singlet machine was exhibited in the Enigma and Friends display at the Bletchley Park museum.
References
External links
Jerry Proc's page on BID/60
Rotor machines
Cryptographic hardware
Cold War military equipment of the United Kingdom
|
https://en.wikipedia.org/wiki/Dependency%20inversion%20principle
|
In object-oriented design, the dependency inversion principle is a specific methodology for loosely coupled software modules. When following this principle, the conventional dependency relationships established from high-level, policy-setting modules to low-level, dependency modules are reversed, thus rendering high-level modules independent of the low-level module implementation details. The principle states:
A. High-level modules should not depend on low-level modules. Both should depend on abstractions (e.g., interfaces).
B. Abstractions should not depend on details. Details (concrete implementations) should depend on abstractions.
By dictating that high-level and low-level objects must depend on the same abstraction, this design principle inverts the way some people may think about object-oriented programming.
The idea behind points A and B of this principle is that when designing the interaction between a high-level module and a low-level one, the interaction should be thought of as an abstract interaction between them. This not only has implications on the design of the high-level module, but also on the low-level one: the low-level one should be designed with the interaction in mind and it may be necessary to change its usage interface.
In many cases, thinking about the interaction in itself as an abstract concept allows the coupling of the components to be reduced without introducing additional coding patterns, allowing only a lighter and less implementation-dependent interaction schema.
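A minimal sketch of the inversion, with all names invented for illustration: the high-level policy depends only on an abstraction, and the low-level detail implements that same abstraction:

from abc import ABC, abstractmethod

class MessageSink(ABC):           # the shared abstraction both sides depend on
    @abstractmethod
    def send(self, text: str) -> None: ...

class AlertPolicy:                # high-level module: knows nothing about email
    def __init__(self, sink: MessageSink):
        self.sink = sink

    def alert(self, level: str, text: str) -> None:
        if level == "critical":
            self.sink.send(f"[CRITICAL] {text}")

class EmailSender(MessageSink):   # low-level module: depends on the abstraction
    def send(self, text: str) -> None:
        print(f"emailing: {text}")

AlertPolicy(EmailSender()).alert("critical", "disk full")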
When the discovered abstract interaction schema(s) between two modules is/are generic and generalization makes sense, this design principle also leads to the following dependency inversion coding pattern.
Traditional layers pattern
In conventional application architecture, lower-level components (e.g., Utility Layer) are designed to be consumed by higher-level components (e.g., Policy Layer) which enable increasingly complex systems to be built. In this composition, higher-level components depend directly upon lower-level components to achieve some task. This dependency upon lower-level components limits the reuse opportunities of the higher-level components.
The goal of the dependency inversion pat
|
https://en.wikipedia.org/wiki/GCHQ%20Bude
|
GCHQ Bude, also known as GCHQ Composite Signals Organisation Station Morwenstow, abbreviated to GCHQ CSO Morwenstow, is a UK Government satellite ground station and eavesdropping centre located on the north Cornwall coast at Cleave Camp, between the small villages of Morwenstow and Coombe. It is operated by the British signals intelligence service, officially known as the Government Communications Headquarters, commonly abbreviated GCHQ. It is located on part of the site of the former World War II airfield, RAF Cleave.
History
The site of GCHQ Bude is in Morwenstow, the northernmost parish of Cornwall. During World War II, the location was developed for and used by the Royal Air Force (RAF). RAF Cleave was conceived as housing target and target support aircraft for firing ranges along the north Cornwall coast, and land was acquired from Cleave Manor. In 1939, it became home to two flights of No. 1 Anti-Aircraft Co-operation Unit (1AAC). In 1943, No. 639 Squadron was established on the site for the remainder of the war. The airfield was put under maintenance in April 1945, staying under government ownership.
Satellite interception
In the early 1960s, developments occurred which appear to have prompted the establishment of the facility now known as GCHQ Bude. In 1962, a satellite receiving station for the commercial communication satellites of Intelsat was established at Goonhilly Downs, just over a hundred kilometres south-southwest of Morwenstow.
The downstream link from the Intelsat satellites could easily be intercepted by placing receiver dishes nearby in the satellites' 'footprint'. For that, the land at Cleave was allotted to the Ministry of Public Buildings and Works in 1967, and construction of the satellite interception station began in 1969. Two dishes appeared first, followed by smaller dishes in the ensuing years. The station was originally signposted as 'CSOS Morwenstow', with 'CSOS' standing for Composite Signals Organisation Station. In 2001, a third large dish appeared, a
|
https://en.wikipedia.org/wiki/Inflatable%20movie%20screen
|
An inflatable movie screen is an inflatable framework with an attached projection screen. Inflatable screens are used for outdoor movies, film festivals, drive-in theaters, sports, social, fundraising and other events requiring outdoor projection.
Design
The projection frame is made from PVC-coated fabric layers joined by high-frequency welding or mechanical sewing. The projection surface can be made of PVC or spandex, with the latter providing for rear projection capabilities. The projection surface can be detachable for ease of care. The frame is inflated with a high-pressure air blower. Larger frames may require a three-phase blower. For bigger screens, the blower typically continues to operate, ensuring the screen remains fully inflated. For consumer market and smaller screen sizes, the screen is sealed and does not require a constantly operating air blower.
Screens can be held upright with help of a supporting structure (A-frame or inflatable legs) or with the system of straps and counterweights.
In comparison to traditional, heavy steel constructions, inflatable screens can be set up in a few hours or less, with the smallest screens taking one minute to inflate, and they can be deflated quickly, which adds to their safety. Wind is one of the major drawbacks of an inflatable screen: even a small wind can make one fly like a kite without proper anchoring. Inflatable screens are lightweight and highly portable compared to other structures used to support screens, such as a truss or scaffold; a screen usually fits on a single pallet, whereas a truss or steel system takes up an entire truck. Inflatable screens reach sizes up to . Another big disadvantage of inflatable screens is the noise created by the continuous blowers.
See also
Video projector
LCD projector
DLP projector
Drive-in theater
List of inflatable manufactured goods
Outdoor cinema
References
Display technology
Film and v
|
https://en.wikipedia.org/wiki/Low%20IF%20receiver
|
In a low-IF receiver, the RF signal is mixed down to a non-zero low or moderate intermediate frequency, typically a few megahertz (instead of 33–40 MHz) for TV, and even lower frequencies in the case of FM radio band receivers (typically 120–130 kHz instead of 10.7–10.8 MHz or 13.45 MHz) or AM (MW/LW/SW) receivers (typically 455–470 kHz). Low-IF receiver topologies have many of the desirable properties of zero-IF architectures, but avoid the DC offset and 1/f noise problems.
The use of a non-zero IF re-introduces the image issue. However, when there are relatively relaxed image and neighbouring channel rejection requirements they can be satisfied by carefully designed low-IF receivers. Image signal and unwanted blockers can be rejected by quadrature down-conversion (complex mixing) and subsequent filtering.
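A toy sketch of quadrature down-conversion to a low IF; all frequencies and the crude moving-average filter are illustrative assumptions, and a real receiver would use proper channel filters:

import numpy as np

# Quadrature (complex) down-conversion of an RF tone to a low IF.
fs = 1_024_000                      # sample rate, Hz
f_rf, f_lo = 100_000, 95_000        # RF carrier and local oscillator, Hz
t = np.arange(4096) / fs
rf = np.cos(2 * np.pi * f_rf * t)

# Complex mixing preserves the sign of the frequency offset, which is what
# lets a low-IF design separate the wanted signal from its image.
baseband = rf * np.exp(-2j * np.pi * f_lo * t)   # wanted term lands at +5 kHz

# Crude low-pass filter stands in for the subsequent channel filtering.
kernel = np.ones(32) / 32
low_if = np.convolve(baseband, kernel, mode="same")

freqs = np.fft.fftfreq(t.size, 1 / fs)
peak = freqs[np.argmax(np.abs(np.fft.fft(low_if)))]
print(f"low-IF peak near {peak:.0f} Hz")         # ~ +5000 Hz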
This technique is now widely used in the tiny FM receivers incorporated into MP3 players and mobile phones and is becoming commonplace in both analog and digital TV receiver designs. Using advanced analog- and digital signal processing techniques, cheap, high quality receivers using no resonant circuits at all are now possible.
Communication circuits
Radio electronics
Receiver (radio)
|
https://en.wikipedia.org/wiki/Epson%20Robots
|
EPSON Robots is the robotics design and manufacturing department of Japanese corporation Seiko Epson, the brand-name watch and computer printer producer.
Epson started the production of robots in 1980. Epson manufactures Cartesian, SCARA and 6-axis industrial robots for factory automation. Cleanroom and ESD compliant models are available.
They offer PC-based controllers and integrated vision systems utilizing Epson's own vision processing technology.
Epson has a 30-year heritage and there are more than 30,000 Epson robots installed in manufacturing industries around the world.
Epson uses a standardized PC-based controller for its 6-axis, SCARA, and Linear Module robots, a move that simplifies support and reduces learning time.
Epson SCARA Robots
Epson offers four different lines of SCARA robots: the T-Series, G-Series, RS-Series, and LS-Series. The performance and features offered for each series of robot are determined by the intended purpose and needs of the robot. The T-Series robot is a high-performance alternative to slide robots for pick-and-place operations. The G-Series offers a wide variety of robots with regard to size, arm design, payload application, and more. The RS-Series offers two SCARA robots that are mounted from above and have the ability to move the second axis under the first axis. The LS-Series features several low-cost, high-performance robots that come in a variety of sizes.
References
External links
Official website
Robotics at Epson
Epson
Robotics companies of Japan
|
https://en.wikipedia.org/wiki/Bishop%E2%80%93Cannings%20theorem
|
The Bishop–Cannings theorem is a theorem in evolutionary game theory. It states that (i) all members of a mixed evolutionarily stable strategy (ESS) have the same payoff (Theorem 2), and (ii) that none of these can also be a pure ESS (from their Theorem 3). The usefulness of the results comes from the fact that they can be used to directly find ESSes algebraically, rather than simulating the game and solving it by iteration.
The logic of (i) also applies to Nash equilibria (all strategies in the support of a mixed strategy receive the same payoff).
The theorem was formulated by Tim Bishop and Chris Cannings at Sheffield University, who published it in 1978.
A review is given by John Maynard Smith in Evolution and the Theory of Games, with proof in the appendix.
Notes
Evolutionary game theory
Biological theorems
Economics theorems
|
https://en.wikipedia.org/wiki/Nanotechnology%20in%20fiction
|
The use of nanotechnology in fiction has attracted scholarly attention. The first use of the distinguishing concepts of nanotechnology was "There's Plenty of Room at the Bottom", a talk given by physicist Richard Feynman in 1959. K. Eric Drexler's 1986 book Engines of Creation introduced the general public to the concept of nanotechnology. Since then, nanotechnology has been used frequently in a diverse range of fiction, often as a justification for unusual or far-fetched occurrences featured in speculative fiction.
Notable examples
Literature
In 1931, Boris Zhitkov wrote a short story called Microhands (Микроруки), where the narrator builds for himself a pair of microscopic remote manipulators, and uses them for fine tasks like eye surgery. When he attempts to build even smaller manipulators to be manipulated by the first pair, the story goes into detail about the problem of regular materials behaving differently on a microscopic scale.
In his 1956 short story The Next Tenants, Arthur C. Clarke describes tiny machines that operate at the micrometre scale – although not strictly nanoscale (billionth of a meter), they are the first fictional example of the concepts now associated with nanotechnology.
A concept similar to nanotechnology, called "micromechanical devices", was described in Stanislaw Lem's 1959 novel Eden. These devices were used by the aliens as "seeds" to grow a wall around the human spaceship. (Doktryna nieingerencji, in: Marek Oramus, Bogowie Lema)
Lem's 1964 novel The Invincible involves the discovery of an artificial ecosystem of minuscule robots, although like in Clarke's story they are larger than what is strictly meant by the term 'nanotechnology'.
Robert Silverberg's 1969 short story How It Was when the Past Went Away describes nanotechnology being used in the construction of stereo loudspeakers, with a thousand speakers per inch.
The 1984 novel Peace on Earth by Stanislaw Lem tells about small bacteria-sized nanorobots lookin
|
https://en.wikipedia.org/wiki/Do%20not%20feed%20the%20animals
|
The prohibition "do not feed the animals" reflects a policy forbidding the artificial feeding of wild or feral animals. Signs displaying this message are commonly found in zoos, circuses, animal theme parks, aquariums, national parks, parks, public spaces, farms, and other places where people come into contact with wildlife. In some cases there are laws to enforce such no-feeding policies.
Feeding wild animals can significantly change their behavior. Feeding or leaving unattended food to large animals, such as bears, can lead them to aggressively seek out food from people, sometimes resulting in injury. Feeding can also alter animal behavior so that animals routinely travel in larger groups, which can make disease transmission between animals more likely. In public spaces, the congregation of animals caused by feeding can result in them being considered pests. In zoos, giving food to the animals is discouraged due to the strict dietary controls in place. More generally, artificial feeding can result in, for example, vitamin deficiencies and dietary mineral deficiencies. Outside zoos, a concern is that the increase in local concentrated wildlife population due to artificial feeding can promote the transfer of disease among animals or between animals and humans.
Zoos
Zoos generally discourage visitors from giving any food to the animals. Some zoos, particularly petting zoos, do the opposite and actively encourage people to get involved with the feeding of the animals. This, however, is strictly monitored and usually involves set food available from the zookeepers or vending machines, as well as a careful choice of which animals to feed, and the provision of hand-washing facilities to avoid spreading disease. Domestic animals such as sheep and goats are often permitted to be fed, as are giraffes.
National and state parks
In national parks and state parks, feeding animals can result in malnourishment due to inappropriate diet and in disruptio
|
https://en.wikipedia.org/wiki/Luminosity%20distance
|
Luminosity distance DL is defined in terms of the relationship between the absolute magnitude M and apparent magnitude m of an astronomical object:
M = m − 5 log10(DL / 10 pc)
which gives:
DL = 10^((m − M)/5 + 1)
where DL is measured in parsecs. For nearby objects (say, in the Milky Way) the luminosity distance gives a good approximation to the natural notion of distance in Euclidean space.
The relation is less clear for distant objects like quasars far beyond the Milky Way since the apparent magnitude is affected by spacetime curvature, redshift, and time dilation. Calculating the relation between the apparent and actual luminosity of an object requires taking all of these factors into account. The object's actual luminosity is determined using the inverse-square law and the proportions of the object's apparent distance and luminosity distance.
Another way to express the luminosity distance is through the flux-luminosity relationship,
F = L / (4π DL²)
where F is flux (W·m−2), and L is luminosity (W). From this the luminosity distance (in meters) can be expressed as:
DL = √(L / (4π F))
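As a quick numerical check of the magnitude form above; the magnitudes chosen are illustrative (roughly those of a distant type Ia supernova):

# Luminosity distance from the distance modulus m - M (illustrative values).
m, M = 19.5, -19.3                   # apparent and absolute magnitude (assumed)
d_parsecs = 10 ** ((m - M) / 5 + 1)
print(f"D_L = {d_parsecs:.3e} pc")   # ~5.8e8 pc, i.e. roughly 580 Mpc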
The luminosity distance is related to the "comoving transverse distance" DM by
DL = (1 + z) DM
and to the angular diameter distance DA by Etherington's reciprocity theorem:
DL = (1 + z)² DA
where z is the redshift. DM is a factor that allows calculation of the comoving distance between two objects with the same redshift but at different positions of the sky; if the two objects are separated by an angle δθ, the comoving distance between them would be DM δθ. In a spatially flat universe, the comoving transverse distance DM is exactly equal to the radial comoving distance DC, i.e. the comoving distance from ourselves to the object.
See also
Distance measure
Distance modulus
Notes
External links
Ned Wright's Javascript Cosmology Calculator
iCosmos: Cosmology Calculator (With Graph Generation )
Observational astronomy
Physical quantities
|
https://en.wikipedia.org/wiki/Sister%20Mary%20Joseph%20nodule
|
In medicine, the Sister Mary Joseph nodule (sometimes Sister Mary Joseph node or Sister Mary Joseph sign) refers to a palpable nodule bulging into the umbilicus as a result of metastasis of a malignant cancer in the pelvis or abdomen. Sister Mary Joseph nodules can be painful to palpation.
A periumbilical mass is not always a Sister Mary Joseph nodule. Other conditions that can cause a palpable periumbilical mass include umbilical hernia, infection, and endometriosis. Medical imaging, such as abdominal ultrasound, may be used to distinguish a Sister Mary Joseph nodule from another kind of mass.
Gastrointestinal malignancies account for about half of underlying sources (most commonly gastric cancer, colonic cancer or pancreatic cancer, mostly of the tail and body of the pancreas), and men are even more likely to have an underlying cancer of the gastrointestinal tract. Gynecological cancers account for about 1 in 4 cases (primarily ovarian cancer and also uterine cancer). Nodules will also, rarely, originate from appendix cancer spillage and pseudomyxoma peritonei. Unknown primary tumors and rarely, urinary or respiratory tract malignancies can cause umbilical metastases. How exactly the metastases reach the umbilicus remains largely unknown. Proposed mechanisms for the spread of cancer cells to the umbilicus include direct transperitoneal spread, via the lymphatics which run alongside the obliterated umbilical vein, hematogenous spread, or via remnant structures such as the falciform ligament, median umbilical ligament, or a remnant of the vitelline duct. Sister Mary Joseph nodule is associated with multiple peritoneal metastases and a poor prognosis.
History
Sister Mary Joseph Dempsey (born Julia Dempsey) was a Catholic nun and surgical assistant of William J. Mayo at St. Mary's Hospital in Rochester, Minnesota from 1890 to 1915. She drew Mayo's attention to the phenomenon, and he published an article about it in 1928. The eponymous term Sister Mary Joseph nodul
|
https://en.wikipedia.org/wiki/Magic%20Cap
|
Magic Cap (short for Magic Communicating Applications Platform) is a discontinued object-oriented operating system for PDAs developed by General Magic. Tony Fadell was a contributor to the platform, and Darin Adler was an architect.
Its graphical user interface incorporates a room metaphor, where the user navigates between rooms to perform tasks, such as going to a home office to perform word processing, or to a file room to clean up the system files. Automation is based on mobile agents rather than an office assistant.
Several electronic companies came to market with Magic Cap devices, including the Sony Magic Link and the Motorola Envoy, both released in 1994. None of these devices were commercial successes.
Mobile agents
The Magic Cap operating system includes a new mobile agent technology named Telescript. Conceptually, the agents carry work orders, travel to a Place outside of the handheld device, complete their work, and then return to the device with the results. When the Magic Cap devices were delivered, the only Place for agents to travel was the PersonaLink service provided by AT&T. The agents had little access to functionality, because each agent had to be strictly authorized and its scope of inquiry was limited to the software modules installed on the PersonaLink servers. The payload carried by these agents was also hampered by the slow dial-up modem speed of 2400 bit/s.
The authentication and authorization system of the mobile agents in Telescript created a high coupling between the device and the target Place. As a result, deployment of agent-based technology was incredibly difficult, and never reached fruition before the PersonaLink service was shut down.
See also
Apple Newton
Microsoft Bob
Danger Hiptop
References
External links
Archive section and "time capsule" dedicated to Magic Cap – Pen Computing Magazine
"Making Magic" – a developer’s introduction to General Magic and Magic Cap by Richard Clark, Scott Knaster, and the staff of General Ma
|
https://en.wikipedia.org/wiki/War%20of%20attrition%20%28game%29
|
In game theory, the war of attrition is a dynamic timing game in which players choose a time to stop, and fundamentally trade off the strategic gains from outlasting other players and the real costs expended with the passage of time. Its precise opposite is the pre-emption game, in which players elect a time to stop, and fundamentally trade off the strategic costs from outlasting other players and the real gains occasioned by the passage of time. The model was originally formulated by John Maynard Smith; a mixed evolutionarily stable strategy (ESS) was determined by Bishop & Cannings. An example is a second price all-pay auction, in which the prize goes to the player with the highest bid and each player pays the loser's low bid (making it an all-pay sealed-bid second-price auction).
Examining the game
To see how a war of attrition works, consider the all-pay auction: assume that each player makes a bid on an item, and the one who bids the highest wins a resource of value V. Each player pays his bid. In other words, if a player bids b, then his payoff is -b if he loses, and V-b if he wins. Assume also that if both players bid the same amount b, then they split the value of V, each gaining V/2-b. Finally, think of the bid b as time, and this becomes the war of attrition, since a higher bid is costly, but the higher bid wins the prize.
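These payoff rules are simple to state programmatically. The Python sketch below (the function and parameter names are illustrative) computes the payoff pair for two bids under exactly the rules just described:

def war_of_attrition_payoffs(b1, b2, V):
    """Payoffs for two players in the all-pay auction form of the war
    of attrition: each player pays their own bid; the higher bid wins
    the resource of value V; equal bids split V."""
    if b1 > b2:
        return (V - b1, -b2)
    if b2 > b1:
        return (-b1, V - b2)
    return (V / 2 - b1, V / 2 - b2)

# Example: a prize worth 10, bids of 4 and 7.
print(war_of_attrition_payoffs(4, 7, V=10))  # (-4, 3)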
The premise that the players may bid any number is important to analysis of the all-pay, sealed-bid, second-price auction. The bid may even exceed the value of the resource being contested. This at first appears to be irrational, since it seems foolish to pay more for a resource than its value; however, remember that each bidder pays only the low bid. Therefore, it would seem to be in each player's best interest to bid the maximum possible amount rather than an amount equal to or less than the value of the resource.
There is a catch, however; if both players bid higher than V, the high bidder does not so much win as los
|
https://en.wikipedia.org/wiki/Program-specific%20information
|
Program-specific information (PSI) is metadata about a program (channel) and part of an MPEG transport stream.
The PSI data as defined by ISO/IEC 13818-1 (MPEG-2 Part 1: Systems) includes four tables:
PAT (Program Association Table)
CAT (Conditional Access Table)
PMT (Program Map Table)
NIT (Network Information Table)
The MPEG-2 specification does not specify the format of the CAT and NIT.
PSI is carried in the form of a table structure. Each table structure is broken into sections. Each section can span multiple transport stream packets; conversely, a transport stream packet can also contain multiple sections with the same PID. An adaptation field may also occur in TS packets carrying PSI data. PSI data is never scrambled, so that the decoder at the receiving end can easily identify the properties of the stream.
The sections comprising the PAT and CAT tables are associated with predefined PIDs (Packet Identifier) as explained in their respective descriptions below. There may be multiple independent PMT sections in a stream; each section is given a unique user-defined PID and maps a program number to the metadata describing that program and the streams within it. PMT section PIDs are defined in the PAT, and are the only PIDs defined there. The streams themselves are contained in PES packets with user-defined PIDs specified in the PMT.
PSI structure
Table Sections
Descriptor
PAT (Program Association Table)
The program association table (PAT) lists all programs available in the transport stream. Each of the listed programs is identified by a 16-bit value called program_number. Each of the programs listed in PAT has an associated value of PID for its PMT.
The value 0x0000 for program_number is reserved to specify the PID on which to look for the network information table. If no such program is present in the PAT, the default PID value (0x0010) shall be used for the NIT.
TS packets containing PAT information always have PID 0x0000.
The PAT is assigned PID 0
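As an illustration of the PAT layout, here is a minimal Python sketch that extracts (program_number, PMT PID) pairs from a single PAT section. It is a sketch under simplifying assumptions: the section is assumed to be already reassembled from TS packets (no pointer_field handling) and the CRC is not verified; field offsets follow the ISO/IEC 13818-1 section syntax.

def parse_pat_section(section: bytes):
    """Yield (program_number, pid) pairs from one PAT section.

    Byte 0 is table_id; bytes 1-2 carry section_length (low 12 bits);
    the program loop starts at byte 8 and ends before the 4-byte CRC."""
    if section[0] != 0x00:
        raise ValueError("not a PAT section (table_id must be 0x00)")
    section_length = ((section[1] & 0x0F) << 8) | section[2]
    end = 3 + section_length - 4          # stop before the CRC_32
    for i in range(8, end, 4):
        program_number = (section[i] << 8) | section[i + 1]
        pid = ((section[i + 2] & 0x1F) << 8) | section[i + 3]
        yield (program_number, pid)       # program_number 0 points at the NIT

# Hand-built example: one program (1) whose PMT is on PID 0x1000.
pat = bytes([0x00, 0xB0, 0x0D, 0x00, 0x01, 0xC1, 0x00, 0x00,
             0x00, 0x01, 0xF0, 0x00, 0x00, 0x00, 0x00, 0x00])
print(list(parse_pat_section(pat)))       # [(1, 4096)]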
|
https://en.wikipedia.org/wiki/Cryptographic%20Service%20Provider
|
In Microsoft Windows, a Cryptographic Service Provider (CSP) is a software library that implements the Microsoft CryptoAPI (CAPI). CSPs implement encoding and decoding functions, which computer application programs may use, for example, to implement strong user authentication or for secure email.
CSPs are independent modules that can be used by different applications. A user program calls CryptoAPI functions and these are redirected to CSP functions. Since CSPs are responsible for implementing cryptographic algorithms and standards, applications do not need to be concerned with security details. Furthermore, one application can define which CSP it is going to use on its calls to CryptoAPI. In fact, all cryptographic activity is implemented in CSPs. CryptoAPI only works as a bridge between the application and the CSP.
CSPs are implemented basically as a special type of DLL with special restrictions on loading and use. Every CSP must be digitally signed by Microsoft and the signature is verified when Windows loads the CSP. In addition, after being loaded, Windows periodically re-scans the CSP to detect tampering, either by malicious software such as computer viruses or by the user him/herself trying to circumvent restrictions (for example on cryptographic key length) that might be built into the CSP's code.
To obtain a signature, non-Microsoft CSP developers must supply paperwork to Microsoft promising to obey various legal restrictions and giving valid contact information. Microsoft did not charge any fees to supply these signatures. For development and testing purposes, a CSP developer can configure Windows to recognize the developer's own signatures instead of Microsoft's, but this is a somewhat complex and obscure operation unsuitable for nontechnical end users.
The CAPI/CSP architecture had its origins in the era of restrictive US government controls on the export of cryptography. Microsoft's default or "base" CSP then included with Windows was li
|
https://en.wikipedia.org/wiki/Kaplan%E2%80%93Meier%20estimator
|
The Kaplan–Meier estimator, also known as the product limit estimator, is a non-parametric statistic used to estimate the survival function from lifetime data. In medical research, it is often used to measure the fraction of patients living for a certain amount of time after treatment. In other fields, Kaplan–Meier estimators may be used to measure the length of time people remain unemployed after a job loss, the time-to-failure of machine parts, or how long fleshy fruits remain on plants before they are removed by frugivores. The estimator is named after Edward L. Kaplan and Paul Meier, who each submitted similar manuscripts to the Journal of the American Statistical Association. The journal editor, John Tukey, convinced them to combine their work into one paper, which has been cited more than 61,800 times since its publication in 1958.
The estimator of the survival function S(t) (the probability that life is longer than t) is given by:

\widehat{S}(t) = \prod_{i:\, t_i \le t} \left(1 - \frac{d_i}{n_i}\right)

with t_i a time when at least one event happened, d_i the number of events (e.g., deaths) that happened at time t_i, and n_i the individuals known to have survived (have not yet had an event or been censored) up to time t_i.
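The product-limit formula translates directly into code. The sketch below is illustrative rather than a library implementation: it computes the stepwise estimate from a list of observed times and event indicators, where event = 0 marks a right-censored observation.

from collections import Counter

def kaplan_meier(times, events):
    """Return [(t_i, S_hat(t_i))] for distinct event times t_i.

    times  -- observed follow-up times
    events -- 1 if the event (e.g., death) occurred, 0 if censored"""
    n_at_risk = len(times)
    deaths = Counter(t for t, e in zip(times, events) if e == 1)
    removed = Counter(times)        # everyone leaves the risk set at their time
    s_hat, curve = 1.0, []
    for t in sorted(removed):
        d = deaths.get(t, 0)
        if d:                       # the curve only steps down at event times
            s_hat *= 1.0 - d / n_at_risk
            curve.append((t, s_hat))
        n_at_risk -= removed[t]
    return curve

# Example with censoring: times 1, 2+, 3, 4, 4, 5+ ('+' = censored).
print(kaplan_meier([1, 2, 3, 4, 4, 5], [1, 0, 1, 1, 1, 0]))
# [(1, 0.8333...), (3, 0.625), (4, 0.2083...)]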
Basic concepts
A plot of the Kaplan–Meier estimator is a series of declining horizontal steps which, with a large enough sample size, approaches the true survival function for that population. The value of the survival function between successive distinct sampled observations ("clicks") is assumed to be constant.
An important advantage of the Kaplan–Meier curve is that the method can take into account some types of censored data, particularly right-censoring, which occurs if a patient withdraws from a study, is lost to follow-up, or is alive without event occurrence at last follow-up. On the plot, small vertical tick-marks indicate individual patients whose survival times have been right-censored. When no truncation or censoring occurs, the Kaplan–Meier curve is the complement of the empirical distribution function.
In me
|
https://en.wikipedia.org/wiki/Cauchy%20formula%20for%20repeated%20integration
|
The Cauchy formula for repeated integration, named after Augustin-Louis Cauchy, allows one to compress n antiderivatives of a function into a single integral (cf. Cauchy's formula).
Scalar case
Let f be a continuous function on the real line. Then the nth repeated integral of f with base-point a,

f^{(-n)}(x) = \int_a^x \int_a^{\sigma_1} \cdots \int_a^{\sigma_{n-1}} f(\sigma_n) \,\mathrm{d}\sigma_n \cdots \mathrm{d}\sigma_2 \,\mathrm{d}\sigma_1,

is given by single integration:

f^{(-n)}(x) = \frac{1}{(n-1)!} \int_a^x (x - t)^{n-1} f(t) \,\mathrm{d}t.
Proof
A proof is given by induction. The base case with n = 1 is trivial, since it is equivalent to:

f^{(-1)}(x) = \int_a^x f(t)\,\mathrm{d}t.

Now, suppose the formula is true for n, and let us prove it for n + 1. Firstly, using the Leibniz integral rule, note that

\frac{\mathrm{d}}{\mathrm{d}x} \left[ \frac{1}{n!} \int_a^x (x - t)^{n} f(t)\,\mathrm{d}t \right] = \frac{1}{(n-1)!} \int_a^x (x - t)^{n-1} f(t)\,\mathrm{d}t.

Then, applying the induction hypothesis, this derivative equals f^{(-n)}(x); since both sides vanish at x = a, integrating from a to x gives

\frac{1}{n!} \int_a^x (x - t)^{n} f(t)\,\mathrm{d}t = \int_a^x f^{(-n)}(t)\,\mathrm{d}t = f^{(-(n+1))}(x).
It has been shown that the statement holds true for the base case n = 1, and that if it is true for n, then it holds true for n + 1.
Thus the formula has been proven for all positive integers.
This completes the proof.
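A quick numerical check of the scalar formula is sketched below (the quadrature routine and test function are illustrative): the third repeated integral of f(x) = cos x with base-point 0 is, by hand, sin x, then 1 - cos x, then x - sin x, and the single-integral form should agree.

import math

def trapezoid(g, a, b, n=2000):
    """Composite trapezoidal rule for the integral of g over [a, b]."""
    h = (b - a) / n
    s = 0.5 * (g(a) + g(b)) + sum(g(a + k * h) for k in range(1, n))
    return s * h

def cauchy_repeated_integral(f, a, x, n):
    """Right-hand side of the Cauchy formula:
    (1/(n-1)!) * integral_a^x (x - t)**(n-1) * f(t) dt."""
    return trapezoid(lambda t: (x - t) ** (n - 1) * f(t), a, x) / math.factorial(n - 1)

x = 1.5
print(cauchy_repeated_integral(math.cos, 0.0, x, 3))  # ~0.502505
print(x - math.sin(x))                                # ~0.502505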
Generalizations and applications
The Cauchy formula is generalized to non-integer parameters by the Riemann–Liouville integral, where the integer n is replaced by a real parameter α > 0, and the factorial (n − 1)! is replaced by the gamma function Γ(α). The two formulas agree when α = n is a positive integer.
Both the Cauchy formula and the Riemann-Liouville integral are generalized to arbitrary dimension by the Riesz potential.
In fractional calculus, these formulae can be used to construct a differintegral, allowing one to differentiate or integrate a fractional number of times. Differentiating a fractional number of times can be accomplished by fractional integration, then differentiating the result.
References
Augustin-Louis Cauchy: Trente-Cinquième Leçon. In: Résumé des leçons données à l’Ecole royale polytechnique sur le calcul infinitésimal. Imprimerie Royale, Paris 1823. Reprint: Œuvres complètes II(4), Gauthier-Villars, Paris, pp. 5–261.
Gerald B. Folland, Advanced Calculus, p. 193, Prentice Hall (2002).
External links
Augustin-Louis Cauchy
Integral calculus
Theorems in analysis
|
https://en.wikipedia.org/wiki/Invertebrate%20zoology
|
Invertebrate zoology is the subdiscipline of zoology that consists of the study of invertebrates, animals without a backbone (a structure which is found only in fish, amphibians, reptiles, birds and mammals).
Invertebrates are a vast and very diverse group of animals that includes sponges, echinoderms, tunicates, numerous different phyla of worms, molluscs, arthropods and many additional phyla. Single-celled organisms or protists are usually not included within the same group as invertebrates.
Subdivisions
Invertebrates represent 97% of all named animal species, and because of that fact, this subdivision of zoology has many further subdivisions, including but not limited to:
Arthropodology - the study of arthropods, which includes
Arachnology - the study of spiders and other arachnids
Entomology - the study of insects
Carcinology - the study of crustaceans
Myriapodology - the study of centipedes, millipedes, and other myriapods
Cnidariology - the study of Cnidaria
Helminthology - the study of parasitic worms
Malacology - the study of mollusks, which includes
Conchology - the study of mollusk shells
Limacology - the study of slugs
Teuthology - the study of cephalopods
Invertebrate paleontology - the study of fossil invertebrates
These divisions are sometimes further divided into more specific specialties. For example, within arachnology, acarology is the study of mites and ticks; within entomology, lepidoptery is the study of butterflies and moths, myrmecology is the study of ants and so on. Marine invertebrates are all those invertebrates that exist in marine habitats.
History
Early Modern Era
In the early modern period starting in the late 16th century, invertebrate zoology saw growth in the number of publications made and improvement in the experimental practices associated with the field. Insects are one of the most diverse groups of organisms on Earth. They play important roles in ecosystems, including pollination, natural enemies, saprophytes, and
|
https://en.wikipedia.org/wiki/Zero%20moment%20point
|
The zero moment point (also referred to as zero-tilting moment point) is a concept related to the dynamics and control of legged locomotion, e.g., for humanoid or quadrupedal robots. It specifies the point with respect to which reaction forces at the contacts between the feet and the ground do not produce any moment in the horizontal direction, i.e., the point where the sum of horizontal inertia and gravity forces is zero. The concept assumes the contact area is planar and has sufficiently high friction to keep the feet from sliding.
Introduction
This concept was introduced to the legged locomotion community in January 1968 by Miomir Vukobratović and Davor Juričić at The Third All-Union Congress of Theoretical and Applied Mechanics in Moscow. The term "zero moment point" itself was coined in works that followed between 1970 and 1972, and was widely and successfully reproduced in works from robotics groups around the world.
The zero moment point is an important concept in the motion planning for biped robots. Since they have only two points of contact with the floor and they are supposed to walk, “run” or “jump” (in the motion context), their motion has to be planned concerning the dynamical stability of their whole body. This is not an easy task, especially because the upper body of the robot (torso) has larger mass and inertia than the legs which are supposed to support and move the robot. This can be compared to the problem of balancing an inverted pendulum.
The trajectory of a walking robot is planned using the angular momentum equation to ensure that the generated joint trajectories guarantee the dynamical postural stability of the robot, which is usually quantified by the distance of the zero moment point from the boundaries of a predefined stability region. The position of the zero moment point is affected by the referred mass and inertia of the robot's torso, since its motion generally requires large ankle torques to maintain a satisfactory dynamical postu
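For planar motion, a common simplification is the cart-table (linear inverted pendulum) model, in which the ZMP reduces to a function of the center-of-mass position and acceleration. The Python sketch below is illustrative under that simplifying assumption, not the general multi-body formula:

def zmp_cart_table(x_com, z_com, x_ddot, g=9.81):
    """ZMP x-coordinate for the cart-table model:
        x_zmp = x_com - (z_com / g) * x_ddot
    Dynamic stability requires x_zmp to stay inside the support polygon."""
    return x_com - (z_com / g) * x_ddot

# Example: CoM 0.05 m ahead of the ankle at 0.8 m height, accelerating
# forward at 1.2 m/s^2.
print(zmp_cart_table(0.05, 0.8, 1.2))  # about -0.048 m: behind the CoM projection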
|
https://en.wikipedia.org/wiki/Generator%20%28mathematics%29
|
In mathematics and physics, the term generator or generating set may refer to any of a number of related concepts. The underlying concept in each case is that of a smaller set of objects, together with a set of operations that can be applied to it, that result in the creation of a larger collection of objects, called the generated set. The larger set is then said to be generated by the smaller set. It is commonly the case that the generating set has a simpler set of properties than the generated set, thus making it easier to discuss and examine. It is usually the case that properties of the generating set are in some way preserved by the act of generation; likewise, the properties of the generated set are often reflected in the generating set.
List of generators
A list of examples of generating sets follow.
Generating set or spanning set of a vector space: a set that spans the vector space
Generating set of a group: A subset of a group that is not contained in any subgroup of the group other than the entire group
Generating set of a ring: A subset S of a ring A generates A if the only subring of A containing S is A
Generating set of an ideal in a ring
Generating set of a module
A generator, in category theory, is an object that can be used to distinguish morphisms
In topology, a collection of sets that generate the topology is called a subbase
Generating set of a topological algebra: S is a generating set of a topological algebra A if the smallest closed subalgebra of A containing S is A
Differential equations
In the study of differential equations, and commonly those occurring in physics, one has the idea of a set of infinitesimal displacements that can be extended to obtain a manifold, or at least, a local part of it, by means of integration. The general concept is of using the exponential map to take the vectors in the tangent space and extend them, as geodesics, to an open set surrounding the tangent point. In this case, it is not unusual to cal
|
https://en.wikipedia.org/wiki/Integral%20symbol
|
The integral symbol:

∫

is used to denote integrals and antiderivatives in mathematics, especially in calculus.
History
The notation was introduced by the German mathematician Gottfried Wilhelm Leibniz in 1675 in his private writings; it first appeared publicly in the article "De geometria recondita et analysi indivisibilium atque infinitorum" (On a hidden geometry and analysis of indivisibles and infinites), published in Acta Eruditorum in June 1686. The symbol was based on the ſ (long s) character and was chosen because Leibniz thought of the integral as an infinite sum of infinitesimal summands.
Typography in Unicode and LaTeX
Fundamental symbol
The integral symbol is U+222B ∫ INTEGRAL in Unicode and \int in LaTeX. In HTML, it is written as &#x222B; (hexadecimal), &#8747; (decimal) and &int; (named entity).
The original IBM PC code page 437 character set included a couple of characters ⌠ and ⌡ (codes 244 and 245 respectively) to build the integral symbol. These were deprecated in subsequent MS-DOS code pages, but they still remain in Unicode (U+2320 and U+2321 respectively) for compatibility.
The ∫ symbol is very similar to, but not to be confused with, the letter ʃ ("esh").
Extensions of the symbol
Related symbols include:
Typography in other languages
In other languages, the shape of the integral symbol differs slightly from the shape commonly seen in English-language textbooks. While the English integral symbol leans to the right, the German symbol (used throughout Central Europe) is upright, and the Russian variant leans slightly to the left to occupy less horizontal space.
Another difference is in the placement of limits for definite integrals. Generally, in English-language books, limits go to the right of the integral symbol:
By contrast, in German and Russian texts, the limits are placed above and below the integral symbol, and, as a result, the notation requires larger line spacing, but is more compact horizontally, especially when longer expressions are used in the limits:
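In LaTeX, the two conventions differ only by the \limits modifier; a minimal illustration:

% limits to the right of the symbol (English-language convention)
\int_a^b f(x)\,\mathrm{d}x

% limits above and below the symbol (German/Russian convention)
\int\limits_a^b f(x)\,\mathrm{d}x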
See also
Capital sigma notation
Capital
|
https://en.wikipedia.org/wiki/Wizardry%20III%3A%20Legacy%20of%20Llylgamyn
|
Wizardry III: Legacy of Llylgamyn (originally known as Wizardry: Legacy of Llylgamyn - The Third Scenario) is the third scenario in the Wizardry series of role-playing video games. It was published in 1983 by Sir-Tech.
Plot
The City of Llylgamyn is threatened by the violent forces of nature. Earthquakes and volcanic rumblings endanger everyone. Only by seeking the dragon L'Kbreth can the city be saved.
Gameplay
Legacy of Llylgamyn is another six-level dungeon crawl, although the dungeon is a volcano so the party journeys upwards rather than downwards. The gameplay and the spells are identical to the first two scenarios. Parties of up to six characters can adventure at one time.
Characters have to be imported from either Wizardry: Proving Grounds of the Mad Overlord or Wizardry II: The Knight of Diamonds. However, since the game is set years later, the characters are actually the descendants of the original characters. They keep the same name and class, can select a new alignment (class permitting), and are reset to level one.
Development
Wizardry III is the first adventure game with a window manager, released before the first games on the Macintosh. The game was delayed by a year because of this technology. Llylgamyn was originally a typo; it was supposed to be spelled with only one L.
Reception
Softline in 1983 praised Llylgamyn, stating that it "wasn't written; it was composed ... The dungeon feels like a living, breathing entity", and concluding that the game "is the best Wizardry yet".
Robert Reams reviewed the game for Computer Gaming World, and stated that "The Legacy of Llylgamyn is an example of the maturing and improvement of an already excellent product. This new adventure will challenge all who accept this quest and will leave you looking for the two sequels which follow in its path."
Philip L. Wing reviewed Legacy of Llylgamyn in The Space Gamer No. 72. Wing commented that "Wizardry III: Legacy of Llylgamyn is the best scenario of the series yet.
|
https://en.wikipedia.org/wiki/Knowledge-based%20processor
|
Knowledge-based processors (KBPs) are used for processing packets in computer networks. They are designed to increase the performance of IPv6 networks; by contributing to the buildout of the IPv6 network, KBPs provide a means to faster and more secure networking.
Standards
All networks are required to perform the following functions:
IPv4/IPv6 multilayer packet/flow classification
Policy-based routing and Policy enforcement (QoS)
Longest Prefix Match (CIDR)
Differentiated Services (DiffServ)
IP Security (IPSec)
Server Load Balancing
Transaction verification
All of the above functions must occur at high speed in advanced networks. Knowledge-based processors contain embedded databases that store the information required to process packets travelling through a network at wire speed. Knowledge-based processors are a new addition to intelligent networking that allows these functions to occur at high speed while also lowering power consumption.
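One of the listed functions, longest prefix match, illustrates why such lookups are hard to sustain at wire speed in software. Below is a minimal Python sketch of LPM over CIDR prefixes (the route table and names are made-up examples; a KBP would typically resolve this in specialized hardware such as a TCAM rather than by linear scan):

import ipaddress

def longest_prefix_match(routes, destination):
    """routes: dict mapping CIDR prefix strings to next hops.
    Returns the next hop of the most specific prefix containing destination."""
    dest = ipaddress.ip_address(destination)
    best, best_len = None, -1
    for prefix, next_hop in routes.items():
        net = ipaddress.ip_network(prefix)
        if dest in net and net.prefixlen > best_len:
            best, best_len = next_hop, net.prefixlen
    return best

routes = {
    "2001:db8::/32": "core-router",
    "2001:db8:ab::/48": "edge-router",
}
print(longest_prefix_match(routes, "2001:db8:ab::1"))  # edge-router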
Knowledge-based processors currently target the third layer of the seven-layer OSI model, which is devoted to packet processing.
Advantages
Knowledge-based processors offer the ability to execute multiple simultaneous decision-making processes for a range of network-aware processing functions. These include routing, quality of service (QoS), access control for both security and billing, as well as the forwarding of voice/video packets. These functions improve the performance of advanced Internet applications in IPv6 networks such as VOD (video on demand), VoIP (voice over Internet Protocol), and streaming of video and audio.
Knowledge-based processors use a variety of techniques to improve network functioning, such as parallel processing, deep pipelining and advanced power management. Improvements in each of these areas allow existing components to carry out their functions at wire speed more efficiently, thus improving
|
https://en.wikipedia.org/wiki/Johnny%20Castaway
|
Johnny Castaway is a screensaver released in 1992 by Sierra On-Line/Dynamix, and marketed under the Screen Antics brand as "the world's first story-telling screen saver".
The screensaver depicts a man, Johnny Castaway, stranded on a very small island with a single palm tree. It follows a story which is slowly revealed through time. While Johnny fishes, builds sand castles, and jogs on a regular basis, other events are seen less frequently, such as a mermaid or Lilliputian pirates coming to the island, or a seagull swooping down to steal his shorts while he is bathing. Much like the castaways of Gilligan's Island, Johnny repeatedly comes close to being rescued, but ultimately remains on the island as a result of various unfortunate accidents.
"Johnny Castaway" includes Easter eggs for a number of United States holidays such as Halloween, Christmas and Independence Day. During these holidays, the scenes are played out as usual except for some detail representing that holiday or event. During the last week of the year, for example, the palm tree will sport a "Happy New Year" banner, and on Halloween a jack-o'-lantern can be seen in the sand. The screensaver can be manipulated into showing these features by adjusting the computer clock to correspond with the date of the event.
The Johnny Castaway screensaver was distributed on a 3½-inch floppy disk and required a computer with a 386SX processor and Windows 3.1 as its operating system. Today, it is widely available on the internet, but as it relies on outdated 16-bit software components, it will only work on older versions of the Microsoft Windows operating system, although workarounds exist for getting the screensaver to run on Windows 64-bit, Mac OS X and Linux.
Character design was done by Shawn Bird while he was at Dynamix. The program had been developed at Jeff Tunnell Productions, the eponymous company of the original founder of Dynamix. According to Ken Williams, the screensaver was one of several products by
|
https://en.wikipedia.org/wiki/Selectable%20marker
|
A selectable marker is a gene introduced into a cell, especially a bacterium or cells in culture, that confers a trait suitable for artificial selection. They are a type of reporter gene used in laboratory microbiology, molecular biology, and genetic engineering to indicate the success of a transfection or other procedure meant to introduce foreign DNA into a cell. Selectable markers are often antibiotic resistance genes: an antibiotic resistance marker is a gene that produces a protein giving cells that express it resistance to the antibiotic. Bacteria that have been subjected to a procedure to introduce foreign DNA are grown on a medium containing an antibiotic, and those bacterial colonies that can grow have successfully taken up and expressed the introduced genetic material.
Normally the genes encoding resistance to antibiotics such as ampicillin, chloramphenicol, tetracycline or kanamycin, etc., are considered useful selectable markers for E. coli.
Modus operandi
The non-recombinants are separated from the recombinants: when recombinant DNA is introduced into bacteria, some bacteria are successfully transformed while others remain non-transformed. When grown on a medium containing ampicillin, the non-transformed bacteria die because they lack ampicillin resistance. The position of the surviving colonies is noted on nitrocellulose paper, and they are separated out and moved to a nutrient medium for mass production of the required product.
An alternative to a selectable marker is a screenable marker, also known as a reporter gene, which allows the researcher to distinguish between wanted and unwanted cells, e.g. between blue and white colonies. The unwanted cells are simply non-transformed cells that were unable to take up the gene during the experiment.
Positive and Negative
For molecular biology research, different types of markers may be used based on the selection sought. These include:
Positive or selection markers are selectable markers that confer selective advantage to the host organ
|
https://en.wikipedia.org/wiki/Multiplexed%20Analogue%20Components
|
Multiplexed Analogue Components (MAC) was an analog television standard where luminance and chrominance components were transmitted separately. This was an evolution from older color TV systems (such as PAL or SECAM) where there was interference between chrominance and luminance.
MAC was originally proposed in the 1980s for use on a Europe-wide terrestrial HDTV system. Terrestrial transmission tests were conducted in France, although the system was never used for that purpose. Various variants were developed, collectively known as the "MAC/packet" family.
In 1985 MAC was recommended for satellite and cable broadcasts by the European Broadcasting Union (EBU), with specific variants for each medium. C-MAC/packet was intended for Direct Broadcast Satellite (DBS), D-MAC/packet for wide-band cable, and D2-MAC/packet for both for DBS and narrow-band cable.
History
MAC was originally developed by the Independent Broadcasting Authority in the early 1980s as a system for delivering high-quality pictures via direct broadcast satellites that would be independent of European countries' choice of terrestrial colour-coding standard.
In 1982, MAC was adopted as the transmission format for the UK's forthcoming DBS television services, eventually provided by British Satellite Broadcasting. The following year, MAC was adopted by the EBU as the standard for all DBS broadcasts.
By 1986, despite there being two variants (D-MAC and D2-MAC) favoured by different countries, an EU Directive imposed MAC on the national DBS broadcasters. The justification was to provide a stepping stone from the analogue formats (PAL and SECAM) to future HD and digital television, placing European TV manufacturers in a privileged position to provide the required equipment.
However, the Astra satellite system was also starting up at this time (the first satellite, Astra 1A, was launched in 1989), operating outside of the EU's MAC requirements, due to being a non-DBS satellite.
Despite further pressure
|
https://en.wikipedia.org/wiki/Hyperbolic%20group
|
In group theory, more precisely in geometric group theory, a hyperbolic group, also known as a word hyperbolic group or Gromov hyperbolic group, is a finitely generated group equipped with a word metric satisfying certain properties abstracted from classical hyperbolic geometry. The notion of a hyperbolic group was introduced and developed by Mikhail Gromov. The inspiration came from various existing mathematical theories: hyperbolic geometry but also low-dimensional topology (in particular the results of Max Dehn concerning the fundamental group of a hyperbolic Riemann surface, and more complex phenomena in three-dimensional topology), and combinatorial group theory. In a very influential (over 1000 citations) chapter from 1987, Gromov proposed a wide-ranging research program. Ideas and foundational material in the theory of hyperbolic groups also stem from the work of George Mostow, William Thurston, James W. Cannon, Eliyahu Rips, and many others.
Definition
Let G be a finitely generated group, and X be its Cayley graph with respect to some finite set S of generators. The set X is endowed with its graph metric (in which edges are of length one and the distance between two vertices is the minimal number of edges in a path connecting them) which turns it into a length space. The group G is then said to be hyperbolic if X is a hyperbolic space in the sense of Gromov. Shortly, this means that there exists a δ > 0 such that any geodesic triangle in X is δ-thin (the space is then said to be δ-hyperbolic).
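Concretely, a geodesic triangle with sides α, β, γ is called δ-thin when each side lies in the δ-neighbourhood of the union of the other two. In symbols (a standard formulation, stated here for concreteness):

\forall p \in \alpha : \; d\bigl(p, \beta \cup \gamma\bigr) \le \delta,

and similarly for points of β and γ. A δ-hyperbolic space is one in which a single δ works for all geodesic triangles at once.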
A priori this definition depends on the choice of the finite generating set S. That this is not the case follows from the two following facts:
the Cayley graphs corresponding to two finite generating sets are always quasi-isometric one to the other;
any geodesic space which is quasi-isometric to a geodesic Gromov-hyperbolic space is itself Gromov-hyperbolic.
Thus we can legitimately speak of a finitely generated group being hyperbolic without refer
|
https://en.wikipedia.org/wiki/Plasterwork
|
Plasterwork is construction or ornamentation done with plaster, such as a layer of plaster on an interior or exterior wall structure, or plaster decorative moldings on ceilings or walls. This is also sometimes called pargeting. The process of creating plasterwork, called plastering or rendering, has been used in building construction for centuries. For the art history of three-dimensional plaster, see stucco.
History
The earliest plasters known to us were lime-based. Around 7500 BC, the people of 'Ain Ghazal in Jordan used lime mixed with unheated crushed limestone to make plaster which was used on a large scale for covering walls, floors, and hearths in their houses. Often, walls and floors were decorated with red, finger-painted patterns and designs. In ancient India and China, renders in clay and gypsum plasters were used to produce a smooth surface over rough stone or mud brick walls, while in early Egyptian tombs, walls were coated with lime and gypsum plaster and the finished surface was often painted or decorated.
Modelled stucco was employed throughout the Roman Empire. The Romans used mixtures of lime and sand to build up preparatory layers over which finer applications of gypsum, lime, sand and marble dust were made; pozzolanic materials were sometimes added to produce a more rapid set. Following the fall of the Roman Empire, the addition of marble dust to plaster to allow the production of fine detail and a hard, smooth finish in hand-modelled and moulded decoration was not used until the Renaissance. Around the 4th century BC, the Romans discovered the principles of the hydraulic set of lime, which by the addition of highly reactive forms of silica and alumina, such as volcanic earths, could solidify rapidly even under water. There was little use of hydraulic mortar after the Roman period until the 18th century.
Plaster decoration was widely used in Europe in the Middle Ages where, from the mid-13th century, gypsum plaster was used for internal and
|
https://en.wikipedia.org/wiki/SUBST
|
In computing, SUBST is a command on the DOS, IBM OS/2, Microsoft Windows and ReactOS operating systems used for substituting paths on physical and logical drives as virtual drives.
Overview
In MS-DOS, the SUBST command was added with the release of MS-DOS 3.1. The command is similar to floating drives, a more general concept in operating systems of Digital Research origin, including CP/M-86 2.x, Personal CP/M-86 2.x, Concurrent DOS, Multiuser DOS, System Manager 7, REAL/32, as well as DOS Plus and DR DOS (up to 6.0). DR DOS 6.0 includes an implementation of the command. The command is also available in FreeDOS and PTS-DOS. The Windows SUBST command is available in supported versions of the command line interpreter cmd.exe. In Windows NT, SUBST uses DefineDosDevice() to create the disk mappings.
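Since SUBST on Windows NT is implemented via DefineDosDevice(), the same mapping can be created programmatically. The following is a minimal, Windows-only Python sketch using ctypes (the drive letter and path are illustrative; run with appropriate privileges):

import ctypes

DDD_REMOVE_DEFINITION = 0x00000002
kernel32 = ctypes.windll.kernel32

def subst(drive: str, path: str) -> bool:
    """Equivalent of `SUBST drive: path`."""
    return bool(kernel32.DefineDosDeviceW(0, drive, path))

def unsubst(drive: str) -> bool:
    """Equivalent of `SUBST drive: /D`."""
    return bool(kernel32.DefineDosDeviceW(DDD_REMOVE_DEFINITION, drive, None))

# Example: map X: to C:\ and remove the mapping again.
if subst("X:", "C:\\"):
    unsubst("X:")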
The JOIN command is the "opposite" of SUBST, because JOIN will take a drive letter and make it appear as a directory.
Some versions of MS-DOS COMMAND.COM support the undocumented internal TRUENAME command which can display the "true name" of a file, i.e. the fully qualified name with drive, path, and extension, which is found possibly by name only via the PATH environment variable, or through SUBST, JOIN and ASSIGN filesystem mappings.
Syntax
This is the command syntax in Windows XP to associate a path with a drive letter:
SUBST [drive1: [drive2:]path]
SUBST drive1: /D
Parameters
drive1: – Specify a virtual drive to which to assign a path.
[drive2:]path – Specify a physical drive and path to assign to a virtual drive.
/D – Delete a substituted (virtual) drive.
Examples
Mapping a drive
This means that, for example, to map C:'s root to X:, the following command would be used at the command-line interface:
C:\>SUBST X: C:\
Upon doing this, a new drive called X: would appear under the My Computer virtual folder in Windows Explorer.
Unmapping a drive
To unmap drive X: again, the following command needs to be typed at the command prompt:
C:\>SUBST X: /D
Custom labe
|
https://en.wikipedia.org/wiki/Packed%20bed
|
In chemical processing, a packed bed is a hollow tube, pipe, or other vessel that is filled with a packing material. The packed bed can be randomly filled with small objects like Raschig rings or else it can be a specifically designed structured packing. Packed beds may also contain catalyst particles or adsorbents such as zeolite pellets, granular activated carbon, etc.
The purpose of a packed bed is typically to improve contact between two phases in a chemical or similar process. Packed beds can be used in a chemical reactor, a distillation process, or a scrubber, but packed beds have also been used to store heat in chemical plants. In this case, hot gases are allowed to escape through a vessel that is packed with a refractory material until the packing is hot. Air or other cool gas is then fed back to the plant through the hot bed, thereby pre-heating the air or gas feed.
Applications
Packed Columns
In industry, a packed column is a type of packed bed used to perform separation processes, such as absorption, stripping, and distillation.
Columns used in certain types of chromatography consisting of a tube filled with packing material can also be called packed columns and their structure has similarities to packed beds.
Bed structure: random and structured packed beds
The column bed can be filled with randomly dumped packing material (creating a random packed bed) or with structured packing sections, which are arranged in a way that force fluids to take complicated paths through the bed (creating a structured packed bed). In the column, liquids tend to wet the surface of the packing material and the vapors pass across this wetted surface, where mass transfer takes place. Packing materials can be used instead of trays to improve separation in distillation columns. Packing offers the advantage of a lower pressure drop across the column (when compared to plates or trays), which is beneficial while operating under vacuum. Differently shaped packing material
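The pressure drop across a randomly packed bed is commonly estimated with the Ergun equation, which combines a viscous and an inertial term. A small Python sketch follows, added for illustration; the fluid properties and bed geometry in the example are made-up values, not from the source:

def ergun_pressure_drop(L, d_p, eps, rho, mu, v):
    """Pressure drop (Pa) over a packed bed of length L (m) via the
    Ergun equation:
        dP/L = 150*mu*(1-eps)**2*v / (eps**3 * d_p**2)
             + 1.75*(1-eps)*rho*v**2 / (eps**3 * d_p)
    d_p: particle diameter (m); eps: void fraction; rho: fluid density
    (kg/m^3); mu: dynamic viscosity (Pa s); v: superficial velocity (m/s)."""
    viscous = 150.0 * mu * (1 - eps) ** 2 * v / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1 - eps) * rho * v ** 2 / (eps ** 3 * d_p)
    return (viscous + inertial) * L

# Example: 1 m bed of 6 mm spheres, 40% voids, air at ambient conditions.
print(ergun_pressure_drop(L=1.0, d_p=0.006, eps=0.4, rho=1.2, mu=1.8e-5, v=0.5))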
|
https://en.wikipedia.org/wiki/LANtastic
|
LANtastic is a peer-to-peer local area network (LAN) operating system for DOS and Microsoft Windows (and formerly OS/2). The New York Times described the network, which permits machines to function both as servers and as workstations, as allowing computers, "to share printers and other devices."
InformationWeek pointed out that "these peer-to-peer networking solutions, such as Webcorp's Web and Artisoft's LANtastic, definitely aren't powerful, but they can act as 'starter' local area networks" yet added that even Fortune-sized companies find them useful.
LANtastic supports Ethernet, ARCNET and Token Ring adapters as well as its original proprietary twisted-pair adapter.
Overview
LANtastic networks use NetBIOS.
Its multi-platform support allows a LANtastic client station to access any combination of Windows or DOS operating systems, and its interconnectivity allows sharing of files, printers, CD-ROMs and applications throughout an enterprise. LANtastic was especially popular before Windows 95 arrived with built-in networking and was nearly as popular as the market leader Novell at the time.
History
LANtastic was originally developed by Artisoft, Inc. in Tucson, Arizona, the first company to offer peer-to-peer networking.
Several foreign-language versions were released in 1992.
By mid 1994, Microsoft's Windows for Workgroups was "eating into" LANtastic's lead (as was Novell).
Artisoft bought TeleVantage, and renamed the latter Artisoft TeleVantage. Artisoft subsequently bought Vertical Communications (September, 2004), and renamed itself (January, 2005) to be Vertical Communications.
Following the release of TeleVantage, Lantastic and Artisoft's other legacy products were acquired by SpartaCom Technologies in 2000. SpartaCom was later acquired by PC Micro.
LANtastic 8.01 was released in 2006.
|
https://en.wikipedia.org/wiki/Lawrence%20Paulson
|
Lawrence Charles Paulson (born 1955) is an American computer scientist. He is a Professor of Computational Logic at the University of Cambridge Computer Laboratory and a Fellow of Clare College, Cambridge.
Education
Paulson graduated from the California Institute of Technology in 1977, and obtained his PhD in Computer Science from Stanford University in 1981 for research on programming languages and compiler-compilers supervised by John L. Hennessy.
Research
Paulson came to the University of Cambridge in 1983 and became a Fellow of Clare College, Cambridge in 1987. He is best known for the cornerstone text on the programming language ML, ML for the Working Programmer. His research is based around the interactive theorem prover Isabelle, which he introduced in 1986. He has worked on the verification of cryptographic protocols using inductive definitions, and he has also formalised the constructible universe of Kurt Gödel. Recently he has built a new theorem prover, MetiTarski, for real-valued special functions.
Paulson teaches an undergraduate lecture course in the Computer Science Tripos, entitled Logic and Proof which covers automated theorem proving and related methods. (He used to teach Foundations of Computer Science which introduces functional programming, but this course was taken over by Alan Mycroft and Amanda Prorok in 2017, and then Anil Madhavapeddy and Amanda Prorok in 2019.)
Awards and honours
Paulson was elected a Fellow of the Royal Society (FRS) in 2017, a Fellow of the Association for Computing Machinery in 2008 and a Distinguished Affiliated Professor for Logic in Informatics at the Technical University of Munich.
Personal life
Paulson has two children by his first wife, Dr Susan Mary Paulson, who died in 2010. Since 2012, he has been married to Dr Elena Tchougounova.
References
1955 births
Living people
American computer scientists
Members of the University of Cambridge Computer Laboratory
California Institute of Technology alumni
Stanford
|
https://en.wikipedia.org/wiki/Sony%20BMG%20copy%20protection%20rootkit%20scandal
|
The Sony BMG CD copy protection scandal concerns the copy protection measures included by Sony BMG on compact discs in 2005. When inserted into a computer, the CDs installed one of two pieces of software that provided a form of digital rights management (DRM) by modifying the operating system to interfere with CD copying. Neither program could easily be uninstalled, and they created vulnerabilities that were exploited by unrelated malware. One of the programs would install and "phone home" with reports on the user's private listening habits, even if the user refused its end-user license agreement (EULA), while the other was not mentioned in the EULA at all. Both programs contained code from several pieces of copylefted free software in an apparent infringement of copyright, and configured the operating system to hide the software's existence, leading to both programs being classified as rootkits.
Sony BMG initially denied that the rootkits were harmful. It then released an uninstaller for one of the programs that merely made the program's files visible while also installing additional software that could not be easily removed, collected an email address from the user and introduced further security vulnerabilities.
Following public outcry, government investigations and class-action lawsuits in 2005 and 2006, Sony BMG partially addressed the scandal with consumer settlements, a recall of about 10% of the affected CDs and the suspension of CD copy-protection efforts in early 2007.
Background
In August 2000, statements by Sony Pictures Entertainment U.S. senior vice president Steve Heckler foreshadowed the events of late 2005. Heckler told attendees at the Americas Conference on Information Systems: "The industry will take whatever steps it needs to protect itself and protect its revenue streams ... It will not lose that revenue stream, no matter what ... Sony is going to take aggressive steps to stop this. We will develop technology that transcends the individual u
|
https://en.wikipedia.org/wiki/Linear%20integrated%20circuit
|
A linear integrated circuit or analog chip is a set of miniature electronic analog circuits formed on a single piece of semiconductor material.
Description
The voltage and current at specified points in the circuits of analog chips vary continuously over time. In contrast, digital chips only assign meaning to voltages or currents at discrete levels. In addition to transistors, analog chips often include a larger number of passive elements (capacitors, resistors, and inductors) than digital chips. Inductors tend to be avoided because of their large physical size and the difficulty of incorporating them into monolithic semiconductor ICs. Certain circuits such as gyrators can often act as equivalents of inductors while being constructed only from transistors and capacitors.
Analog chips may also contain digital logic elements to replace some analog functions, or to allow the chip to communicate with a microprocessor. For this reason, and since logic is commonly implemented using CMOS technology, these chips typically use BiCMOS processes, as implemented by companies such as Freescale, Texas Instruments, STMicroelectronics, and others. This is known as "mixed signal processing", and allows a designer to incorporate more functions into a single chip. Some of the benefits of this mixed technology include load protection, reduced parts count, and higher reliability.
Purely analog chips in information processing have been mostly replaced with digital chips. Analog chips are still required for wideband signals, high-power applications, and transducer interfaces. Research and industry in this specialty continues to grow and prosper. Some examples of long-lived and well-known analog chips are the 741 operational amplifier, and the 555 timer IC.
Power supply chips are also considered to be analog chips. Their main purpose is to produce a well-regulated output voltage supply for other chips in the system. Since all electronic systems require electrical power, power supply ICs (power
|
https://en.wikipedia.org/wiki/Message%20Handling%20System
|
Message Handling System (MHS) is an important early email protocol developed by Action Technologies, Inc. (ATI) in 1986. Novell licensed it in 1988 and later bought it.
Email clients
A wide variety of email clients used MHS, including:
Para-Mail - Paradox Development introduced version 2.0 along with Novell at Comdex 1986.
DaVinci Email - The first Microsoft Windows-based email client used MHS natively.
Pegasus Mail - A free mail client, this used MHS as its native protocol.
ExpressIT! and ExpressIT! 2000 - Infinite Technologies' MHS compliant email clients.
FirstMail - A cut-down version of Pegasus Mail, bundled with some versions of NetWare.
Futurus TEAM - Early groupware package offering an MHS compliant email client.
MacAccess - An Apple Macintosh MHS-based email client.
Role as a gateway
MHS was a very 'open' system, and this, with Novell's encouragement, made it popular in the early 1990s as a 'glue' between not only the proprietary email systems of the day such as PROFS, SNADS, MCI, 3+Mail, cc:Mail, Para-Mail and Microsoft Mail, but also the competing standards-based SMTP and X.400. However, by 1996 it was very clear that SMTP over the Internet would take over this role.
Work-alike products
A compatible family of products from Infinite Technologies (now Captaris) and marketed under the name Connect2 were also very widely used as part of MHS-based email networks.
Decline
Novell became increasingly less supportive after their 1994 purchase of WordPerfect as they worked to transform WordPerfect Office into GroupWise.
At about the same time, confidence in the future of X.400 collapsed and SMTP email across the public internet became the compelling choice for mail between unrelated organisations, replacing MHS's former "glue" role.
References
Para-Mail from Paradox Development Corporation was the first email package to be brought into Novell and MHS. Paradox Development Corporation introduced Para-Mail version 2.0 with Novell at Comdex 1986.
External
|
https://en.wikipedia.org/wiki/Generalist%20and%20specialist%20species
|
A generalist species is able to thrive in a wide variety of environmental conditions and can make use of a variety of different resources (for example, a heterotroph with a varied diet). A specialist species can thrive only in a narrow range of environmental conditions or has a limited diet. Most organisms do not fit neatly into either group, however. Some species are highly specialized (the most extreme case being monophagous, eating one specific type of food), others less so, and some can tolerate many different environments. In other words, there is a continuum from highly specialized to broadly generalist species.
Description
Omnivores are usually generalists. Herbivores are often specialists, but those that eat a variety of plants may be considered generalists. A well-known example of a specialist animal is the monophagous koala, which subsists almost entirely on eucalyptus leaves. The raccoon is a generalist, because it has a natural range that includes most of North and Central America, and it is omnivorous, eating berries, insects such as butterflies, eggs, and various small animals.
The distinction between generalists and specialists is not limited to animals. For example, some plants require a narrow range of temperatures, soil conditions and precipitation to survive while others can tolerate a broader range of conditions. A cactus could be considered a specialist species. It will die during winters at high latitudes or if it receives too much water.
When body weight is controlled for, specialist feeders such as insectivores and frugivores have larger home ranges than generalists like some folivores (leaf-eaters), whose food-source is less abundant; they need a bigger area for foraging. An example comes from the research of Tim Clutton-Brock, who found that the black-and-white colobus, a folivore generalist, needs a home range of only 15 ha. On the other hand, the more specialized red colobus monkey has a home range of 70 ha, which it requires to
|
https://en.wikipedia.org/wiki/Phosphorene
|
Phosphorene is a two-dimensional material consisting of phosphorus. It consists of a single layer of black phosphorus, the most stable allotrope of phosphorus. Phosphorene is analogous to graphene (single layer graphite). Among two-dimensional materials, phosphorene is a competitor to graphene because it has a nonzero fundamental band gap that can be modulated by strain and the number of layers in a stack. Phosphorene was first isolated in 2014 by mechanical exfoliation. Liquid exfoliation is a promising method for scalable phosphorene production.
History
In 1914 black phosphorus, a layered, semiconducting allotrope of phosphorus, was synthesized. This allotrope exhibits high carrier mobility. In 2014, several groups isolated single-layer phosphorene, a monolayer of black phosphorus. It attracted renewed attention because of its potential in optoelectronics and electronics due to its band gap, which can be tuned via modifying its thickness, anisotropic photoelectronic properties and carrier mobility. Phosphorene was initially prepared using mechanical cleavage, a commonly used technique in graphene production.
In 2023, alloys of arsenic-phosphorene displayed higher hole mobility than pure phosphorene and were also magnetic.
Synthesis
Synthesis of phosphorene is a significant challenge. Currently, there are two main ways of phosphorene production: scotch-tape-based microcleavage and liquid exfoliation, while several other methods are being developed as well. Phosphorene production from plasma etching has also been reported.
In scotch-tape-based microcleavage, phosphorene is mechanically exfoliated from a bulk of black phosphorus crystal using scotch-tape. Phosphorene is then transferred on a Si/SiO2 substrate, where it is then cleaned with acetone, isopropyl alcohol and methanol to remove any scotch tape residue. The sample is then heated to 180 °C to remove solvent residue.
In the liquid exfoliation method, first reported by Brent et al. in 2014 and modified
|
https://en.wikipedia.org/wiki/Slingbox
|
The Slingbox was a TV streaming media device made by Sling Media that encoded local video for transmission over the Internet to a remote device (sometimes called placeshifting). It allowed users to remotely view and control their cable, satellite, or digital video recorder (DVR) system at home from a remote Internet-connected personal computer, smartphone, or tablet as if they were at home.
On November 9, 2020, Sling Media announced that all Slingboxes had been discontinued, and that the Slingbox servers would close on November 9, 2022, making all devices "inoperable".
History
The Slingbox was first developed in 2002 by two Californian brothers, Blake and Jason Krikorian, who were avid sports fans. They supported the San Francisco Giants, a Major League Baseball team whose games were broadcast regularly by their local TV station. However, when travelling away from their home state, they found they were unable to watch their favorite team because their games were not carried by television stations in other parts of the United States and could not be found for free online. The first edition of the Slingbox came to market in late 2005.
Future
Slingbox hardware is getting a second life thanks to the open-source Slinger project, written in Python.
Technology
Hardware
The traditional Slingbox embeds a video encoding chip to do real-time encoding of a video and audio stream into the SMPTE 421M / VC-1 format that can be transmitted over the Internet via the ASF streaming format. Later Slingboxes also support Apple's HTTP Live Streaming, which requires support for H.264.
The Slingboxes up until the Fourth Generation (or Next Generation Slingbox) used a Texas Instruments chipset. Current generation Slingboxes and OEM products are built around a ViXS chipset.
Control of the hosting video device, usually a set top box, is done through an IR blaster, which, on older Slingboxes, required the use of an IR blaster dongle. Current generation Slingboxes have built in IR blaste
|
https://en.wikipedia.org/wiki/Service%20Advertising%20Protocol
|
The Service Advertising Protocol (SAP) is included in the Internetwork Packet Exchange (IPX) protocol. SAP makes the process of adding and removing services on an IPX internetwork dynamic. SAP was maintained by Novell.
As servers are booted up, they may advertise their services using SAP; when they are brought down, they use SAP to indicate that their services will no longer be available. IPX network servers may use SAP to identify themselves by name and service type. All entities that use SAP must broadcast a name and Service Type that (together) are unique throughout the entire IPX internetwork. This policy is enforced by system administrators and application developers.
SAP Service Types
Further reading
Novell NetWare
Network protocols
|
https://en.wikipedia.org/wiki/IBM%20LU6.2
|
Logical Unit 6.2 is an IBM-originated communications protocol specification dating from 1974, and is part of IBM's Systems Network Architecture (SNA).
A device-independent SNA protocol, it is used for peer-to-peer communications between two systems, for example, between a computer and a device (e.g. terminal or printer), or between computers. LU6.2 is used by many of IBM's products, including the Common Programming Interface for Communications, CICS Intersystem Communications (CICS ISC), and Information Management System, and also many non-IBM products. In 1986, Bruce Compton, Manager of Office Systems and Technology with General Electric, said:
LU 6.2 means I don't have to write the software communications interfaces. If I have one office server in a DEC environment, and another in a Wang environment… I can use the LU 6.2 standard to pass files between those devices, and I don't have to worry about things like block checking and clock.
Some examples of non-IBM products which implemented the SNA stack including LU6.2 are: Microsoft Host Integration Server, and NetWare for SAA.
APPC (Advanced Program-to-Program Communication) is the protocol used with the LU6.2 architecture, and the term is often used to refer to the LU6.2 architecture itself or to specific LU6.2 features.
LU6.2-compliant devices operate as peers within the network, can perform multiple simultaneous transactions, and can detect and correct errors. The LU6.2 definition provides a common API for communicating with and controlling compliant devices. Although the concepts were the same on all platforms, the actual API implementation often varied across the IBM platforms that implemented it, and other vendors implemented LU6.2 in their own products with their own APIs. IBM later defined the Common Programming Interface for Communications (CPIC) API, which eventually became widely implemented and allowed for the authoring of multi-platform code.
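To illustrate the conversation flow that CPIC standardizes, the sketch below models the classic verb sequence with a hypothetical Python wrapper. The verb names in the comments (CMINIT, CMALLC, CMSEND, CMRCV, CMDEAL) are the real CPI-C calls, but the `CpicConversation` class, its methods, and the loopback transport are invented here for illustration.

```python
# Hypothetical wrapper illustrating the CPI-C verb sequence; the class and
# its in-memory transport are stand-ins, but the verb order mirrors the API.
class CpicConversation:
    def __init__(self, sym_dest_name: str):
        # CMINIT: resolve a symbolic destination name to a conversation
        self.dest = sym_dest_name
        self.queue: list[bytes] = []   # loopback stand-in for the network

    def allocate(self) -> None:
        # CMALLC: establish the conversation with the partner program
        print(f"allocated conversation to {self.dest}")

    def send_data(self, record: bytes) -> None:
        # CMSEND: queue one data record for the partner
        self.queue.append(record)

    def receive(self) -> bytes:
        # CMRCV: block until a data record (or status) arrives
        return self.queue.pop(0)

    def deallocate(self) -> None:
        # CMDEAL: end the conversation
        print("conversation ended")

conv = CpicConversation("PARTNERA")   # symbolic destination name: an assumption
conv.allocate()
conv.send_data(b"hello, partner")
print(conv.receive())
conv.deallocate()
```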
Adoption was slow but steady. As of November 1987, of 207 large US companies interviewe
|
https://en.wikipedia.org/wiki/6b/8b%20encoding
|
In telecommunications, 6b/8b is a line code that expands 6-bit codes to 8-bit symbols for the purposes of maintaining DC-balance in a communications system.
The 6b/8b encoding is a balanced code: each 8-bit output symbol contains 4 zero bits and 4 one bits. Like a parity bit, the code can therefore detect all single-bit errors.
The number of 8-bit patterns with 4 bits set is the binomial coefficient C(8, 4) = 70. Further excluding the patterns 11110000 and 00001111, this allows 68 coded patterns: 64 data codes, plus 4 additional control codes.
Coding rules
The 64 possible 6-bit input codes can be classified according to their disparity, the number of 1 bits minus the number of 0 bits:
The 6-bit input codes are mapped to 8-bit output symbols as follows:
The 20 6-bit codes with disparity 0 are prefixed with 10. Examples: 000111 → 10000111, 101010 → 10101010.
The 14 6-bit codes with disparity +2 other than 001111 are prefixed with 00. Example: 010111 → 00010111.
The 14 6-bit codes with disparity −2 other than 110000 are prefixed with 11. Example: 101000 → 11101000.
The remaining 20 codes (12 with disparity ±4, 2 with disparity ±6, 001111, 110000, and the 4 control codes) are assigned to the 20 8-bit codes beginning with 01 via a fixed mapping table.
No data symbol contains more than four consecutive matching bits, and because the patterns 11110000 and 00001111 are excluded, no data symbol begins or ends with more than three identical bits.
Thus, the longest run of identical bits that will be produced is 6 (that is, this is a (0,5) RLL code, with a worst-case running disparity of +3 to −3).
Any occurrence of 6 consecutive identical bits constitutes a comma sequence, sync mark, or syncword; it identifies the symbol boundaries precisely.
Those 6 bits straddle the inter-symbol boundary, with exactly 3 of the identical bits at the end of one symbol and 3 at the start of the following symbol.
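The three prefix rules above are mechanical enough to sketch directly. The Python fragment below implements them and checks the worked examples; the 20 codes assigned to 01-prefixed symbols come from a fixed table that this excerpt does not reproduce, so the sketch leaves them unimplemented.

```python
def encode_6b8b(bits6: str) -> str:
    """Map a 6-bit code to its balanced 8-bit symbol via the prefix rules.
    Codes that fall into the 01-prefixed group use a fixed assignment
    table not reproduced in this excerpt, so they raise here."""
    assert len(bits6) == 6 and set(bits6) <= {"0", "1"}
    disparity = bits6.count("1") - bits6.count("0")
    if disparity == 0:                            # 20 balanced codes
        return "10" + bits6
    if disparity == +2 and bits6 != "001111":     # 14 codes; 00 rebalances
        return "00" + bits6
    if disparity == -2 and bits6 != "110000":     # 14 codes; 11 rebalances
        return "11" + bits6
    raise NotImplementedError("assigned from the omitted 01-prefix table")

# The worked examples from the coding rules:
assert encode_6b8b("000111") == "10000111"
assert encode_6b8b("010111") == "00010111"
assert encode_6b8b("101000") == "11101000"
# Every symbol the three rules emit is balanced: four 1s and four 0s.
assert all(encode_6b8b(f"{i:06b}").count("1") == 4
           for i in range(64)
           if abs(f"{i:06b}".count("1") * 2 - 6) <= 2
           and f"{i:06b}" not in ("001111", "110000"))
```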
See also
8b/10b encoding, another fixed-table system with a higher code rate
|
https://en.wikipedia.org/wiki/Tartar%20Guided%20Missile%20Fire%20Control%20System
|
The Tartar Guided Missile Fire Control System is an air defense system developed by the United States Navy to defend warships from air attack. Since its introduction the system has been improved and sold to several United States allies.
Description
The Tartar Guided Missile Fire Control System is a component of the overall Tartar Weapons System. It consists of the target illuminators and associated computer systems needed to fire a missile once a target has been identified. It operates in conjunction with the weapon direction systems (WDS), the ship's long-range air search radars, and the guided missile launch system (GMLS) to engage air targets.
The Tartar FCS receives target designation information from the WDS. The system then acquires and tracks the target, positions the missile launcher, programs the missile with intercept data, and lets the WDS know that it is ready to fire. Once the missile is fired, the FCS provides CW illumination of the target and postfiring evaluation.
There are two major families of Tartar FCS: the Mk. 74 and the Mk. 92. The latter is used on the Oliver Hazard Perry class, and the former is used everywhere else. Each Mk. 74 includes the AN/SPG-51 radar, a Mk. 73 director, a computer system, and associated consoles. The Mk. 92 contains a combined antenna system (CAS), a separate track illumination radar (STIR), weapon control consoles, a computer complex, and ancillary equipment.
Deployment
It was installed on numerous US cruisers and destroyers from the 1960s through the early 1990s. It was also used by other navies, including on the French Navy fleet escorts Kersaint, Bouvet, Du Chayla and Dupetit-Thouars.
RIM-66B Standard
Starting in the mid-1960s, a new family of guided missiles, referred to as the Standard missiles, was developed to replace the poor-performing missiles used by existing fire control systems. The RIM-66A/B Standard replaced the earlier RIM-24C Tartar used by the system. The new missile m
|
https://en.wikipedia.org/wiki/Tethering
|
Tethering or phone-as-modem (PAM) is the sharing of a mobile device's Internet connection with other connected computers. Connection of a mobile device with other devices can be done over wireless LAN (Wi-Fi), over Bluetooth or by physical connection using a cable, for example through USB.
If tethering is done over WLAN, the feature may be branded as a personal hotspot or mobile hotspot: the Internet-connected mobile device acts as a portable wireless access point and router for the devices connected to it. Mobile hotspots may be protected by a PIN or password.
Mobile device's OS support
Many mobile devices are equipped with software to offer tethered Internet access. Windows Mobile 6.5, Windows Phone 7, Android (starting from version 2.2), and iOS 3.0 (or later) offer tethering over a Bluetooth PAN or a USB connection. Tethering over Wi-Fi, also known as Personal Hotspot, is available on iOS starting with iOS 4.2.5 (or later) on iPhone 4 or iPad (3rd gen), certain Windows Mobile 6.5 devices like the HTC HD2, Windows Phone 7, 8 and 8.1 devices (varies by manufacturer and model), and certain Android phones (varies widely depending on carrier, manufacturer, and software version).
For IPv4 networks, tethering normally works via NAT on the handset's existing data connection: from the network's point of view, there is just one device with a single IPv4 address, though it is technically possible to attempt to identify multiple tethered machines.
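On a Linux-based handset, the NAT arrangement described above boils down to enabling IP forwarding and masquerading traffic out of the cellular interface. The sketch below shows the idea with standard sysctl and iptables invocations; the interface names wwan0 (cellular uplink) and wlan0 (hotspot side) are assumptions, root privileges are required, and real mobile OSes wrap this in their own tooling.

```python
import subprocess

# Assumed interface names; real devices vary (e.g. rmnet0, ap0).
UPLINK = "wwan0"   # cellular interface holding the single IPv4 address
HOTSPOT = "wlan0"  # interface the tethered clients attach to

def enable_tethering_nat() -> None:
    """Minimal sketch: forward tethered clients' packets and rewrite
    their source addresses so the network sees one device (NAT)."""
    commands = [
        # Let the kernel route packets between interfaces.
        ["sysctl", "-w", "net.ipv4.ip_forward=1"],
        # Masquerade outbound traffic behind the uplink's address.
        ["iptables", "-t", "nat", "-A", "POSTROUTING",
         "-o", UPLINK, "-j", "MASQUERADE"],
        # Permit forwarding between hotspot and uplink.
        ["iptables", "-A", "FORWARD", "-i", HOTSPOT, "-o", UPLINK,
         "-j", "ACCEPT"],
        ["iptables", "-A", "FORWARD", "-i", UPLINK, "-o", HOTSPOT,
         "-m", "state", "--state", "ESTABLISHED,RELATED", "-j", "ACCEPT"],
    ]
    for cmd in commands:
        subprocess.run(cmd, check=True)  # requires root

if __name__ == "__main__":
    enable_tethering_nat()
```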
On some mobile network operators, this feature is contractually unavailable by default and may be activated only by paying to add a tethering package to a data plan or by choosing a data plan that includes tethering. This is done primarily because a computer sharing the network connection typically generates substantially more network traffic than the handset alone.
Some network-provided devices have carrier-specific software that may deny the inbuilt tethering ability normally available
|
https://en.wikipedia.org/wiki/Reflective%20surfaces%20%28climate%20engineering%29
|
Reflective surfaces, or ground-based albedo modification (GBAM), is a solar radiation management method of enhancing Earth's albedo (the ability to reflect the visible, infrared, and ultraviolet wavelengths of the Sun, reducing heat transfer to the surface). The IPCC described this method as "whitening roofs, changes in land use management (e.g., no-till farming), change of albedo at a larger scale (covering glaciers or deserts with reflective sheeting and changes in ocean albedo)."
The most well-known type of reflective surface is a type of roof called the "cool roof". While cool roofs are mostly associated with white roofs, they come in a variety of colors and materials and are available for both commercial and residential buildings.
Method
As a method of addressing global warming, the 2018 IPCC report judged the potential for global temperature reduction to be "small", while expressing high agreement that the method can produce regional temperature changes of 1-3°C. Even limited application of reflective surfaces can mitigate the urban heat island effect.
Reflective surfaces can be used to change the albedo of agricultural and urban areas; a 0.04-0.1 albedo change in these areas could potentially help reduce global temperatures in scenarios that overshoot 1.0°C of warming.
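As a rough illustration of why the global effect is small while regional effects can be large, the back-of-envelope estimate below converts an albedo change over a small surface fraction into a radiative forcing and temperature response. Every numeric constant (solar input, atmospheric transmittance, treated area fraction, climate sensitivity) is an illustrative assumption, not a figure from the sources cited here.

```python
# Back-of-envelope energy-balance estimate; all constants below are
# illustrative assumptions, not values from this article's sources.
S0 = 1361.0            # solar constant, W/m^2
insolation = S0 / 4.0  # global-mean top-of-atmosphere insolation, ~340 W/m^2
transmittance = 0.75   # assumed fraction of surface-reflected light reaching space
delta_albedo = 0.07    # midpoint of the 0.04-0.1 change discussed above
area_fraction = 0.01   # assumed ~1% of Earth's surface treated
sensitivity = 0.8      # assumed climate sensitivity, K per (W/m^2)

# More reflection means less absorbed shortwave, i.e. a negative forcing.
delta_forcing = -insolation * transmittance * delta_albedo * area_fraction
delta_temperature = sensitivity * delta_forcing

print(f"forcing ~ {delta_forcing:.2f} W/m^2, "
      f"global cooling ~ {abs(delta_temperature):.2f} K")
```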
The reflective surfaces approach is similar to passive daytime radiative cooling (PDRC) in that both are ground-based, but PDRC focuses on "increasing the radiative heat emission from the Earth rather than merely decreasing its solar absorption."
Types of reflective surfaces
Cool Roofs
Benefits
Cool roofs, in hot climates, can offer both immediate and long-term benefits including:
Savings of up to 15% of the annual air-conditioning energy use for a single-story building
Help in mitigating the urban heat island effect.
Reduced air pollution and greenhouse gas emissions, as well as a significant offsetting of the warming impact of greenhouse gas emissions.
Cool roofs achieve co
|
https://en.wikipedia.org/wiki/IDEF1X
|
Integration DEFinition for information modeling (IDEF1X) is a data modeling language for the development of semantic data models. IDEF1X is used to produce a graphical information model which represents the structure and semantics of information within an environment or system.
IDEF1X permits the construction of semantic data models which may serve to support the management of data as a resource, the integration of information systems, and the building of computer databases. This standard is part of the IDEF family of modeling languages in the field of software engineering.
Overview
A data modeling technique is used to model data in a standard, consistent and predictable manner in order to manage it as a resource. It can be used in projects requiring a standard means of defining and analyzing the data resources within an organization. Such projects include the incorporation of a data modeling technique into a methodology, managing data as a resource, integrating information systems, or designing computer databases. The primary objectives of the IDEF1X standard are to provide:
Means for completely understanding and analyzing an organization's data resources
Common means of representing and communicating the complexity of data
A technique for presenting an overall view of the data required to run an enterprise
Means for defining an application-independent view of data which can be validated by users and transformed into a physical database design (as sketched below)
A technique for deriving an integrated data definition from existing data resources.
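As a toy illustration of the application-independent objective above, the sketch below renders a two-entity IDEF1X-style model in Python. The Employee and Department entities, their keys, and the relationship between them are invented here for illustration and are not part of the IDEF1X standard itself.

```python
from dataclasses import dataclass

# Hypothetical two-entity model: Department is the parent entity,
# Employee the child, related via a migrated (foreign) key.
@dataclass(frozen=True)
class Department:
    dept_id: int   # primary key
    name: str

@dataclass(frozen=True)
class Employee:
    emp_id: int    # primary key
    dept_id: int   # foreign key migrated from Department
    name: str

# The same conceptual model could later be transformed into a physical
# design, e.g. tables with PRIMARY KEY / FOREIGN KEY constraints
# mirroring the keys marked above.
hr = Department(dept_id=1, name="Human Resources")
alice = Employee(emp_id=100, dept_id=hr.dept_id, name="Alice")
assert alice.dept_id == hr.dept_id  # referential integrity, checked by hand
```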
A principal objective of IDEF1X is to support integration. The approach to integration focuses on the capture, management, and use of a single semantic definition of the data resource referred to as a “conceptual schema.” The “conceptual schema” provides a single integrated definition of the data within an enterprise which is not biased toward any single application of data and is independent of how the data is physically stored o
|
https://en.wikipedia.org/wiki/Netvibes
|
Netvibes is a French company that offers web services.
History
The company was founded by Tariq Krim and Florent Frémont in 2005.
In August 2006, Netvibes closed a funding round of €12 million led by Accel Partners in London along with Index Ventures.
Since May 2008, Freddy Mini has been the Chief Executive Officer of Netvibes.
On February 9, 2012 Dassault Systèmes announced the acquisition of Netvibes for an undisclosed amount.
Activities
Brand monitoring – to track clients, customers and competitors across media sources in one place, analyze live results with third-party reporting tools, and provide media monitoring dashboards for brand clients.
E-reputation management – to visualize real-time online conversations and social activity feeds, and to track new trending topics.
Product marketing – to create interactive product microsites, with drag-and-drop publishing interface.
Community portals – to engage online communities
Personalized workspaces – to gather all essential company updates to support specific divisions (e.g. sales, marketing, human resources) and localizations.
The software is a multi-lingual Ajax-based start page or web portal. It is organized into tabs, with each tab containing user-defined modules. Built-in Netvibes modules include an RSS/Atom feed reader, local weather forecasts, a calendar supporting iCal, bookmarks, notes, to-do lists, multiple searches, support for POP3, IMAP4 email as well as several webmail providers including Gmail, Yahoo! Mail, Hotmail, and AOL Mail, Box.net web storage, Delicious, Meebo, Flickr photos, podcast support with a built-in audio player, and several others.
A page can be personalized further through the use of existing themes or by creating a personal theme. Customized tabs, feeds and modules can be shared with others individually or via the Netvibes Ecosystem. For privacy reasons, only modules with publicly available content can be shared.
References
External links
News aggregators
Web portals
In
|