https://en.wikipedia.org/wiki/List%20of%20computer%20algebra%20systems
|
The following tables provide a comparison of computer algebra systems (CAS). A CAS is a package comprising a set of algorithms for performing symbolic manipulations on algebraic objects, a language to implement them, and an environment in which to use the language. A CAS may include a user interface and graphics capability; and to be effective may require a large library of algorithms, efficient data structures and a fast kernel.
General
These computer algebra systems are sometimes combined with "front end" programs that provide a better user interface, such as the general-purpose GNU TeXmacs.
Functionality
Below is a summary of significantly developed symbolic functionality in each of the systems.
Those which do not "edit equations" may have a GUI, plotting, ASCII graphic formulae and math font printing. The ability to generate plaintext files is also a sought-after feature because it allows a work to be understood by people who do not have a computer algebra system installed.
Operating system support
The software listed can run natively under the respective operating systems without emulation. Some systems must be compiled first using an appropriate compiler for the source language and target platform. For some platforms, only older releases of the software may be available.
Graphing calculators
Some graphing calculators have CAS features.
See also
:Category:Computer algebra systems
Comparison of numerical-analysis software
Comparison of statistical packages
List of information graphics software
List of numerical-analysis software
List of numerical libraries
List of statistical software
Mathematical software
Web-based simulation
References
External links
Comparisons of mathematical software
Mathematics-related lists
|
https://en.wikipedia.org/wiki/Characteristic%20energy%20length%20scale
|
The characteristic energy length scale describes the size of the region from which energy flows to a rapidly moving crack. If material properties change within the characteristic energy length scale, local wave speeds can dominate crack dynamics. This can lead to supersonic fracture.
Materials science
|
https://en.wikipedia.org/wiki/Universally%20measurable%20set
|
In mathematics, a subset of a Polish space is universally measurable if it is measurable with respect to every complete probability measure on the space that measures all Borel subsets. In particular, a universally measurable set of reals is necessarily Lebesgue measurable (see below).
Every analytic set is universally measurable. It follows from projective determinacy, which in turn follows from sufficient large cardinals, that every projective set is universally measurable.
Finiteness condition
The condition that the measure be a probability measure (that is, that the measure of the whole space be 1) is less restrictive than it may appear. For example, Lebesgue measure on the reals is not a probability measure, yet every universally measurable set is Lebesgue measurable. To see this, divide the real line into countably many intervals of length 1; say, N0=[0,1), N1=[1,2), N2=[-1,0), N3=[2,3), N4=[-2,-1), and so on. Now letting μ be Lebesgue measure, define a new measure ν by

$$\nu(A) = \sum_{i=0}^{\infty} \frac{\mu(A \cap N_i)}{2^{i+1}}.$$

Then ν is easily seen to be a probability measure on the reals, and a set is ν-measurable if and only if it is Lebesgue measurable. More generally, a universally measurable set must be measurable with respect to every sigma-finite measure that measures all Borel sets.
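With weights 2^-(i+1) (one natural choice for the construction just described), ν is indeed a probability measure, since the intervals N_i partition the real line and each has Lebesgue measure 1:

$$\nu(\mathbb{R}) = \sum_{i=0}^{\infty} \frac{\mu(\mathbb{R} \cap N_i)}{2^{i+1}} = \sum_{i=0}^{\infty} \frac{1}{2^{i+1}} = 1.$$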
Example contrasting with Lebesgue measurability
Suppose A is a subset of Cantor space 2^ω; that is, A is a set of infinite sequences of zeroes and ones. By putting a binary point before such a sequence, the sequence can be viewed as a real number between 0 and 1 (inclusive), with some unimportant ambiguity. Thus we can think of A as a subset of the interval [0,1], and evaluate its Lebesgue measure, if that is defined. That value is sometimes called the coin-flipping measure of A, because it is the probability of producing a sequence of heads and tails that is an element of A upon flipping a fair coin infinitely many times.
Now it follows from the axiom of choice that there are some such A without a well-defined Lebesgue measure (or coin-flipping m
|
https://en.wikipedia.org/wiki/Distribution%20ensemble
|
In cryptography, a distribution ensemble or probability ensemble is a family of distributions or random variables X = {X_i}_{i∈I}, where I is a (countable) index set and each X_i is a random variable or probability distribution. Often I = ℕ, and it is required that each X_n have a certain property for n sufficiently large.
For example, a uniform ensemble U = {U_n}_{n∈ℕ} is a distribution ensemble where each U_n is uniformly distributed over strings of length n. In fact, many applications of probability ensembles implicitly assume that the probability spaces for the random variables all coincide in this way, so every probability ensemble is also a stochastic process.
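As an illustration, each member of the uniform ensemble can be sampled directly; the helper below is our own sketch (the function name is not from any library):

```python
import secrets

def sample_uniform(n: int) -> str:
    """Draw one sample of U_n: a bit string of length n chosen uniformly at random."""
    return format(secrets.randbits(n), f"0{n}b") if n > 0 else ""

print(len(sample_uniform(16)))  # 16
```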
See also
Provable security
Statistically close
Pseudorandom ensemble
Computational indistinguishability
References
Goldreich, Oded (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. Fragments available at the author's web site.
Theory of cryptography
|
https://en.wikipedia.org/wiki/Pseudorandom%20ensemble
|
In cryptography, a pseudorandom ensemble is a family of random variables meeting the following criterion:
Let U = {U_n}_{n∈ℕ} be a uniform ensemble and X = {X_n}_{n∈ℕ} be an ensemble. The ensemble X is called pseudorandom if X and U are indistinguishable in polynomial time.
References
Goldreich, Oded (2001). Foundations of Cryptography: Volume 1, Basic Tools. Cambridge University Press. Fragments available at the author's web site.
Algorithmic information theory
Pseudorandomness
Cryptography
|
https://en.wikipedia.org/wiki/CESU-8
|
The Compatibility Encoding Scheme for UTF-16: 8-Bit (CESU-8) is a variant of UTF-8 that is described in Unicode Technical Report #26. A Unicode code point from the Basic Multilingual Plane (BMP), i.e. a code point in the range U+0000 to U+FFFF, is encoded in the same way as in UTF-8. A Unicode supplementary character, i.e. a code point in the range U+10000 to U+10FFFF, is first represented as a surrogate pair, like in UTF-16, and then each surrogate code point is encoded in UTF-8. Therefore, CESU-8 needs six bytes (3 bytes per surrogate) for each Unicode supplementary character while UTF-8 needs only four. Though not specified in the technical report, unpaired surrogates are also encoded as 3 bytes each, and CESU-8 is exactly the same as applying an older UCS-2 to UTF-8 converter to UTF-16 data.
The encoding of Unicode non-BMP characters works out to 11101101 1010yyyy 10xxxxxx 11101101 1011xxxx 10xxxxxx (yyyy represents the top five bits of the code point minus one, i.e. the plane number minus one). The byte values 0xF0–0xF4 will not appear in CESU-8, as they start the 4-byte encodings used by UTF-8.
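The scheme can be sketched in a few lines. The function below is a hypothetical helper (Python has no built-in CESU-8 codec): it splits each supplementary character into a UTF-16 surrogate pair, then applies the ordinary 1- to 3-byte UTF-8 patterns to every resulting code unit.

```python
def cesu8_encode(s: str) -> bytes:
    out = bytearray()
    for ch in s:
        cp = ord(ch)
        if cp < 0x80:                       # ASCII: 1 byte, as in UTF-8
            out.append(cp)
        elif cp < 0x800:                    # 2-byte UTF-8 pattern
            out += bytes([0xC0 | cp >> 6, 0x80 | cp & 0x3F])
        elif cp < 0x10000:                  # BMP: 3-byte UTF-8 pattern
            out += bytes([0xE0 | cp >> 12, 0x80 | (cp >> 6) & 0x3F, 0x80 | cp & 0x3F])
        else:                               # supplementary: 3 bytes per surrogate, 6 total
            cp -= 0x10000
            for sur in (0xD800 | cp >> 10, 0xDC00 | cp & 0x3FF):
                out += bytes([0xE0 | sur >> 12, 0x80 | (sur >> 6) & 0x3F, 0x80 | sur & 0x3F])
    return bytes(out)

print(cesu8_encode("\U0001F600").hex())    # eda0bdedb880 (6 bytes)
print("\U0001F600".encode("utf-8").hex())  # f09f9880 (4 bytes in standard UTF-8)
```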
CESU-8 is not an official part of the Unicode Standard, because Unicode Technical Reports are informative documents only. It should be used exclusively for internal processing and never for external data exchange.
Supporting CESU-8 in HTML documents is prohibited by the W3C and WHATWG HTML standards, as it would present a cross-site scripting vulnerability.
Java's Modified UTF-8 is CESU-8 with a special overlong encoding of the NUL character (U+0000) as the two-byte sequence C0 80.
The Oracle database uses CESU-8 for its "UTF8" character set. Standard UTF-8 can be obtained using the character set "AL32UTF8" (since Oracle version 9.0).
Examples
References
External links
Unicode Technical Report #26
Modified UTF-8 definition
Graphical View of CESU-8 in ICU's Converter Explorer
Unicode Transformation Formats
Character encoding
|
https://en.wikipedia.org/wiki/Tone%20control%20circuit
|
Tone control is a type of equalization used to make specific pitches or frequencies in an audio signal softer or louder. It allows a listener to adjust the tone of the sound produced by an audio system to their liking, for example to compensate for inadequate bass response of loudspeakers or earphones, tonal qualities of the room, or hearing impairment. A tone control circuit is an electronic circuit that consists of a network of filters which modify the signal before it is fed to speakers, headphones or recording devices by way of an amplifier. Tone controls are found on many sound systems: radios, portable music players, boomboxes, public address systems, and musical instrument amplifiers.
Uses
Tone control allows listeners to adjust sound to their liking. It also enables them to compensate for recording deficiencies, hearing impairments, room acoustics or shortcomings with playback equipment. For example, older people with hearing problems may want to increase the loudness of high pitch sounds they have difficulty hearing.
Tone control is also used to adjust an audio signal during recording. For instance, if the acoustics of the recording site cause it to absorb some frequencies more than others, tone control can be used to amplify or "boost" the frequencies the room dampens.
Types
In their most basic form, tone control circuits attenuate the high or low frequencies of the signal. This is called treble or bass "cut". The simplest tone control circuits are passive circuits which utilize only resistors and capacitors or inductors. They rely on the property of capacitive reactance or inductive reactance to inhibit or enhance an AC signal, in a frequency-dependent manner. Active tone controls may also amplify or "boost" certain frequencies. More elaborate tone control circuits can boost or attenuate the middle range of frequencies.
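For instance, a passive treble-cut control reduces in essence to a first-order RC low-pass filter. The sketch below (component values are arbitrary examples, not from any particular circuit) computes its corner frequency and frequency-dependent attenuation:

```python
import math

R = 10e3   # series resistor, 10 kOhm (illustrative value)
C = 10e-9  # shunt capacitor, 10 nF (illustrative value)
fc = 1 / (2 * math.pi * R * C)  # corner (-3 dB) frequency of the RC low-pass

def gain_db(f):
    """Magnitude response of 1 / (1 + j f/fc), in decibels."""
    return 20 * math.log10(1 / math.sqrt(1 + (f / fc) ** 2))

print(round(fc))             # 1592 Hz
print(round(gain_db(fc), 2)) # -3.01 dB at the corner frequency
```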
The simplest tone control is a single knob that when turned in one direction enhances treble frequencies and the other direction
|
https://en.wikipedia.org/wiki/Basic%20hypergeometric%20series
|
In mathematics, basic hypergeometric series, or q-hypergeometric series, are q-analogue generalizations of generalized hypergeometric series, and are in turn generalized by elliptic hypergeometric series.
A series xn is called hypergeometric if the ratio of successive terms xn+1/xn is a rational function of n. If the ratio of successive terms is a rational function of qn, then the series is called a basic hypergeometric series. The number q is called the base.
The basic hypergeometric series was first considered by Eduard Heine. It becomes the hypergeometric series in the limit as the base q → 1.
Definition
There are two forms of basic hypergeometric series, the unilateral basic hypergeometric series φ, and the more general bilateral basic hypergeometric series ψ.
The unilateral basic hypergeometric series is defined as

$$\;_{j}\phi_k\!\left[\begin{matrix}a_1,\ldots,a_j\\ b_1,\ldots,b_k\end{matrix};q,z\right] = \sum_{n=0}^{\infty} \frac{(a_1,\ldots,a_j;q)_n}{(b_1,\ldots,b_k;q)_n\,(q;q)_n}\left((-1)^n q^{\binom{n}{2}}\right)^{1+k-j} z^n,$$

where

$$(a_1,\ldots,a_j;q)_n = (a_1;q)_n (a_2;q)_n \cdots (a_j;q)_n$$

and

$$(a;q)_n = \prod_{m=0}^{n-1} (1 - a q^m)$$

is the q-shifted factorial.
The most important special case is when j = k + 1, when it becomes

$$\;_{k+1}\phi_k\!\left[\begin{matrix}a_1,\ldots,a_{k+1}\\ b_1,\ldots,b_k\end{matrix};q,z\right] = \sum_{n=0}^{\infty} \frac{(a_1,\ldots,a_{k+1};q)_n}{(b_1,\ldots,b_k;q)_n\,(q;q)_n}\, z^n.$$

This series is called balanced if $a_1 \cdots a_{k+1} = b_1 \cdots b_k q$. It is called well poised if $q a_1 = a_2 b_1 = \cdots = a_{k+1} b_k$, and very well poised if in addition $a_2 = -a_3 = q a_1^{1/2}$.
The unilateral basic hypergeometric series is a q-analog of the hypergeometric series since

$$\lim_{q\to 1}\;_{j}\phi_k\!\left[\begin{matrix}q^{a_1},\ldots,q^{a_j}\\ q^{b_1},\ldots,q^{b_k}\end{matrix};q,(q-1)^{1+k-j}z\right] = \;_{j}F_k\!\left[\begin{matrix}a_1,\ldots,a_j\\ b_1,\ldots,b_k\end{matrix};z\right]$$

holds.
The bilateral basic hypergeometric series, corresponding to the bilateral hypergeometric series, is defined as

$$\;_{j}\psi_k\!\left[\begin{matrix}a_1,\ldots,a_j\\ b_1,\ldots,b_k\end{matrix};q,z\right] = \sum_{n=-\infty}^{\infty} \frac{(a_1,\ldots,a_j;q)_n}{(b_1,\ldots,b_k;q)_n}\left((-1)^n q^{\binom{n}{2}}\right)^{k-j} z^n.$$

The most important special case is when j = k, when it becomes

$$\;_{k}\psi_k\!\left[\begin{matrix}a_1,\ldots,a_k\\ b_1,\ldots,b_k\end{matrix};q,z\right] = \sum_{n=-\infty}^{\infty} \frac{(a_1,\ldots,a_k;q)_n}{(b_1,\ldots,b_k;q)_n}\, z^n.$$

The unilateral series can be obtained as a special case of the bilateral one by setting one of the b variables equal to q, at least when none of the a variables is a power of q, since all the terms with n < 0 then vanish.
Simple series
Some simple series expressions include
and
and
The q-binomial theorem
The q-binomial theorem (first published in 1811 by Heinrich August Rothe) states that

$$\;_{1}\phi_0(a;q,z) = \sum_{n=0}^{\infty} \frac{(a;q)_n}{(q;q)_n}\, z^n = \frac{(az;q)_\infty}{(z;q)_\infty},$$

which follows by repeatedly applying the identity

$$\;_{1}\phi_0(a;q,z) = \frac{1-az}{1-z}\;_{1}\phi_0(a;q,qz).$$
The special case of a = 0 is closely related to the q-exponential.
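The q-binomial theorem can be checked numerically for |q| < 1 and |z| < 1; the short sketch below (plain Python, with our own helper names) compares the truncated series against the truncated infinite products:

```python
def qpoch(a, q, n):
    """q-shifted factorial (a;q)_n = prod_{m=0}^{n-1} (1 - a q^m)."""
    p = 1.0
    for m in range(n):
        p *= 1 - a * q**m
    return p

def lhs(a, q, z, terms=200):
    """Partial sum of the q-binomial series sum_n (a;q)_n / (q;q)_n * z^n."""
    return sum(qpoch(a, q, n) / qpoch(q, q, n) * z**n for n in range(terms))

def rhs(a, q, z, terms=200):
    """(az;q)_inf / (z;q)_inf, with the infinite products truncated."""
    return qpoch(a * z, q, terms) / qpoch(z, q, terms)

a, q, z = 0.3, 0.5, 0.2
print(abs(lhs(a, q, z) - rhs(a, q, z)) < 1e-10)  # True
```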
Cauchy binomial theorem
Cauchy binomial theorem is a special case of the q-binomial theore
|
https://en.wikipedia.org/wiki/Fast%20Infoset
|
Fast Infoset (or FI) is an international standard that specifies a binary encoding format for the XML Information Set (XML Infoset) as an alternative to the XML document format. It aims to provide more efficient serialization than the text-based XML format.
FI is effectively a lossless compression for XML, analogous to gzip: although the original lexical formatting is lost, no infoset information is lost in the conversion from XML to FI and back to XML. While the purpose of general-purpose compression is simply to reduce physical data size, FI aims to optimize both document size and processing performance.
The Fast Infoset specification is defined by both the ITU-T and the ISO/IEC standards bodies. FI is officially defined in ITU-T Rec. X.891 and ISO/IEC 24824-1, and entitled Fast Infoset. The standard was published by ITU-T on May 14, 2005, and by ISO on May 4, 2007. The Fast Infoset standard document can be downloaded from the ITU website. Though the document does not assert intellectual property (IP) restrictions on implementation or use, page ii warns that it has received notices and the subject may not be completely free of IP assertions.
A common misconception is that FI requires ASN.1 tool support. Although the formal specification uses ASN.1 notation, the standard includes Encoding Control Notation (ECN) and ASN.1 tools are not required by implementations.
An alternative to FI is FleXPath.
Structure
The underlying file format is ASN.1, with tag/length/value blocks. Text values of attributes and elements are stored with length prefixes rather than end delimiters, and data segments do not require escaping of special characters. The equivalent of end tags ("terminators") is needed only at the end of a list of child elements. Binary data is transmitted in native format, and need not be converted to a transmission format such as base64.
Fast Infoset is a higher level format built on ASN.1 forms and notation. Element and attribute names are stored within the octet stream, unl
|
https://en.wikipedia.org/wiki/Richard%20Rusczyk
|
Richard Rusczyk (born September 21, 1971) is the founder and chief executive officer of Art of Problem Solving Inc. (as well as the website, which serves as a mathematics forum and place to hold online classes) and a co-author of the Art of Problem Solving textbooks. Rusczyk was a national Mathcounts participant in 1985, and he won the USA Math Olympiad (USAMO) in 1989. He is one of the co-creators of the Mandelbrot Competition, and the director of the USA Mathematical Talent Search (USAMTS). He also founded the San Diego Math Circle.
Early life
Richard Rusczyk was born in Idaho Falls, Idaho in 1971. He signed up for the MathCounts program when he was in middle school. As a high schooler, Rusczyk was a part of his high school math team and took part in the American Mathematics Competitions. Rusczyk would later go on to attend Princeton University, which he graduated from in 1993.
Art of Problem Solving
In 1994, Rusczyk and Sandor Lehoczky wrote the Art of Problem Solving books, designed to prepare students for mathematical competitions by teaching them concepts and problem-solving methods rarely taught in school. These books lent their name to the company he founded in 2003.
After working for four years as a bond trader for D. E. Shaw & Co., Rusczyk created the Art of Problem Solving website, which provides resources for middle and high school students to develop their mathematics and problem-solving abilities. These include real-time competitions to solve math problems and online tools to learn how to solve problems with increasing difficulty as well as math forums. As of May 26, 2021, there have been 709,491 students, 1,322,594 topics, and a total of 15,182,054 posts on the site. Rusczyk has also published the Art of Problem Solving series of books aimed at a similar audience. The site also provides fee-based online mathematics classes, which range from Prealgebra to Group Theory and Calculus. Additionally, Art of Problem Solving offers Python programmin
|
https://en.wikipedia.org/wiki/Enercell
|
Enercell is a battery brand that was sold exclusively by RadioShack at retail stores and online.
In a "battery of the month club" promotion introduced in the 1960s and abandoned in the early 1990s, RadioShack customers were given a free wallet-sized cardboard card that entitled the bearer to one free battery a month when presented in RadioShack stores. The free Enercells were individual AA, C or D cells or 9V rectangular transistor radio batteries. Like the free tube testing offered in-store in the early 1970s, this small loss leader drew foot traffic.
There were two editions of an "Enercell Battery Guidebook", published in 1985 and 1990. The selector guide was later moved online. While the "battery of the month" card program ended in the 1990s, the Enercell name remained in use as RadioShack's store brand of dry cells and transistor radio batteries.
RadioShack for several years sold batteries branded "Enercell Plus" that were marketed as "Premium Alkaline" batteries.
For a long time, Enercell batteries were manufactured for RadioShack by Energizer's parent company as were all batteries sold under a RadioShack store brand. There have been instances of button batteries with the Eveready logo printed on the shell of the actual battery that were enclosed in a RadioShack Enercell package. (Energizer's parent company used to be called Eveready Battery Company, and Eveready is one of their brands of batteries.)
References
External links
(Redirects to the RadioShack website as of 2015)
Battery (electricity)
Consumer battery manufacturers
RadioShack
|
https://en.wikipedia.org/wiki/Electrospray
|
The name electrospray is used for an apparatus that employs electricity to disperse a liquid or for the fine aerosol resulting from this process. High voltage is applied to a liquid supplied through an emitter (usually a glass or metallic capillary). Ideally the liquid reaching the emitter tip forms a Taylor cone, which emits a liquid jet through its apex. Varicose waves on the surface of the jet lead to the formation of small and highly charged liquid droplets, which are radially dispersed due to Coulomb repulsion.
History
In the late 16th century William Gilbert set out to describe the behaviour of magnetic and electrostatic phenomena. He observed that, in the presence of a charged piece of amber, a drop of water deformed into a cone. This effect is clearly related to electrosprays, even though Gilbert did not record any observation related to liquid dispersion under the effect of the electric field.
In 1750 the French clergyman and physicist Jean-Antoine (Abbé) Nollet noted water flowing from a vessel would aerosolize if the vessel was electrified and placed near electrical ground.
In 1882, Lord Rayleigh theoretically estimated the maximum amount of charge a liquid droplet could carry; this is now known as the "Rayleigh limit". His prediction that a droplet reaching this limit would throw out fine jets of liquid was confirmed experimentally more than 100 years later.
In 1914, John Zeleny published work on the behaviour of fluid droplets at the end of glass capillaries. This report presents experimental evidence for several electrospray operating regimes (dripping, burst, pulsating, and cone-jet). A few years later, Zeleny captured the first time-lapse images of the dynamic liquid meniscus.
Between 1964 and 1969 Sir Geoffrey Ingram Taylor produced the theoretical underpinning of electrospraying. Taylor modeled the shape of the cone formed by the fluid droplet under the effect of an electric field; this characteristic droplet shape is now known as the Taylo
|
https://en.wikipedia.org/wiki/Comparison%20of%20office%20suites
|
The following tables compare general and technical information for a number of office suites:
General information
Office Suite names that are on a light purple background are discontinued.
OS support
The operating systems the office suites were designed to run on without emulation; for the given office suite/OS combination, there are five possibilities:
No indicates that it does not exist or was never released.
Partial indicates that while the office suite works, it lacks important functionality compared to versions for other OSs; it is still being developed however.
Beta indicates that while a version of the office suite is fully functional and has been released, it is still in development (e.g. for stability).
Yes indicates that the office suite has been officially released in a fully functional, stable version.
Dropped indicates that while the office suite works, new versions are no longer being released for the indicated OS; the number in parentheses is the last known stable version which was officially released for that OS.
Supported file formats
Main components
Online capabilities
See also
List of office suites
Comparison of word processors
Comparison of spreadsheet software
Notes
References
Office suites
|
https://en.wikipedia.org/wiki/SpeedStep
|
Enhanced SpeedStep is a series of dynamic frequency scaling technologies (codenamed Geyserville and including SpeedStep, SpeedStep II, and SpeedStep III) built into some Intel microprocessors that allow the clock speed of the processor to be dynamically changed (to different P-states) by software. This allows the processor to meet the instantaneous performance needs of the operation being performed, while minimizing power draw and heat generation. EIST (SpeedStep III) was introduced in several Prescott 6 series processors in the first quarter of 2005, namely the Pentium 4 660. Intel Speed Shift Technology (SST) was introduced with Intel's Skylake processors.
Enhanced Intel SpeedStep Technology is sometimes abbreviated as EIST. Intel's trademark "INTEL SPEEDSTEP" was cancelled after being invalidated in 2012.
Explanation
Running a processor at high clock speeds allows for better performance. However, when the same processor is run at a lower frequency (speed), it generates less heat and consumes less power. In many cases, the core voltage can also be reduced, further reducing power consumption and heat generation. By using SpeedStep, users can select the balance of power conservation and performance that best suits them, or even change the clock speed dynamically as the processor burden changes.
The power consumed by a CPU with a capacitance C, running at frequency f and voltage V, is approximately:

$$P \approx C V^2 f.$$
For a given processor, C is a fixed value. However, V and f can vary considerably. For example, for a 1.6 GHz Pentium M, the clock frequency can be stepped down in 200 MHz decrements over the range from 1.6 to 0.6 GHz. At the same time, the voltage requirement decreases from 1.484 to 0.956 V. The result is that the power consumption theoretically goes down by a factor of 6.4. In practice, the effect may be smaller because some CPU instructions use less energy per tick of the CPU clock than others. For example, when an operating system is not busy, it tends to issu
|
https://en.wikipedia.org/wiki/Existential%20graph
|
An existential graph is a type of diagrammatic or visual notation for logical expressions, proposed by Charles Sanders Peirce, who wrote on graphical logic as early as 1882, and continued to develop the method until his death in 1914.
The graphs
Peirce proposed three systems of existential graphs:
alpha, isomorphic to sentential logic and the two-element Boolean algebra;
beta, isomorphic to first-order logic with identity, with all formulas closed;
gamma, (nearly) isomorphic to normal modal logic.
Alpha nests in beta and gamma. Beta does not nest in gamma, quantified modal logic being more general than anything put forth by Peirce.
Alpha
The syntax is:
The blank page;
Single letters or phrases written anywhere on the page;
Any graph may be enclosed by a simple closed curve called a cut or sep. A cut can be empty. Cuts can nest and concatenate at will, but must never intersect.
Any well-formed part of a graph is a subgraph.
The semantics are:
The blank page denotes Truth;
Letters, phrases, subgraphs, and entire graphs may be True or False;
To enclose a subgraph with a cut is equivalent to logical negation or Boolean complementation. Hence an empty cut denotes False;
All subgraphs within a given cut are tacitly conjoined.
Hence the alpha graphs are a minimalist notation for sentential logic, grounded in the expressive adequacy of And and Not. The alpha graphs constitute a radical simplification of the two-element Boolean algebra and the truth functors.
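These semantics can be made concrete in a short evaluator. The nested-list representation below is our own encoding, not Peirce's notation: a graph is a list of juxtaposed items, a letter is a string, and a cut is a nested list (so the empty list is the blank page, i.e. True).

```python
def evaluate(graph, assignment):
    """Conjunction of all items on the sheet; a cut negates the conjunction inside it."""
    result = True
    for item in graph:
        if isinstance(item, list):   # a cut: logical negation of its contents
            value = not evaluate(item, assignment)
        else:                        # a letter: look up its truth value
            value = assignment[item]
        result = result and value
    return result

# ["P", ["Q", ["R"]]] encodes P AND NOT(Q AND NOT R)
g = ["P", ["Q", ["R"]]]
print(evaluate(g, {"P": True, "Q": True, "R": True}))   # True
print(evaluate(g, {"P": True, "Q": True, "R": False}))  # False
```

Note that the empty cut `[]` evaluates to False, matching the semantics above.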
The depth of an object is the number of cuts that enclose it.
Rules of inference:
Insertion - Any subgraph may be inserted into an odd numbered depth.
Erasure - Any subgraph in an even numbered depth may be erased.
Rules of equivalence:
Double cut - A pair of cuts with nothing between them may be drawn around any subgraph. Likewise two nested cuts with nothing between them may be erased. This rule is equivalent to Boolean involution.
Iteration/Deiteration – To understand this rule, it is best to view a graph as a tree
|
https://en.wikipedia.org/wiki/Octave%20effect
|
Octave effect boxes are a type of special effects unit which mix the input signal with a synthesised signal whose musical tone is an octave lower or higher than the original. The synthesised octave signal is derived from the original input signal by halving (octave-down) or doubling (octave-up) the frequency. This is possible due to the simple two-to-one relationship between the frequencies of musical notes which are separated by an octave. One of the first popular musicians to employ the octave effect was Jimi Hendrix, who also used a variety of other effects in his recordings and public performances. Hendrix used an octave-fuzz pedal known as the Octavia.
Analog octave effects differ from harmonizers and pitch shifters which digitally sample the sound and process it to change its pitch.
Creation of the octave
Octave up
Octave-up effects usually use full wave rectification using diodes to "fold up" the negative part of the waveform to make a new waveform an octave higher in pitch.
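A minimal numerical sketch of the rectifier idea (an idealized NumPy model; a real pedal works on the analog signal and adds filtering):

```python
import numpy as np

fs = 44100
t = np.arange(fs) / fs                    # one second of audio
x = np.sin(2 * np.pi * 220 * t)           # 220 Hz input tone
y = np.abs(x)                             # ideal full-wave rectifier "folds up" the negative half
y = y - y.mean()                          # rectification adds a DC offset; remove it
peak_hz = int(np.argmax(np.abs(np.fft.rfft(y))))  # with a 1 s window, bin index = Hz
print(peak_hz)  # 440: one octave above the input
```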
Octave down
Octave-down effects are typically produced by converting the signal to a square wave, and then using flip-flop circuits to divide the frequency by two. This creates a buzzy, synthesizer-like tone. The MXR Blue Box used this method to create a two-octave drop (expanded to include one octave down in later re-issues). Jimmy Page used a Blue Box to record the solo on Led Zeppelin's Fool in the Rain.
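The flip-flop divider can be sketched digitally as follows (again an idealized NumPy model, not a circuit simulation): the output toggles once per rising edge of the input square wave, so it completes one cycle for every two input cycles.

```python
import numpy as np

fs = 48000
t = np.arange(fs) / fs
x = np.sign(np.sin(2 * np.pi * 200 * t))  # 200 Hz square wave (the converted signal)
rising = (x[1:] > 0) & (x[:-1] <= 0)      # rising edges of the input square wave
state = np.cumsum(np.concatenate(([0], rising))) % 2  # flip-flop: toggles once per edge
y = 2 * state - 1                         # 100 Hz square wave, one octave down
print(int(((y[1:] > 0) & (y[:-1] <= 0)).sum()))  # 100 rising edges in one second
```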
The Boss OC-2 unit generates tones at one and two octaves down from the input signal. This effect also uses flip-flops to generate square waves at 1/2 and 1/4 of the input signal frequency, but rather than simply mixing in these signals, it uses them to invert the polarity of the input signal on every other cycle (every two out of four cycles for the second octave). This effectively amplitude modulates the input signal with a carrier at half the input signal, creating new frequency components at 1/2 and 3/2 the input signal. The 3/2 component is low-pass filtered out. Th
|
https://en.wikipedia.org/wiki/Open-channel%20flow
|
In fluid mechanics and hydraulics, open-channel flow is a type of liquid flow within a conduit with a free surface, known as a channel. The other type of flow within a conduit is pipe flow. These two types of flow are similar in many ways but differ in one important respect: open-channel flow has a free surface, whereas pipe flow does not; as a result, open-channel flow is driven by gravity rather than by hydraulic pressure.
Classifications of flow
Open-channel flow can be classified and described in various ways based on the change in flow depth with respect to time and space. The fundamental types of flow dealt with in open-channel hydraulics are:
Time as the criterion
Steady flow
The depth of flow does not change over time, or can be assumed constant during the time interval under consideration.
Unsteady flow
The depth of flow does change with time.
Space as the criterion
Uniform flow
The depth of flow is the same at every section of the channel. Uniform flow can be steady or unsteady, depending on whether or not the depth changes with time, (although unsteady uniform flow is rare).
Varied flow
The depth of flow changes along the length of the channel. Varied flow technically may be either steady or unsteady. Varied flow can be further classified as either rapidly or gradually-varied:
Rapidly-varied flow
The depth changes abruptly over a comparatively short distance. Rapidly varied flow is known as a local phenomenon. Examples are the hydraulic jump and the hydraulic drop.
Gradually-varied flow
The depth changes over a long distance.
Continuous flow
The discharge is constant throughout the reach of the channel under consideration. This is often the case with a steady flow. This flow is considered continuous and therefore can be described using the continuity equation for continuous steady flow.
Spatially-varied flow
The discharge of a steady flow is non-uniform along a channel. This happens when water enters and/or leaves the channel along the cours
|
https://en.wikipedia.org/wiki/List%20of%20properties%20of%20sets%20of%20reals
|
This article lists some properties of sets of real numbers. The general study of these concepts forms descriptive set theory, which has a rather different emphasis from general topology.
Definability properties
Borel set
Analytic set
C-measurable set
Projective set
Inductive set
Infinity-Borel set
Suslin set
Homogeneously Suslin set
Weakly homogeneously Suslin set
Set of uniqueness
Regularity properties
Property of Baire
Lebesgue measurable
Universally measurable set
Perfect set property
Universally Baire set
Largeness and smallness properties
Meager set
Comeager set - A comeager set is one whose complement is meager.
Null set
Conull set
Dense set
Nowhere dense set
Real numbers
|
https://en.wikipedia.org/wiki/Late%20fee
|
A late fee, also known as an overdue fine, late fine, or past due fee, is a charge levied against a client by a company or organization for not paying a bill or not returning a rented or borrowed item by its due date. Its use is most commonly associated with businesses like creditors, video rental outlets and libraries. Late fees are generally calculated on a per day, per item basis.
Organizations encourage the payment of late fees by suspending a client's borrowing or rental privileges until accumulated fees are paid, sometimes after these fees have exceeded a certain level. Late fees are issued to people who do not pay on time or otherwise fail to honor a lease or obligation for which they are responsible.
Library fine
Library fines, also known as overdue fines, late fees, or overdue fees, are small daily or weekly fees that libraries in many countries charge borrowers after a book or other borrowed item is kept past its due date. Library fines are an enforcement mechanism designed to ensure that library books are returned within a certain period of time and to provide increasing penalties for late items. Library fines do not typically accumulate over years or decades. Fines are usually assessed for only a few days or months, until a pre-set limit is reached.
Library fines are a small percentage of overall library budgets, but lost, stolen or un-returned library books can be costly for the various levels of government that fund libraries.
History
In the late 1800s, as modern circulating libraries began making checking out books possible for the general public, concerns rose about books being taken out and never returned. To encourage the return of books and to help fund the replacement acquisition of new books, libraries began assessing a fee on late books. For example, when the Aberdeen Free Library in Scotland opened in 1886, borrowers were fined a penny a week for every week a book was held longer than a fortnight.
Public libraries in New York began charging overdue fees in the late 1
|
https://en.wikipedia.org/wiki/Basidiospore
|
A basidiospore is a reproductive spore produced by Basidiomycete fungi, a grouping that includes mushrooms, shelf fungi, rusts, and smuts. Basidiospores typically each contain one haploid nucleus that is the product of meiosis, and they are produced by specialized fungal cells called basidia. Typically, four basidiospores develop on appendages from each basidium, of which two are of one strain and the other two of its opposite strain. In gills under a cap of one common species, there exist millions of basidia. Some gilled mushrooms in the order Agaricales have the ability to release billions of spores. The puffball fungus Calvatia gigantea has been calculated to produce about five trillion basidiospores. Most basidiospores are forcibly discharged, and are thus considered ballistospores. These spores serve as the main air dispersal units for the fungi. The spores are released during periods of high humidity and generally have a night-time or pre-dawn peak concentration in the atmosphere.
When basidiospores encounter a favorable substrate, they may germinate, typically by forming hyphae. These hyphae grow outward from the original spore, forming an expanding circle of mycelium. The circular shape of a fungal colony explains the formation of fairy rings, and also the circular lesions of skin-infecting fungi that cause ringworm. Some basidiospores germinate repetitively by forming small spores instead of hyphae.
General structure and shape
Basidiospores are generally characterized by an attachment peg (called a hilar appendage) on their surface, which marks where the spore was attached to the basidium. The hilar appendage is quite prominent in some basidiospores, but less evident in others. An apical germ pore may also be present. Many basidiospores have an asymmetric shape due to their development on the basidium. Basidiospores are typically single-celled (without septa), and range from spherical to oval, oblong, ellipsoid, or cylindrical. The surface of the
|
https://en.wikipedia.org/wiki/Cohesion%20%28chemistry%29
|
In chemistry and physics, cohesion (), also called cohesive attraction or cohesive force, is the action or property of like molecules sticking together, being mutually attractive. It is an intrinsic property of a substance that is caused by the shape and structure of its molecules, which makes the distribution of surrounding electrons irregular when molecules get close to one another, creating electrical attraction that can maintain a microscopic structure such as a water drop. Cohesion allows for surface tension, creating a "solid-like" state upon which light-weight or low-density materials can be placed.
Water, for example, is strongly cohesive as each molecule may make four hydrogen bonds to other water molecules in a tetrahedral configuration. This results in a relatively strong Coulomb force between molecules. In simple terms, the polarity (a state in which a molecule is oppositely charged on its poles) of water molecules allows them to be attracted to each other. The polarity is due to the electronegativity of the atom of oxygen: oxygen is more electronegative than the atoms of hydrogen, so the electrons they share through the covalent bonds are more often close to oxygen rather than hydrogen. These are called polar covalent bonds, covalent bonds between atoms that thus become oppositely charged. In the case of a water molecule, the hydrogen atoms carry positive charges while the oxygen atom has a negative charge. This charge polarization within the molecule allows it to align with adjacent molecules through strong intermolecular hydrogen bonding, rendering the bulk liquid cohesive. Van der Waals gases such as methane, however, have weak cohesion due only to van der Waals forces that operate by induced polarity in non-polar molecules.
Cohesion, along with adhesion (attraction between unlike molecules), helps explain phenomena such as meniscus, surface tension and capillary action.
Mercury in a glass flask is a good example of the effects of the ratio betwe
|
https://en.wikipedia.org/wiki/Association%20scheme
|
The theory of association schemes arose in statistics, in the theory of experimental design for the analysis of variance. In mathematics, association schemes belong to both algebra and combinatorics. In algebraic combinatorics, association schemes provide a unified approach to many topics, for example combinatorial designs and the theory of error-correcting codes. In algebra, association schemes generalize groups, and the theory of association schemes generalizes the character theory of linear representations of groups.
Definition
An n-class association scheme consists of a set X together with a partition S of X × X into n + 1 binary relations, R0, R1, ..., Rn which satisfy:
R0 = {(x, x) : x ∈ X}; it is called the identity relation.
Defining R* = {(y, x) : (x, y) ∈ R}, if R is in S, then R* is in S.
If (x, y) ∈ Rk, the number of z ∈ X such that (x, z) ∈ Ri and (z, y) ∈ Rj is a constant p_ij^k depending on i, j and k, but not on the particular choice of x and y.
An association scheme is commutative if p_ij^k = p_ji^k for all i, j and k. Most authors assume this property.
A symmetric association scheme is one in which each Ri is a symmetric relation. That is:
if (x, y) ∈ Ri, then (y, x) ∈ Ri. (Or equivalently, Ri* = Ri.)
Every symmetric association scheme is commutative.
Note, however, that while the notion of an association scheme generalizes the notion of a group, the notion of a commutative association scheme only generalizes the notion of a commutative group.
Two points x and y are called i th associates if (x, y) ∈ Ri. The definition states that if x and y are i th associates then so are y and x. Every pair of points are i th associates for exactly one i. Each point is its own zeroth associate while distinct points are never zeroth associates. If x and y are k th associates then the number of points z which are both i th associates of x and j th associates of y is a constant p_ij^k.
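As an illustration (not taken from the article), the constancy of the intersection numbers can be checked by brute force for a small concrete scheme. The sketch below uses the Hamming scheme H(3, 2) — points are binary strings of length 3, and (x, y) lies in Ri when x and y differ in exactly i coordinates; the helper name `intersection_number` is ours:

```python
from itertools import product

# Hamming scheme H(3, 2): X = {0,1}^3, R_i = pairs at Hamming distance i.
X = list(product((0, 1), repeat=3))

def dist(x, y):
    return sum(a != b for a, b in zip(x, y))

def intersection_number(i, j, k):
    """p_ij^k: for (x, y) in R_k, the number of z with (x, z) in R_i
    and (z, y) in R_j.  Constancy over all such pairs (x, y) is the
    defining property of an association scheme."""
    counts = set()
    for x in X:
        for y in X:
            if dist(x, y) == k:
                counts.add(sum(1 for z in X
                               if dist(x, z) == i and dist(z, y) == j))
    assert len(counts) <= 1          # would fail if the count varied
    return counts.pop() if counts else 0

# The scheme is symmetric, hence commutative: p_ij^k == p_ji^k.
for i in range(4):
    for j in range(4):
        for k in range(4):
            assert intersection_number(i, j, k) == intersection_number(j, i, k)

print(intersection_number(1, 1, 2))  # 2: for x, y at distance 2, exactly
                                     # two z are adjacent to both
```

For x = 000 and y = 110, the two intermediate points are 100 and 010, matching the printed value.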
Graph interpretation and adjacency matrices
A symmetric association scheme can be visualized as a complete graph with labeled edges. The graph has vertices, one for each point of , and the edge joining vertices a
|
https://en.wikipedia.org/wiki/Reconfigurability
|
Reconfigurability denotes the capability of a reconfigurable computing system to have its behavior changed by reconfiguration, i.e. by loading different configware code. This static reconfigurability distinguishes between reconfiguration time and run time. Dynamic reconfigurability denotes the capability of a dynamically reconfigurable system that can dynamically change its behavior during run time, usually in response to dynamic changes in its environment.
In the context of wireless communication dynamic reconfigurability tackles the changeable behavior of wireless networks and associated equipment, specifically in the fields of radio spectrum, radio access technologies, protocol stacks, and application services.
Research regarding the (dynamic) reconfigurability of wireless communication systems is ongoing for example in working group 6 of the Wireless World Research Forum (WWRF), in the Wireless Innovation Forum (WINNF) (formerly Software Defined Radio Forum), and in the European FP6 project End-to-End Reconfigurability (E²R). Recently, E²R initiated a related standardization effort on the cohabitation of heterogeneous wireless radio systems in the framework of the IEEE P1900.4 Working Group.
See cognitive radio.
In the context of Control reconfiguration, a field of fault-tolerant control within control engineering, reconfigurability is a property of faulty systems meaning that the original control goals specified for the fault-free system can be reached after suitable control reconfiguration.
External links
Wireless World Research Forum
Wireless World Research Forum, Working Group 6
Wireless Innovation Forum (formerly Software Defined Radio Forum)
Wireless networking
Radio resource management
Reconfigurable computing
|
https://en.wikipedia.org/wiki/Railway%20Technical%20Centre
|
The Railway Technical Centre (RTC) in London Road, Derby, England, was the technical headquarters of the British Railways Board, and was built in the early 1960s. British Rail described it as the largest railway research complex in the world.
The RTC centralised most of the technical services provided by the regional Chief Mechanical & Electrical Engineers (CM&EE) to form the Department of Mechanical & Electrical Engineering (DM&EE). In addition, it housed the newly formed British Rail Research Division which reported directly to the Board. The latter is well known for its work on the experimental Advanced Passenger Train (APT-E). At that early stage this was a concept vehicle, and in time the DM&EE applied the new knowledge to existing practice in the design of the High Speed Train (HST), the later prototype APT-P and other high-speed vehicles.
History
Opening
The Research Division was the first to move into the purpose-built accommodation on London Road. This was formed initially with personnel from other departments around the country, including the Electrical Research Division from Rugby, the Mechanical Engineers Research Section, the Civil Engineering Research Unit (Track Lab), and the Chemical Research Unit, while the Scientific Services Division occupied the former LMS Scientific Research Laboratory building across the road known as Hartley House. The embryo RTC site (mainly Kelvin House and the Research Test Hall) was officially opened by Prince Philip, Duke of Edinburgh in May 1964. Later additional buildings were added: Trent House and Derwent House, the Advanced Projects lab, then Stephenson House, Lathkill House and finally Brunel House.
Department of Mechanical & Electrical Engineering
In addition to the research employees, the RTC became the headquarters of the DM&EE. This brought together engineers from the regional departments, together with its Drawing Offices, the Testing & Performance Section and the Engineering Development Unit workshop (EDU
|
https://en.wikipedia.org/wiki/British%20Rail%20Research%20Division
|
The British Rail Research Division was a division of the state-owned railway company British Rail (BR). It was charged with conducting research into improving various aspects of Britain's railways, particularly in the areas of reliability and efficiency, including achieving cost reductions and increasing service levels.
Its creation was endorsed by the newly created British Rail Board (BRB) in 1963 and incorporated personnel and existing resources from all over the country, including the LMS Scientific Research Laboratory. It was primarily based at the purpose-built Railway Technical Centre in Derby. In addition to its domestic activities, the Research Division would provide technology and personnel to other countries for varying purposes and periods under the trade name "Transmark". It became recognised as a centre of excellence in its field; the theoretical rigour of its approach to railway engineering superseded the ad hoc methods that had prevailed previously.
Its research led to advances in various sectors, such as in the field of signalling, where progress was made with block systems, remote operation systems, and the Automatic Warning System (AWS). Trackside improvements, such as the standardisation of overhead electrification equipment and refinements to the plasma torch, were also results of the Research Division's activities. Perhaps its most high-profile work was into new forms of rolling stock, such as the High Speed Freight Vehicle and railbuses, which led to the introduction of the Class 140. One of its projects that gained particularly high-profile coverage was the Advanced Passenger Train (APT), a high-speed tilting train intended for BR's Intercity services. However, due to schedule overruns, negative press coverage, and a lack of political support, work on the APT was ceased in the mid-1980s in favour of the more conventional InterCity 125 and InterCity 225 trainsets.
The Research Division was reorganised in the runup to the privatisation of Bri
|
https://en.wikipedia.org/wiki/Static%20induction%20thyristor
|
The static induction thyristor (SIT, SITh) is a thyristor with a buried gate structure in which the gate electrodes are placed in the n-base region. Since the device is normally in the on-state, the gate electrodes must be negatively or anode biased to hold the off-state. It has low noise, low distortion, and high audio-frequency power capability. The turn-on and turn-off times are very short, typically 0.25 microseconds.
History
The first static induction thyristor was invented by Japanese engineer Jun-ichi Nishizawa in 1975. It was capable of conducting large currents with a low forward bias and had a small turn-off time. A self-controlled gate turn-off thyristor based on it became commercially available through Tokyo Electric Co. (now Toyo Engineering Corporation) in 1988. The initial device consisted of a p+nn+ diode and a buried p+ grid.
In 1999, an analytical model of the SITh was developed for the PSPICE circuit simulator. In 2010, a newer version of SITh was developed by Zhang Caizhen, Wang Yongshun, Liu Chunjuan and Wang Zaixing, the new feature of which was its high forward blocking voltage.
See also
Static induction transistor
MOS composite static induction thyristor
References
External links
Static induction thyristor
Semiconductor devices
Solid state switches
Power electronics
|
https://en.wikipedia.org/wiki/MOS%20composite%20static%20induction%20thyristor
|
MOS composite static induction thyristor (CSMT or MCS) is a combination of a MOS transistor connected in cascode relation to the SI-thyristor.
The SI thyristor (SITh) unit has a gate to which the source of a MOS transistor is connected through a voltage regulation element. The low conduction loss and rugged structure of the MCS make it more favorable than conventional IGBT transistors.
In the blocking state nearly the complete voltage drops across the SITh, so the MOSFET is not exposed to high field stress. A MOSFET with a blocking voltage of only 30–50 V is therefore sufficient for fast switching. In an IGBT, the charge carrier concentration at the emitter side of the n-base layer is low because holes injected from the collector easily pass to the emitter electrode through the p-base layer. Thus the wide-base pnp transistor operates by virtue of its current gain characteristics, causing the collector–emitter saturation voltage to rise.
In an MCS the positive difference between the voltage of the regulation element and the conduction voltage drop of the MOSFET is applied between the collector region and emitter region of the pnp transistor. Hole concentration accumulates at the emitter side of the n-base layer because the holes cannot flow through the forward-biased collector–base junction of the pnp transistor. The carrier distribution in the n-base is similar to that of a saturated bipolar transistor, and a low saturation voltage of the MCS can be achieved even at high voltage ratings.
References
Solid state switches
Semiconductor devices
|
https://en.wikipedia.org/wiki/Projection%20screen
|
A projection screen is an installation consisting of a surface and a support structure used for displaying a projected image for the view of an audience. Projection screens may be permanently installed on a wall, as in a movie theater, mounted to or placed in a ceiling using a rollable projection surface that retracts into a casing (these can be motorized or manually operated), painted on a wall, or portable with tripod or floor rising models as in a conference room or other non-dedicated viewing space. Another popular type of portable screens are inflatable screens for outdoor movie screening (open-air cinema).
Uniformly white or grey screens are used almost exclusively so as to avoid any discoloration of the image, while the most desired brightness of the screen depends on a number of variables, such as the ambient light level and the luminous power of the image source. Flat or curved screens may be used depending on the optics used to project the image and the desired geometrical accuracy of the image production, flat screens being the more common of the two. Screens can be further designed for front or back projection, the more common being front projection systems, which have the image source situated on the same side of the screen as the audience.
Different markets exist for screens targeted for use with digital projectors, movie projectors, overhead projectors and slide projectors, although the basic idea for each of them is very much the same: front projection screens work on diffusely reflecting the light projected on to them, whereas back-projection screens work by diffusely transmitting the light through them.
Screens by installation type in different settings
In the commercial movie theaters, the screen is a reflective surface that may be either aluminized (for high contrast in moderate ambient light) or a white surface with small glass beads (for high brilliance under dark conditions). The screen also has hundreds of small, evenly spaced holes to all
|
https://en.wikipedia.org/wiki/Wheel%20%28computing%29
|
In Unix operating systems, the term wheel refers to a user account with a wheel bit, a system setting that provides additional special system privileges that empower a user to execute restricted commands that ordinary user accounts cannot access.
Origins
The term wheel was first applied to computer user privilege levels after the introduction of the TENEX operating system, later distributed under the name TOPS-20 in the 1960s and early 1970s. The term was derived from the slang phrase big wheel, referring to a person with great power or influence.
In the 1980s, the term was imported into Unix culture due to the migration of operating system developers and users from TENEX/TOPS-20 to Unix.
Wheel group
Modern Unix systems generally use user groups as a security protocol to control access privileges. The wheel group is a special user group used on some Unix systems, mostly BSD systems, to control access to the su or sudo command, which allows a user to masquerade as another user (usually the super user). Debian-like operating systems create a group called sudo with purpose similar to that of a wheel group.
Wheel war
The phrase wheel war, which originated at Stanford University, is a term used in computer culture, first documented in the 1983 version of The Jargon File. A 'wheel war' was a user conflict in a multi-user (see also: multiseat) computer system, in which students with administrative privileges would attempt to lock each other out of a university's computer system, sometimes causing unintentional harm to other users.
See also
Superuser
References
Unix
Computer jargon
|
https://en.wikipedia.org/wiki/Microsoft%20BizTalk%20Server
|
Microsoft BizTalk Server is an inter-organizational middleware system (IOMS) that automates business processes through the use of adapters which are tailored to communicate with different software systems used in an enterprise. Created by Microsoft, it provides enterprise application integration, business process automation, business-to-business communication, message broker and business activity monitoring.
BizTalk Server was previously positioned as both an application server and an application integration server. Microsoft changed this strategy when it released the AppFabric server, which became its official application server. Research firm Gartner considers Microsoft's offering one of its 'Leaders' for Application Integration Suites. The latest release, BizTalk Server 2020, was released on 15 January 2020.
In a common scenario, BizTalk integrates and manages automated business processes by exchanging business documents such as purchase orders and invoices between disparate applications, within or across organizational boundaries.
Development for BizTalk Server is done through Microsoft Visual Studio. A developer can create transformation maps transforming one message type to another. For example, an XML file can be transformed to SAP IDocs. Messages inside BizTalk are implemented through XML documents and defined with XML schemas in the XSD standard. Maps are implemented with the XSLT standard. Orchestrations are implemented with the WS-BPEL compatible process language xLANG. Schemas, maps, pipelines and orchestrations are created visually using graphical tools within Microsoft Visual Studio. Additional functionality can be delivered by .NET assemblies that can be called from existing modules—including, for instance, orchestrations, maps, pipelines, business rules.
Version history
Starting in 2000, the following versions were released:
2000-12-01 BizTalk Server 2000
2002-02-04 BizTalk Server 2002
2004-03-02 BizTalk Server 2004 (First version
|
https://en.wikipedia.org/wiki/Freiman%27s%20theorem
|
In additive combinatorics, Freiman's theorem is a central result which indicates the approximate structure of sets whose sumset is small. It roughly states that if |A + A|/|A| is small, then A can be contained in a small generalized arithmetic progression.
Statement
If A is a finite subset of ℤ with |A + A| ≤ K|A|, then A is contained in a generalized arithmetic progression of dimension at most d(K) and size at most f(K)|A|, where d(K) and f(K) are constants depending only on K.
Examples
For a finite set A of integers, it is always true that
|A + A| ≥ 2|A| − 1,
with equality precisely when A is an arithmetic progression.
More generally, suppose A is a subset of a finite proper generalized arithmetic progression P of dimension d such that |P| ≤ C|A| for some real C ≥ 1. Then |P + P| ≤ 2^d |P|, so that
|A + A| ≤ |P + P| ≤ 2^d |P| ≤ 2^d C|A|.
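The first example can be verified directly on small sets. The sketch below (our own illustration, with a hypothetical helper `sumset`) computes A + A and checks that the lower bound 2|A| − 1 is attained exactly for an arithmetic progression:

```python
from itertools import combinations_with_replacement

def sumset(A):
    """A + A = {a + b : a, b in A}."""
    return {a + b for a, b in combinations_with_replacement(A, 2)}

# An arithmetic progression attains the minimum |A + A| = 2|A| - 1.
ap = set(range(10, 40, 3))                  # 10, 13, ..., 37 (10 elements)
assert len(sumset(ap)) == 2 * len(ap) - 1

# A set far from a progression has a strictly larger sumset.
squares = {0, 1, 4, 9, 16, 25}
assert len(sumset(squares)) > 2 * len(squares) - 1
```

The progression's sumset is again a progression (20, 23, ..., 74), while the squares produce 20 distinct pairwise sums from only 6 elements.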
History of Freiman's theorem
This result is due to Gregory Freiman (1964, 1966). Much interest in it, and applications, stemmed from a new proof by Imre Z. Ruzsa (1994). Mei-Chu Chang proved new polynomial estimates for the size of arithmetic progressions arising in the theorem in 2002. The current best bounds were provided by Tom Sanders.
Tools used in the proof
The proof presented here follows the proof in Yufei Zhao's lecture notes.
Plünnecke-Ruzsa inequality
Ruzsa covering lemma
The Ruzsa covering lemma states the following:
Let A and S be finite subsets of an abelian group with S nonempty, and let K be a positive real number. Then if |A + S| ≤ K|S|, there is a subset T of A with at most K elements such that A ⊆ T + S − S.
This lemma provides a bound on how many copies of S − S one needs to cover A, hence the name. The proof is essentially a greedy algorithm:
Proof: Let T be a maximal subset of A such that the sets t + S for t ∈ T are all disjoint. Then |T + S| = |T| · |S|, and also |T + S| ≤ |A + S| ≤ K|S|, so |T| ≤ K. Furthermore, for any a ∈ A, there is some t ∈ T such that t + S intersects a + S, as otherwise adding a to T contradicts the maximality of T. Thus a ∈ T + S − S, so A ⊆ T + S − S.
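The greedy construction in the proof translates directly into code. The sketch below (an illustration of ours, for subsets of ℤ) scans A once, keeping a translate t + S only when it is disjoint from all translates kept so far, and then checks the covering conclusion:

```python
def ruzsa_cover(A, S):
    """Greedy step from the proof of the Ruzsa covering lemma:
    build a maximal T ⊆ A whose translates t + S are pairwise disjoint.
    The lemma then guarantees A ⊆ T + S - S and |T| ≤ |A + S| / |S|."""
    T, used = [], set()
    for a in A:
        shift = {a + s for s in S}
        if used.isdisjoint(shift):   # t + S disjoint from earlier translates
            T.append(a)
            used |= shift
    return T

A = list(range(0, 50, 2))            # even numbers 0..48
S = {0, 1, 2, 3, 4}
T = ruzsa_cover(A, S)

covered = {t + s1 - s2 for t in T for s1 in S for s2 in S}
assert set(A) <= covered                                   # A ⊆ T + S - S
assert len(T) <= len({a + s for a in A for s in S}) / len(S)  # |T| ≤ K
```

Here the greedy pass keeps every third even number, and the bound |T| ≤ |A + S|/|S| holds with room to spare.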
Freiman homomorphisms and the Ruzsa modeling lemma
Let s be a positive integer, and Γ and Γ′ be abelian groups. Let A ⊆ Γ and B ⊆ Γ′. A map φ : A → B is a Freiman s-homomorphism if
φ(a1) + ... + φ(as) = φ(a1′) + ... + φ(as′)
whenever a1 + ... + as = a1′ + ... + as′ for any a1, ..., as, a1′, ..., as′ ∈ A.
If in addition φ is a bijection an
|
https://en.wikipedia.org/wiki/Generalized%20arithmetic%20progression
|
In mathematics, a generalized arithmetic progression (or multiple arithmetic progression) is a generalization of an arithmetic progression equipped with multiple common differences – whereas an arithmetic progression is generated by a single common difference, a generalized arithmetic progression can be generated by multiple common differences. For example, the sequence 17, 20, 22, 23, 25, 26, 27, 28, 29, ... is not an arithmetic progression, but is instead generated by starting with 17 and adding either 3 or 5, thus allowing multiple common differences to generate it.
A semilinear set generalizes this idea to multiple dimensions: it is a set of vectors of integers, rather than a set of integers.
Finite generalized arithmetic progression
A finite generalized arithmetic progression, or sometimes just generalized arithmetic progression (GAP), of dimension d is defined to be a set of the form
{x0 + l1x1 + ... + ldxd : 0 ≤ l1 < L1, ..., 0 ≤ ld < Ld}
where x0, x1, ..., xd are elements of ℤ and L1, ..., Ld are positive integers. The product L1L2···Ld is called the size of the generalized arithmetic progression; the cardinality of the set can differ from the size if some elements of the set have multiple representations. If the cardinality equals the size, the progression is called proper. Generalized arithmetic progressions can be thought of as a projection of a higher dimensional grid into ℤ. This projection is injective if and only if the generalized arithmetic progression is proper.
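The distinction between size and cardinality can be made concrete with a short enumeration. The sketch below (our illustration; the helper `gap` is hypothetical) builds a GAP as a set and compares its cardinality with the product of the lengths:

```python
from itertools import product

def gap(x0, steps, lengths):
    """The GAP {x0 + l1*x1 + ... + ld*xd : 0 <= li < Li}, where
    steps = (x1, ..., xd) and lengths = (L1, ..., Ld), as a set."""
    return {x0 + sum(l * x for l, x in zip(ls, steps))
            for ls in product(*(range(L) for L in lengths))}

# Dimension-2 GAP: start 17, common differences 3 and 5, lengths 2 and 2.
P = gap(17, (3, 5), (2, 2))
assert sorted(P) == [17, 20, 22, 25]
assert len(P) == 2 * 2            # cardinality == size, so P is proper

# Differences 1 and 2 collide (0 + 2*1 == 2 + 2*0), so Q is improper.
Q = gap(0, (1, 2), (3, 3))
assert len(Q) == 7 < 3 * 3        # cardinality 7 < size 9
```

In the improper case several lattice points of the 3 × 3 grid project to the same integer, which is exactly the failure of injectivity mentioned above.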
Semilinear sets
Formally, an arithmetic progression of ℕ^d is an infinite sequence of the form v, v + v′, v + 2v′, v + 3v′, ..., where v and v′ are fixed vectors in ℕ^d, called the initial vector and common difference respectively. A subset of ℕ^d is said to be linear if it is of the form
{v0 + k1v1 + ... + kmvm : k1, ..., km ∈ ℕ}
where m is some integer and v0, v1, ..., vm are fixed vectors in ℕ^d. A subset of ℕ^d is said to be semilinear if it is a finite union of linear sets.
The semilinear sets are exactly the sets definable in Presburger arithmetic.
See also
Freiman's theorem
References
Algebra
Combinatorics
|
https://en.wikipedia.org/wiki/Television%20antenna
|
A television antenna (TV aerial) is an antenna specifically designed for use with a television receiver (TV) to receive over-the-air broadcast television signals from a television station. Television reception is dependent upon the antenna as well as the transmitter. Terrestrial television is broadcast on frequencies from about 47 to 250 MHz in the very high frequency (VHF) band, and 470 to 960 MHz in the ultra high frequency (UHF) band in different countries. Television antennas are manufactured in two different types: "indoor" antennas, to be located on top of or next to the television set, and "outdoor" antennas, mounted on a mast on top of the owner's house. They can also be mounted in a loft or attic, where the dry conditions and increased elevation are advantageous for reception and antenna longevity. Outdoor antennas are more expensive and difficult to install, but are necessary for adequate reception in fringe areas far from television stations. The most common types of indoor antennas are the dipole ("rabbit ears") and loop antennas, and for outdoor antennas the Yagi, log periodic, and for UHF channels the multi-bay reflective array antenna.
Description
The purpose of the antenna is to intercept radio waves from the desired television stations and convert them to tiny radio frequency alternating currents which are applied to the television's tuner, which extracts the television signal. The antenna is connected to the television with a specialized cable designed to carry radio current, called a transmission line. Earlier antennas used a flat cable called 300 Ω twin-lead. The standard today is 75 Ω coaxial cable, which is less susceptible to interference and plugs into an F connector or Belling-Lee connector (depending on region) on the back of the TV. To convert the signal from antennas that use twin-lead line to the modern coaxial cable input, a small transformer called a balun is used in the line.
In most countries, television broadcasting is all
|
https://en.wikipedia.org/wiki/Exabyte%20Corporation
|
Exabyte Corporation was a manufacturer of magnetic tape data storage products headquartered in Boulder, Colorado, United States. Exabyte Corp. is now defunct, but the company's technology is sold by Tandberg Data under both brand names. Prior to its demise in 2006, Exabyte offered tape storage and automation solutions for servers, workstations, LANs and SANs. Exabyte is best known for introducing the Data8 (8 mm) magnetic tape format in 1987. At the time of its demise, Exabyte manufactured VXA and LTO based products. The company controlled VXA technology but did not play a large role in the LTO community.
Corporate history
The company was formed in 1985 by Juan Rodriguez, Harry Hinz, and Kelly Beavers, and a group of ex-StorageTek engineers who were interested in using consumer videotape technology for data storage. The company advanced technology for computer backups in 1987 when they introduced the Data8 magnetic tape format. The company's follow-up technologies, including Mammoth and Mammoth-2, were less successful.
Exabyte went public on the NASDAQ in 1989 under the symbol EXBT.
Acquisitions
Exabyte's history of acquisitions includes:
1992 - R-Byte, Inc., a maker of 4mm tape systems.
1993 - Tallgrass Technologies of Lenexa within Johnson County, Kansas. Tallgrass manufactured 4mm DDS drives, backup software, and had a significant distribution channel.
1993 - Everex's Mass Storage Division (MSD). Everex did its research and development in Ann Arbor, MI and manufactured its products in Fremont, CA. Everex MSD made QIC products.
October 1994 - Grundig Data Scanner GmbH, for $2.9 million and renamed Exabyte Magnetics GmbH. This subsidiary designed and manufactured helical scan tape heads.
Ecrix merger
Ecrix was a magnetic tape data storage company founded in 1996 in Boulder, Colorado. The founders, Kelly Beavers and Juan Rodriguez, were two of the three founders of Exabyte. The research and development done by Ecrix focused on making a cheaper 8 m
|
https://en.wikipedia.org/wiki/Comparison%20of%20SSH%20clients
|
An SSH client is a software program which uses the secure shell protocol to connect to a remote computer. This article compares a selection of notable clients.
General
Platform
The operating systems or virtual machines the SSH clients are designed to run on without emulation include several possibilities:
Partial indicates that while it works, the client lacks important functionality compared to versions for other OSs but may still be under development.
The list is not exhaustive, but rather reflects the most common platforms today.
Technical
Features
Authentication key algorithms
This table lists standard authentication key algorithms implemented by SSH clients. Some SSH implementations include both server and client implementations and support custom non-standard authentication algorithms not listed in this table.
See also
Comparison of SSH servers
Comparison of FTP client software
Comparison of remote desktop software
References
Cryptographic software
Internet Protocol based network software
SSH clients
Secure Shell
|
https://en.wikipedia.org/wiki/List%20of%20wavelet-related%20transforms
|
A list of wavelet-related transforms:
Continuous wavelet transform (CWT)
Discrete wavelet transform (DWT)
Multiresolution analysis (MRA)
Lifting scheme
Binomial QMF (BQMF)
Fast wavelet transform (FWT)
Complex wavelet transform
Non or undecimated wavelet transform, the downsampling is omitted
Newland transform, an orthonormal basis of wavelets is formed from appropriately constructed top-hat filters in frequency space
Wavelet packet decomposition (WPD), detail coefficients are decomposed and a variable tree can be formed
Stationary wavelet transform (SWT), no downsampling and the filters at each level are different
e-decimated discrete wavelet transform, depends on whether the even or odd coefficients are selected in the downsampling
Second generation wavelet transform (SGWT), filters and wavelets are not created in the frequency domain
Dual-tree complex wavelet transform (DTCWT), two trees are used for decomposition to produce the real and complex coefficients
WITS: Where Is The Starlet, a collection of about a hundred wavelet names ending in -let and associated multiscale, directional, geometric representations, from activelets to x-lets through bandelets, chirplets, contourlets, curvelets, noiselets, wedgelets ...
Transforms
Wavelet-related transforms
|
https://en.wikipedia.org/wiki/Second-generation%20wavelet%20transform
|
In signal processing, the second-generation wavelet transform (SGWT) is a wavelet transform where the filters (or even the represented wavelets) are not designed explicitly, but the transform consists of the application of the Lifting scheme.
Actually, the sequence of lifting steps could be converted to a regular discrete wavelet transform, but this is unnecessary because both design and application are made via the lifting scheme.
This means that they are not designed in the frequency domain, as classical (so to speak, first-generation) transforms such as the DWT and CWT usually are.
The idea of moving away from the Fourier domain was introduced independently by David Donoho and Harten in the early 1990s.
Calculating transform
The input signal is split into odd and even samples using shifting and downsampling. The detail coefficients d are then computed by applying a prediction operator P to the even samples and subtracting: d = odd − P(even). The next stage (known as the updating operator U) alters the approximation coefficients s using the detail coefficients: s = even + U(d).
The prediction operator P and the updating operator U
effectively define the wavelet used for decomposition.
For certain wavelets the lifting steps (interpolating and updating) are repeated several times before the result is produced.
The idea can be expanded (as used in the DWT) to create a filter bank with a number of levels.
The variable tree used in wavelet packet decomposition can also be used.
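As a concrete illustration, a single level of a Haar-style transform can be written as one predict/update lifting pair. This is a minimal sketch; the particular choice of P (copy the even sample) and U (add half the detail) is one standard option, not the only one.

```python
def lifting_forward(signal):
    """One level of a Haar-style lifting transform.

    Split -> predict (P copies the even sample) -> update (U adds half the
    detail). Assumes an even-length input.
    """
    even = signal[0::2]
    odd = signal[1::2]
    # Predict step: detail = odd - P(even)
    detail = [o - e for o, e in zip(odd, even)]
    # Update step: approximation = even + U(detail)
    approx = [e + d / 2 for e, d in zip(even, detail)]
    return approx, detail

def lifting_inverse(approx, detail):
    """Undo the steps above in reverse order (subtract what was added)."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [d + e for d, e in zip(detail, even)]
    out = []
    for e, o in zip(even, odd):
        out.extend([e, o])
    return out
```

Because each lifting step is trivially invertible, perfect reconstruction holds regardless of which prediction and updating operators are chosen.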
Advantages
The SGWT has a number of advantages over the classical wavelet transform in that it is quicker to compute (by a factor of 2) and it can be used to generate a multiresolution analysis that does not fit a uniform grid. Using a priori information the grid can be designed to allow the best analysis of the signal to be made.
The transform can be modified locally while preserving invertibility; it can even adapt to some extent to the transformed signal.
References
Wim Sweldens: Second-Generation Wavelets
|
https://en.wikipedia.org/wiki/Sodium%20ferulate
|
Sodium ferulate, the sodium salt of ferulic acid, is a compound used in traditional Chinese medicine thought to be useful for treatment of cardiovascular and cerebrovascular diseases and to prevent thrombosis, although there is no high-quality clinical evidence for such effects. It is found in the root of Angelica sinensis. As of 2005, it was under preliminary clinical research in China. Ferulic acid can also be extracted from the root of the Chinese herb Ligusticum chuanxiong.
Kraft Foods patented the use of sodium ferulate to mask the aftertaste of the artificial sweetener acesulfame potassium.
References
Dietary supplements
Food additives
Bitter-masking compounds
O-methylated hydroxycinnamic acids
Salts of carboxylic acids
Organic sodium salts
Vinylogous carboxylic acids
|
https://en.wikipedia.org/wiki/Stagecast%20Creator
|
Stagecast Creator is a visual programming language intended for use in teaching programming to children. It is based on the programming by demonstration
concept, where rules are created by giving examples of what actions should take place in a given situation. It can be used to construct simulations, animations and games, which run under Java on any suitable platform.
History
The software known as Creator originally started as a project by Allen Cypher and David Canfield Smith in Apple's Advanced Technology Group (ATG) known as KidSim. It was intended to allow kids to construct their own simulations, reducing the programming task to something that anyone could handle. Programming in Creator uses graphical rewrite rules augmented with non-graphical tests and actions.
In 1994, Kurt Schmucker became the project manager, and under him, the project was renamed Cocoa, and expanded to include a Netscape plug-in. It was also repositioned as "Internet Authoring for Kids", as the Internet was becoming increasingly accessible. The project was officially announced on May 13, 1996. There were three releases:
DR1 (Developer Release 1) on October 31, 1996
DR2 in June, 1997
DR3 in June, 1998
When Steve Jobs returned to Apple in 1997, he began dismantling a number of non-productive departments. One of these was the ATG. Larry Tesler, Cypher, and Smith, left to form Stagecast Software after retaining the rights to the Cocoa system.
Apple went on to reuse the Cocoa name for the entirely unrelated Cocoa application framework, which had originated as OpenStep.
Sales of Stagecast Creator ended on September 30, 2014 as part of Stagecast Software's cessation of operations and support ended on December 1, 2014.
Description
Creator is based on the idea of independent characters that have a graphical appearance and non-graphical properties. Each character has a list of rules that determine how it behaves. The rules are created by demonstrating what the character does in a specific
|
https://en.wikipedia.org/wiki/Resinous%20glaze
|
Resinous glaze is an alcohol-based solution of various types of food-grade shellac. The shellac is derived from the raw material sticklac, which is a resin scraped from the branches of trees left from when the small insect, Kerria lacca (also known as Laccifer lacca), creates a hard, waterproof cocoon. When used in food and confections, it is also known as confectioner's glaze, pure food glaze, natural glaze, or confectioner's resin. When used on medicines, it is sometimes called pharmaceutical glaze.
Pharmaceutical glaze may contain 20–51% shellac in solution in ethyl alcohol (grain alcohol) that has not been denatured (denatured alcohol is poisonous), waxes, and titanium dioxide as an opacifying agent. Confectioner's glaze used for candy contains roughly 35% shellac, while the remaining components are volatile organic compounds that evaporate after the glaze is applied.
Pharmaceutical glaze is used by the drug and nutritional supplement industry as a coating material for tablets and capsules. It serves to improve the product's appearance, extend shelf life and protect it from moisture, as well as provide a solid finishing film for pre-print coatings. It also serves to mask unpleasant odors and aid in the swallowing of the tablet.
The shellac coating is insoluble in stomach acid and may make the tablet difficult for the body to break down or assimilate. For this reason, it can also be used as an ingredient in time-released, sustained or delayed-action pills. The product is listed on the U.S. Food and Drug Administration's (FDA) inactive ingredient list.
Shellac is labeled as GRAS (generally recognized as safe) by the US FDA and is used as glaze for several types of foods, including some fruit, coffee beans, chewing gum, and candy. Examples of candies containing shellac include candy corn, Hershey's Whoppers and Milk Duds, Nestlé's Raisinets and Goobers, Tootsie Roll Industries's Junior Mints and Sugar Babies, Jelly Belly's jelly beans and Mint Cremes, Russell
|
https://en.wikipedia.org/wiki/Virtual%20retinal%20display
|
A virtual retinal display (VRD), also known as a retinal scan display (RSD) or retinal projector (RP), is a display technology that draws a raster display (like a television) directly onto the retina of the eye.
History
In the past similar systems have been made by projecting a defocused image directly in front of the user's eye on a small "screen", normally in the form of large glasses. The user focused their eyes on the background, where the screen appeared to be floating. The disadvantages of these systems were the limited area covered by the "screen", the high weight of the small televisions used to project the display, and the fact that the image would appear focused only if the user was focusing at a particular "depth". Limited brightness also made them useful only in indoor settings.
Only recently have a number of developments made a true VRD system practical. In particular, the development of high-brightness LEDs has made the displays bright enough to be used during the day, and adaptive optics has allowed systems to dynamically correct for irregularities in the eye (although this is not always needed). The result is a high-resolution screenless display with excellent color gamut and brightness, far better than the best television technologies.
The VRD was invented by Kazuo Yoshinaka of Nippon Electric Co. in 1986. Later work at the University of Washington in the Human Interface Technology Lab resulted in a similar system in 1991. Most of the research into VRDs to date has been in combination with various virtual reality systems. In this role VRDs have the potential advantage of being much smaller than existing television-based systems. They share some of the same disadvantages however, requiring some sort of optics to send the image into the eye, typically similar to the sunglasses system used with previous technologies. It also can be used as part of a wearable computer system.
A Washington-based startup, MicroVision, Inc., has sought to commerci
|
https://en.wikipedia.org/wiki/Parallel%20programming%20model
|
In computing, a parallel programming model is an abstraction of parallel computer architecture, with which it is convenient to express algorithms and their composition in programs. The value of a programming model can be judged on its generality: how well a range of different problems can be expressed for a variety of different architectures, and its performance: how efficiently the compiled programs can execute. The implementation of a parallel programming model can take the form of a library invoked from a sequential language, as an extension to an existing language, or as an entirely new language.
Consensus around a particular programming model is important because it leads to different parallel computers being built with support for the model, thereby facilitating portability of software. In this sense, programming models are referred to as bridging between hardware and software.
Classification of parallel programming models
Classifications of parallel programming models can be divided broadly into two areas: process interaction and problem decomposition.
Process interaction
Process interaction relates to the mechanisms by which parallel processes are able to communicate with each other. The most common forms of interaction are shared memory and message passing, but interaction can also be implicit (invisible to the programmer).
Shared memory
Shared memory is an efficient means of passing data between processes. In a shared-memory model, parallel processes share a global address space that they read and write to asynchronously. Asynchronous concurrent access can lead to race conditions, and mechanisms such as locks, semaphores and monitors can be used to avoid these. Conventional multi-core processors directly support shared memory, which many parallel programming languages and libraries, such as Cilk, OpenMP and Threading Building Blocks, are designed to exploit.
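A minimal sketch of the shared-memory model using Python's standard threading module: several threads read and write one global counter, and a lock serializes the increments to avoid the race condition described above.

```python
import threading

counter = 0                      # shared state in a single address space
lock = threading.Lock()          # mutual-exclusion primitive

def worker(iterations):
    global counter
    for _ in range(iterations):
        with lock:               # without this, some increments can be lost
            counter += 1

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```

Removing the `with lock:` line reintroduces the asynchronous-access hazard: the read-modify-write of `counter += 1` is not atomic, so concurrent updates can interleave.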
Message passing
In a message-passing model, parallel processes exchange data through passing
|
https://en.wikipedia.org/wiki/Mole%20%28architecture%29
|
A mole is a massive structure, usually of stone, used as a pier, breakwater, or a causeway separating two bodies of water. A mole may have a wooden structure built on top of it that resembles a wooden pier. The defining feature of a mole, however, is that water cannot freely flow underneath it, unlike a true pier. The oldest known mole is at Wadi al-Jarf, an ancient Egyptian harbor complex on the Red Sea, constructed ca. 2500 BCE.
The word comes from Middle French mole, ultimately from Latin mōlēs, meaning a large mass, especially of rock; it has the same root as molecule and mole, the chemical unit of measurement.
San Francisco Bay Area
In the San Francisco Bay Area in California, there were several moles, combined causeways and wooden piers or trestles extending from the eastern shore and utilized by various railroads, such as the Key System, Southern Pacific Railroad (two), and Western Pacific Railroad: the Alameda Mole, the Oakland Mole, and the Western Pacific Mole. By extending the tracks the railroads could get beyond the shallow mud flats and reach the deeper waters of the Bay that could be navigated by the Bay Ferries. A train fell off the Alameda Mole through an open drawbridge in 1890 killing several people. None of the four Bay Area moles survive today, although the causeway portions of each were incorporated into the filling in of large tracts of marshland for harbor and industrial development.
A large mole was completed in 1947 at the San Francisco Naval Shipyard in the Bayview-Hunters Point neighborhood of San Francisco to accommodate the large Hunters Point gantry crane. The mole required a large volume of fill.
World War II
Dunkirk evacuation
The two concrete moles protecting the outer harbour at Dunkirk played a significant part in the evacuation of British and French troops during World War II in May/June 1940. The harbour had been made unusable by German bombing and it was clear that troops were not going to be taken directly off the beaches fast e
|
https://en.wikipedia.org/wiki/P%E2%80%B2%E2%80%B2
|
P′′ (P double prime) is a primitive computer programming language created by Corrado Böhm in 1964 to describe a family of Turing machines.
Definition
(hereinafter written P′′) is formally defined as a set of words on the four-instruction alphabet {R, λ, (, )}, as follows:
Syntax
R and λ are words in P′′.
If q1 and q2 are words in P′′, then q1q2 is a word in P′′.
If q is a word in P′′, then (q) is a word in P′′.
Only words derivable from the previous three rules are words in P′′.
Semantics
{a0, a1, ..., an} is the tape-alphabet of a Turing machine with left-infinite tape, a0 being the blank symbol.
All instructions in P′′ are permutations of the set of all possible tape configurations; that is, all possible configurations of both the contents of the tape and the position of the tape-head.
The condition "the current symbol is not a0" is a predicate used to help define the language; it is not an instruction and is not used in programs.
R means move the tape-head rightward one cell (if possible).
λ means replace the current symbol ai with its cyclic successor ai+1 (with the successor of an taken to be a0), and then move the tape-head leftward one cell.
q1q2 means the function composition q2 ∘ q1. In other words, the instruction q1 is performed before q2.
(q) means iterate q in a while loop, with the condition that the current symbol is not a0.
Relation to other programming languages
P′′ was the first "GOTO-less" imperative structured programming language to be proven Turing-complete.
The Brainfuck language (apart from its I/O commands) is a minor informal variation of P′′. Böhm gives explicit P′′ programs for each of a set of basic functions sufficient to compute any computable function, using only λ, R and four further words formed from their composites and iterates. These are the equivalents of the six respective Brainfuck commands +, -, <, >, [, ]. Note that since the alphabet is cyclic, incrementing the current symbol n times wraps around, so the net result is to "decrement" the symbol in the current cell by one.
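The semantics above are small enough to interpret directly. The sketch below is a toy interpreter under two simplifying assumptions: the tape is modelled as unbounded in both directions (a sparse dictionary), and the alphabet {a0, ..., an} is represented by the integers 0..n.

```python
def run_pp(program, n=1, steps=10_000):
    """Interpret a P'' word over the alphabet {a0..an} (integers 0..n).

    R   : move the tape-head right
    λ   : replace the current symbol with its cyclic successor, move left
    (q) : repeat q while the current symbol is not a0 (i.e. not 0)
    """
    tape = {}          # sparse tape; unset cells hold the blank symbol a0 = 0
    pos = 0

    def execute(code):
        nonlocal pos, steps
        i = 0
        while i < len(code):
            steps -= 1
            if steps < 0:
                raise RuntimeError("step budget exhausted")
            c = code[i]
            if c == 'R':
                pos += 1
            elif c == 'λ':
                tape[pos] = (tape.get(pos, 0) + 1) % (n + 1)
                pos -= 1
            elif c == '(':
                # find the matching close parenthesis
                depth, j = 1, i + 1
                while depth:
                    depth += {'(': 1, ')': -1}.get(code[j], 0)
                    j += 1
                body = code[i + 1:j - 1]
                while tape.get(pos, 0) != 0:
                    execute(body)
                i = j - 1
            i += 1

    execute(program)
    return tape, pos
```

For example, over the two-symbol alphabet (n = 1), the word λR sets the current cell to a1 and leaves the head in place, which is the analogue of Brainfuck's `+`.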
Example program
Böhm gives the following program to compute the predecessor (x-1) of an integ
|
https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition
|
Originally known as optimal subband tree structuring (SB-TS), wavelet packet decomposition (WPD)
(sometimes known as just wavelet packets or subband tree) is a wavelet transform where the discrete-time (sampled) signal is passed through more filters than the discrete wavelet transform (DWT).
Introduction
In the DWT, each level is calculated by passing only the previous wavelet approximation coefficients (cAj) through discrete-time low- and high-pass quadrature mirror filters. However, in the WPD, both the detail (cDj (in the 1-D case), cHj, cVj, cDj (in the 2-D case)) and approximation coefficients are decomposed to create the full binary tree.
For n levels of decomposition the WPD produces 2^n different sets of coefficients (or nodes) as opposed to (n + 1) sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy.
From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produces a more desirable representation for a particular signal. There are several algorithms for subband tree structuring that find a set of optimal bases that provide the most desirable representation of the data relative to a particular cost function (entropy, energy compaction, etc.).
There were relevant studies in signal processing and communications fields to address the selection of subband trees (orthogonal basis) of various kinds, e.g. regular, dyadic, irregular, with respect to performance metrics of interest including energy compaction (entropy), subband correlations and others.
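The full binary tree can be sketched with a pair of simple averaging/differencing (Haar-style) filters. The filter choice and the normalisation below are assumptions for illustration only; real implementations use properly normalised quadrature mirror filters.

```python
def analyze(signal):
    """One analysis step: low-pass and high-pass, each downsampled by 2."""
    low = [(signal[i] + signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    high = [(signal[i] - signal[i + 1]) / 2 for i in range(0, len(signal), 2)]
    return low, high

def wpd(signal, levels):
    """Full wavelet-packet tree: decompose BOTH branches at every level,
    unlike the DWT, which recurses only on the approximation branch."""
    nodes = [signal]
    for _ in range(levels):
        next_nodes = []
        for node in nodes:
            low, high = analyze(node)
            next_nodes.extend([low, high])
        nodes = next_nodes
    return nodes  # 2**levels coefficient sets

leaves = wpd([1, 2, 3, 4, 5, 6, 7, 8], levels=2)
print(len(leaves))                    # 4 nodes = 2**2
print(sum(len(n) for n in leaves))    # 8 — same total count, no redundancy
```

The two printed numbers illustrate the counting argument in the text: the number of nodes grows as 2^n, but downsampling keeps the total number of coefficients equal to the input length.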
Discrete wavelet transform theory (continuous in the time variable) offers an approximation to transform discrete (sampled) signals. In contrast, the discrete-time subband transform theory enables a perfect representation of already sa
|
https://en.wikipedia.org/wiki/Thermal%20Hall%20effect
|
In solid-state physics, the thermal Hall effect, also known as the Righi–Leduc effect, named after independent co-discoverers Augusto Righi and Sylvestre Anatole Leduc, is the thermal analog of the Hall effect. Given a thermal gradient across a solid, this effect describes the appearance of an orthogonal temperature gradient when a magnetic field is applied.
For conductors, a significant portion of the thermal current is carried by the electrons. In particular, the Righi–Leduc effect describes the heat flow resulting from a perpendicular temperature gradient and vice versa. The Maggi–Righi–Leduc effect describes changes in thermal conductivity when placing a conductor in a magnetic field.
A thermal Hall effect has also been measured in paramagnetic insulators, called the "phonon Hall effect". In this case, there are no charged currents in the solid, so the magnetic field cannot exert a Lorentz force. An analogous thermal Hall effect for neutral particles exists in polyatomic gases, known as the Senftleben–Beenakker effect.
Measurements of the thermal Hall conductivity are used to distinguish between the electronic and lattice contributions to thermal conductivity. These measurements are especially useful when studying superconductors.
Description
Given a conductor or semiconductor with a temperature difference in the x-direction and a magnetic field B perpendicular to it in the z-direction, a temperature difference can occur in the transverse y-direction.
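In the usual linear-response convention the transverse gradient can be written as follows (a sketch; the symbol A_RL for the Righi–Leduc coefficient is an assumed notation):

```latex
\frac{\partial T}{\partial y} = A_{RL}\, B_z\, \frac{\partial T}{\partial x}
```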
The Righi–Leduc effect is a thermal analogue of the Hall effect. With the Hall effect, an externally applied electrical voltage causes an electrical current to flow. The mobile charge carriers (usually electrons) are transversely deflected by the magnetic field due to the Lorentz force. In the Righi–Leduc effect, the temperature difference causes the mobile charge carriers to flow from the warmer end to the cooler end. Here, too, the Lorentz force causes a transverse deflection. Since the electrons tran
|
https://en.wikipedia.org/wiki/Titanium%20oxide
|
Titanium oxide may refer to:
Titanium dioxide (titanium(IV) oxide), TiO2
Titanium(II) oxide (titanium monoxide), TiO, a non-stoichiometric oxide
Titanium(III) oxide (dititanium trioxide), Ti2O3
Ti3O
Ti2O
δ-TiOx (x= 0.68–0.75)
TinO2n−1 where n ranges from 3–9 inclusive, e.g. Ti3O5, Ti4O7, etc.
Reduced titanium oxides
A common reduced titanium oxide is TiO, also known as titanium monoxide. It can be prepared from titanium dioxide and titanium metal at 1500 °C.
Ti3O5, Ti4O7, and Ti5O9 are non-stoichiometric oxides. These compounds are typically formed at high temperatures under oxygen-deficient (reducing) conditions. As a result, they exhibit unique structural and electronic properties, and have been studied for their potential use in various applications, including in gas sensors, lithium-ion batteries, and photocatalysis.
References
Dielectrics
Electronic engineering
High-κ dielectrics
|
https://en.wikipedia.org/wiki/Peak%20meter
|
A peak meter is a type of measuring instrument that visually indicates the instantaneous level of an audio signal that is passing through it (a sound level meter). In sound reproduction, the meter, whether peak or not, is usually meant to correspond to the perceived loudness of a particular signal. The term peak is used to denote the meter's ability, regardless of the type of visual display, to indicate the highest output level at any instant.
A peak-reading electrical instrument or meter is one which measures the peak value of a waveform, rather than its mean value or RMS value.
As an example, when making audio recordings it is desirable to use a recording level that is just sufficient to reach the maximum capability of the recorder at the loudest sounds, regardless of the average sound level. A peak-reading meter is typically used to set the recording level.
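The difference between a peak reading and an RMS reading can be sketched numerically (pure Python; the helper names are illustrative):

```python
import math

def peak_level(samples):
    """Highest instantaneous magnitude — what a peak meter tracks."""
    return max(abs(s) for s in samples)

def rms_level(samples):
    """Root-mean-square level — closer to perceived average loudness."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# A full-scale square wave and a full-scale sine wave have the same peak,
# but different RMS values — so only a peak reading reveals how close each
# comes to the recorder's maximum capability.
square = [1.0, -1.0] * 100
sine = [math.sin(2 * math.pi * k / 100) for k in range(200)]
print(peak_level(square), rms_level(square))  # 1.0 1.0
print(peak_level(sine), round(rms_level(sine), 3))  # 1.0 0.707
```

Setting the recording level from the RMS value alone would let the square wave's peaks clip; the peak reading catches them.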
Implementation
In modern audio equipment, peak meters are usually made up of a series of LEDs (small lights) that are placed in a vertical or horizontal bar and lit up sequentially as the signal increases. They typically have ranges of green, yellow, and red, to indicate when a signal is starting to overload.
A meter can be implemented with a classic moving needle device such as those on older analog equipment (similar in appearance in some ways to a pressure gauge on a bicycle pump), or by other means. Older equipment used actual moving parts instead of lights to indicate the audio level. Because of the mass of the moving parts and mechanics, the response time of these older meters could have been anywhere from a few milliseconds to a second or more. Thus, the meter might not ever accurately reflect the signal at every instant of time, but the constantly changing level, combined with the slower response time, led to more of an average indication.
By comparison, a peak meter is designed to respond so quickly that the meter display reacts in exact proportion to the voltage of the audio signal. This
|
https://en.wikipedia.org/wiki/Authenticated%20encryption
|
Authenticated Encryption (AE) is an encryption scheme which simultaneously assures the data confidentiality (also known as privacy: the encrypted message is impossible to understand without the knowledge of a secret key) and authenticity (in other words, it is unforgeable: the encrypted message includes an authentication tag that the sender can calculate only while possessing the secret key). Examples of encryption modes that provide AE are GCM and CCM.
Many (but not all) AE schemes allow the message to contain "associated data" (AD) which is not made confidential, but its integrity is protected (i.e., it is readable, but tampering with it will be detected). A typical example is the header of a network packet that contains its destination address. To properly route the packet, all intermediate nodes in the message path need to know the destination, but for security reasons they cannot possess the secret key. Schemes that allow associated data provide authenticated encryption with associated data, or AEAD.
Programming interface
A typical programming interface for an AE implementation provides the following functions:
Encryption
Input: plaintext, key, and optionally a header (also known as additional authenticated data, AAD or associated data, AD) in plaintext that will not be encrypted, but will be covered by authenticity protection.
Output: ciphertext and authentication tag (message authentication code or MAC).
Decryption
Input: ciphertext, key, authentication tag, and optionally a header (if used during the encryption).
Output: plaintext, or an error if the authentication tag does not match the supplied ciphertext or header.
The header part is intended to provide authenticity and integrity protection for networking or storage metadata for which confidentiality is unnecessary, but authenticity is desired.
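The interface above can be sketched with an encrypt-then-MAC construction. The sketch below is a toy: the "cipher" is a hash-based keystream and the same key is reused for encryption and authentication, neither of which a real design would do. It only illustrates how ciphertext, tag, and associated data fit together; production code should use a vetted AEAD mode such as GCM.

```python
import hashlib
import hmac

def _keystream(key, nonce, length):
    """Toy keystream: SHA-256 in counter mode. Illustration only."""
    out = b""
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + nonce + counter.to_bytes(8, "big")).digest()
        counter += 1
    return out[:length]

def seal(key, nonce, plaintext, associated_data=b""):
    """Encrypt, then compute a tag over both the AD and the ciphertext."""
    ks = _keystream(key, nonce, len(plaintext))
    ct = bytes(p ^ k for p, k in zip(plaintext, ks))
    tag = hmac.new(key, associated_data + ct, hashlib.sha256).digest()
    return ct, tag

def open_(key, nonce, ct, tag, associated_data=b""):
    """Verify the tag first; only decrypt if it matches."""
    expected = hmac.new(key, associated_data + ct, hashlib.sha256).digest()
    if not hmac.compare_digest(tag, expected):
        raise ValueError("authentication failed")
    ks = _keystream(key, nonce, len(ct))
    return bytes(c ^ k for c, k in zip(ct, ks))
```

Note that the associated data is covered by the tag but never encrypted: tampering with either the header or the ciphertext makes `open_` fail, matching the decryption interface described above.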
History
The need for authenticated encryption emerged from the observation that securely combining separate confidentiality and authentication block ciph
|
https://en.wikipedia.org/wiki/Newton-second
|
The newton-second (also newton second; symbol: N⋅s or N s) is the unit of impulse in the International System of Units (SI). It is dimensionally equivalent to the momentum unit kilogram-metre per second (kg⋅m/s). One newton-second corresponds to a one-newton force applied for one second.
It can be used to identify the resultant velocity of a mass if a force accelerates the mass for a specific time interval.
Definition
Momentum is given by the formula:
p = m v
where:
p is the momentum in newton-seconds (N⋅s) or "kilogram-metres per second" (kg⋅m/s)
m is the mass in kilograms (kg)
v is the velocity in metres per second (m/s)
Examples
This table gives the magnitudes of some momenta for various masses and speeds.
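The definition can be checked with a short calculation (the values below are illustrative, not taken from the table):

```python
def momentum_newton_seconds(mass_kg, velocity_m_per_s):
    """p = m * v; 1 kg*m/s is identically 1 N*s."""
    return mass_kg * velocity_m_per_s

# A one-newton force applied for one second imparts 1 N*s of impulse,
# e.g. it accelerates a 1 kg mass from rest to 1 m/s.
print(momentum_newton_seconds(1.0, 1.0))    # 1.0 N*s
# A 0.145 kg baseball thrown at 40 m/s carries about 5.8 N*s.
print(momentum_newton_seconds(0.145, 40))
```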
See also
Power factor
Newton-metre – SI unit of torque
Orders of magnitude (momentum) – examples of momenta
References
Classical mechanics
SI derived units
Units of measurement
|
https://en.wikipedia.org/wiki/QDGC
|
QDGC - Quarter Degree Grid Cells (or QDS - Quarter Degree Squares) are a way of dividing longitude-latitude degree square cells into smaller squares, forming in effect a system of geocodes. Historically QDGC has been used in many African atlases. Several African biodiversity projects use QDGC, among which The Atlas of Southern African Birds is the most prominent. In 2009 a paper by Larsen et al. described the QDGC standard in detail.
Mechanics
The squares themselves are based on the degree squares covering the earth. QDGC represents a way of making approximately equal-area squares covering a specific area to represent specific qualities of the area covered. However, differences in area between 'squares' grow with distance from the equator, and this can violate assumptions of many statistical analyses that require truly equal-area grids. For instance, species range modelling or estimates of ecological niche could be substantially affected if data were not appropriately transformed, e.g. projected onto a plane using a suitable projection.
Around the equator there are 360 longitudinal segments, and from the north pole to the south pole there are 180 latitudinal segments. Together this gives 64800 segments or tiles covering the earth. The shape of the tiles becomes more rectangular the farther from the equator they lie. At the poles they are not square or even rectangular at all, but end up as elongated triangles.
Each degree square is designated by a full reference to the main degree square. S01E010 is a reference to a square in Tanzania. S means the square is south of the equator, and E means it is east of the zero meridian. The numbers refer to the latitudinal and longitudinal degrees.
A square with no sublevel reference is also called QDGC level 0. This is a square spanning a full degree of longitude by a full degree of latitude. The QDGC level 0 squares are themselves divided into four.
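A level-0 reference of the S01E010 form can be derived from coordinates as follows. This is a sketch under an assumed convention: the digit groups name the whole degrees of the cell corner nearest the equator and the zero meridian.

```python
import math

def qdgc_level0(lat, lon):
    """Level-0 QDGC reference for a point, e.g. (-1.5, 10.5) -> 'S01E010'.

    Assumed convention: two digits of latitude, three of longitude, taken
    from the cell corner closest to the equator / zero meridian.
    """
    ns = 'S' if lat < 0 else 'N'
    ew = 'W' if lon < 0 else 'E'
    lat_deg = int(math.floor(abs(lat)))
    lon_deg = int(math.floor(abs(lon)))
    return f"{ns}{lat_deg:02d}{ew}{lon_deg:03d}"

print(qdgc_level0(-1.5, 10.5))   # S01E010
print(qdgc_level0(40.7, -74.0))  # N40W074
```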
To get smaller squares the above squares are again divided in four - giving us a total of 16 squares within a
|
https://en.wikipedia.org/wiki/LOCKSS
|
The LOCKSS ("Lots of Copies Keep Stuff Safe") project, under the auspices of Stanford University, is a peer-to-peer network that develops and supports an open source system allowing libraries to collect, preserve and provide their readers with access to material published on the Web. Its main goal is digital preservation.
The system attempts to replicate the way libraries do this for material published on paper. It was originally designed for scholarly journals, but is now also used for a range of other materials. Examples include the SOLINET project to preserve theses and dissertations at eight universities, US government documents, and the MetaArchive Cooperative program preserving at-risk digital archival collections, including Electronic Theses and Dissertations (ETDs), newspapers, photograph collections, and audio-visual collections.
A similar project called CLOCKSS (Controlled LOCKSS) "is a tax-exempt, 501(c)(3), not-for-profit organization, governed by a Board of Directors made up of librarians and publishers." CLOCKSS runs on LOCKSS technology.
Problem
Traditionally, academic libraries have retained issues of scholarly journals, either individually or collaboratively, providing their readers access to the content received even after the publisher has ceased or the subscription has been canceled. In the digital age, libraries often subscribe to journals that are only available digitally over the Internet. Although convenient for patron access, the model for digital subscriptions does not allow the libraries to retain a copy of the journal. If the publisher ceases to publish, or the library cancels the subscription, or if the publisher's website is down for the day, the content that has been paid for is no longer available.
Methods
The LOCKSS system allows a library, with permission from the publisher, to collect, preserve and disseminate to its patrons a copy of the materials to which it has subscribed as well as open access material (perhaps published
|
https://en.wikipedia.org/wiki/Flip-disc%20display
|
The flip-disc display (or flip-dot display) is an electromechanical dot matrix display technology used for large outdoor signs, normally those that will be exposed to direct sunlight. Flip-disc technology has been used for destination signs in buses across North America, Europe and Australia, as well as for variable-message signs on highways. It has also been used extensively on public information displays. A few game shows have also used flip-disc displays, including Canadian shows like Just Like Mom, The Joke's on Us and Uh Oh!, but most notably the American game show Family Feud from 1976 to 1995 and its British version Family Fortunes from 1980 to 2002. The Polish version of Family Feud, Familiada, still uses this board, which was bought from the Swedish version of the show. In 2012, Brooklyn-based artist studio, BREAKFAST, began engineering a modernized Flip-Disc technology which was eventually able to flip the discs at over 60 times per second.
Design
The flip-disc display consists of a grid of small metal discs that are black on one side and a bright color on the other (typically white or day-glo yellow), set into a black background. With power applied, the disc flips to show the other side. Once flipped, the discs will remain in position without power.
The disc is attached to an axle which also carries a small permanent magnet. Positioned close to the magnet is a solenoid. By pulsing the solenoid coil with the appropriate electrical polarity, the permanent magnet on the axle will align itself with the magnetic field, also turning the disc. Another style uses a magnet embedded in the disc itself, with separate solenoids arranged at the ends or side to flip it.
A computerized driver system reads data, typically characters, and flips the appropriate discs to produce the desired display. Some displays use the other end of the solenoid to actuate a reed switch, which controls an LED array behind the disc, resulting in a display that is visible at night but r
|
https://en.wikipedia.org/wiki/Rkhunter
|
rkhunter (Rootkit Hunter) is a Unix-based tool that scans for rootkits, backdoors and possible local exploits. It does this by comparing SHA-1 hashes of important files with known good ones in online databases, searching for default directories (of rootkits), wrong permissions, hidden files, suspicious strings in kernel modules, and special tests for Linux and FreeBSD. rkhunter is notable due to its inclusion in popular operating systems (Fedora, Debian, etc.).
The tool has been written in Bourne shell, to allow for portability. It can run on almost all UNIX-derived systems.
Development
In 2003, developer Michael Boelen released the first version of Rootkit Hunter. After several years of development, in early 2006, he agreed to hand over development to a development team. Since that time eight people have been working to set up the project properly and work towards the much-needed maintenance release. The project has since been moved to SourceForge.
See also
chkrootkit
Lynis
OSSEC
Samhain (software)
Host-based intrusion detection system comparison
Hardening (computing)
Linux malware
MalwareMustDie
Rootkit
References
External links
Old rkhunter web page
Computer security software
Unix security-related software
Rootkit detection software
|
https://en.wikipedia.org/wiki/Specification%20and%20Description%20Language
|
Specification and Description Language (SDL) is a specification language targeted at the unambiguous specification and description of the behaviour of reactive and distributed systems.
Overview
The ITU-T has defined SDL in Recommendations Z.100 to Z.106. SDL originally focused on telecommunication systems; its current areas of application include process control and real-time applications in general. Due to its nature it can be used to represent simulation systems without ambiguity and with a graphical notation.
The Specification and Description Language provides both a graphical Graphic Representation (SDL/GR) and a textual Phrase Representation (SDL/PR), which are equivalent representations of the same underlying semantics. Models are usually shown in the graphical SDL/GR form; SDL/PR is mainly used for exchanging models between tools. A system is specified as a set of interconnected abstract machines which are extensions of finite state machines (FSM).
The language is formally complete, so it can be used for code generation for either simulation or final targets.
The Specification and Description Language covers five main aspects: structure, communication, behavior, data, and inheritance. The behavior of components is explained by partitioning the system into a series of hierarchies. Communication between the components takes place through gates connected by channels. The channels are of delayed channel type, so communication is usually asynchronous, but when the delay is set to zero (that is, no delay) the communication becomes synchronous.
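As a rough illustrative sketch (not produced by any SDL tool; the states and signals are hypothetical), an SDL-style process reduces to a finite state machine consuming signals from an input queue, with asynchronous sends:

```python
from collections import deque

class Process:
    """A minimal SDL-style process: an FSM with an input signal queue."""

    # Transition table: (state, signal) -> next state.
    TABLE = {("idle", "start"): "busy", ("busy", "stop"): "idle"}

    def __init__(self):
        self.state = "idle"
        self.queue = deque()

    def send(self, signal):
        # Delayed-channel semantics: the sender never blocks.
        self.queue.append(signal)

    def run(self):
        while self.queue:
            sig = self.queue.popleft()
            # Signals with no matching transition are consumed without
            # effect (a simplification of SDL's implicit consumption).
            self.state = self.TABLE.get((self.state, sig), self.state)
```

Setting the channel delay to zero, as described above, would correspond to delivering the signal and running the receiver in the same step instead of queueing it.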
The first version of the language was released in 1976 using graphical syntax (SDL-76). This was revised in 1980 with some rudimentary semantics (SDL-80). The semantics were refined in 1984 (SDL-84), the textual form was introduced for machine processing and data was introduced. In 1988, SDL-88 was released with a formal basis for the language: an abstract grammar as well as a concrete grammar and a f
|
https://en.wikipedia.org/wiki/Agrostology
|
Agrostology (from Greek , agrōstis, "type of grass"; and , -logia), sometimes graminology, is the scientific study of the grasses (the family Poaceae, or Gramineae). The grasslike species of the sedge family (Cyperaceae), the rush family (Juncaceae), and the bulrush or cattail family (Typhaceae) are often included with the true grasses in the category of graminoid, although strictly speaking these are not included within the study of agrostology. In contrast to the word graminoid, the words gramineous and graminaceous are normally used to mean "of, or relating to, the true grasses (Poaceae)".
Agrostology has importance in the maintenance of wild and grazed grasslands, agriculture (crop plants such as rice, maize, sugarcane, and wheat are grasses, and many types of animal fodder are grasses), urban and environmental horticulture, turfgrass management and sod production, ecology, and conservation.
Botanists that made important contributions to agrostology include:
Jean Bosser
Aimée Antoinette Camus
Mary Agnes Chase
Eduard Hackel
Charles Edward Hubbard
A. S. Hitchcock
Ernst Gottlieb von Steudel
Otto Stapf
Joseph Dalton Hooker
Norman Loftus Bor
Jan-Frits Veldkamp
William Derek Clayton
Robert B Shaw
Thomas Arthur Cope
Grasses
Agrostology
01
|
https://en.wikipedia.org/wiki/Home%20theater%20in%20a%20box
|
A home theater in a box (HTIB) is an integrated home theater package which "bundles" together a combination DVD or Blu-ray player, a multi-channel amplifier (which includes a surround sound decoder, a radio tuner, and other features), speaker wires, connection cables, a remote control, a set of five or more surround sound speakers (or, more rarely, just left and right speakers, a lower-price option known as "2.1") and a low-frequency subwoofer cabinet. Manufacturers have also come out with the "Sound Bar", an all-in-one device placed underneath the television that contains all the speakers in one unit.
Market positioning
HTIBs are marketed as an "all-in-one" way for consumers to enjoy the surround sound experience of home cinema, even if they do not want to, or do not have the electronics "know-how" to, pick out all of the components one by one and connect the cables. If a consumer were to buy all of the items individually, they would need a basic knowledge of electronics so they could, for example, ensure that the speakers were of compatible impedance and power handling for the amplifier. As well, the consumer would have to ensure that they purchased all of the different connection cables, which could include HDMI cables, optical connectors, speaker wire, and RCA connectors.
On the downside, most HTIBs lack the features and "tweakability" of home theater components which are sold separately. For example, while a standalone home theater amplifier may offer extensive equalization options, a HTIB amplifier may simply provide a few factory-set EQ presets. As well, while a standalone home theatre subwoofer may contain a range of sound-shaping circuitry, such as a crossover control, a phase inversion switch, and a parametric equalizer, a HTIB subwoofer system usually has its crossover point set at the factory, which means that the user cannot change it. In some cases, the factory preset crossover point on an HTIB subwoofer may cause it to sound too "boomy" i
|
https://en.wikipedia.org/wiki/Effluent
|
Effluent is wastewater from sewers or industrial outfalls that flows directly into surface waters, either untreated or after being treated at a facility. The term has slightly different meanings in certain contexts, and effluent may contain various pollutants depending on the source.
Definition
Effluent is defined by the United States Environmental Protection Agency (EPA) as "wastewater–treated or untreated–that flows out of a treatment plant, sewer, or industrial outfall. Generally refers to wastes discharged into surface waters". The Compact Oxford English Dictionary defines effluent as "liquid waste or sewage discharged into a river or the sea". Wastewater is not usually described as effluent while being recycled, re-used, or treated until it is released to surface water. Wastewater percolated or injected into groundwater may not be described as effluent if soil is assumed to perform treatment by filtration or ion exchange; although concealed flow through fractured bedrock, lava tubes, limestone caves, or gravel in ancient stream channels may allow relatively untreated wastewater to emerge as springs.
Description
Effluent in the artificial sense is in general considered to be water pollution, such as the outflow from a sewage treatment facility or an industrial wastewater discharge. An effluent sump pump, for instance, pumps waste from toilets installed below a main sewage line. In the context of waste water treatment plants, effluent that has been treated is sometimes called secondary effluent, or treated effluent. This cleaner effluent is then used to feed the bacteria in biofilters.
In the context of a thermal power station and other industrial facilities, the output of the cooling system may be referred to as the effluent cooling water, which is noticeably warmer than the environment and is called thermal pollution. In chemical engineering practice, effluent is the stream exiting a chemical reactor.
Effluent may carry pollutants such as fats, oils and greases;
|
https://en.wikipedia.org/wiki/Range%20state
|
Range state is a term generally used in zoogeography and conservation biology to refer to any nation that exercises jurisdiction over any part of a range which a particular species, taxon or biotope inhabits, or crosses or overflies at any time on its normal migration route. The term is often expanded to also include, particularly in international waters, any nation with vessels flying their flag that engage in exploitation (e.g. hunting, fishing, capturing) of that species. Countries in which a species occurs only as a vagrant or ‘accidental’ visitor outside of its normal range or migration route are not usually considered range states.
Because governmental conservation policy is often formulated on a national scale, and because in most countries, both governmental and private conservation organisations are also organised at the national level, the range state concept is often used by international conservation organizations in formulating their conservation and campaigning policy.
An example of one such organization is the Convention on the Conservation of Migratory Species of Wild Animals (CMS, or the “Bonn Convention”). It is a multilateral treaty focusing on the conservation of critically endangered and threatened migratory species, their habitats and their migration routes. Because such habitats and/or migration routes may span national boundaries, conservation efforts are less likely to succeed without the cooperation, participation, and coordination of each of the range states.
External links
Bonn Convention (CMS) — Text of Convention Agreement
Bonn Convention (CMS): List of Range States for Critically Endangered Migratory Species
References
Conservation biology
Biogeography
Biology terminology
Endangered species
|
https://en.wikipedia.org/wiki/Mock%20object
|
In object-oriented programming, mock objects are simulated objects that mimic the behaviour of real objects in controlled ways, most often as part of a software testing initiative. A programmer typically creates a mock object to test the behaviour of some other object, in much the same way that a car designer uses a crash test dummy to simulate the dynamic behaviour of a human in vehicle impacts. The technique is also applicable in generic programming.
Motivation
In a unit test, mock objects can simulate the behavior of complex, real objects and are therefore useful when a real object is impractical or impossible to incorporate into a unit test. If an object has any of the following characteristics, it may be useful to use a mock object in its place:
the object supplies non-deterministic results (e.g. the current time or the current temperature);
it has states that are difficult to create or reproduce (e.g. a network error);
it is slow (e.g. a complete database, which would have to be prepared before the test);
it does not yet exist or may change behavior;
it would have to include information and methods exclusively for testing purposes (and not for its actual task).
For example, an alarm clock program which causes a bell to ring at a certain time might get the current time from a time service. To test this, the test must wait until the alarm time to know whether it has rung the bell correctly. If a mock time service is used in place of the real time service, it can be programmed to provide the bell-ringing time (or any other time) regardless of the real time, so that the alarm clock program can be tested in isolation.
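The alarm-clock scenario can be sketched with Python's unittest.mock; the AlarmClock class and the time-service interface are hypothetical:

```python
from unittest.mock import Mock

class AlarmClock:
    def __init__(self, time_service, ring_time):
        self.time_service = time_service
        self.ring_time = ring_time

    def should_ring(self):
        # Queries the (possibly mocked) time service rather than a real clock.
        return self.time_service.current_time() == self.ring_time

# The mock is programmed to report the bell-ringing time regardless of
# the real time, so the alarm clock can be tested in isolation.
mock_time = Mock()
mock_time.current_time.return_value = "07:00"
clock = AlarmClock(mock_time, "07:00")
assert clock.should_ring()
mock_time.current_time.assert_called_once()
```

Because the mock records how it was called, the test can also verify the interaction itself (here, that the time service was queried exactly once).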
Technical details
Mock objects have the same interface as the real objects they mimic, allowing a client object to remain unaware of whether it is using a real object or a mock object. Many available mock object frameworks allow the programmer to specify which, and in what order, methods will be invoked on a mock object and what paramete
|
https://en.wikipedia.org/wiki/Blade%20PC
|
A blade PC is a form of client or personal computer (PC). In conjunction with a client access device (usually a thin client) on a user's desk, the supporting blade PC is typically housed in a rack enclosure, usually in a datacenter or specialised environment. Together, they accomplish many of the same functions of a traditional PC, but they also take advantage of many of the architectural achievements pioneered by blade servers.
Description
Like a traditional PC, a blade PC has a CPU, RAM and a hard drive. It may or may not have an integrated graphics sub-system. Some can support multiple hard drives. It is in a “blade” form that plugs into an enclosure. Enclosures offered by current blade PC vendors are similar but not identical. Most have moved the power supplies, cooling fans and some management capabilities from the blade PC to the enclosure. Up to 14 enclosures can be placed in one industry standard 42U rack.
Blade PCs support one or more common operating systems (for instance Microsoft has created a “blade PC” version of their XP and Vista Business operating systems and many Linux distributions are installable). Importantly, these solutions are intended to support one user per discrete device. This is a major difference from server-based computing, which supports multiple users simultaneously using an application hosted on one discrete server (be it a discrete piece of hardware or a discrete virtual machine on a server).
Access to the device is usually achieved via a remote desktop protocol such as Virtual Network Computing (VNC), which allows users to log on to the blade PC via a client device (usually a thin client). Once logged on, the end-user experience is largely the same as if they were logged on to a local PC. It is less effective at delivering multimedia, in part because the audio and video are not synchronized, so in circumstances where there is increasing latency, there is a proportional decrease in the quality of the end-user experience. All protocols are negatively impa
|
https://en.wikipedia.org/wiki/Yie%20Ar%20Kung-Fu
|
Yie Ar Kung-Fu is an arcade fighting game developed and published by Konami. It first had a limited Japanese release in October 1984, before a nationwide release in January 1985 and an international release in March. Along with Karate Champ (1984), which influenced Yie Ar Kung-Fu, it is one of the games that established the basis for modern fighting games.
The game was inspired by Bruce Lee's Hong Kong martial arts films, with the main player character Oolong modelled after Lee (like Bruceploitation films). In contrast to the grounded realism of Karate Champ, Yie Ar Kung-Fu moved the genre towards more fantastical, fast-paced action, with various different characters having a variety of special moves and high jumps, establishing the template for subsequent fighting games. It also introduced the health meter system to the genre, in contrast to the point-scoring system of Karate Champ.
The game was a commercial success in arcades, becoming the highest-grossing arcade conversion kit of 1985 in the United States while also being successful in Japan and Europe. It was ported to various home systems, including home computer conversions which were critically and commercially successful, becoming the best-selling home video game of 1986 in the United Kingdom.
Gameplay
Oolong (or Lee in the MSX and Famicom versions) must fight all the martial arts masters given by the game (eleven in the arcade version; five to thirteen in the home ports).
The player faces a variety of opponents, each with a unique appearance and fighting style. The player can perform up to 16 different moves, using a combination of buttons and joystick movements while standing, crouching or jumping. Moves are thrown at high, middle, and low levels. Regardless of the move that defeated them, male characters (save Feedle) always fall unconscious lying on their backs with their legs apart (Oolong flails his legs), and female characters always fall lying on their sides. Feedle disappears. When a player gains a
|
https://en.wikipedia.org/wiki/L%28R%29
|
In set theory, L(R) (pronounced L of R) is the smallest transitive inner model of ZF containing all the ordinals and all the reals.
Construction
It can be constructed in a manner analogous to the construction of L (that is, Gödel's constructible universe), by adding in all the reals at the start, and then iterating the definable powerset operation through all the ordinals.
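In one standard presentation, the construction iterates the definable power set through the ordinals, starting from a level that already contains all the reals:

```latex
\begin{align*}
L_0(\mathbb{R})          &= V_{\omega+1} \\
L_{\alpha+1}(\mathbb{R}) &= \operatorname{Def}\bigl(L_\alpha(\mathbb{R})\bigr) \\
L_\lambda(\mathbb{R})    &= \bigcup_{\alpha<\lambda} L_\alpha(\mathbb{R})
  \quad\text{for limit ordinals } \lambda \\
L(\mathbb{R})            &= \bigcup_{\alpha\in\mathrm{Ord}} L_\alpha(\mathbb{R})
\end{align*}
```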
Assumptions
In general, the study of L(R) assumes a wide array of large cardinal axioms, since without these axioms one cannot show even that L(R) is distinct from L. But given that sufficient large cardinals exist, L(R) does not satisfy the axiom of choice, but rather the axiom of determinacy. However, L(R) will still satisfy the axiom of dependent choice, given only that the von Neumann universe, V, also satisfies that axiom.
Results
Given the assumptions above, some additional results of the theory are:
Every projective set of reals – and therefore every analytic set and every Borel set of reals – is an element of L(R).
Every set of reals in L(R) is Lebesgue measurable (in fact, universally measurable) and has the property of Baire and the perfect set property.
L(R) does not satisfy the axiom of uniformization or the axiom of real determinacy.
R#, the sharp of the set of all reals, has the smallest Wadge degree of any set of reals not contained in L(R).
While not every relation on the reals in L(R) has a uniformization in L(R), every such relation does have a uniformization in L(R#).
Given any (set-size) generic extension V[G] of V, L(R) is an elementary submodel of L(R) as calculated in V[G]. Thus the theory of L(R) cannot be changed by forcing.
L(R) satisfies AD+.
References
Inner model theory
Determinacy
Descriptive set theory
|
https://en.wikipedia.org/wiki/Jet%20Set%20Willy%20II
|
Jet Set Willy II: The Final Frontier is a platform game released in 1985 by Software Projects as the Amstrad CPC port of Jet Set Willy. It was then rebranded as the sequel and ported to other home computers. Jet Set Willy II was developed by Derrick P. Rowson and Steve Wetherill rather than Jet Set Willy programmer Matthew Smith, and is an expansion of the original game rather than an entirely new one.
Gameplay
The map is primarily an expanded version of the original mansion from Jet Set Willy, with only a few new elements over its predecessor, several of which are based on rumoured events in JSW that were in fact never programmed (such as being able to launch the titular ship in the screen called "The Yacht" and explore an island). In the ZX Spectrum, Amstrad CPC and MSX versions, Willy is blasted from the Rocket Room into space, and for these 33 rooms he dons a spacesuit.
Due to the proliferation of hacking and cheating in the original game, Jet Set Willy II pays homage to this and includes a screen called Cheat that can only be accessed by cheating.
Control of Willy also differs from the original:
The player can jump in the opposite direction immediately upon landing, without releasing the jump button.
Willy now takes a step forward before jumping from a standstill.
Some previous "safe spots" in Jet Set Willy are now hazardous to the player in Jet Set Willy II - the tall candle in "The Chapel" for example.
The ending of the game is also different.
Development
Jet Set Willy II was originally created as the Amstrad conversion of Jet Set Willy by Derrick P. Rowson and Steve Wetherill, but Rowson's use of an algorithm to compress much of the screen data meant there was enough memory available to create new rooms.
It came with a form of enhanced copy protection called Padlock II. To discourage felt tip copying, it had seven pages, rather than the single page used in Jet Set Willy.
Software Projects later had Rowson remove all of the enhancements from the Amstrad ve
|
https://en.wikipedia.org/wiki/WCCT-TV
|
WCCT-TV (channel 20), branded on-air as CW 20, is a television station licensed to Waterbury, Connecticut, United States, serving the Hartford–New Haven market as an affiliate of The CW. It is owned by Tegna Inc. alongside Hartford-licensed Fox affiliate WTIC-TV (channel 61). Both stations share studios on Broad Street in downtown Hartford, while WCCT-TV's transmitter is located on Rattlesnake Mountain in Farmington, Connecticut.
Established in 1953 as WATR-TV, a television station for the Waterbury area, the station changed to a regional independent in 1982, becoming Connecticut's UPN affiliate in 1995 and switching to The WB in 2001. It has been managed by WTIC-TV since 1998.
History
WATR (1953–1966)
The station commenced operations on September 10, 1953, as WATR-TV on channel 53, the second UHF station in Connecticut. It was owned by the Thomas and Gilmore families, along with WATR radio (1320 AM). The station's studios and transmitter were located on West Peak in Meriden. At the time, the station's signal only covered Waterbury, New Haven and the southern portion of the state.
WATR-TV was originally a dual secondary affiliate of both DuMont and ABC, sharing them with New Haven-based WNHC-TV (channel 8, now WTNH). DuMont ceased operations in 1956, and shortly afterward, WNHC-TV became an exclusive ABC affiliate, as did WATR-TV. Both stations carried ABC programming through Connecticut.
In 1962, the station relocated to UHF channel 20 and moved to a new studio and transmitter site in Prospect, south of Waterbury. Channel 53 was later occupied by WEDN, Connecticut Public Television's outlet in Norwich.
NBC affiliate (1966–1982)
In August 1966, WATR-TV joined NBC. At the time, the network's primary affiliate in Connecticut, WHNB-TV (channel 30) in New Britain, was hampered by a weak signal in New Haven and the southwestern portions of the state. In the 1970s, the station offered limited local news and instead aired older syndicated programs and religious shows
|
https://en.wikipedia.org/wiki/No-three-in-line%20problem
|
The no-three-in-line problem in discrete geometry asks how many points can be placed in the grid so that no three points lie on the same line. The problem concerns lines of all slopes, not only those aligned with the grid. It was introduced by Henry Dudeney in 1900. Brass, Moser, and Pach call it "one of the oldest and most extensively studied geometric questions concerning lattice points".
At most 2n points can be placed, because 2n + 1 points in an n × n grid would include a row of three or more points, by the pigeonhole principle. Although the problem can be solved with 2n points for every n up to 46, it is conjectured that fewer than 2n points can be placed in grids of large size. Known methods can place linearly many points in grids of arbitrary size, but the best of these methods place slightly fewer than 3n/2 points.
Several related problems of finding points with no three in line, among other sets of points than grids, have also been studied. Although originating in recreational mathematics, the no-three-in-line problem has applications in graph drawing and to the Heilbronn triangle problem.
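For very small grids, the maximum can be found by brute force. The sketch below tests collinearity with a cross-product check and searches downward from the pigeonhole bound of 2n points:

```python
from itertools import combinations

def collinear(p, q, r):
    # Zero cross product: the three grid points lie on one line
    # (of any slope, not only rows, columns and diagonals).
    return (q[0] - p[0]) * (r[1] - p[1]) == (q[1] - p[1]) * (r[0] - p[0])

def max_no_three_in_line(n):
    pts = [(x, y) for x in range(n) for y in range(n)]
    for k in range(2 * n, 0, -1):  # 2n is the pigeonhole upper bound
        for subset in combinations(pts, k):
            if not any(collinear(*t) for t in combinations(subset, 3)):
                return k
    return 0
```

This exhaustive search is feasible only for very small n; the published results for grids up to n = 46 rely on far more efficient search techniques.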
Small instances
The problem was first posed by Henry Dudeney in 1900, as a puzzle in recreational mathematics, phrased in terms of placing the 16 pawns of a chessboard onto the board so that no three are in a line. This is exactly the no-three-in-line problem for the case n = 8. In a later version of the puzzle, Dudeney modified the problem, making its solution unique, by asking for a solution in which two of the pawns are on squares d4 and e5, attacking each other in the center of the board.
Many authors have published solutions to this problem for small values of n, and by 1998 it was known that 2n points could be placed on an n × n grid with no three in a line for all n up to 46, and for some larger values. The numbers of solutions with 2n points, and the numbers of equivalence classes of solutions under reflections and rotations, have been tabulated for small values of n.
Upper and lower bounds
The exact number of poi
|
https://en.wikipedia.org/wiki/Openwall%20Project
|
The Openwall Project is a source for various software, including Openwall GNU/*/Linux (Owl), a security-enhanced Linux distribution designed for servers. Openwall patches and security extensions have been included into many major Linux distributions.
As the name implies, Openwall GNU/*/Linux draws source code and design concepts from numerous sources. Most important to the project is its use of the Linux kernel and parts of the GNU userland; others include the BSDs, such as OpenBSD, the source of its OpenSSH suite and the inspiration behind its own Blowfish-based crypt for password hashing, which is compatible with the OpenBSD implementation.
Public domain software
The Openwall Project also maintains a list of algorithms and source code released as public domain software.
Openwall GNU/*/Linux releases
LWN.net reviewed Openwall Linux 3.0. They wrote:
PoC||GTFO
Issues of the International Journal of Proof-of-Concept or Get The Fuck Out (PoC||GTFO) are mirrored by the Openwall Project under a samizdat licence. The first issue, #00, was published in 2013; issue #02 featured the Chaos Computer Club. Issue #07 in 2015 was a homage to Dr. Dobb's Journal, and could be rendered as .pdf, .zip, .bpg, or .html.
See also
Executable space protection
Comparison of Linux distributions
Security-focused operating system
John the Ripper
References
External links
Free software projects
Operating system security
Public-domain software with source code
|
https://en.wikipedia.org/wiki/Destructor%20%28computer%20programming%29
|
In object-oriented programming, a destructor (sometimes abbreviated dtor) is a method which is invoked automatically just before the memory of the object is released. This can happen when the object's lifetime is bound to scope and execution leaves the scope, when it is embedded in another object whose lifetime ends, or when it was allocated dynamically and is released explicitly. Its main purpose is to free the resources (memory allocations, open files or sockets, database connections, resource locks, etc.) which were acquired by the object during its life, and/or to deregister from other entities which may keep references to it. Destructors are central to the idiom of Resource Acquisition Is Initialization (RAII).
With most kinds of automatic garbage collection algorithms, the releasing of memory may happen a long time after the object becomes unreachable, making destructors (called finalizers in this case) unsuitable for most purposes. In such languages, the freeing of resources is done either through a lexical construct (such as try..finally, Python's "with" or Java's "try-with-resources"), which is the equivalent to RAII, or explicitly by calling a function (equivalent to explicit deletion); in particular, many object-oriented languages use the Dispose pattern.
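The lexical-construct alternative mentioned above can be sketched in Python, where a context manager plays the role of deterministic cleanup; the ManagedResource class and log list are illustrative:

```python
log = []

class ManagedResource:
    """Dispose-pattern stand-in: cleanup runs deterministically on scope
    exit, rather than whenever a garbage collector finalizes the object."""

    def __init__(self, name):
        self.name = name

    def __enter__(self):
        log.append("acquire " + self.name)
        return self

    def __exit__(self, exc_type, exc, tb):
        # Runs even if the block raised, like try..finally.
        log.append("release " + self.name)
        return False  # do not suppress exceptions

with ManagedResource("db-connection") as r:
    log.append("use " + r.name)
```

After the with block, log holds ["acquire db-connection", "use db-connection", "release db-connection"]: the release step is guaranteed to run at scope exit, which is exactly what finalizers in garbage-collected languages cannot promise.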
Destructor syntax
C++: destructors have the same name as the class with which they are associated, but with a tilde (~) prefix.
D: destructors are declared with name ~this() (whereas constructors are declared with this()).
Object Pascal: destructors have the keyword destructor and can have user-defined names, but are mostly named Destroy.
Objective-C: the destructor method has the name dealloc.
Perl: the destructor method has the name DESTROY; in the Moose object system extension, it is named DEMOLISH.
PHP: In PHP 5+, the destructor method has the name __destruct. There were no destructors in prior versions of PHP.
Python: there are __del__ methods called destructors by the Python 2 language guide
|
https://en.wikipedia.org/wiki/Renard%20series
|
Renard series are a system of preferred numbers dividing an interval from 1 to 10 into 5, 10, 20, or 40 steps. This set of preferred numbers was proposed in 1877 by French army engineer Colonel Charles Renard. His system was adopted by the ISO in 1949 to form the ISO Recommendation R3, first published in 1953 or 1954, which evolved into the international standard ISO 3.
The factor between two consecutive numbers in a Renard series is approximately constant (before rounding), namely the 5th, 10th, 20th, or 40th root of 10 (approximately 1.58, 1.26, 1.12, and 1.06, respectively), which leads to a geometric sequence. This way, the maximum relative error is minimized if an arbitrary number is replaced by the nearest Renard number multiplied by the appropriate power of 10. One application of the Renard series of numbers is to current rating of electric fuses. Another common use is the voltage rating of capacitors (e.g. 100 V, 160 V, 250 V, 400 V, 630 V).
Base series
The most basic R5 series consists of these five rounded numbers, which are powers of the fifth root of 10, rounded to two digits. The Renard numbers are not always rounded to the closest three-digit number to the theoretical geometric sequence:
R5: 1.00 1.60 2.50 4.00 6.30
Examples
If some design constraints were assumed so that two screws in a gadget should be placed between 32 mm and 55 mm apart, the resulting length would be 40 mm, because 4 is in the R5 series of preferred numbers.
If a set of nails with lengths between roughly 15 and 300 mm should be produced, then the application of the R5 series would lead to a product repertoire of 16 mm, 25 mm, 40 mm, 63 mm, 100 mm, 160 mm, and 250 mm long nails.
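The "nearest Renard number times a power of ten" rule described above can be sketched as follows; comparing on a logarithmic scale is what makes the match minimize relative (rather than absolute) error:

```python
import math

R5 = [1.00, 1.60, 2.50, 4.00, 6.30]

def nearest_preferred(value, series=R5, decades=range(-3, 7)):
    """Snap a positive value to the nearest preferred number,
    i.e. a series entry scaled by a power of ten."""
    candidates = [r * 10 ** e for e in decades for r in series]
    # Smallest log-distance = smallest relative error.
    return min(candidates, key=lambda c: abs(math.log10(value) - math.log10(c)))
```

For the screw example above, a spacing around 47 mm snaps to 40.0 mm, consistent with the 32 mm to 55 mm design constraint.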
If traditional English wine cask sizes had been metricated, the rundlet (18 gallons, ca 68 liters), barrel (31.5 gal., ca 119 liters), tierce (42 gal., ca 159 liters), hogshead (63 gal., ca 239 liters), puncheon (84 gal., ca 318 liters), butt (126 gal., ca 477 liters) and tun (252 gal., ca 954 lite
|
https://en.wikipedia.org/wiki/C-element
|
In digital computing, the Muller C-element (C-gate, hysteresis flip-flop, coincident flip-flop, or two-hand safety circuit) is a small binary logic circuit widely used in the design of asynchronous circuits and systems. It outputs 0 when all inputs are 0, outputs 1 when all inputs are 1, and otherwise retains its previous output state. It was specified formally in 1955 by David E. Muller and first used in the ILLIAC II computer. In terms of the theory of lattices, the C-element is a semimodular distributive circuit, whose operation in time is described by a Hasse diagram. The C-element is closely related to the rendezvous and join elements, where an input is not allowed to change twice in succession. In some cases, when relations between delays are known, the C-element can be realized as a sum-of-products (SOP) circuit. Earlier techniques for implementing the C-element include the Schmitt trigger, the Eccles-Jordan flip-flop and the last moving point flip-flop.
Truth table and delay assumptions
For two input signals a and b, the C-element is defined by the equation y_next = ab + (a + b)y, where y is the previous output state; this corresponds to the following truth table:
This table can be turned into a circuit using the Karnaugh map. However, the obtained implementation is naive, since nothing is said about delay assumptions. To understand under what conditions the obtained circuit is workable, it is necessary to do additional analysis, which reveals that
delay1 is a propagation delay from node 1 via environment to node 3,
delay2 is a propagation delay from node 1 via internal feedback to node 3,
delay1 must be greater than delay2.
Thus, the naive implementation is correct only for slow environment.
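Ignoring delay assumptions, the defining behaviour (the output follows the inputs when they agree, and otherwise holds its state) can be sketched as:

```python
def c_element(a, b, prev):
    """Next output of a two-input Muller C-element.

    Equivalent to the majority function of a, b and the previous
    output: ab + (a + b)prev in Boolean notation.
    """
    if a == b:
        return a   # both inputs agree: output follows them
    return prev    # inputs differ: hold the previous state
```

Enumerating all eight input combinations reproduces the truth table of the defining equation exactly.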
The definition of the C-element can be generalized for multiple-valued logic, or even for continuous signals:
For example, the truth table for a balanced ternary C-element with two inputs is
Implementations of the C-element
Depending on the requirements to the switching speed and power consumption, the C-element can be realized as a coarse- or fine-grain
|
https://en.wikipedia.org/wiki/Asymmetric%20C-element
|
Asymmetric C-elements are extended C-elements with additional inputs that affect the operation of the element only when it is transitioning in one of the two directions. Asymmetric inputs are attached to either the minus (-) or plus (+) strips of the symbol, while the common inputs, which affect both transitions, are connected to the centre of the symbol. When transitioning from zero to one, the C-element takes into account the common and the asymmetric plus inputs: all of these inputs must be high for the up transition to take place. Similarly, when transitioning from one to zero, the C-element takes into account the common and the asymmetric minus inputs: all of these inputs must be low for the down transition to happen.
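The transition rule above can be sketched directly; the grouping of inputs into lists is illustrative (real implementations are gate- or transistor-level circuits):

```python
def asymmetric_c_element(common, plus, minus, prev):
    """One evaluation step of an asymmetric C-element.

    `common` inputs affect both transitions; `plus` inputs gate only
    the rising transition and `minus` inputs only the falling one.
    """
    if prev == 0 and all(common) and all(plus):
        return 1   # up transition: common AND plus inputs all high
    if prev == 1 and not any(common) and not any(minus):
        return 0   # down transition: common AND minus inputs all low
    return prev    # otherwise hold the current state
```

Note that a high minus input cannot block a rising transition, and a low plus input cannot block a falling one, which is exactly the asymmetry being described.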
The figure shows the gate-level and transistor-level implementations and symbol of the asymmetric C-element. In the figure the plus inputs are marked with a 'P', the minus inputs are marked with an 'm' and the common inputs are marked with a 'C'.
In addition, it is possible to extend the asymmetric input convention to inverted C-elements, where a plus (minus) on an input port means that an input is required for the inverted output to fall (rise).
References
Digital electronics
|
https://en.wikipedia.org/wiki/Business%20record
|
A business record is a document (hard copy or digital) that records an "act, condition, or event" related to business. Business records include meeting minutes, memoranda, employment contracts, and accounting source documents.
It must be retrievable at a later date so that the business dealings can be accurately reviewed as required. Since business is dependent upon confidence and trust, not only must the record be accurate and easily retrieved, but the processes surrounding its creation and retrieval must be perceived by customers and the business community to consistently deliver a full and accurate record with no gaps or additions.
Most business records have specified retention periods based on legal requirements and/or internal company policies. This is important because in many countries (including the United States), many documents may be required by law to be disclosed to government regulatory agencies or to the general public. Likewise, they may be discoverable if the business is sued. Under the business records exception in the Federal Rules of Evidence, certain types of business records, particularly those made and kept with regularity, may be considered admissible in court despite containing hearsay.
See also
Records management
Information governance
Regulation Fair Disclosure
Sarbanes-Oxley Act
References
Resources
ARMA International - Association of Records Managers and Administrators
AIIM - Association for Information and Image Management
Business documents
Information management
Records management
Information governance
|
https://en.wikipedia.org/wiki/Peak%20programme%20meter
|
A peak programme meter (PPM) is an instrument used in professional audio that indicates the level of an audio signal.
Different kinds of PPM fall into broad categories:
True peak programme meter. This shows the peak level of the waveform no matter how brief its duration.
Quasi peak programme meter (QPPM). This only shows the true level of the peak if it exceeds a certain duration, typically a few milliseconds. On peaks of shorter duration, it indicates less than the true peak level. The extent of the shortfall is determined by the 'integration time'.
Sample peak programme meter (SPPM). This is a PPM for digital audio. It shows only peak sample values, not true waveform peaks (which may fall between samples and may be higher in amplitude). It may have either a 'true' or a 'quasi' integration characteristic.
Over-sampling peak programme meter. This is a sample PPM that first oversamples the signal, typically by a factor of four, to alleviate the problems of a basic sample PPM.
In professional use, which requires consistent level measurements across an industry, audio level meters often comply with a formal standard. This ensures that all compliant meters indicate the same level for a given audio signal. The principal standard for PPMs is IEC 60268-10. It describes two different quasi-PPM designs that have roots in meters originally developed in the 1930s for the AM radio broadcasting networks of Germany (Type I) and the United Kingdom (Type II). The term Peak Programme Meter usually refers to these IEC-specified types and similar designs. Though originally designed for monitoring analogue audio signals, these PPMs are now also used with digital audio.
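The quasi-peak behaviour can be illustrated with a generic attack/release envelope detector. The time constants below are illustrative assumptions only; the IEC 60268-10 meter types specify their own integration and return characteristics:

```python
import math

def quasi_peak(samples, fs, attack_ms=5.0, release_ms=1700.0):
    """Generic quasi-peak detector: the reading rises toward the rectified
    input with a short attack time constant and decays slowly, so very
    brief peaks under-read relative to a true peak meter."""
    a_att = math.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = math.exp(-1.0 / (fs * release_ms / 1000.0))
    y = 0.0
    out = []
    for x in samples:
        x = abs(x)                       # full-wave rectification
        coeff = a_att if x > y else a_rel
        y = coeff * y + (1.0 - coeff) * x
        out.append(y)
    return out

fs = 48000
burst = [1.0] * 48 + [0.0] * 480         # a 1 ms full-scale burst
reading = max(quasi_peak(burst, fs))
print(round(reading, 3))                 # well below the true peak of 1.0
```

A true PPM would report 1.0 for the same burst; the shortfall shown here is exactly the integration-time effect described above.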
PPMs do not provide effective loudness monitoring. Newer types of meter do, and there is now a push within the broadcasting industry to move away from the traditional level meters in this article to two new types: loudness meters based on EBU Tech. 3341 and oversampling true PPMs. The former would be used to standardi
|
https://en.wikipedia.org/wiki/Hypervariable
|
Hypervariable may refer to:
Hypervariable sequence, a segment of a chromosome characterised by considerable variation in the number of tandem repeats at one or more loci
Hypervariable locus, a locus with many alleles; especially those whose variation is due to variable numbers of tandem repeats
Hypervariable region (HVR), a chromosomal segment characterized by multiple alleles within a population for a single genetic locus
Genetics
|
https://en.wikipedia.org/wiki/Fax%20server
|
A fax server is a system installed in a local area network (LAN) server that allows computer users whose computers are attached to the LAN to send and receive fax messages.
Alternatively, the term fax server is sometimes used to describe a program that enables a computer to send and receive fax messages, or a set of software running on a server computer equipped with one or more fax-capable modems (or dedicated fax boards) attached to telephone lines or, more recently, software modem emulators that use T.38 ("Fax over IP") technology to transmit the signal over an IP network. Its function is to accept documents from users, convert them into faxes, and transmit them, as well as to receive fax calls and either store the incoming documents or pass them on to users. Users may communicate with the server in several ways, through either a local network or the Internet. In a large organization with heavy fax traffic, the computer hosting the fax server may be dedicated to that function, in which case the computer itself may also be known as a fax server.
User interfaces
For outgoing faxes, several methods are available to the user:
An e-mail message (with optional attachments) can be sent to a special e-mail address; the fax server monitoring that address converts all such messages into fax format and transmits them.
The user can tell their computer to "print" a document using a "virtual printer" which, instead of producing a paper printout, sends the document to the fax server, which then transmits it.
A web interface can be used, allowing files to be uploaded, and transmitted to the fax server for faxing.
Special client software may be used.
For incoming faxes, several user interfaces may be available:
The user may be sent an e-mail message for each fax received, with the pages included as attachments, typically in either TIFF or PDF format.
Incoming faxes may be stored in a dedicated file directory, which the user can monitor.
A website may allow users to lo
|
https://en.wikipedia.org/wiki/Retention%20basin
|
A retention basin, sometimes called a retention pond, wet detention basin, or storm water management pond (SWMP), is an artificial pond with vegetation around the perimeter and a permanent pool of water in its design. It is used to manage stormwater runoff, for protection against flooding, for erosion control, and to serve as an artificial wetland and improve the water quality in adjacent bodies of water.
It is distinguished from a detention basin, sometimes called a "dry pond", which temporarily stores water after a storm, but eventually empties out at a controlled rate to a downstream water body. It also differs from an infiltration basin which is designed to direct stormwater to groundwater through permeable soils.
Wet ponds are frequently used for water quality improvement, groundwater recharge, flood protection, aesthetic improvement, or any combination of these. Sometimes they act as a replacement for the natural absorption of a forest or other natural process that was lost when an area is developed. As such, these structures are designed to blend into neighborhoods and to be viewed as an amenity.
In urban areas, impervious surfaces (roofs, roads) reduce the time spent by rainfall before entering into the stormwater drainage system. If left unchecked, this will cause widespread flooding downstream. The function of a stormwater pond is to contain this surge and release it slowly. This slow release mitigates the size and intensity of storm-induced flooding on downstream receiving waters. Stormwater ponds also collect suspended sediments, which are often found in high concentrations in stormwater due to upstream construction and sand applications to roadways.
Design features
Storm water is typically channeled to a retention basin through a system of street and/or parking lot storm drains, and a network of drain channels or underground pipes. The basins are designed to allow relatively large flows of water to enter, but discharges to receiving waters are
|
https://en.wikipedia.org/wiki/Somatic%20marker%20hypothesis
|
The somatic marker hypothesis, formulated by Antonio Damasio and associated researchers, proposes that emotional processes guide (or bias) behavior, particularly decision-making.
"Somatic markers" are feelings in the body that are associated with emotions, such as the association of rapid heartbeat with anxiety or of nausea with disgust. According to the hypothesis, somatic markers strongly influence subsequent decision-making. Within the brain, somatic markers are thought to be processed in the ventromedial prefrontal cortex (vmPFC) and the amygdala. The hypothesis has been tested in experiments using the Iowa gambling task.
Background
In economic theory, human decision-making is often modeled as being devoid of emotions, involving only logical reasoning based on cost-benefit calculations. In contrast, the somatic marker hypothesis proposes that emotions play a critical role in the ability to make fast, rational decisions in complex and uncertain situations.
Patients with frontal lobe damage, such as Phineas Gage, provided the first evidence that the frontal lobes were associated with decision-making. Frontal lobe damage, particularly to the vmPFC, results in impaired abilities to organize and plan behavior and learn from previous mistakes, without affecting intellect in terms of working memory, attention, and language comprehension and expression.
vmPFC patients also have difficulty expressing and experiencing appropriate emotions. This led Antonio Damasio to hypothesize that decision-making deficits following vmPFC damage result from the inability to use emotions to help guide future behavior based on past experiences. Consequently, vmPFC damage forces those affected to rely on slow and laborious cost-benefit analyses for every given choice situation.
Antonio Damasio
Antonio Damasio () is a Portuguese-American neuroscientist. He is currently the David Dornsife Professor of Neuroscience, Psychology and Philosophy at the University of Southern California and
|
https://en.wikipedia.org/wiki/Situation%20calculus
|
The situation calculus is a logic formalism designed for representing and reasoning about dynamical domains. It was first introduced by John McCarthy in 1963. The main version of the situation calculus presented in this article is based on that introduced by Ray Reiter in 1991. It is followed by sections about McCarthy's 1986 version and a logic programming formulation.
Overview
The situation calculus represents changing scenarios as a set of first-order logic formulae. The basic elements of the calculus are:
The actions that can be performed in the world
The fluents that describe the state of the world
The situations
A domain is formalized by a number of formulae, namely:
Action precondition axioms, one for each action
Successor state axioms, one for each fluent
Axioms describing the world in various situations
The foundational axioms of the situation calculus
A simple robot world will be modeled as a running example. In this world there is a single robot and several inanimate objects. The world is laid out according to a grid so that locations can be specified in terms of coordinate points. It is possible for the robot to move around the world, and to pick up and drop items. Some items may be too heavy for the robot to pick up, or fragile so that they break when they are dropped. The robot also has the ability to repair any broken items that it is holding.
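The robot world can be prototyped by treating a situation as the history of actions applied to the initial situation, with fluents evaluated over that history. This is an illustrative simulation, not Reiter's axiomatization itself; preconditions are omitted and the vase is simply assumed to be fragile:

```python
S0 = ()  # the initial situation: an empty action history

def do(action, s):
    """do(a, s): the situation reached by performing action a in situation s."""
    return s + (action,)

def holding(obj, s):
    """Fluent: the robot holds obj in s. Successor-state style: the most
    recent pickup/drop of obj determines the value."""
    value = False
    for act, arg in s:
        if arg == obj and act in ('pickup', 'drop'):
            value = (act == 'pickup')
    return value

def broken(obj, s):
    """Fluent: obj is broken if it was dropped (assumed fragile) and
    not repaired since."""
    value = False
    for act, arg in s:
        if arg == obj:
            if act == 'drop':
                value = True
            elif act == 'repair':
                value = False
    return value

s = do(('pickup', 'vase'), S0)
s = do(('drop', 'vase'), s)
print(holding('vase', s), broken('vase', s))   # False True
s = do(('pickup', 'vase'), s)
s = do(('repair', 'vase'), s)
print(broken('vase', s))                        # False
```

The key idea carried over from the calculus is that a situation is not a state but a history: fluents are functions of the action sequence, as in the successor state axioms described below.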
Elements
The main elements of the situation calculus are the actions, fluents and the situations. A number of objects are also typically involved in the description of the world. The situation calculus is based on a sorted domain with three sorts: actions, situations, and objects, where the objects include everything that is not an action or a situation. Variables of each sort can be used. While actions, situations, and objects are elements of the domain, the fluents are modeled as either predicates or functions.
Actions
The actions form a sort of the domain. Variables of sort action can b
|
https://en.wikipedia.org/wiki/Bomberman%20%281983%20video%20game%29
|
is a maze video game developed and published by Hudson Soft. The original home computer game was released in July 1983 for the NEC PC-8801, NEC PC-6001 mkII, Fujitsu FM-7, Sharp MZ-700, Sharp MZ-2000, Sharp X1 and MSX in Japan, and a graphically modified version for the MSX and ZX Spectrum in Europe as Eric and the Floaters. A sequel, 3-D Bomberman, was produced. In 1985, Bomberman was released for the Nintendo Entertainment System. It spawned the Bomberman series with many installments building on its basic gameplay.
Gameplay
In the NES/Famicom release, the eponymous character, Bomberman, is a robot that must find his way through a maze while avoiding enemies. Doors leading to further maze rooms are found under rocks, which Bomberman must destroy with bombs. There are items that can help improve Bomberman's bombs, such as the Fire ability, which improves the blast range of his bombs. Bomberman will turn human when he escapes and reaches the surface. Each game has 50 levels in total. The original home computer games are more basic and have some different rules.
Notably, completing the NES and Famicom version reveals that the game is a prequel to Hudson Soft's NES port of Broderbund Software's 1983 game Lode Runner. Upon clearing the final screen, Bomberman is shown turning into Lode Runner's unnamed protagonist. In the Japanese version of the game, the player is explicitly told that Bomberman will "See [them] in Lode Runner", while in the international version, they are instead asked if they can recognise the protagonist from another Hudson game.
Development
Bomberman was written in 1980 to serve as a tech demo for Hudson Soft's BASIC compiler. This very basic version of the game was given a small-scale release for Japanese PCs in 1983 and for European PCs the following year.
The Famicom version was developed (ported) by Shinichi Nakamoto, who reputedly completed the task alone over a 72-hour period.
According to Zero magazine, Bomberman adopted gameplay elem
|
https://en.wikipedia.org/wiki/The%20Code%20Room
|
The Code Room is a half-hour-long reality game show produced by Microsoft. The show was conceptualized and executive produced by Paul Murphy and hosted by Jessi Knapp, accompanied by a varying project expert. Each episode consists of a number of MSDN Developer Event attendees who team up to complete a project, with given specifications, in a limited amount of time. It should not be confused with The Code Room from DMD and the group that introduced the world to the "Hybrid Hostel".
The Code Room was filmed at an MSDN Developer Event and shown on several cable television stations, as well as other streaming television stations like MSDN TV. The show could also be watched for free through the Channel 9 community, and was additionally included in several MSDN CDs and DVDs.
The "Code Room" was typically an enclosed room with one or more desks, whiteboards and computers. All required programming tools were installed, along with other stationery. However, no internet connection was available and contestants could not bring their own notes. Contestants had to use preselected development environments which were typically new to them, having only been given a quick crash course about the environment in a presentation before the contest began.
Teams that were able to complete the project within the required time were eligible for prizes and to be part of "Team Code FF". At the end of the series, the two top "Team Code FF" teams (determined by their performance, as well as community ranking) competed in the Final Code Room challenge for the ultimate "Code Room Champion" title.
The program won a Telly Award in 2004.
Episodes
The Code Room: Episode 1, published December 9, 2004
The Code Room: Episode 2, Building Mobile Apps and Bluetooth Enabled Kiosks, published May 19, 2005
The Code Room: Episode 3, Breaking Into Vegas, published February 23, 2006
See also
Microsoft Developer Network
The .NET Show
External links
(official website)
MSDN Events
MSDN TV
MSDN
T
|
https://en.wikipedia.org/wiki/Iterative%20deepening%20A%2A
|
Iterative deepening A* (IDA*) is a graph traversal and path search algorithm that can find the shortest path between a designated start node and any member of a set of goal nodes in a weighted graph. It is a variant of iterative deepening depth-first search that borrows the idea to use a heuristic function to conservatively estimate the remaining cost to get to the goal from the A* search algorithm. Since it is a depth-first search algorithm, its memory usage is lower than in A*, but unlike ordinary iterative deepening search, it concentrates on exploring the most promising nodes and thus does not go to the same depth everywhere in the search tree. Unlike A*, IDA* does not utilize dynamic programming and therefore often ends up exploring the same nodes many times.
While the standard iterative deepening depth-first search uses search depth as the cutoff for each iteration, IDA* uses the more informative f(n) = g(n) + h(n), where g(n) is the cost to travel from the root to node n and h(n) is a problem-specific heuristic estimate of the cost to travel from n to the goal.
The algorithm was first described by Richard Korf in 1985.
Description
Iterative-deepening-A* works as follows: at each iteration, perform a depth-first search, cutting off a branch when its total cost exceeds a given threshold. This threshold starts at the estimate of the cost at the initial state, and increases for each iteration of the algorithm. At each iteration, the threshold used for the next iteration is the minimum cost of all values that exceeded the current threshold.
As in A*, the heuristic has to have particular properties to guarantee optimality (shortest paths). See Properties below.
Pseudocode
path: current search path (acts like a stack)
node: current node (last node in current path)
g: the cost to reach the current node
f: estimated cost of the cheapest path (root..node..goal)
h(node): estimated cost of the cheapest path (node..goal)
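The pseudocode is cut off above; a compact Python rendering of the same scheme (cost-bounded depth-first search, with the bound raised to the minimum overrun after each iteration) might look like this, on a toy weighted graph with a trivially admissible zero heuristic:

```python
import math

def ida_star(start, goal, neighbors, h):
    """IDA*: repeated depth-first searches bounded by f = g + h.
    neighbors(n) yields (successor, edge_cost) pairs; h is the heuristic."""
    def search(path, g, bound):
        node = path[-1]
        f = g + h(node)
        if f > bound:
            return f                       # report the overrun for the next bound
        if node == goal:
            return path                    # found: return the path itself
        minimum = math.inf
        for succ, cost in neighbors(node):
            if succ in path:               # avoid cycles on the current path
                continue
            path.append(succ)
            result = search(path, g + cost, bound)
            if isinstance(result, list):
                return result
            minimum = min(minimum, result)
            path.pop()
        return minimum

    bound = h(start)
    path = [start]
    while True:
        result = search(path, 0, bound)
        if isinstance(result, list):
            return result
        if result == math.inf:
            return None                    # no path to the goal exists
        bound = result                     # minimum f that exceeded the old bound

graph = {'A': [('B', 1), ('C', 4)], 'B': [('C', 1), ('D', 5)], 'C': [('D', 2)], 'D': []}
print(ida_star('A', 'D', lambda n: graph[n], lambda n: 0))  # ['A', 'B', 'C', 'D']
```

Because no closed list is kept, the search path is the only memory that grows with depth, which is the memory advantage over A* described above.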
|
https://en.wikipedia.org/wiki/Multipath%20I/O
|
In computer storage, multipath I/O is a fault-tolerance and performance-enhancement technique that defines more than one physical path between the CPU in a computer system and its mass-storage devices through the buses, controllers, switches, and bridge devices connecting them.
As an example, a SCSI hard disk drive may connect to two SCSI controllers on the same computer, or a disk may connect to two Fibre Channel ports. Should one controller, port or switch fail, the operating system can route the I/O through the remaining controller, port or switch transparently and with no changes visible to the applications, other than perhaps resulting in increased latency.
Multipath software layers can leverage the redundant paths to provide performance-enhancing features, including dynamic load balancing, traffic shaping, automatic path management, and dynamic reconfiguration.
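The transparent-failover idea can be sketched as a path selector that retries an I/O on the next healthy path. The Path class and submit method here are hypothetical illustrations; real multipath layers live in the operating system or storage stack:

```python
class Path:
    """A single physical path (e.g. one controller or switch port)."""
    def __init__(self, name, healthy=True):
        self.name = name
        self.healthy = healthy

    def submit(self, io):
        if not self.healthy:
            raise IOError(f"path {self.name} failed")
        return f"{io} completed via {self.name}"

def multipath_submit(io, paths):
    """Try each path in turn; a path failure is masked from the caller."""
    for path in paths:
        try:
            return path.submit(io)
        except IOError:
            path.healthy = False   # mark the path down and try the next one
    raise IOError("all paths failed")

paths = [Path("controller0", healthy=False), Path("controller1")]
print(multipath_submit("read block 42", paths))
```

From the application's point of view the request simply succeeds, possibly with added latency, which mirrors the behaviour described above.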
See also
Device mapper
Linux DM Multipath
External links
Linux Multipathing, Linux Symposium 2005 p. 147
VxDMP white paper, Veritas Dynamic Multi pathing
Linux Multipath Usage guide
Computer data storage
Computer storage technologies
Fault-tolerant computer systems
|
https://en.wikipedia.org/wiki/Capacitor-input%20filter
|
A capacitor-input filter is a filter circuit in which the first element is a capacitor connected in parallel with the output of the rectifier in a linear power supply. The capacitor increases the DC voltage and decreases the ripple voltage components of the output. The capacitor is often referred to as a smoothing capacitor or reservoir capacitor. The capacitor is often followed by other alternating series and parallel filter elements to further reduce ripple voltage, or adjust DC output voltage. It may also be followed by a voltage regulator which virtually eliminates any remaining ripple voltage, and adjusts the DC voltage output very precisely to match the DC voltage required by the circuit.
Operation
While the rectifier is conducting and its output potential is higher than the charge across the capacitor, the capacitor stores energy from the transformer; when the output of the rectifier falls below the charge on the capacitor, the capacitor discharges energy into the circuit. Since the rectifier conducts current only in the forward direction, any energy discharged by the capacitor flows into the load. This results in a DC voltage upon which is superimposed a waveform referred to as a sawtooth wave. The sawtooth wave is a convenient linear approximation to the actual waveform, which is exponential for both charge and discharge. The crests of the sawtooth waves are more rounded when the DC resistance of the transformer secondary is higher.
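Under the sawtooth approximation, the peak-to-peak ripple is roughly ΔV ≈ I / (f·C), where I is the load current, f the ripple frequency (twice the mains frequency for a full-wave rectifier) and C the capacitance. An illustrative calculation with assumed component values:

```python
def ripple_pp(load_current, capacitance, ripple_freq):
    """Peak-to-peak ripple voltage under the linear (sawtooth) approximation:
    the capacitor discharges at roughly I/C volts per second between peaks."""
    return load_current / (ripple_freq * capacitance)

# Full-wave rectifier on 50 Hz mains -> 100 Hz ripple; 4700 uF reservoir, 1 A load
dv = ripple_pp(load_current=1.0, capacitance=4700e-6, ripple_freq=100.0)
print(round(dv, 3), "V peak-to-peak")
```

The formula makes the design trade-off explicit: halving the ripple requires doubling the capacitance (or halving the load current).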
Ripple current
A ripple current which is 90 degrees out of phase with the ripple voltage also passes through the capacitor.
See also
Rectifier#Capacitor input filter
Choke-input filter
References
Linear filters
Analog circuits
Electronic filter topology
|
https://en.wikipedia.org/wiki/Lusin%27s%20theorem
|
In the mathematical field of real analysis, Lusin's theorem (or Luzin's theorem, named for Nikolai Luzin) or Lusin's criterion states that an almost-everywhere finite function is measurable if and only if it is a continuous function on nearly all its domain. In the informal formulation of J. E. Littlewood, "every measurable function is nearly continuous".
Classical statement
For an interval [a, b], let
f : [a, b] → ℝ
be a measurable function. Then, for every ε > 0, there exists a compact E ⊆ [a, b] such that f restricted to E is continuous and
μ(E) > b − a − ε.
Note that E inherits the subspace topology from [a, b]; continuity of f restricted to E is defined using this topology.
Also, for any function f defined on the interval [a, b] and almost-everywhere finite, if for any ε > 0 there is a function ϕ, continuous on [a, b], such that the measure of the set
{x ∈ [a, b] : f(x) ≠ ϕ(x)}
is less than ε, then f is measurable.
General form
Let (X, 𝔄, μ) be a Radon measure space and Y be a second-countable topological space equipped with a Borel algebra, and let f : X → Y be a measurable function. Given ε > 0, for every A ∈ 𝔄 of finite measure there is a closed set E with μ(A \ E) < ε such that f restricted to E is continuous.
On the proof
The proof of Lusin's theorem can be found in many classical books. Intuitively, one expects it as a consequence of Egorov's theorem and density of smooth functions. Egorov's theorem states that pointwise convergence is nearly uniform, and uniform convergence preserves continuity.
References
Sources
N. Lusin. Sur les propriétés des fonctions mesurables, Comptes rendus de l'Académie des Sciences de Paris 154 (1912), 1688–1690.
G. Folland. Real Analysis: Modern Techniques and Their Applications, 2nd ed. Chapter 7
W. Zygmunt. Scorza-Dragoni property (in Polish), UMCS, Lublin, 1990
M. B. Feldman, "A Proof of Lusin's Theorem", American Math. Monthly, 88 (1981), 191-2
Lawrence C. Evans, Ronald F. Gariepy, "Measure Theory and fine properties of functions", CRC Press Taylor & Francis Group, Textbooks in mathematics, Theorem 1.14
Cita
|
https://en.wikipedia.org/wiki/Stationary%20wavelet%20transform
|
The Stationary wavelet transform (SWT) is a wavelet transform algorithm designed to overcome the lack of translation-invariance of the discrete wavelet transform (DWT). Translation-invariance is achieved by removing the downsamplers and upsamplers in the DWT and upsampling the filter coefficients by a factor of 2^(j−1) in the j-th level of the algorithm. The SWT is an inherently redundant scheme, as the output of each level of the SWT contains the same number of samples as the input – so for a decomposition of N levels there is a redundancy of N in the wavelet coefficients. This algorithm is more famously known as the "algorithme à trous" in French (the word trous means "holes" in English), which refers to inserting zeros in the filters. It was introduced by Holschneider et al.
Implementation
The following block diagram depicts the digital implementation of SWT.
In the above diagram, filters in each level are up-sampled versions of the previous (see figure below).
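One level of the à trous scheme can be sketched in plain Python: the analysis filters are upsampled by inserting zeros, and periodic convolution keeps every level the same length as the input (Haar filters used here purely for illustration):

```python
import math

def circ_conv(x, h):
    """Circular (periodic) convolution: output length equals len(x)."""
    n = len(x)
    return [sum(h[k] * x[(i - k) % n] for k in range(len(h))) for i in range(n)]

def upsample(h, level):
    """Insert 2**level - 1 zeros between filter taps ('a trous')."""
    out = []
    for tap in h[:-1]:
        out.append(tap)
        out.extend([0.0] * (2 ** level - 1))
    out.append(h[-1])
    return out

def swt_level(x, lo, hi, level):
    lo_up, hi_up = upsample(lo, level), upsample(hi, level)
    return circ_conv(x, lo_up), circ_conv(x, hi_up)

r2 = math.sqrt(2)
lo, hi = [1 / r2, 1 / r2], [1 / r2, -1 / r2]   # Haar analysis filters

x = [math.sin(2 * math.pi * k / 16) for k in range(64)]
a0, d0 = swt_level(x, lo, hi, 0)    # first level: filters unchanged
a1, d1 = swt_level(a0, lo, hi, 1)   # second level: filters upsampled by 2
print(len(x), len(a0), len(a1))     # all 64: no decimation anywhere
```

Every level producing as many coefficients as the input is exactly the N-fold redundancy noted above, and the absence of downsampling is what makes the result shift-invariant.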
Applications
A few applications of SWT are specified below.
Signal denoising
Pattern recognition
Brain image classification
Pathological brain detection
Synonyms
Redundant wavelet transform
Algorithme à trous
Quasi-continuous wavelet transform
Translation invariant wavelet transform
Shift invariant wavelet transform
Cycle spinning
Maximal overlap wavelet transform (MODWT)
Undecimated wavelet transform (UWT)
See also
wavelet transform
wavelet entropy
wavelet packet decomposition
References
Wavelets
|
https://en.wikipedia.org/wiki/Stardock%20Central
|
Stardock Central was a software content delivery and digital rights management system used by Stardock customers to access components of the Object Desktop, TotalGaming.net and ThinkDesk product lines, as well as products under the WinCustomize brand.
Introduced in 2001 to access games on TotalGaming.net (then known as the Drengin Network), Stardock Central was later expanded to cover all Stardock products, replacing Component Manager (1999).
As of 2010, Stardock Central had been phased out in favour of its successor, Impulse. However, in March 2011 Impulse was sold to GameStop and Stardock soon reopened their own online store. As of April 2012, the Stardock Central software has been revived and released as a Beta to once again provide a proprietary platform for Stardock's digital product downloads.
Features
Software on Stardock Central was divided into components, and further divided into packages. When users purchased a product or a subscription, they gained access to it via Stardock Central. The program had the ability to break products into components so that users on slower connections could start using the main portion of the software as soon as possible, and download extras — such as in-game movies or music — at a later date.
To cater for the various frequent updates provided for many products, once a package has been downloaded and installed Stardock Central only downloaded updated files for new versions. A product archiving and restore function was available to back up components and to allow their transfer to other computers. Users could also use the program to interact on Stardock's discussion boards or access the Stardock IRC server via a built-in IRC client. WinCustomize subscribers could use the Skins and Themes section to browse and download the WinCustomize library.
Stardock Central was similar in concept to the later-developed Steam content delivery system; unlike Steam, it did not require a permanent connection to the Internet, only being re
|
https://en.wikipedia.org/wiki/WUPL
|
WUPL (channel 54) is a television station licensed to Slidell, Louisiana, United States, serving the New Orleans area as an affiliate of MyNetworkTV. It is owned by Tegna Inc. alongside CBS affiliate WWL-TV (channel 4). Both stations share studios on Rampart Street in the historic French Quarter district, while WUPL's transmitter is located on Cooper Road in Terrytown, Louisiana.
History
As a UPN affiliate
The station first signed on the air on June 1, 1995, as an affiliate of the United Paramount Network (UPN). It was owned by Texas broadcaster Larry Safir via his company, Middle America Communications. Safir also owned Univision affiliate KNVO in the Rio Grande Valley. Prior to the station's sign-on, WHNO (channel 20) was approached by UPN for an affiliation, though WHNO's owner LeSEA Broadcasting declined all netlet offers on their stations through the country, as the programming planned for both UPN and competitor The WB conflicted with the company's core programming values; as a result, programming from UPN, which launched on January 16, 1995, was only available on New Orleans-area cable and satellite providers through New York City-based national superstation WWOR for the 5½ months prior to WUPL's debut. Along with programming from UPN, the station ran a general entertainment format, offering vintage off-network sitcoms, talk shows, court shows and other syndicated programs. In 1996, Safir entered a deal with Cox Enterprises to take over operations of the station, and in 1997, he sold the station to the Paramount Stations Group subsidiary of Viacom; as a result, WUPL became a UPN owned-and-operated station (Viacom launched UPN in a programming partnership with Chris-Craft Industries/United Television, and acquired a 50% interest in the network from Chris-Craft/United in 1996).
Viacom merged with CBS in 2000. Despite Viacom's ownership of WUPL, the market's CBS affiliation remained on WWL-TV (channel 4), the highest-rated television station in New Orleans an
|
https://en.wikipedia.org/wiki/Pickling
|
Pickling is the process of preserving or extending the shelf life of food by either anaerobic fermentation in brine or immersion in vinegar. The pickling procedure typically affects the food's texture and flavor. The resulting food is called a pickle, or, to prevent ambiguity, prefaced with pickled. Foods that are pickled include vegetables, fruits, mushrooms, meats, fish, dairy and eggs.
Pickling solutions are typically highly acidic, with a pH of 4.6 or lower, and high in salt, preventing enzymes from working and micro-organisms from multiplying. Pickling can preserve perishable foods for months. Antimicrobial herbs and spices, such as mustard seed, garlic, cinnamon or cloves, are often added. If the food contains sufficient moisture, a pickling brine may be produced simply by adding dry salt. For example, sauerkraut and Korean kimchi are produced by salting the vegetables to draw out excess water. Natural fermentation at room temperature, by lactic acid bacteria, produces the required acidity. Other pickles are made by placing vegetables in vinegar. Like the canning process, pickling (which includes fermentation) does not require that the food be completely sterile before it is sealed. The acidity or salinity of the solution, the temperature of fermentation, and the exclusion of oxygen determine which microorganisms dominate, and determine the flavor of the end product.
When both salt concentration and temperature are low, Leuconostoc mesenteroides dominates, producing a mix of acids, alcohol, and aroma compounds. At higher temperatures Lactobacillus plantarum dominates, which produces primarily lactic acid. Many pickles start with Leuconostoc, and change to Lactobacillus with higher acidity.
History
Pickling with vinegar likely originated in ancient Mesopotamia around 2400 BCE. There is archaeological evidence of cucumbers being pickled in the Tigris Valley in 2030 BCE. Pickling vegetables in vinegar continued to develop in the Middle East region before spr
|
https://en.wikipedia.org/wiki/GForge
|
GForge is a commercial service originally based on the Alexandria software behind SourceForge, a web-based project management and collaboration system which was licensed under the GPL. Open source versions of the GForge code were released from 2002 to 2009, at which point the company behind GForge focused on their proprietary service offering which provides project hosting, version control (CVS, Subversion, Git), code reviews, ticketing (issues, support), release management, continuous integration and messaging. The FusionForge project emerged in 2009 to pull together open-source development efforts from the variety of software forks which had sprung up.
History
In 1999, VA Linux hired four developers, including Tim Perdue (1974-2011), to develop the SourceForge.net service to encourage open-source development and support the Open Source developer community. SourceForge.net services were offered free of charge to any Open Source project team. Following the SourceForge launch on November 17, 1999, the free software community rapidly took advantage of SourceForge.net, and traffic and users grew very quickly.
As another competitive web service, "Server 51", was being readied for launch, VA Linux released the source code for the sourceforge.net web site on January 14, 2000, as a marketing ploy to show that SourceForge was 'more open source'. Many companies began installing and using it themselves and contacting VA Linux for professional services to set up and use the software. However, its pricing was so unrealistic that it had few customers. By 2001, the company's Linux hardware business had collapsed in the dotcom bust. The company was renamed to VA Software and called the closed codebase SourceForge Enterprise Edition to try to force some of the large companies to purchase licenses. This prompted objections from open source community members. VA Software continued to say that a new source code release would be made at some point, but it never was.
Some time later
|
https://en.wikipedia.org/wiki/Surface%20roughness
|
Surface roughness can be regarded as the quality of a surface not being smooth, and it is hence linked to human (haptic) perception of the surface texture. From a mathematical perspective it is related to the spatial variability structure of surfaces, and it is inherently a multiscale property. It has different interpretations and definitions depending on the discipline considered.
In surface metrology
Surface roughness, often shortened to roughness, is a component of surface finish (surface texture). It is quantified by the deviations in the direction of the normal vector of a real surface from its ideal form. If these deviations are large, the surface is rough; if they are small, the surface is smooth. In surface metrology, roughness is typically considered to be the high-frequency, short-wavelength component of a measured surface. However, in practice it is often necessary to know both the amplitude and frequency to ensure that a surface is fit for a purpose.
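Two widely used amplitude parameters quantify these deviations: the arithmetic mean deviation (Ra) and the root-mean-square deviation (Rq) of the profile from its mean line. A minimal sketch follows; the sampled profile heights are invented for illustration:

```python
import math

def roughness_params(profile):
    """Compute Ra (arithmetic mean deviation) and Rq (RMS deviation)
    of a sampled surface profile, measured from its mean line."""
    mean = sum(profile) / len(profile)
    deviations = [z - mean for z in profile]
    ra = sum(abs(d) for d in deviations) / len(deviations)
    rq = math.sqrt(sum(d * d for d in deviations) / len(deviations))
    return ra, rq

# Hypothetical profile heights (micrometres) sampled along one trace
profile = [0.2, -0.1, 0.4, -0.3, 0.1, -0.2, 0.3, -0.4]
ra, rq = roughness_params(profile)
```

Since Rq weights large deviations more heavily than Ra, Rq is always at least as large as Ra for the same profile.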
Roughness plays an important role in determining how a real object will interact with its environment. In tribology, rough surfaces usually wear more quickly and have higher friction coefficients than smooth surfaces. Roughness is often a good predictor of the performance of a mechanical component, since irregularities on the surface may form nucleation sites for cracks or corrosion. On the other hand, roughness may promote adhesion. Generally speaking, rather than scale specific descriptors, cross-scale descriptors such as surface fractality provide more meaningful predictions of mechanical interactions at surfaces including contact stiffness and static friction.
Although a high roughness value is often undesirable, it can be difficult and expensive to control in manufacturing. For example, it is difficult and expensive to control surface roughness of fused deposition modelling (FDM) manufactured parts. Decreasing the roughness of a surface usually increases its manufacturing cost. This often resul
|
https://en.wikipedia.org/wiki/Encyclopedia%20Mythica
|
Encyclopedia Mythica is an online encyclopedia that seeks to cover folklore, mythology, and religion. This encyclopedia was founded in June 1995 as a small site with about 300 entries, and established with its own domain name in March 1996. As of May 2021, it features more than 11,000 articles.
References
External links
Encyclopedia Mythica
Online encyclopedias
21st-century encyclopedias
|
https://en.wikipedia.org/wiki/Divide%20and%20choose
|
Divide and choose (also Cut and choose or I cut, you choose) is a procedure for fair division of a continuous resource, such as a cake, between two parties. It involves a heterogeneous good or resource ("the cake") and two partners who have different preferences over parts of the cake. The protocol proceeds as follows: one person ("the cutter") cuts the cake into two pieces; the other person ("the chooser") selects one of the pieces; the cutter receives the remaining piece.
The procedure has been used since ancient times to divide land, cake and other resources between two parties. Currently, there is an entire field of research, called fair cake-cutting, devoted to various extensions and generalizations of cut-and-choose.
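The protocol can be simulated for a one-dimensional cake with piecewise-uniform valuations. The sketch below is illustrative only: the value densities are made-up examples, and `cut_point` halves the cake by the cutter's own measure:

```python
def cut_point(density):
    """Return x in [0, 1] splitting the cake into two pieces of equal
    value under `density`, a list of per-segment values on a uniform grid."""
    target = sum(density) / 2.0
    acc = 0.0
    for i, v in enumerate(density):
        if acc + v >= target:
            frac = (target - acc) / v        # interpolate within segment i
            return (i + frac) / len(density)
        acc += v
    return 1.0

def piece_value(density, a, b):
    """Value of the interval [a, b] under a piecewise-uniform density."""
    n = len(density)
    value = 0.0
    for i, v in enumerate(density):
        lo, hi = i / n, (i + 1) / n
        overlap = max(0.0, min(b, hi) - max(a, lo))
        value += v * overlap * n
    return value

def divide_and_choose(cutter, chooser):
    """Cutter halves the cake by her own measure; chooser takes the
    piece he values more; cutter receives the remaining piece."""
    x = cut_point(cutter)
    left, right = (0.0, x), (x, 1.0)
    if piece_value(chooser, *left) >= piece_value(chooser, *right):
        return {"chooser": left, "cutter": right}
    return {"chooser": right, "cutter": left}

# Hypothetical valuations: cutter prefers the left half, chooser the right
cutter = [3, 3, 1, 1]   # cutter's value per quarter of the cake
chooser = [1, 1, 3, 3]  # chooser's value per quarter of the cake
result = divide_and_choose(cutter, chooser)
```

Because the cutter equalizes the two pieces by her own measure and the chooser picks his favourite, each party receives at least half of the cake by their own valuation, i.e. the division is proportional.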
History
Divide and choose is mentioned in the Bible, in the Book of Genesis (chapter 13). When Abraham and Lot come to the land of Canaan, Abraham suggests that they divide it among them. Then Abraham, coming from the south, divides the land to a "left" (northern) part and a "right" (southern) part, and lets Lot choose. Lot chooses the eastern part which contains Sodom and Gomorrah, and Abraham is left with the western part which contains Beer Sheva, Hebron, Bethel, and Shechem.
The United Nations Convention on the Law of the Sea applies a procedure similar to divide-and-choose for allocating areas in the ocean among countries. A developed state applying for a permit to mine minerals from the ocean must prepare two areas of approximately similar value, let the UN authority choose one of them for reservation to developing states, and get the other area for mining: "Each application... shall cover a total area... sufficiently large and of sufficient estimated commercial value to allow two mining operations... of equal estimated commercial value... Within 45 days of receiving such data, the Authority shall designate which part is to be reserved solely for the conduct of activities by the Authority through the Enterprise or in association with devel
|
https://en.wikipedia.org/wiki/Bijective%20numeration
|
Bijective numeration is any numeral system in which every non-negative integer can be represented in exactly one way using a finite string of digits. The name refers to the bijection (i.e. one-to-one correspondence) that exists in this case between the set of non-negative integers and the set of finite strings using a finite set of symbols (the "digits").
Most ordinary numeral systems, such as the common decimal system, are not bijective because more than one string of digits can represent the same positive integer. In particular, adding leading zeroes does not change the value represented, so "1", "01" and "001" all represent the number one. Even though only the first is usual, the fact that the others are possible means that the decimal system is not bijective. However, the unary numeral system, with only one digit, is bijective.
A bijective base-k numeration is a bijective positional notation. It uses a string of digits from the set {1, 2, ..., k} (where k ≥ 1) to encode each positive integer; a digit's position in the string defines its value as a multiple of a power of k. This notation has been called k-adic, but it should not be confused with the p-adic numbers: bijective numerals are a system for representing ordinary integers by finite strings of nonzero digits, whereas the p-adic numbers are a system of mathematical values that contain the integers as a subset and may need infinite sequences of digits in any numerical representation.
Definition
The base-k bijective numeration system uses the digit-set {1, 2, ..., k} (k ≥ 1) to uniquely represent every non-negative integer, as follows:
The integer zero is represented by the empty string.
The integer represented by the nonempty digit-string
a_n a_(n−1) ... a_1 a_0
is
a_n k^n + a_(n−1) k^(n−1) + ... + a_1 k + a_0.
The digit-string representing the integer m > 0 is
a_n a_(n−1) ... a_1 a_0
where
a_i = q_i − k q_(i+1), with q_0 = m and q_(i+1) = ⌈q_i / k⌉ − 1,
n is the largest index for which q_n > 0, and ⌈x⌉ is the least integer not less than x (the ceiling function).
In contrast, standard positional notation can be defined with a similar recursive algorithm where q_0 = m, q_(i+1) = ⌊q_i / k⌋, and a_i = q_i − k q_(i+1) (that is, a_i = q_i mod k, drawing digits from {0, 1, ..., k − 1}).
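The definition above translates directly into code. A brief sketch that converts a positive integer to its bijective base-k digit list (most significant first, with digits kept as integers so bases above 10 need no special symbols) and back:

```python
def to_bijective(m, k):
    """Digits of m >= 0 in bijective base-k, most significant first.
    Zero maps to the empty list (the empty string)."""
    digits = []
    q = m
    while q > 0:
        q_next = (q + k - 1) // k - 1     # q_{i+1} = ceil(q_i / k) - 1
        digits.append(q - k * q_next)     # a_i = q_i - k*q_{i+1}, in {1..k}
        q = q_next
    return digits[::-1]

def from_bijective(digits, k):
    """Value of a digit string (most significant first)."""
    value = 0
    for d in digits:
        value = value * k + d
    return value
```

For example, in bijective base 10 the number one hundred is written "9A" (digit values 9 and 10), since 9·10 + 10 = 100, and there is no other representation.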
Extension to integers
For base , the bije
|
https://en.wikipedia.org/wiki/Sign%20of%20the%20horns
|
The sign of the horns is a hand gesture with a variety of meanings and uses in various cultures. It is formed by extending the index and little fingers while holding the middle and ring fingers down with the thumb.
Religious and superstitious meaning
In Hatha Yoga, a similar hand gesture – with the tips of the middle and ring fingers touching the thumb – is believed to rejuvenate the body. In Indian classical dance forms, it symbolizes the lion. In Buddhism, the gesture is seen as an apotropaic one used to expel demons, remove negative energy, and ward off evil. It is commonly found on depictions of Gautama Buddha. It is also found on the Song dynasty statue of Laozi, the founder of Taoism, on Mount Qingyuan, China.
An apotropaic usage of the sign can be seen in Italy and in other Mediterranean cultures where, when confronted with unfortunate events, or simply when these events are mentioned, the sign of the horns may be given to ward off further bad luck. It is also used traditionally to counter or ward off the "evil eye". In Italy specifically, the gesture is known by a name meaning 'horns'. With fingers pointing down, it is a common Mediterranean apotropaic gesture, by which people seek protection in unlucky situations (a Mediterranean equivalent of knocking on wood). The President of the Italian Republic, Giovanni Leone, startled the media when, while in Naples during an outbreak of cholera, he shook the hands of patients with one hand while, with the other behind his back, he superstitiously made the gesture, presumably to ward off the disease or in reaction to being confronted by such misfortune.
In Italy and other parts of the Mediterranean region, the gesture must usually be performed with the fingers tilting downward or in a leveled position not pointed at someone and without movement to signify the warding off of bad luck; in the same region and elsewhere, the gesture may take a different, offensive, and insulting meaning if it is performed with fingers upw
|
https://en.wikipedia.org/wiki/Legacy%20port
|
In computing, a legacy port is a computer port or connector that is considered by some to be fully or partially superseded. The replacement ports usually provide most of the functionality of the legacy ports with higher speeds, more compact design, or plug and play and hot swap capabilities for greater ease of use. Modern PC motherboards use separate Super I/O controllers to provide legacy ports, since current chipsets do not offer direct support for them. A category of computers called legacy-free PCs omits these ports, typically retaining only USB for external expansion.
USB adapters are often used to provide legacy ports if they are required on systems not equipped with them.
Common legacy ports
See also
Legacy encoding
Legacy system
References
Computer buses
Legacy hardware
|
https://en.wikipedia.org/wiki/WCWF
|
WCWF (channel 14) is a television station licensed to Suring, Wisconsin, United States, serving the Green Bay area as an affiliate of The CW. It is owned by Sinclair Broadcast Group alongside Fox affiliate WLUK-TV (channel 11). Both stations share studios on Lombardi Avenue (US 41) on the line between Green Bay and Ashwaubenon, while WCWF's transmitter is located on Scray Hill in Ledgeview.
History
The station launched on February 22, 1984, as religious independent station WSCO-TV, under the ownership of Northeastern Wisconsin Christian Television Incorporated. The station's former analog transmitter was located outside of the unincorporated Oconto County community of Krakow, north of Pulaski on WIS 32. Financial problems would force the station off the air by 1987; VCY America would purchase the station's license that year and return it to the air by 1993 as a sister station to Milwaukee's WVCY-TV with religious and home shopping programming. On April 30, 1997, Paxson Communications (now Ion Media Networks) purchased the station and converted it to a paid programming format under Paxson's inTV service. On August 31, 1998, WSCO became a charter owned-and-operated station of Pax TV (later i: Independent Television, now Ion Television) under the new call sign WPXG (for "Pax Green Bay").
On June 2, 1999, Paxson sold WPXG to ACME Communications; the station immediately became a primary WB affiliate and changed its call sign to WIWB, originally branded as "WB 14" and later "Wisconsin's WB" (The WPXG-TV callsign has been moved to a TV station in Manchester, New Hampshire). Before it joined the network, WB programming in Northeastern Wisconsin was previously seen either through cable providers that carried Chicago-based superstation WGN and/or Milwaukee's WVTV or during off hours on UPN affiliate WACY-TV (channel 32; Kids' WB programming aired as part of WACY's children's lineup). WIWB also continued to air Pax programming in the mornings, overnights and weekends for a
|
https://en.wikipedia.org/wiki/Rheometer
|
A rheometer is a laboratory device used to measure the way in which a viscous fluid (a liquid, suspension or slurry) flows in response to applied forces. It is used for those fluids which cannot be defined by a single value of viscosity and therefore require more parameters to be set and measured than is the case for a viscometer. It measures the rheology of the fluid.
There are two distinctively different types of rheometers. Rheometers that control the applied shear stress or shear strain are called rotational or shear rheometers, whereas rheometers that apply extensional stress or extensional strain are extensional rheometers.
Rotational or shear type rheometers are usually designed as either a native strain-controlled instrument (control and apply a user-defined shear strain which can then measure the resulting shear stress) or a native stress-controlled instrument (control and apply a user-defined shear stress and measure the resulting shear strain).
Meanings and origin
The word rheometer comes from the Greek, and means a device for measuring flow. In the 19th century it was commonly used for devices to measure electric current, until the word was supplanted by galvanometer and ammeter. It was also used for the measurement of the flow of liquids, in medical practice (flow of blood) and in civil engineering (flow of water). This latter use persisted to the second half of the 20th century in some areas. Following the coining of the term rheology the word came to be applied to instruments for measuring the character rather than quantity of flow, and the other meanings are obsolete. (Principal source: Oxford English Dictionary) The principle and working of rheometers is described in several texts.
Types of shear rheometer
Shearing geometries
Four basic shearing planes can be defined according to their geometry:
Couette drag plate flow
Cylindrical flow
Poiseuille flow in a tube and
Plate-plate flow
The various types of shear rheometers then use
|
https://en.wikipedia.org/wiki/Software%20as%20a%20service
|
Software as a service (SaaS ) is a software licensing and delivery model in which software is licensed on a subscription basis and is centrally hosted. SaaS is also known as on-demand software, web-based software, or web-hosted software.
SaaS is considered to be part of cloud computing, along with several other as a service business models.
SaaS apps are typically accessed by users through a web browser, which acts as a thin client. SaaS became a common delivery model for many business applications, including office software, messaging software, payroll processing software, DBMS software, management software, CAD software, development software, gamification, virtualization, accounting, collaboration, customer relationship management (CRM), management information systems (MIS), enterprise resource planning (ERP), invoicing, field service management, human resource management (HRM), talent acquisition, learning management systems, content management (CM), geographic information systems (GIS), and service desk management.
SaaS has been incorporated into the strategies of nearly all enterprise software companies.
History
Centralized hosting of business applications dates back to the 1960s. Starting in that decade, IBM and other mainframe computer providers conducted a service bureau business, often referred to as time-sharing or utility computing. Such services included offering computing power and database storage to banks and other large organizations from their worldwide data centers.
The expansion of the Internet during the 1990s brought about a new class of centralized computing, called application service providers (ASP). ASPs provided businesses with the service of hosting and managing specialized business applications to reduce costs through central administration and the provider's specialization in a particular business application. Two of the largest ASPs were USI, which was headquartered in the Washington, D.C., area, and Futurelink Corporation, headquartered in Irvine
|
https://en.wikipedia.org/wiki/Thermodynamic%20limit
|
In statistical mechanics, the thermodynamic limit, or macroscopic limit, of a system is the limit for a large number of particles (e.g., atoms or molecules) where the volume is taken to grow in proportion with the number of particles.
The thermodynamic limit is defined as the limit of a system with a large volume, with the particle density held fixed.
In this limit, macroscopic thermodynamics is valid. There, thermal fluctuations in global quantities are negligible, and all thermodynamic quantities, such as pressure and energy, are simply functions of the thermodynamic variables, such as temperature and density. For example, for a large volume of gas, the fluctuations of the total internal energy are negligible and can be ignored, and the average internal energy can be predicted from knowledge of the pressure and temperature of the gas.
Note that not all types of thermal fluctuations disappear in the thermodynamic limit—only the fluctuations in system variables cease to be important.
There will still be detectable fluctuations (typically at microscopic scales) in some physically observable quantities, such as
microscopic spatial density fluctuations in a gas scatter light (Rayleigh scattering)
motion of visible particles (Brownian motion)
electromagnetic field fluctuations (blackbody radiation in free space, Johnson–Nyquist noise in wires)
Mathematically an asymptotic analysis is performed when considering the thermodynamic limit.
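The vanishing of relative fluctuations in global quantities can be illustrated numerically with a toy model: take the total "internal energy" to be a sum of N independent contributions and compare the ratio of its standard deviation to its mean at two system sizes. The sketch below is illustrative only; the uniform distribution of contributions and the chosen sizes are arbitrary:

```python
import random
import statistics

random.seed(0)  # fixed seed for reproducibility

def relative_fluctuation(n_particles, trials=2000):
    """Std-dev / mean of a total that is a sum of n_particles
    independent contributions (uniform on [0, 1) for illustration)."""
    totals = [sum(random.random() for _ in range(n_particles))
              for _ in range(trials)]
    return statistics.stdev(totals) / statistics.mean(totals)

small = relative_fluctuation(10)     # small system
large = relative_fluctuation(1000)   # 100x more particles
```

By the central limit theorem the relative fluctuation scales like 1/N^(1/2), so increasing N by a factor of 100 should shrink it by roughly a factor of 10.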
Origin
The thermodynamic limit is essentially a consequence of the central limit theorem of probability theory. The internal energy of a gas of N molecules is the sum of order N contributions, each of which is approximately independent, and so the central limit theorem predicts that the ratio of the size of the fluctuations to the mean is of order 1/N^(1/2). Thus for a macroscopic volume containing on the order of Avogadro's number of molecules, fluctuations are negligible, and so thermodynamics works. In general, almost all macroscopic volu
|
https://en.wikipedia.org/wiki/Software%20package%20metrics
|
Various software package metrics are used in modular programming. They have been mentioned by Robert Cecil Martin in his 2002 book Agile software development: principles, patterns, and practices.
The term software package here refers to a group of related classes in object-oriented programming.
Number of classes and interfaces: The number of concrete and abstract classes (and interfaces) in the package is an indicator of the extensibility of the package.
Afferent couplings (Ca): The number of classes in other packages that depend upon classes within the package is an indicator of the package's responsibility. Afferent couplings signal inward.
Efferent couplings (Ce): The number of classes in other packages that the classes in a package depend upon is an indicator of the package's dependence on externalities. Efferent couplings signal outward.
Abstractness (A): The ratio of the number of abstract classes (and interfaces) in the analyzed package to the total number of classes in the analyzed package. The range for this metric is 0 to 1, with A=0 indicating a completely concrete package and A=1 indicating a completely abstract package.
Instability (I): The ratio of efferent coupling (Ce) to total coupling (Ce + Ca) such that I = Ce / (Ce + Ca). This metric is an indicator of the package's resilience to change. The range for this metric is 0 to 1, with I=0 indicating a completely stable package and I=1 indicating a completely unstable package.
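Given a class-level dependency graph, Ca, Ce and I can be computed mechanically. A minimal sketch follows; the package and class names are hypothetical:

```python
def coupling_metrics(deps, package):
    """Ca, Ce and instability I for `package`.
    `deps` maps class -> (its package, set of classes it uses)."""
    pkg_of = {cls: p for cls, (p, _) in deps.items()}
    # Ca: classes in other packages that depend on classes in `package`
    ca = len({cls for cls, (p, used) in deps.items()
              if p != package and any(pkg_of[u] == package for u in used)})
    # Ce: classes in other packages that classes in `package` depend on
    ce = len({u for cls, (p, used) in deps.items() if p == package
              for u in used if pkg_of[u] != package})
    i = ce / (ce + ca) if ce + ca else 0.0
    return ca, ce, i

# Hypothetical dependency graph: class -> (package, classes it depends on)
deps = {
    "App":     ("ui",   {"Service"}),
    "Widget":  ("ui",   {"Service"}),
    "Service": ("core", {"Repo"}),
    "Repo":    ("core", set()),
}
ca, ce, i = coupling_metrics(deps, "core")
```

Here "core" is depended upon but depends on nothing external (Ca=2, Ce=0, I=0: completely stable), while "ui" only depends outward (I=1: completely unstable). Given an abstractness value A for the same package, the distance D = |A + I − 1| follows directly.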
Distance from the main sequence (D): The perpendicular distance of a package from the idealized line A + I = 1. D is calculated as D = | A + I - 1 |. This metric is an indicator of the package's balance between abstractness and stability. A package squarely on the main sequence is optimally balanced with respect to its abstractness and stability. Ideal packages are either completely abstract and stable (I=0, A=1) or completely concrete and unstable (I=1, A=0). The range for this metric is 0 to 1, with D=0 indicating a
|