https://en.wikipedia.org/wiki/Thermodynamic%20equations
|
Thermodynamics is expressed by a mathematical framework of thermodynamic equations which relate various thermodynamic quantities and physical properties measured in a laboratory or production process. Thermodynamics is based on a fundamental set of postulates that became the laws of thermodynamics.
Introduction
One of the fundamental thermodynamic equations is the description of thermodynamic work in analogy to mechanical work, or weight lifted through an elevation against gravity, as defined in 1824 by French physicist Sadi Carnot. Carnot used the phrase motive power for work. In the footnotes to his famous On the Motive Power of Fire, he states: “We use here the expression motive power to express the useful effect that a motor is capable of producing. This effect can always be likened to the elevation of a weight to a certain height. It has, as we know, as a measure, the product of the weight multiplied by the height to which it is raised.” With the inclusion of a unit of time in Carnot's definition, one arrives at the modern definition for power: P = W/t, the work done (the weight lifted multiplied by the height) per unit time.
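Carnot's definition can be put in modern units with a short calculation (a sketch; the mass, height, and time values below are illustrative assumptions, not from the text):

```python
# Carnot's "motive power": weight lifted through a height, divided by time.
g = 9.81  # gravitational acceleration, m/s^2

def motive_power(mass_kg, height_m, time_s):
    """Work = m*g*h (weight lifted through a height); power = work / time."""
    work_j = mass_kg * g * height_m
    return work_j / time_s

# Lifting 100 kg through 10 m in 20 s:
p = motive_power(100, 10, 20)
print(p)  # 490.5 watts
```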
During the latter half of the 19th century, physicists such as Rudolf Clausius, Peter Guthrie Tait, and Willard Gibbs worked to develop the concept of a thermodynamic system and the correlative energetic laws which govern its associated processes. The equilibrium state of a thermodynamic system is described by specifying its "state". The state of a thermodynamic system is specified by a number of extensive quantities, the most familiar of which are volume, internal energy, and the amount of each constituent particle (particle numbers). Extensive parameters are properties of the entire system, as contrasted with intensive parameters which can be defined at a single point, such as temperature and pressure. The extensive parameters (except entropy) are generally conserved in some way as long as the system is "insulated" to changes to that parameter from the outside. The truth of this statement for volume is trivial
|
https://en.wikipedia.org/wiki/Kummer%27s%20function
|
In mathematics, there are several functions known as Kummer's function. One is known as the confluent hypergeometric function of Kummer. Another one, defined below, is related to the polylogarithm. Both are named for Ernst Kummer.
Kummer's function is defined by
The duplication formula is
Compare this to the duplication formula for the polylogarithm:
An explicit link to the polylogarithm is given by
References
Special functions
|
https://en.wikipedia.org/wiki/Confluent%20hypergeometric%20function
|
In mathematics, a confluent hypergeometric function is a solution of a confluent hypergeometric equation, which is a degenerate form of a hypergeometric differential equation where two of the three regular singularities merge into an irregular singularity. The term confluent refers to the merging of singular points of families of differential equations; confluere is Latin for "to flow together". There are several common standard forms of confluent hypergeometric functions:
Kummer's (confluent hypergeometric) function M(a, b, z), introduced by Kummer (1837), is a solution to Kummer's differential equation. This is also known as the confluent hypergeometric function of the first kind. There is a different and unrelated Kummer's function bearing the same name.
Tricomi's (confluent hypergeometric) function U(a, b, z), introduced by Francesco Tricomi (1947), sometimes denoted by Ψ(a; b; z), is another solution to Kummer's equation. This is also known as the confluent hypergeometric function of the second kind.
Whittaker functions (for Edmund Taylor Whittaker) are solutions to Whittaker's equation.
Coulomb wave functions are solutions to the Coulomb wave equation.
The Kummer functions, Whittaker functions, and Coulomb wave functions are essentially the same, and differ from each other only by elementary functions and change of variables.
Kummer's equation
Kummer's equation may be written as:

z d²w/dz² + (b − z) dw/dz − a w = 0,

with a regular singular point at z = 0 and an irregular singular point at z = ∞. It has two (usually) linearly independent solutions M(a, b, z) and U(a, b, z).
Kummer's function of the first kind M is a generalized hypergeometric series introduced by Kummer in 1837, given by:

M(a, b, z) = Σ_{n=0}^∞ (a)_n z^n / ((b)_n n!),

where:

(a)_n = a(a + 1)(a + 2)⋯(a + n − 1)

is the rising factorial. Another common notation for this solution is ₁F₁(a; b; z). Considered as a function of a, b, or z with the other two held constant, this defines an entire function of a or z, except when b is a non-positive integer. As a function of b it is analytic except for poles at the non-positive integers.
Some values of a and b yield solutions that can be expressed in terms of other known functions. See § Special cases. When is a
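As a quick numerical illustration (not part of the article), the series for M(a, b, z) can be summed directly by truncation; for a = b it should reduce to the exponential function, since M(a, a, z) = e^z:

```python
def kummer_m(a, b, z, terms=60):
    """Truncated series M(a,b,z) = sum_n (a)_n / (b)_n * z^n / n!,
    where (x)_n is the rising factorial. Converges for all finite z."""
    total = 0.0
    term = 1.0  # n = 0 term: (a)_0/(b)_0 * z^0/0! = 1
    for n in range(terms):
        total += term
        # Multiply by the ratio of consecutive terms: (a+n)/(b+n) * z/(n+1).
        term *= (a + n) / (b + n) * z / (n + 1)
    return total

# Known identities: M(a, a, z) = e^z and M(1, 2, z) = (e^z - 1)/z.
print(kummer_m(2.0, 2.0, 1.0))  # ≈ e ≈ 2.71828
print(kummer_m(1.0, 2.0, 1.0))  # ≈ e - 1 ≈ 1.71828
```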
|
https://en.wikipedia.org/wiki/Related-key%20attack
|
In cryptography, a related-key attack is any form of cryptanalysis where the attacker can observe the operation of a cipher under several different keys whose values are initially unknown, but where some mathematical relationship connecting the keys is known to the attacker. For example, the attacker might know that the last 80 bits of the keys are always the same, even though they don't know, at first, what the bits are. This appears, at first glance, to be an unrealistic model; it would certainly be unlikely that an attacker could persuade a human cryptographer to encrypt plaintexts under numerous secret keys related in some way.
KASUMI
KASUMI is an eight round, 64-bit block cipher with a 128-bit key. It is based upon MISTY1 and was designed to form the basis of the 3G confidentiality and integrity algorithms.
Mark Blunden and Adrian Escott described differential related key attacks on five and six rounds of KASUMI. Differential attacks were introduced by Biham and Shamir. Related key attacks were first introduced by Biham. Differential related key attacks are discussed in Kelsey et al.
WEP
An important example of a cryptographic protocol that failed because of a related-key attack is Wired Equivalent Privacy (WEP) used in Wi-Fi wireless networks. Each client Wi-Fi network adapter and wireless access point in a WEP-protected network shares the same WEP key. Encryption uses the RC4 algorithm, a stream cipher. It is essential that the same key never be used twice with a stream cipher. To prevent this from happening, WEP includes a 24-bit initialization vector (IV) in each message packet. The RC4 key for that packet is the IV concatenated with the WEP key. WEP keys have to be changed manually and this typically happens infrequently. An attacker therefore can assume that all the keys used to encrypt packets share a single WEP key. This fact opened up WEP to a series of attacks which proved devastating. The simplest to understand uses the fact that the 24-bit IV on
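The scale of the IV problem can be sketched with the birthday bound (assuming, for illustration, that IVs are chosen at random): with only 2^24 possible IVs per WEP key, two packets share an IV, and hence a full RC4 keystream, after just a few thousand packets:

```python
import math

IV_SPACE = 2 ** 24  # 24-bit initialization vector

def collision_probability(packets, space=IV_SPACE):
    """Birthday-bound approximation of the chance that at least two
    packets reuse an IV (and therefore an RC4 per-packet key)."""
    return 1.0 - math.exp(-packets * (packets - 1) / (2.0 * space))

# Roughly 4823 packets (~ sqrt(2 ln 2 * 2^24)) already give a ~50% chance:
print(round(collision_probability(4823), 2))  # ~0.5
```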
|
https://en.wikipedia.org/wiki/Column%20inch
|
A column inch was the standard measurement of the amount of content in published works that use multiple columns per page. A column inch is a unit of space one column wide by one inch high.
A newspaper page
Newspaper pages are laid out on a grid that consists of margins on all four sides, a number of vertical columns, and spaces between the columns, called gutters. Broadsheet newspaper pages in the United States usually have 6–9 columns, while tabloid-sized publications have 5 columns.
Column width
In the United States, a common newspaper column measurement is about 11 picas wide (about 1.8 inches or 4.7 cm), though this measure varies from paper to paper and in other countries. The examples in this article follow this assumption for illustrative purposes only.
Column inches and advertising
Newspapers sell advertising space on a page to retail advertisers, advertising agencies and other media buyers. Newspapers publish a "per column inch" rate based on their circulation and demographic figures. Generally, the more readers the higher the column inch rate is. Newspapers with more affluent readers may be able to command an even higher column inch rate. For most newspapers, however, the published rate is just a starting point. Sales representatives generally negotiate lower rates for frequent advertisers.
Advertisements are measured using column inches. An advertisement that is 1 column inch square is 11 picas wide by 1 inch high. The column inch size for advertisements that spread over more than one column is determined by multiplying the number of inches high by number of columns. For example, an advertisement that is 3 columns wide by 6 inches high takes up 18 column inches (3 columns wide multiplied by 6 inches high).
To determine the cost of the advertisement, multiply the number of column inches by the newspaper's rate. So, if a newspaper charges $10 per column inch, the cost for the advertisement discussed above would be $180.00 (18 column inches multiplied by $10.00). Advertisements that span o
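The arithmetic above can be expressed as a small sketch:

```python
# Worked example of the column-inch arithmetic described above.
def column_inches(columns_wide, inches_high):
    """Size of an ad in column inches: columns wide x inches high."""
    return columns_wide * inches_high

def ad_cost(columns_wide, inches_high, rate_per_column_inch):
    """Cost = column inches x the paper's per-column-inch rate."""
    return column_inches(columns_wide, inches_high) * rate_per_column_inch

size = column_inches(3, 6)   # 3 columns x 6 inches = 18 column inches
cost = ad_cost(3, 6, 10.00)  # at $10 per column inch -> $180.00
print(size, cost)
```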
|
https://en.wikipedia.org/wiki/Military%20Grid%20Reference%20System
|
The Military Grid Reference System (MGRS) is the geocoordinate standard used by NATO militaries for locating points on Earth. The MGRS is derived from the Universal Transverse Mercator (UTM) grid system and the Universal Polar Stereographic (UPS) grid system, but uses a different labeling convention. The MGRS is used as geocode for the entire Earth.
An example of an MGRS coordinate, or grid reference, would be 4QFJ12345678, which consists of three parts:
4Q (grid zone designator, GZD)
FJ (the 100,000-meter square identifier)
1234 5678 (numerical location; easting is 1234 and northing is 5678, in this case specifying a location with 10 m resolution)
An MGRS grid reference is a point reference system. When the term 'grid square' is used, it can refer to a square with a side length of 10 km, 1 km, 100 m, 10 m or 1 m, depending on the precision of the coordinates provided. (In some cases, squares adjacent to a Grid Zone Junction (GZJ) are clipped, so polygon is a better descriptor of these areas.) The number of digits in the numerical location must be even: 0, 2, 4, 6, 8 or 10, depending on the desired precision. When changing precision levels, it is important to truncate rather than round the easting and northing values to ensure the more precise polygon will remain within the boundaries of the less precise polygon. Related to this is the primacy of the southwest corner of the polygon being the labeling point for an entire polygon. In instances where the polygon is not a square and has been clipped by a grid zone junction, the polygon keeps the label of the southwest corner as if it had not been clipped.
4Q ......................GZD only, precision level 6° × 8° (in most cases)
4Q FJ ...................GZD and 100 km Grid Square ID, precision level 100 km
4Q FJ 1 6 ...............precision level 10 km
4Q FJ 12 67 .............precision level 1 km
4Q FJ 123 678 ...........precision level 100 m
4Q FJ 1234 6789 .........p
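The truncate-don't-round rule above can be sketched as follows (a hypothetical helper, assuming the numerical location is an even-length string with easting first and northing second):

```python
def reduce_precision(numerical, digits):
    """Truncate an MGRS numerical location to `digits` digits per
    coordinate. Truncation (never rounding) keeps the reference inside
    the coarser grid square."""
    assert len(numerical) % 2 == 0 and 0 <= digits <= len(numerical) // 2
    half = len(numerical) // 2
    easting, northing = numerical[:half], numerical[half:]
    return easting[:digits] + northing[:digits]

# The 10 m reference 1234 5678 from the example above, at 1 km precision:
print(reduce_precision("12345678", 2))  # -> "1256", i.e. grid square 12 56
```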
|
https://en.wikipedia.org/wiki/Magnetic%20core
|
A magnetic core is a piece of magnetic material with a high magnetic permeability used to confine and guide magnetic fields in electrical, electromechanical and magnetic devices such as electromagnets, transformers, electric motors, generators, inductors, magnetic recording heads, and magnetic assemblies. It is made of ferromagnetic metal such as iron, or ferrimagnetic compounds such as ferrites. The high permeability, relative to the surrounding air, causes the magnetic field lines to be concentrated in the core material. The magnetic field is often created by a current-carrying coil of wire around the core.
The use of a magnetic core can increase the strength of magnetic field in an electromagnetic coil by a factor of several hundred times what it would be without the core. However, magnetic cores have side effects which must be taken into account. In alternating current (AC) devices they cause energy losses, called core losses, due to hysteresis and eddy currents in applications such as transformers and inductors. "Soft" magnetic materials with low coercivity and hysteresis, such as silicon steel, or ferrite, are usually used in cores.
Core materials
An electric current through a wire wound into a coil creates a magnetic field through the center of the coil, due to Ampere's circuital law. Coils are widely used in electronic components such as electromagnets, inductors, transformers, electric motors and generators. A coil without a magnetic core is called an "air core" coil. Adding a piece of ferromagnetic or ferrimagnetic material in the center of the coil can increase the magnetic field by hundreds or thousands of times; this is called a magnetic core. The field of the wire penetrates the core material, magnetizing it, so that the strong magnetic field of the core adds to the field created by the wire. The amount that the magnetic field is increased by the core depends on the magnetic permeability of the core material. Because side effects such as e
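A rough numerical sketch of the amplification described above, using the idealized long-solenoid formula B = μ_r μ₀ n I (the turn density, current, and permeability values are assumptions for illustration):

```python
import math

MU_0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def solenoid_b_field(turns_per_m, current_a, mu_r=1.0):
    """Idealized long-solenoid field B = mu_r * mu_0 * n * I.
    mu_r = 1 models an air core; a ferromagnetic core multiplies the field."""
    return mu_r * MU_0 * turns_per_m * current_a

air = solenoid_b_field(1000, 2.0)                # air-core coil
iron = solenoid_b_field(1000, 2.0, mu_r=4000.0)  # assumed iron-like core
print(iron / air)  # the core multiplies the field by mu_r, here 4000
```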
|
https://en.wikipedia.org/wiki/Kinematic%20determinacy
|
Kinematic determinacy is a term used in structural mechanics to describe a structure where material compatibility conditions alone can be used to calculate deflections. A kinematically determinate structure can be defined as a structure where, if it is possible to find nodal displacements compatible with member extensions, those nodal displacements are unique. The structure has no possible mechanisms, i.e. nodal displacements, compatible with zero member extensions, at least to a first-order approximation. Mathematically, the mass matrix of the structure must have full rank. Kinematic determinacy can be loosely used to classify an arrangement of structural members as a structure (stable) instead of a mechanism (unstable). The principles of kinematic determinacy are used to design precision devices such as mirror mounts for optics, and precision linear motion bearings.
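A minimal numerical sketch of the mechanism test described above (a hypothetical pin-jointed example, not from the article): each row of a compatibility matrix maps nodal displacements to one member's extension, and a rank-deficient matrix has a nontrivial null space, i.e. a displacement with zero member extensions, a mechanism:

```python
def matrix_rank(rows, tol=1e-9):
    """Rank by Gaussian elimination (enough for small compatibility matrices)."""
    m = [row[:] for row in rows]
    rank = 0
    cols = len(m[0]) if m else 0
    for col in range(cols):
        # Find a pivot row for this column.
        pivot = next((r for r in range(rank, len(m)) if abs(m[r][col]) > tol), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        for r in range(len(m)):
            if r != rank and abs(m[r][col]) > tol:
                f = m[r][col] / m[rank][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# A node restrained by two bars at right angles: extensions e = B d with
# B = [[1, 0], [0, 1]] -> full rank, displacements unique (determinate).
print(matrix_rank([[1, 0], [0, 1]]))  # 2

# Two collinear bars: B = [[1, 0], [1, 0]] -> rank 1; the null vector (0, 1)
# is a sideways sway with zero member extension, i.e. a mechanism.
print(matrix_rank([[1, 0], [1, 0]]))  # 1
```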
See also
Statical determinacy
Precision engineering
Kinematic coupling
References
Mechanical engineering
|
https://en.wikipedia.org/wiki/Lidstone%20series
|
In mathematics, a Lidstone series, named after George James Lidstone, is a kind of polynomial expansion that can express certain types of entire functions.
Let ƒ(z) be an entire function of exponential type less than (N + 1)π, as defined below. Then ƒ(z) can be expanded in terms of polynomials An as follows:

f(z) = Σ_{n=0}^∞ [A_n(1 − z) f^{(2n)}(0) + A_n(z) f^{(2n)}(1)] + Σ_{k=1}^N C_k sin(kπz).

Here A_n(z) is a polynomial in z of degree 2n + 1, C_k a constant, and ƒ^{(n)}(a) the nth derivative of ƒ at a.

A function is said to be of exponential type less than t if the function

Λ(θ) = lim sup_{r→∞} r^{−1} log |f(r e^{iθ})|

is bounded above by t for every θ. Thus, the constant N used in the summation above is given by

t = sup_{0 ≤ θ < 2π} Λ(θ),

with

Nπ ≤ t < (N + 1)π.
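The polynomials A_n can be generated from the standard characterization A_0(z) = z, A_n'' = A_{n−1}, A_n(0) = A_n(1) = 0; a sketch with exact rational coefficients:

```python
from fractions import Fraction

def lidstone_polynomials(count):
    """Return coefficient lists (ascending powers of z) of A_0 .. A_{count-1},
    using A_0(z) = z, A_n'' = A_{n-1}, A_n(0) = A_n(1) = 0."""
    polys = [[Fraction(0), Fraction(1)]]  # A_0(z) = z
    for _ in range(1, count):
        prev = polys[-1]
        # Integrate twice: if A_n'' = prev, the coefficient of z^(k+2) is
        # prev[k] / ((k+1)(k+2)); the constant term stays 0 (A_n(0) = 0).
        twice = [Fraction(0), Fraction(0)] + [
            c / ((k + 1) * (k + 2)) for k, c in enumerate(prev)
        ]
        twice[1] = -sum(twice)  # forces A_n(1) = sum of coefficients = 0
        polys.append(twice)
    return polys

A0, A1 = lidstone_polynomials(2)
print(A1)  # A_1(z) = (z^3 - z)/6 -> [0, -1/6, 0, 1/6]
```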
References
Ralph P. Boas, Jr. and C. Creighton Buck, Polynomial Expansions of Analytic Functions, (1964) Academic Press, NY. Library of Congress Catalog 63-23263. Issued as volume 19 of Moderne Funktionentheorie ed. L.V. Ahlfors, series Ergebnisse der Mathematik und ihrer Grenzgebiete, Springer-Verlag
Mathematical series
|
https://en.wikipedia.org/wiki/List%20of%20manufacturing%20processes
|
This tree lists various manufacturing processes arranged by similarity of function.
Casting
Centrifugal casting (industrial)
Continuous casting
Die casting
Evaporative-pattern casting
Full-mold casting
Lost-foam casting
Investment casting (Lost-wax casting)
Countergravity casting
Lost-foam casting
Low pressure die casting
Permanent mold casting
Plastic mold casting
Resin casting
Sand casting
Shell molding
Slush casting, Slurry casting
Vacuum molding
Data from Fundamentals of modern manufacturing
Labeling and painting
Main articles: Imaging and Coating
Laser engraving
Inkjet printing
Chemical vapor deposition
Sputter deposition
Plating
Thermal spraying
Moulding
Powder metallurgy
Compaction plus sintering
Hot isostatic pressing
Metal injection moulding
Spray forming
Plastics (see also Rapid prototyping)
Injection
Compression molding
Transfer
Extrusion
Blow molding
Dip moulding
Rotational molding
Thermoforming
Laminating
Expandable bead
Foam
Vacuum plug assist
Pressure plug assist
Matched mould
Shrink wrapping
Forming
End tube forming
Tube beading
Forging
Smith
Hammer forge
Drop forge
Press
Impact (see also Extrusion)
Upset
No draft
High-energy-rate
Cored
Incremental
Powder
Rolling (Thick plate and sheet metal)
Cold rolling
Hot rolling
Sheet metal
Shape
Ring
Transverse
Cryorolling
Orbital
Cross-rolling
Thread
Screw thread
Thread rolling
Extrusion
Impact extrusion
Pressing
Embossing
Stretch forming
Blanking (see drawing below)
Drawing (manufacturing) (pulling sheet metal, wire, bar, or tube)
Bulging
Necking
Nosing
Deep drawing (sinks, auto body)
Bending
Hemming
Shearing
Blanking and piercing
Trimming
Shaving
Notching
Perforating
Nibbling
Dinking
Lancing
Cutoff
Stamping
Metal
Leather
Progressive
Coining
Straight shearing
Slitting
Other
Redrawing
Ironing
Flattening
Swaging
Spinning
Peening
Guerin process
Wheelon process
Magnetic pulse
Explosive forming
Electroforming
Staking
Seaming
Flanging
Straightening
Decambering
Cold sizing
Hubbing
Hot metal gas forming
Curlin
|
https://en.wikipedia.org/wiki/Gheorghe%20%C8%9Ai%C8%9Beica
|
Gheorghe Țițeica (4 October 1873 – 5 February 1939; published as George or Georges Tzitzéica) was a Romanian mathematician who made important contributions in geometry. He is recognized as the founder of the Romanian school of differential geometry.
Education
He was born in Turnu Severin, western Oltenia, the son of Anca (née Ciolănescu) and Radu Țiței, originally from Cilibia, in Buzău County. His name was registered as Țițeica, a combination of his parents' surnames. He showed an early interest in science, as well as music and literature. Țițeica was an accomplished violinist, having studied music since childhood; music was to remain his hobby. While studying at the Carol I High School in Craiova, he contributed to the school's magazine, writing the columns on mathematics and pieces of literary criticism. After graduating in 1892, he won a scholarship to the preparatory school in Bucharest, and was also admitted as a student in the Mathematics Department of the University of Bucharest's Faculty of Sciences. His teachers there included David Emmanuel, Spiru Haret, Constantin Gogu, Dimitrie Petrescu, and Iacob Lahovary. In June 1895, he graduated with a Bachelor of Mathematics.
In the summer of 1896, after a stint as a substitute teacher at the Bucharest theological seminary, Țițeica passed his exams for promotion to a secondary school position, becoming teacher in Galați.
In 1897, on the advice of teachers and friends, Țițeica continued his studies at a preparatory school in Paris. Among his classmates were Henri Lebesgue and Paul Montel. After ranking first in his class and earning a second undergraduate degree from the Sorbonne in 1897, he was admitted to the École Normale Supérieure, where he took classes with Paul Appell, Gaston Darboux, Édouard Goursat, Charles Hermite, Gabriel Koenigs, Émile Picard, Henri Poincaré, and Jules Tannery. Țițeica chose Darboux to be his thesis advisor; after working for two years on his doctoral dissertation, titled Sur les
|
https://en.wikipedia.org/wiki/Polaritonics
|
Polaritonics is an intermediate regime between photonics and sub-microwave electronics (see Fig. 1). In this regime, signals are carried by an admixture of electromagnetic and lattice vibrational waves known as phonon-polaritons, rather than currents or photons. Since phonon-polaritons propagate with frequencies in the range of hundreds of gigahertz to several terahertz, polaritonics bridges the gap between electronics and photonics. A compelling motivation for polaritonics is the demand for high speed signal processing and linear and nonlinear terahertz spectroscopy. Polaritonics has distinct advantages over electronics, photonics, and traditional terahertz spectroscopy in that it offers the potential for a fully integrated platform that supports terahertz wave generation, guidance, manipulation, and readout in a single patterned material.
Polaritonics, like electronics and photonics, requires three elements: robust waveform generation, detection, and guidance and control. Without all three, polaritonics would be reduced to just phonon-polaritons, just as electronics and photonics would be reduced to just electromagnetic radiation. These three elements can be combined to enable device functionality similar to that in electronics and photonics.
Illustration
To illustrate the functionality of polaritonic devices, consider the hypothetical circuit in Fig. 2 (right). The optical excitation pulses that generate phonon-polaritons, in the top left and bottom right of the crystal, enter normal to the crystal face (into the page). The resulting phonon-polaritons will travel laterally away from the excitation regions. Entrance into the waveguides is facilitated by reflective and focusing structures. Phonon-polaritons are guided through the circuit by terahertz waveguides carved into the crystal. Circuit functionality resides in the interferometer structure at the top and the coupled waveguide structure at the bottom of the circuit. The latter employs a photonic bandgap st
|
https://en.wikipedia.org/wiki/Mark%20Levinson%20ML-3
|
The Mark Levinson ML-3 was a 200 watt per channel dual monaural Class AB2 power amplifier that used toroidal transformers. Produced between 1979 and 1987, the ML-3 consisted of two electrically separate amplifiers in one chassis, hence the name "Dual Monaural". It also featured discrete circuit construction; no integrated circuits were incorporated to keep the signal pure. The design was by Thomas P. Colangelo.
The ML-3 was an archetype of the American high-end, high-power amplifier.
Specifications
200 W/channel at 8 ohms, 400 W/ch at 4 ohms, 800 W/ch at 2 ohms
Maximum output: 45 volts 30 amperes
Two 1.2 kVA Avel Lindbergh toroidal transformers, 4 Sprague 36,000 µF, 100 V capacitors and 40 output devices (20 per channel)
Range: 20 Hz to 20 kHz with less than 0.2% total harmonic distortion
Gold-plated CAMAC input connectors
Adjustable AC voltage: No, factory set
Adjustable output damping toggle switches (one per channel) (in later models only)
Weight: 116 lb (56 kg)
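The doubling of rated power as the load impedance halves follows from P = V²/R at a fixed output voltage (here V ≈ 40 V RMS, since 40²/8 = 200 W; a simplified model that ignores current limiting):

```python
def power_watts(v_rms, load_ohms):
    """Average power delivered into a resistive load: P = V^2 / R."""
    return v_rms ** 2 / load_ohms

V = 40.0  # RMS voltage implied by the 200 W / 8 ohm rating
print([power_watts(V, r) for r in (8, 4, 2)])  # [200.0, 400.0, 800.0]
```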
External links
Mark Levinson Equipment History
References
Audio amplifiers
|
https://en.wikipedia.org/wiki/Stimulus%20protocol
|
In telephony, a stimulus protocol is a type of protocol that is used to carry event notifications between end points. Such a protocol is used to control the operation of devices at each end of the link. However a stimulus protocol is not sensitive to the system state. In a typical application such a protocol will carry keystroke information from a telephone set to a central call control. It may also carry control information for simple types of text displays. MiNET from Mitel is a typical protocol of this sort.
Stimulus protocols are most suited to networks with dumb peripherals and intelligent centralized applications (see intelligent network). This is in contrast to functional protocols which are best suited to a network with an intelligent periphery and a dumb core (see dumb network). Because these architectures share core hardware over large numbers of peripherals, large and expensive computing capabilities in terms of both hardware and software may be supplied. These centralized architectures excel in solving the problems of complexity for large scale applications since the investment in hardware and software may be amortized across a great many users. However the same virtue of large scale sharing prevents these architectures from providing any significant degree of customization to the preferences of the individual user. The most that can be supplied is a degree of parameterization of service operation.
With their suitability tied to the virtues of centralized network architectures, stimulus protocols are being deprecated in favor of functional protocols with the rise of the Internet. A functional protocol such as Session Initiation Protocol (SIP) from the IETF is more suited for Internet applications.
Network protocols
Telephony
|
https://en.wikipedia.org/wiki/Gaydar%20%28website%29
|
Gaydar is a profile-based dating website for gay and bisexual men.
History
The Gaydar website, built initially for desktop only, was created as a tool to connect gay and bisexual men all over the world for friendships, hook-ups, dating and relationships. Users create a personal 'member' profile, which is then used to interact and contact other registered members.
It was founded in 1999 in Cape Town, South Africa, by London-based South Africans Gary Frisch and his partner Henry Badenhorst, after a friend complained that he was too busy to look for a new boyfriend. The initial idea was based upon a then current concept of a corporate intranet that was in development under the codename "RADAR" (Rapid Access And Deployment Resource) for a prominent South African advertising conglomerate by programmers Ian Van Schalkwyk and Stephen Hadden. The site was launched in November 1999.
In May 2007, Henry Badenhorst was named by the Independent on Sunday Pink List as the fourth most influential gay person in Britain, down from third place the previous year.
In 2009, Gaydar expanded into the app market, releasing iOS and Android apps, available to download from the Apple App Store and Google Play.
In May 2013, it was announced that the site had been sold to Charlie Parsons, the creator of Channel 4's The Big Breakfast.
In 2017, Gaydar relaunched its site, app and brand.
Registration
Registered users are able to browse through online lists of users who are logged into the site at that time, or through lists of all active profiles. Users can send messages to each other and participate in chat rooms, which — except for the Australian and Irish chat rooms — tend to be dominated by UK users. Users can upgrade to a premium account to access Gaydar VIP, which offers more features and privileges. Members may add more photos into an 'album' attached to their profile that are viewable by other members. Guests face other site restrictions, such as a daily limit of 8 messages t
|
https://en.wikipedia.org/wiki/WBKI%20%28TV%29
|
WBKI (channel 58) is a television station licensed to Salem, Indiana, United States, serving the Louisville, Kentucky, area as a dual affiliate of The CW and MyNetworkTV. It is the only full-power Louisville-area station licensed to the Indiana side of the market. WBKI is owned by Block Communications alongside Fox affiliate WDRB (channel 41). Both stations share studios on West Muhammad Ali Boulevard (near US 150) in downtown Louisville, while WBKI's transmitter is located in rural northeastern Floyd County, Indiana (northeast of Floyds Knobs). Despite Salem being WBKI's city of license, the station maintains no physical presence there.
Block formerly operated a CW affiliate with the WBKI-TV call sign on channel 34, licensed to Campbellsville, Kentucky, under a local marketing agreement (LMA) with owner LM Communications, LLC. Following the sale of channel 34's spectrum in the Federal Communications Commission (FCC)'s incentive auction, the Campbellsville station ceased broadcasting on October 25, 2017 (with its license canceled on October 31); its channels are now broadcast solely through channel 58 on that station's license.
History
The station first signed on the air on March 16, 1994, as WFTE, with the call letters being an abbreviation of its channel number. Branded on-air as "Big 58," it originally operated as an independent station. It was originally licensed to Salem, Indiana businessman Don Martin Jr. Martin sold the license in 1993 to another Salem businessman, Tom Ledford, who worked with WDRB to program the station under one of the earliest local marketing agreements in existence. WFTE also aired the police procedural series NYPD Blue during the 1994–95 season as ABC affiliate WHAS-TV (channel 11) declined to carry the program, as many ABC affiliates in the Southern United States did when it premiered, but would later cede to viewer and advertiser pressure to carry it when the show gained traction in the national ratings.
The station became a charter
|
https://en.wikipedia.org/wiki/Antitropical%20distribution
|
Antitropical (alternatives include biantitropical or amphitropical) distribution is a type of disjunct distribution in which a species or clade exists at comparable latitudes on both sides of the equator but not in the tropics. For example, a species may be found north of the Tropic of Cancer and south of the Tropic of Capricorn, but not in between. With increasing time since dispersal, the disjunct populations may be the same variety, species, or clade. How organisms reach the opposite hemisphere when they cannot normally survive in the tropics depends on the species: plants may have their seeds spread by wind, animals, or other means and then germinate upon reaching the appropriate climate, while sea life may pass through the tropical regions in a larval state or by following deep ocean currents, which are much colder than surface waters. For the American amphitropical distribution, dispersal has been generally agreed to be more likely than vicariance from a previous distribution including the tropics in North and South America.
Known cases
Plants
Phacelia crenulata – scorpionweed
Bowlesia incana – American Bowlesia
Osmorhiza berteroi and Osmorhiza depauperata – sweet cecily species.
Ruppia megacarpa
Solenogyne
For a list of American amphitropically distributed plants (237 vascular plants), see the tables in the open-access paper Simpson et al. 2017 or their working group on figshare.
Animals
Scylla serrata – mud crab
Freshwater crayfish
Ground beetle genus Bembidion
Bryophytes and lichens
Tetraplodon fuegianus – dung moss
See also
Rapoport's rule
References
Biogeography
|
https://en.wikipedia.org/wiki/Lebesgue%20point
|
In mathematics, given a locally Lebesgue integrable function f on R^k, a point x in the domain of f is a Lebesgue point if

lim_{r→0+} (1/|B(x, r)|) ∫_{B(x,r)} |f(y) − f(x)| dy = 0.

Here, B(x, r) is a ball centered at x with radius r > 0, and |B(x, r)| is its Lebesgue measure. The Lebesgue points of f are thus points where f does not oscillate too much, in an average sense.

The Lebesgue differentiation theorem states that, given any f ∈ L¹, almost every x is a Lebesgue point of f.
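A one-dimensional numerical illustration (an assumed example, not from the article): for the step function f(y) = 0 for y < 0 and f(y) = 1 for y ≥ 0, the averages of |f(y) − f(x)| over shrinking intervals tend to 0 at a continuity point like x = 1, but stay near 1/2 at the jump x = 0, so 0 is not a Lebesgue point:

```python
def f(y):
    """Step function: 0 on the negatives, 1 elsewhere."""
    return 0.0 if y < 0 else 1.0

def mean_deviation(x, r, samples=10000):
    """Midpoint-rule average of |f(y) - f(x)| over the ball (x - r, x + r)."""
    h = 2.0 * r / samples
    total = sum(abs(f(x - r + (i + 0.5) * h) - f(x)) for i in range(samples))
    return total * h / (2.0 * r)

print(mean_deviation(1.0, 0.01))  # 0.0: x = 1 is a Lebesgue point
print(mean_deviation(0.0, 0.01))  # ~0.5 for every r: x = 0 is not
```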
References
Mathematical analysis
|
https://en.wikipedia.org/wiki/Fade%20%28audio%20engineering%29
|
In audio engineering, a fade is a gradual increase or decrease in the level of an audio signal. The term can also be used for film cinematography or theatre lighting in much the same way (see fade (filmmaking) and fade (lighting)).
A recorded song may be gradually reduced to silence at its end (fade-out), or may gradually increase from silence at the beginning (fade-in). Fading-out can serve as a recording solution for pieces of music that contain no obvious ending. Both fades and crossfades are very valuable since they allow the engineer to quickly and easily make sure that the beginning and the end of any audio is smooth, without any prominent glitches. It is necessary that there is a clear section of silence prior to the audio. Fade-ins and -outs can also be used to change the characteristics of a sound, such as to soften the attack in vocals where very plosive (‘b’, ‘d’, and ‘p’) sounds occur. It can also be used to soften up the attack of the drum and/or percussion instruments. A crossfade can be manipulated through its rates and coefficients in order to create different styles of fading. Almost every fade is different; this means that the fade parameters must be adjusted according to the individual needs of the mix.
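A minimal sketch of fades in code (linear fade-in and fade-out ramps applied to a sample buffer; an equal-power crossfade would use square-root gain curves instead):

```python
def fade_in(samples, n):
    """Linearly ramp the first n samples up from silence to full level."""
    return [s * min(1.0, i / n) for i, s in enumerate(samples)]

def fade_out(samples, n):
    """Linearly ramp the last n samples down toward silence."""
    total = len(samples)
    return [s * min(1.0, (total - i) / n) for i, s in enumerate(samples)]

buf = [1.0] * 8
print(fade_in(buf, 4))   # [0.0, 0.25, 0.5, 0.75, 1.0, 1.0, 1.0, 1.0]
print(fade_out(buf, 4))  # ends ... 1.0, 0.75, 0.5, 0.25
```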
Professional turntablists and DJs in hip hop music use faders on a DJ mixer, notably the horizontal crossfader, in a rapid fashion while simultaneously manipulating two or more record players (or other sound sources) to create "scratching" and develop beats. Club DJs in house music and techno use DJ mixers, two or more sound sources (two record players, two iPods, etc.) along with a skill called beatmatching (aligning the beats and tempos of two records) to make seamless dance mixes for dancers at raves, nightclubs and dance parties.
Though relatively rare, songs can fade out then fade back in. Some examples of this are "Helter Skelter" and "Strawberry Fields Forever" by The Beatles, "Suspicious Minds" by Elvis Presley, "Shine On Brightly" by P
|
https://en.wikipedia.org/wiki/Google%20Directory
|
The Google Directory was a web directory hosted by Google and based on the open source project DMOZ. It was discontinued on July 20, 2011. However, the Google business places and recommended businesses are now commonly referred to as the Google directory.
Information
The Google Directory was organized into 16 main categories. The directory with its upper level topics and sub-categories could provide more specific results than the usual keyword search.
Arts
Business
Computers
Games
Health
Home
Kids and teens
News
Recreation
Reference
Regional
Science
Shopping
Society
Sports
World
The World link offered the directory in other languages. The Kids and Teens link was a separate web archive for kids and teens.
The Google Directory was based on the Open Directory Project. Unlike the keyword search function of Google, the directory organization was created by humans.
Structure
Main page
The main page had links to the 16 main categories, along with the World and Kids and Teens links. A search box at the top allowed users to search the Google Directory; above it was the slogan in green letters, "The web organized by topic into categories", and above that were links to other Google services.
Main category pages
Each main category page had links to sub-category pages in alphabetical order, as well as a search box at the top. Each sub-category entry was followed by a number giving the number of items in that sub-category. Sub-categories were created as needed, and eventually the user would reach a page with no more sub-categories. Each page might have links to related categories. Some links were redirects to other pages.
World link
The World link had the names of languages. If the user clicked on one, they would be taken to a version of the directory in that language.
Kids and Teens
As the name states, it had pages for kids and teens. It was completely disconnected from the rest of the directory, so if you clicked it by accident, you woul
|
https://en.wikipedia.org/wiki/Google%20Scholar
|
Google Scholar is a freely accessible web search engine that indexes the full text or metadata of scholarly literature across an array of publishing formats and disciplines. Released in beta in November 2004, the Google Scholar index includes peer-reviewed online academic journals and books, conference papers, theses and dissertations, preprints, abstracts, technical reports, and other scholarly literature, including court opinions and patents.
Google Scholar uses a web crawler, or web robot, to identify files for inclusion in the search results. For content to be indexed in Google Scholar, it must meet certain specified criteria. A statistical estimate published in PLOS One, using a mark-and-recapture method, put Google Scholar's coverage at approximately 79–90% of all articles published in English, an estimated 100 million documents. The same study also estimated how many of those documents were freely available on the internet. Google Scholar has been criticized for not vetting journals and for including predatory journals in its index.
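The mark-and-recapture method mentioned above is usually based on the Lincoln–Petersen estimator; a minimal Python sketch (the figures in the example are invented for illustration, not the study's data):

```python
def lincoln_petersen(n1, n2, m):
    # N ≈ (n1 * n2) / m: draw a first sample of n1 items and "mark"
    # them; in a second sample of n2 items, m turn out to be already
    # marked. The overlap rate m/n2 estimates n1/N, giving N.
    if m == 0:
        raise ValueError("no recaptures: estimate undefined")
    return n1 * n2 / m
```

For instance, if two independent samples of 1,000 documents each overlap in 10 documents, the estimated population is 100,000.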
The University of Michigan Library and other libraries whose collections Google scanned for Google Books and Google Scholar retained copies of the scans and have used them to create the HathiTrust Digital Library.
History
Google Scholar arose out of a discussion between Alex Verstak and Anurag Acharya, both of whom were then working on building Google's main web index. Their goal was to "make the world's problem solvers 10% more efficient" by allowing easier and more accurate access to scientific knowledge. This goal is reflected in the Google Scholar's advertising slogan "Stand on the shoulders of giants", which was taken from an idea attributed to Bernard of Chartres, quoted by Isaac Newton, and is a nod to the scholars who have contributed to their fields over the centuries, providing the foundation for new intellectual achievements. One of the original sources for the texts in Google Scholar is the University of Michigan's print collection.
|
https://en.wikipedia.org/wiki/Journal%20of%20the%20British%20Interplanetary%20Society
|
The Journal of the British Interplanetary Society (JBIS) is a monthly peer-reviewed scientific journal, established in 1934 and published by the British Interplanetary Society. The journal covers research on astronautics and space science and technology, including spacecraft design, nozzle theory, launch vehicle design, mission architecture, space stations, lunar exploration, spacecraft propulsion, robotic and crewed exploration of the solar system, interstellar travel, interstellar communications, extraterrestrial intelligence, philosophy, and cosmology.
History
The journal was established in 1934 when the British Interplanetary Society was founded.
The first issue was only a six-page pamphlet, but it has the distinction of being the world's oldest surviving astronautical publication.
Notable papers
Notable papers published in the journal include:
The B.I.S. Space-Ship, H. E. Ross, JBIS, 5, pp. 4–9, 1939
The Challenge of the Spaceship (Astronautics and its Impact Upon Human Society), Arthur C. Clarke, JBIS, 6, pp. 66–78, 1946
Atomic rocket papers by Les Shepherd, Val Cleaver and others, 1948–1949
Interstellar Flight, L. R. Shepherd, JBIS, 11, pp. 149–167, 1952
A Programme for Achieving Interplanetary Flight, A. V. Cleaver, JBIS, 13, pp. 1–27, 1954
Special Issue on World Ships, JBIS, 37, 6, June 1984
Project Daedalus: Final Study Reports, Alan Bond & Anthony R. Martin et al., Special Supplement, JBIS, pp. S1–192, 1978
Editors
Some of the people that have been editor-in-chief of the journal are:
Philip E. Cleator
J. Hardy
Gerald V. Groves
Anthony R. Martin
Mark Hempsell
Chris Toomer
Kelvin Long
Roger Longstaff
See also
Spaceflight (magazine)
References
External links
British Interplanetary Society
Space science journals
Academic journals established in 1934
Planetary engineering
Monthly journals
English-language journals
1934 establishments in the United Kingdom
|
https://en.wikipedia.org/wiki/Nonelementary%20integral
|
In mathematics, a nonelementary antiderivative of a given elementary function is an antiderivative (or indefinite integral) that is, itself, not an elementary function (i.e. a function constructed from a finite number of quotients of constant, algebraic, exponential, trigonometric, and logarithmic functions using field operations). A theorem by Liouville in 1835 provided the first proof that nonelementary antiderivatives exist. This theorem also provides a basis for the Risch algorithm for determining (with difficulty) which elementary functions have elementary antiderivatives.
Examples
Examples of functions with nonelementary antiderivatives include:
√(1 − x⁴) (elliptic integral)
1/ln x (logarithmic integral)
e^(−x²) (error function, Gaussian integral)
sin(x²) and cos(x²) (Fresnel integral)
sin(x)/x (sine integral, Dirichlet integral)
eˣ/x (exponential integral)
(in terms of the exponential integral)
(in terms of the logarithmic integral)
x^(c−1) e^(−x) (incomplete gamma function); for c = 0 the antiderivative can be written in terms of the exponential integral; for c = 1/2 in terms of the error function; for c any positive integer, the antiderivative is elementary.
Some common non-elementary antiderivative functions are given names, defining so-called special functions, and formulas involving these new functions can express a larger class of non-elementary antiderivatives. The examples above name the corresponding special functions in parentheses.
Properties
Nonelementary antiderivatives can often be evaluated using Taylor series. Even if a function has no elementary antiderivative, its Taylor series can be integrated term by term like a polynomial, giving the antiderivative as a Taylor series with the same radius of convergence. However, even if the integrand has a convergent Taylor series, its sequence of coefficients often has no elementary formula and must be evaluated term by term; the same limitation applies to the integrated Taylor series.
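Term-by-term integration can be made concrete with the Gaussian integrand e^(−x²): integrating its Taylor series gives ∫₀ˣ e^(−t²) dt = Σ (−1)ⁿ x^(2n+1) / (n!(2n+1)), which is the error function up to a constant factor. A minimal Python sketch:

```python
import math

def gauss_antideriv(x, terms=30):
    # Integrate the Taylor series of exp(-t^2) term by term:
    # ∫₀ˣ e^(−t²) dt = Σₙ (−1)ⁿ x^(2n+1) / (n! (2n+1))
    return sum((-1) ** n * x ** (2 * n + 1)
               / (math.factorial(n) * (2 * n + 1))
               for n in range(terms))
```

Since erf(x) = (2/√π) ∫₀ˣ e^(−t²) dt, the series can be checked against the library error function.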
Even if it is not possible to evaluate an indefinite integral (antiderivative)
|
https://en.wikipedia.org/wiki/Micronet%20800
|
Micronet 800 was an information provider (IP) on Prestel, aimed at the 1980s personal computer market. It was an online magazine that gave subscribers computer related news, reviews, general subject articles and downloadable telesoftware.
Users would log onto the Prestel network (which was usually a local call) and then access the Micronet 800 home page by entering *800# (hence the name) on their modem or computer. Most Micronet 800 members would have their default main index page set to page 800 automatically.
History
The name Micronet 800 derives from its home page, page 800, on the BT Prestel videotex service.
Micronet 800 derived from the earlier development in 1980 and 1981 of 'Electronic Insight' by Bob Denton. Electronic Insight was a Prestel-based feature-and-price-comparison service listing computers, calculators and other electronic and IT products, whose main page was on page 800 of Prestel. Electronic Insight was acquired by Telemap Group, a part of EMAP, East Midland (note, not Midlands) Allied Press, in 1982 on the recommendation of Richard Hease, a number of whose computer magazines EMAP had just bought. Telemap had been formed in 1981 to explore the opportunities of British Telecom's Prestel videotex service. It had been looking at the horticultural market that EMAP served with a number of magazine titles, notably providing a 'Closed User Group' purchasing network for garden centre businesses, complementing EMAP's printed 'Garden Trade News' magazine. But horticulturalists and IT proved not to be a natural marriage, and the service had insufficient users to make it viable.
Richard Hease, in 1982 Chairman of EMAP's Computer & Business Press which had acquired Electronic Insight, organised a pitch to the Telemap Group by David Babsky of a projected interactive online computer magazine to replace the existing content of Electronic Insight. Babsky showed a 'dummy issue' of the intended online magazine, programmed in Integer BASIC on an Apple II compute
|
https://en.wikipedia.org/wiki/MKC%20Networks
|
MKC Networks was a privately owned supplier of VoIP (Voice over IP) equipment and software components headquartered in Ottawa, Ontario, Canada. It designed and sold a family of SIP-based products including advanced SIP Enterprise Application Servers and scalable communication platforms.
MKC Networks bought certain SIP intellectual property in 2003 from a company called Mitel Knowledge, which had been created to hold the intellectual property of Mitel Networks. Through internal exploratory R&D work, MKC Networks evolved into a supplier of SIP-based equipment.
The intellectual property of Mitel Networks was returned to that company's ownership in 2003.
MKC Networks was acquired by NewHeights Software on September 1, 2006. In 2007, NewHeights was acquired by CounterPath Corporation.
References
External links
CounterPath
VoIP companies of Canada
Companies based in Ottawa
Defunct networking companies
Year of establishment missing
Defunct VoIP companies
Defunct computer companies of Canada
|
https://en.wikipedia.org/wiki/Mark%20Levinson%20No.%2026
|
The Mark Levinson No. 26 and No. 26S Dual Monaural Preamplifiers were produced between 1991 and 1994 by Madrigal Audio Laboratories; the No. 26S was differentiated from the No. 26 by its Teflon circuit boards. The unit uses Camac coaxial connectors (except for the XLR balanced ones), which were also used in the medical industry because they break hot before they break ground when unplugged, preventing shorts that could damage the preamplifier or, in medical settings, endanger patients. An external power block, the PLS-226, was used to keep interference to a minimum.
The No. 26 came in three versions: a balanced audio version, a phono version, and a basic version with neither. There was only space for one expansion board inside the unit, so it was not possible to install both the phono and the balanced boards. The phono and balanced inputs on the unit were therefore shared; only one set could be active, depending on which board was present. The 26S included both phono and one balanced input as standard.
Specifications
XLR Balanced Stereo Input and Output
3 Outputs: Main/Tape 1/Tape 2
6 Inputs: Tuner/Tape 1/Tape 2/CD/Aux 1/Phono or Aux 2
8 Selector knobs: Input/Output Level/Record/Monitor/Phase/Balanced 1/Balanced 2/Stereo, Mono
Voltage can be adjusted internally: 100, 120, 200, 220, 240 VAC @ 50–60 Hz
External links
Mark Levinson Equipment History
Audio amplifiers
|
https://en.wikipedia.org/wiki/Superquadrics
|
In mathematics, the superquadrics or super-quadrics (also superquadratics) are a family of geometric shapes defined by formulas that resemble those of ellipsoids and other quadrics, except that the squaring operations are replaced by arbitrary powers. They can be seen as the three-dimensional relatives of the superellipses. The term may refer to the solid object or to its surface, depending on the context. The equations below specify the surface; the solid is specified by replacing the equality signs by less-than-or-equal signs.
The superquadrics include many shapes that resemble cubes, octahedra, cylinders, lozenges and spindles, with rounded or sharp corners. Because of their flexibility and relative simplicity, they are popular geometric modeling tools, especially in computer graphics, and have become important geometric primitives widely used in computer vision, robotics, and physical simulation.
Some authors, such as Alan Barr, define "superquadrics" as including both the superellipsoids and the supertoroids. In the modern computer vision literature, superquadrics and superellipsoids are used interchangeably, since superellipsoids are the most representative and widely used shapes among the superquadrics. Comprehensive coverage of the geometric properties of superquadrics, and of methods for recovering them from range images and point clouds, can be found in the computer vision literature. Open-source tools and algorithms are available for superquadric visualization, sampling, and recovery.
Formulas
Implicit equation
The surface of the basic superquadric is given by
|x|^r + |y|^s + |z|^t = 1
where r, s, and t are positive real numbers that determine the main features of the superquadric. Namely:
less than 1: a pointy octahedron modified to have concave faces and sharp edges.
exactly 1: a regular octahedron.
between 1 and 2: an octahedron modified to have convex faces, blunt edges and blunt corners.
exactly 2: a sphere
greater than 2: a cube modified to have rounded edges an
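The implicit form can be evaluated directly to classify points against the surface; a minimal Python sketch (the semi-axis parameters A, B, C are an added generalization for illustration, not part of the basic equation in the text):

```python
def superquadric_f(x, y, z, r, s, t, A=1.0, B=1.0, C=1.0):
    # Implicit function of the superquadric surface:
    # f < 1 inside, f == 1 on the surface, f > 1 outside.
    # A, B, C scale the shape along each axis (semi-axes).
    return abs(x / A) ** r + abs(y / B) ** s + abs(z / C) ** t
```

With r = s = t = 2 this reduces to the unit sphere; with r = s = t = 1, to the regular octahedron described above.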
|
https://en.wikipedia.org/wiki/Ptolemy%27s%20theorem
|
In Euclidean geometry, Ptolemy's theorem is a relation between the four sides and two diagonals of a cyclic quadrilateral (a quadrilateral whose vertices lie on a common circle). The theorem is named after the Greek astronomer and mathematician Ptolemy (Claudius Ptolemaeus). Ptolemy used the theorem as an aid to creating his table of chords, a trigonometric table that he applied to astronomy.
If the vertices of the cyclic quadrilateral are A, B, C, and D in order, then the theorem states that:
AC · BD = AB · CD + BC · AD
This relation may be verbally expressed as follows:
If a quadrilateral is cyclic then the product of the lengths of its diagonals is equal to the sum of the products of the lengths of the pairs of opposite sides.
Moreover, the converse of Ptolemy's theorem is also true:
In a quadrilateral, if the sum of the products of the lengths of its two pairs of opposite sides is equal to the product of the lengths of its diagonals, then the quadrilateral can be inscribed in a circle i.e. it is a cyclic quadrilateral.
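The relation is easy to check numerically; a minimal Python sketch (the point placement is arbitrary, chosen only so the four vertices lie in order on a common circle):

```python
import math

def cyclic_quad(angles, radius=1.0):
    # Place four points on a circle at the given angles (radians),
    # in increasing order, and return them as (x, y) pairs.
    return [(radius * math.cos(a), radius * math.sin(a)) for a in angles]

def dist(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

def ptolemy_sides(pts):
    # Return (AC·BD, AB·CD + BC·AD) for vertices A, B, C, D in order;
    # Ptolemy's theorem says the two values are equal.
    a, b, c, d = pts
    return (dist(a, c) * dist(b, d),
            dist(a, b) * dist(c, d) + dist(b, c) * dist(a, d))
```

For a square inscribed in the unit circle, both sides of the relation equal 4, matching the Pythagorean special case discussed below.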
Corollaries on Inscribed Polygons
Equilateral triangle
Ptolemy's Theorem yields as a corollary a pretty theorem regarding an equilateral triangle inscribed in a circle.
Given: an equilateral triangle ABC inscribed in a circle, and a point P on the circle.
Then: the distance from the point to the most distant vertex of the triangle is the sum of the distances from the point to the two nearer vertices.
Proof: Follows immediately from Ptolemy's theorem: with P on the arc BC, the theorem gives PA · BC = PB · CA + PC · AB, and dividing through by the common side length yields PA = PB + PC.
Square
Any square can be inscribed in a circle whose center is the center of the square. If the common length of its four sides is equal to a, then the length of each diagonal is equal to a√2 according to the Pythagorean theorem, and Ptolemy's relation obviously holds: a√2 · a√2 = a·a + a·a.
Rectangle
More generally, if the quadrilateral is a rectangle with sides a and b and diagonal d then Ptolemy's theorem reduces to the Pythagorean theorem. In this case the center of the circle coincides with the point of intersection of the diagonals. The product of
|
https://en.wikipedia.org/wiki/Schur%27s%20theorem
|
In discrete mathematics, Schur's theorem is any of several theorems of the mathematician Issai Schur. In differential geometry, Schur's theorem is a theorem of Axel Schur. In functional analysis, Schur's theorem is often called Schur's property, also due to Issai Schur.
Ramsey theory
In Ramsey theory, Schur's theorem states that for any partition of the positive integers into a finite number of parts, one of the parts contains three integers x, y, z with x + y = z.
For every positive integer c, S(c) denotes the smallest number S such that for every partition of the integers {1, ..., S} into c parts, one of the parts contains integers x, y, and z with x + y = z. Schur's theorem ensures that S(c) is well-defined for every positive integer c. The numbers of the form S(c) are called Schur numbers.
Folkman's theorem generalizes Schur's theorem by stating that there exist arbitrarily large sets of integers, all of whose nonempty sums belong to the same part.
Using this definition, the only known Schur numbers are S(1) = 2, S(2) = 5, S(3) = 14, S(4) = 45, and S(5) = 161. The proof that S(5) = 161 was announced in 2017, and the accompanying computer-generated proof took up 2 petabytes of space.
Combinatorics
In combinatorics, Schur's theorem tells the number of ways of expressing a given number as a (non-negative, integer) linear combination of a fixed set of relatively prime numbers. In particular, if {a₁, ..., aₙ} is a set of integers such that gcd(a₁, ..., aₙ) = 1, then the number of different tuples (c₁, ..., cₙ) of non-negative integers such that c₁a₁ + ... + cₙaₙ = x is, as x goes to infinity, asymptotically:
x^(n−1) / ((n − 1)! · a₁ ··· aₙ)
As a result, for every set of relatively prime numbers there exists a value of such that every larger number is representable as a linear combination of in at least one way. This consequence of the theorem can be recast in a familiar context considering the problem of changing an amount using a set of coins. If the denominations of the coins are relatively prime numbers (such as 2 and 5) then any sufficiently large amount can be changed using only these coins. (See Coin problem.)
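The coin-changing consequence can be checked directly by counting representations with dynamic programming; a minimal Python sketch (the denominations 2 and 5 follow the example in the text):

```python
def num_representations(x, coins):
    # Count non-negative integer tuples (c_1, ..., c_n) with
    # c_1*a_1 + ... + c_n*a_n == x, one denomination at a time
    # so that each multiset of coins is counted once.
    ways = [1] + [0] * x
    for a in coins:
        for v in range(a, x + 1):
            ways[v] += ways[v - a]
    return ways[x]
```

With coins {2, 5}, the amount 3 has no representation, but every amount from 4 upward has at least one, illustrating the "sufficiently large amount" guarantee.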
Differential geometry
In differential geometry, Schur's
|
https://en.wikipedia.org/wiki/Network%20performance
|
Network performance refers to measures of service quality of a network as seen by the customer.
There are many different ways to measure the performance of a network, as each network is different in nature and design. Performance can also be modeled and simulated instead of measured; one example of this is using state transition diagrams to model queuing performance or to use a Network Simulator.
Performance measures
The following measures are often considered important:
Bandwidth, commonly measured in bits/second, is the maximum rate at which information can be transferred
Throughput is the actual rate at which information is transferred
Latency is the delay between the sender transmitting the information and the receiver decoding it; it is mainly a function of the signal's travel time and of the processing time at any nodes the information traverses
Jitter is the variation in packet delay at the receiver of the information
Error rate is the number of corrupted bits expressed as a percentage or fraction of the total sent
Bandwidth
The available channel bandwidth and achievable signal-to-noise ratio determine the maximum possible throughput. It is not generally possible to send more data than dictated by the Shannon-Hartley Theorem.
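The Shannon–Hartley bound mentioned above is C = B · log₂(1 + S/N); a minimal Python sketch (the bandwidth and signal-to-noise figures in the example are illustrative):

```python
import math

def shannon_capacity(bandwidth_hz, snr_linear):
    # C = B * log2(1 + S/N): the ceiling on error-free throughput
    # for a channel of the given bandwidth (Hz) and linear
    # signal-to-noise ratio (not dB).
    return bandwidth_hz * math.log2(1 + snr_linear)
```

For example, a 3 kHz telephone channel with an SNR of 1000 (30 dB) is capped at roughly 30 kbit/s, regardless of hardware.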
Throughput
Throughput is the number of messages successfully delivered per unit time. Throughput is controlled by available bandwidth, as well as the available signal-to-noise ratio and hardware limitations. Throughput for the purpose of this article will be understood to be measured from the arrival of the first bit of data at the receiver, to decouple the concept of throughput from the concept of latency. For discussions of this type, the terms 'throughput' and 'bandwidth' are often used interchangeably.
The Time Window is the period over which the throughput is measured. The choice of an appropriate time window will often dominate calculations of throughput, and whether latency is taken into account or not will determine whether the latency affects the throughput or not.
Laten
|
https://en.wikipedia.org/wiki/Gonochorism
|
In biology, gonochorism is a sexual system where there are only two sexes and each individual organism is either male or female. The term gonochorism is usually applied in animal species, the vast majority of which are gonochoric.
Gonochorism contrasts with other sexual systems such as simultaneous hermaphroditism, though it may be hard to tell whether a species is gonochoric or sequentially hermaphroditic (e.g. parrotfish, Patella ferruginea). In gonochoric species, however, individuals remain either male or female throughout their lives. Species that reproduce by thelytokous parthenogenesis and have no males can still be classified as gonochoric.
Terminology
The term is derived from Greek γονή (gonē, generation) and χωρίζειν (chōrizein, to separate). The term gonochorism originally came from the German Gonochorismus.
Gonochorism is also referred to as unisexualism or gonochory.
Evolution
Gonochorism has evolved independently multiple times and is very evolutionarily stable in animals. Its stability and advantages have received little attention. Its origin owes to the evolution of anisogamy, but it is unclear if the evolution of anisogamy first led to hermaphroditism or gonochorism.
Gonochorism is thought to be ancestral in polychaetes, hexacorallia, nematodes, and hermaphroditic fishes. Gonochorism is thought to be ancestral in hermaphroditic fishes because it is widespread in basal clades of fish and other vertebrate lineages.
Two papers from 2008 suggested that transitions between hermaphroditism and gonochorism, or vice versa, have occurred in certain animal taxa between 10 and 20 times. In a 2017 study involving 165 taxonomic groups, more evolutionary transitions from gonochorism to hermaphroditism were found than the reverse.
Use across species
Animals
The term is most often used with animals, in which the species are usually gonochoric.
Gonochorism has been estimated to occur in 95% of animal species. It is very common in vertebrate species, 99% of which are gonochoric. 98% of fishes a
|
https://en.wikipedia.org/wiki/Matrix%20representation
|
Matrix representation is a method used by a computer language to store matrices of more than one dimension in memory.
Fortran and C use different schemes for their native arrays. Fortran uses "Column Major", in which all the elements for a given column are stored contiguously in memory. C uses "Row Major", which stores all the elements for a given row contiguously in memory.
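The two layouts differ only in which index varies fastest in memory; a minimal Python sketch of the flattened index arithmetic (the 2×3 matrix is an invented example):

```python
def row_major_index(i, j, n_cols):
    # C-style layout: rows are contiguous, so step over j last.
    return i * n_cols + j

def col_major_index(i, j, n_rows):
    # Fortran-style layout: columns are contiguous, step over i last.
    return j * n_rows + i

# Flatten a 2x3 matrix both ways.
m = [[1, 2, 3],
     [4, 5, 6]]
row_major = [m[i][j] for i in range(2) for j in range(3)]  # rows contiguous
col_major = [m[i][j] for j in range(3) for i in range(2)]  # columns contiguous
```

The same element m[i][j] is found at different flat offsets in the two layouts, which is why code crossing the C/Fortran boundary must transpose or re-index.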
LAPACK defines various matrix representations in memory. There is also Sparse matrix representation and Morton-order matrix representation.
According to the documentation, in LAPACK the unitary matrix representation is optimized. Some languages such as Java store matrices using Iliffe vectors. These are particularly useful for storing irregular matrices. Matrices are of primary importance in linear algebra.
Basic mathematical operations
An m × n (read as m by n) order matrix is a set of numbers arranged in m rows and n columns. Matrices of the same order can be added by adding the corresponding elements. Two matrices can be multiplied, the condition being that the number of columns of the first matrix is equal to the number of rows of the second matrix. Hence, if an m × n matrix is multiplied with an n × r matrix, then the resultant matrix will be of the order m × r.
Operations like row operations or column operations can be performed on a matrix; using these, the inverse of a matrix can be obtained. The inverse may also be obtained by determining the adjoint.
In 3D graphics
The choice of representation for 4×4 matrices commonly used in 3D graphics affects the implementation of matrix/vector operations in systems with packed SIMD instructions:
Row major
With row-major matrix order, it is easy to transform vectors using dot product operations, since the coefficients of each component are sequential in memory. Consequently, this layout may be desirable if a processor supports dot product operations natively. It is also possible to ef
|
https://en.wikipedia.org/wiki/Shirt-sleeve%20environment
|
"Shirt-sleeve environment" is a term used in aircraft design to describe the interior of an aircraft in which no special clothing need be worn. Early aircraft had no internal pressurization, so the crews of those that reached the stratosphere had to be garbed to withstand the low temperature and pressure of the air outside. Respirator masks needed to cover the mouth and nose. Silk socks were worn to retain heat. Sometimes leather clothing, such as boots, were electrically heated. When jet fighter aircraft reached still higher altitudes, something similar to a space suit had to be worn, and pilots of the highest reconnaissance aircraft wore real space suits.
Commercial jet airliners fly in the stratosphere, but because they are pressurized, they can be said to have a shirt-sleeve environment. Crews of the US Apollo spacecraft always began the flight phases of launch, docking, and re-entry in space suits, although they could remove them for many hours. The Soviets dispensed with suits during these phases to save weight. This worked well until an accidental depressurization on re-entry resulted in the deaths of an entire Soyuz crew; protocols were changed shortly thereafter to require at least partial space suits. Early Soyuz spacecraft had no provision for space suits in the re-entry module, although the orbital module was intended for use as an airlock, so these operated in a shirt-sleeve environment except for spacewalks.
This term is also used in science fiction to describe an alien planet with an atmosphere breathable by humans without special equipment.
The Space Shuttle's Spacelab Habitable module was an area with expanded volume for astronauts to work in a shirt sleeve environment and had space for equipment racks and related support equipment for operations in Low Earth orbit.
One of the goals for MOLAB rover was to achieve a shirt-sleeve environment (compared to a lunar rover which was open to space and required the use of space suits to operate). One of the consideratio
|
https://en.wikipedia.org/wiki/Chgrp
|
The chgrp (from change group) command may be used by unprivileged users on various operating systems to change the group associated with a file system object (such as a computer file, directory, or link) to one of which they are a member. A file system object has three sets of access permissions: one set for the owner, one set for the group, and one set for others. Changing the group of an object can be used to change which users can write to a file.
History
The command was originally developed as part of the Unix operating system by AT&T Bell Laboratories.
It is also available in the Plan 9 and Inferno operating systems and in most Unix-like systems.
The command has also been ported to the IBM i operating system.
Syntax
chgrp [options] group FSO
The group parameter specifies the new group with which the files or directories should be associated. It may be either a symbolic name or a numeric identifier.
The FSO parameter specifies one or more file system objects, which may be the result of a glob expression such as *.conf.
Frequently implemented options
-R: recurse through subdirectories.
-v: verbosely output names of objects changed. Most useful when the FSO argument is a list.
-f: force, or forge ahead with other objects even if an error is encountered.
Example
$ ls -l *.conf
-rw-rw-r-- 1 gbeeker wheel 3545 Nov 04 2011 prog.conf
-rw-rw-r-- 1 gbeeker wheel 3545 Nov 04 2011 prox.conf
$ chgrp staff *.conf
$ ls -l *.conf
-rw-rw-r-- 1 gbeeker staff 3545 Nov 04 2011 prog.conf
-rw-rw-r-- 1 gbeeker staff 3545 Nov 04 2011 prox.conf
The above command changes the group associated with the files prog.conf and prox.conf from wheel to staff (provided the executing user is a member of that group). This could be used to allow members of the staff group to modify the configuration for those programs.
See also
chmod
chown
Group identifier (Unix)
List of Unix commands
id (Unix)
References
External links
Operating system security
Standard Unix programs
Unix SUS2008 utilities
Plan 9 commands
Inferno
|
https://en.wikipedia.org/wiki/D%26B%20Hoovers
|
D&B Hoovers was founded by Gary Hoover and Patrick Spain in 1990 as an American business research company that provided information on companies and industries through their primary product platform named "Hoover's". In 2003, it was acquired by Dun & Bradstreet and operated for a time as a wholly owned subsidiary. In 2017, the Hoover's product was re-branded D&B Hoovers. Dun & Bradstreet is headquartered in Jacksonville, Florida, US. D&B Hoovers has sales, marketing and development resources in Austin, Texas, US.
Origins and expansion
Hoovers was started in 1990 by Gary Hoover, Patrick J. Spain, Alan Chai, and Alta Campbell. Leading up to this, Hoover had founded the Bookstop book store chain, ultimately purchased by Barnes & Noble. Hoover's initially was called The Reference Press, as it published reference books about companies.
The company grew rapidly under a business team led by Spain. This team included Carl Shepherd, Lynn Atchison, Elisabeth DeMarse, Jani Spede, Kris Rao, and Gordon Anderson, among others. Spain was CEO from 1993 to 2001, and chairman from 1994 to 2002.
Hoovers made an initial public offering on the NASDAQ exchange in 1999. The company then became a subsidiary of Dun & Bradstreet, which bought Hoover's for $119 million in 2003. After the acquisition of Avention by Dun & Bradstreet in 2017, the D&B Hoovers solution was launched, and replaced the existing Hoovers' product.
Operations
Dun & Bradstreet maintains a database of more than 330 million companies with 30,000 global data sources updated 5 million times per day. It combines Dun & Bradstreet's data with Avention's sales acceleration platform. Subscriptions are sold primarily to sales, marketing, and business development professionals seeking contact information for prospective customers. The database is used for lead generation, outreach, prepping for sales calls, researching companies, being alerted to changes in leadership, and acquiring new customers.
Besides pub
|
https://en.wikipedia.org/wiki/Red%20Hat%20Enterprise%20Linux%20derivatives
|
Red Hat Enterprise Linux derivatives are Linux distributions that are based on the source code of Red Hat Enterprise Linux (RHEL).
History
Red Hat Linux was one of the first and most popular Linux distributions. This was largely because, while a paid-for supported version was available, a freely downloadable version was also available. Since the only difference between the paid-for option and the free option was support, a great number of people chose to use the free version.
In 2003, Red Hat made the decision to split its Red Hat Linux product in two: Red Hat Enterprise Linux for customers willing to pay for it, and Fedora, made available free of charge, with each release receiving updates for approximately 13 months.
Fedora has its own beta cycle and has some issues fixed by contributors, who include Red Hat staff. However, its quick and non-conservative release cycle means it might not be suitable for some users. Fedora serves in part as a test-bed for Red Hat, allowing new features to be tested before they are included in Red Hat Enterprise Linux. Since the release of Fedora, Red Hat has no longer made binary versions of its commercial product available free of charge.
Motivations
Red Hat does not make a compiled version of its Enterprise Linux product available for free download. However, as the license terms on which it is mostly based explicitly stipulate, Red Hat has made the entire source code available in RPM format via their network of servers. The availability of the complete source code of the distribution in RPM format makes it relatively easy to recompile the entire distribution. Several distributions were created that took Red Hat's source code, recompiled it, and released it.
Features
The Red Hat Enterprise Linux derivatives generally include the union of the packages included in the different versions of RHEL. The version numbers are typically identical to the ones featured in RHEL; as such, the free versions maintain bina
|
https://en.wikipedia.org/wiki/List%20of%20text%20editors
|
The following is a list of notable text editors.
Graphical and text user interface
The following editors can either be used with a graphical user interface or a text user interface.
Graphical user interface
Text user interface
System default
Others
vi clones
Sources:
No user interface (editor libraries/toolkits)
ASCII and ANSI art
Editors that are specifically designed for the creation of ASCII and ANSI text art.
ACiDDraw – designed for editing ASCII text art. Supports ANSI color (ANSI X3.64)
JavE – ASCII editor, portable to any platform running a Java GUI
PabloDraw – ANSI/ASCII editor allowing multiple users to edit via TCP/IP network connections
TheDraw – ANSI/ASCII text editor for DOS and PCBoard file format support
ASCII font editors
FIGlet – for creating ASCII art text
TheDraw – DOS ANSI/ASCII text editor with built-in editor and manager of ASCII fonts
PabloDraw – .NET text editor designed for creating ANSI and ASCII art
Historical
Visual and full-screen editors
Line editors
See also
Comparison of text editors
Editor war
Line editor
List of HTML editors
List of word processors
Outliner, a specialized type of word processor
Source code editor
Notes
Text editors
|
https://en.wikipedia.org/wiki/Comparison%20of%20text%20editors
|
This article provides basic comparisons for notable text editors. More feature details for text editors are available from the Category of text editor features and from the individual products' articles. This article may not be up-to-date or necessarily all-inclusive.
Feature comparisons are made between stable versions of software, not the upcoming versions or beta releases – and are exclusive of any add-ons, extensions or external programs (unless specified in footnotes).
Overview
Operating system support
This section lists the operating systems that different editors can run on. Some editors run on additional operating systems that are not listed.
Cross-platform
Natural language (localization)
Document interface
Notes
Multiple instances: multiple instances of the program can be opened simultaneously for editing multiple files. Applies both for single document interface (SDI) and multiple document interface (MDI) programs. Also applies to programs whose user interface looks like multiple instances of the same program (such as some versions of Microsoft Word).
Single document window splitting: window can be split to simultaneously view different areas of a file.
MDI: Overlappable windows: each opened document gets its own fully movable window inside the editor environment.
MDI: Tabbed document interface: multiple documents can be viewed as tabs in a single window.
MDI: Window splitting: splitting application window to show multiple documents (non-overlapping windows).
Basic features
Programming features
Notes
Syntax highlighting: Displays text in different colors and fonts according to the category of terms.
Function list: Lists all functions from current file in a window or sidebar and allows user to jump directly to the definition of that function for example by double-clicking on the function name in the list. More or less realtime (does not require creating a symbol database, see below).
Symbol database: Database of functions, variable
|
https://en.wikipedia.org/wiki/Froebel%20gifts
|
The Froebel gifts () are educational play materials for young children, originally designed by Friedrich Fröbel for the first kindergarten at Bad Blankenburg. Playing with Froebel gifts, singing, dancing, and growing plants were each important aspects of this child-centered approach to education. The series was later extended from the original six to at least ten sets of gifts.
Description
The Sunday Papers () published by Fröbel between 1838 and 1840 explained the meaning and described the use of each of his six initial "play gifts" (): "The active and creative, living and life producing being of each person, reveals itself in the creative instinct of the child. All human education is bound up in the quiet and conscientious nurture of this instinct of activity; and in the ability of the child, true to this instinct, to be active."
Between May 1837 and 1850, the Froebel gifts were made in Bad Blankenburg in the principality of Schwarzburg Rudolstadt, by master carpenter Löhn, assisted by artisans and women of the village. In 1850, production was moved to the Erzgebirge region of the Kingdom of Saxony in a factory established for this purpose by S F Fischer.
Fröbel also developed a series of activities ("occupations") such as sewing, weaving, and modeling with clay, for children to extend their experiences through play. Ottilie de Liagre in a letter to Fröbel in 1844 observed that playing with the Froebel gifts empowers children to be lively and free, but people can degrade it into a mechanical routine.
Each of the first five gifts was assigned a number by Fröbel in the Sunday Papers, which indicated the sequence in which each gift was to be given to the child.
Gift 1 (infant)
The first gift is a soft ball or yarn ball in solid color, which is the right size for the hand of a small child. When attached to a matching string, the ball can be moved by a mother in various ways as she sings to the child. Although Fröbel sold single balls, they are now usually suppl
|
https://en.wikipedia.org/wiki/Apostolos%20Doxiadis
|
Apostolos K. Doxiadis (; born 1953) is a Greek writer. He is best known for his international bestsellers Uncle Petros and Goldbach's Conjecture (2000) and Logicomix (2009).
Early life
Doxiadis was born in Australia, where his father, the architect Constantinos Apostolou Doxiadis was working. Soon after his birth, the family returned to Athens, where Doxiadis grew up. Though his earliest interests were in poetry, fiction and the theatre, an intense interest in mathematics led Doxiadis to leave school at age fifteen, to attend Columbia University, in New York, from which he obtained a bachelor's degree in mathematics. He then attended the École Pratique des Hautes Études in Paris from which he got a master's degree, with a thesis on the mathematical modelling of the nervous system. His father's death and family reasons made him return to Greece in 1975, interrupting his graduate studies. In Greece, although involved for some years with the computer software industry, Doxiadis returned to his childhood and adolescence loves of theatre and the cinema, before becoming a full-time writer.
Work
Fiction in Greek
Doxiadis began to write in Greek. His first published work was A Parallel Life (Βίος Παράλληλος, 1985), a novella set in the monastic communities of 4th-century CE Egypt. His first novel, Makavettas (Μακαβέττας, 1988), recounted the adventures of a fictional power-hungry colonel at the time of the Greek military junta of 1967–1974. Written in a tongue-in-cheek imitation of Greek folk military memoirs, such as that of Yannis Makriyannis, it follows the plot of Shakespeare's Macbeth, of which the eponymous hero's name is a Hellenized form. Doxiadis' next novel, Uncle Petros and Goldbach's Conjecture (Ο Θείος Πέτρος και η Εικασία του Γκόλντμπαχ, 1992), was the first long work of fiction whose plot takes place in the world of pure mathematics research. The first Greek critics did not find the mathematical themes appealing, and it received mediocre reviews, unlike Dox
|
https://en.wikipedia.org/wiki/Clapp%20oscillator
|
The Clapp oscillator or Gouriet oscillator is an LC electronic oscillator that uses a particular combination of an inductor and three capacitors to set the oscillator's frequency. LC oscillators use a transistor (or vacuum tube or other gain element) and a positive feedback network. The oscillator has good frequency stability.
History
The Clapp oscillator design was published by James Kilton Clapp in 1948 while he worked at General Radio. According to Czech engineer Jiří Vackář, oscillators of this kind were independently developed by several inventors, and one developed by Gouriet had been in operation at the BBC since 1938.
Circuit
The Clapp oscillator uses a single inductor and three capacitors to set its frequency. The Clapp oscillator is often drawn as a Colpitts oscillator that has an additional capacitor (C0) placed in series with the inductor.
The oscillation frequency in Hertz (cycles per second) for the circuit in the figure, which uses a field-effect transistor (FET), is

f_0 = (1 / 2π) · √( (1/L) · (1/C0 + 1/C1 + 1/C2) ).
The capacitors C1 and C2 are usually much larger than C0, so the 1/C0 term dominates the other capacitances, and the frequency is near the series resonance of L and C0. Clapp's paper gives an example where C1 and C2 are 40 times larger than C0; the change makes the Clapp circuit about 400 times more stable than the Colpitts oscillator for capacitance changes of C1 and C2.
Capacitors C1 and C2 form a voltage divider that determines the amount of feedback voltage applied to the transistor input.
Although the Clapp circuit is used as a variable frequency oscillator (VFO) by making C0 a variable capacitor, Vackář states that the Clapp oscillator "can only be used for operation on fixed frequencies or at the most over narrow bands (max. about 1:1.2)." The problem is that under typical conditions, the Clapp oscillator's loop gain varies strongly with the tuning frequency, so wide tuning ranges will overdrive the amplifier. For VFOs, Vackář recommends other circuits. See Vackář oscillator.
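As a numerical sketch of why the series capacitor dominates, the standard Clapp resonance expression f0 = (1/2π)·√((1/L)(1/C0 + 1/C1 + 1/C2)) can be evaluated in Python. The component values below are illustrative, not taken from Clapp's paper; only the 40:1 capacitance ratio mirrors his example.

```python
import math

def clapp_frequency(L, C0, C1, C2):
    """Clapp oscillator frequency in Hz: the three capacitors act in
    series, so the smallest (C0) dominates the resonant frequency."""
    c_series = 1.0 / (1.0 / C0 + 1.0 / C1 + 1.0 / C2)
    return 1.0 / (2.0 * math.pi * math.sqrt(L * c_series))

# Illustrative values: L = 10 uH, C0 = 50 pF, and C1 = C2 = 40 * C0.
L, C0 = 10e-6, 50e-12
C1 = C2 = 40 * C0
f = clapp_frequency(L, C0, C1, C2)

# Series resonance of L and C0 alone, for comparison:
f_lc = 1.0 / (2.0 * math.pi * math.sqrt(L * C0))
# f differs from f_lc by a factor of sqrt(1 + C0/C1 + C0/C2), about 2.5%
# here, confirming that C0 sets the frequency almost by itself.
```

Because C1 and C2 barely affect the frequency, capacitance variations on the transistor side of the circuit (which appear across C1 and C2) shift the frequency far less than they would in a plain Colpitts oscillator.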
References
Further reading
Ulrich L. Rohde, Ajay K. Poddar, Ge
|
https://en.wikipedia.org/wiki/Assimilation%20%28biology%29
|
Assimilation is the process of absorption of vitamins, minerals, and other chemicals from food as part of the nutrition of an organism. In humans, this is always done with a chemical breakdown (enzymes and acids) and a physical breakdown (oral mastication and stomach churning), followed by chemical alteration of substances in the bloodstream by the liver or cellular secretions. Although a few similar compounds can be absorbed in digestion, the bioavailability of many compounds is dictated by this second process, since both the liver and cellular secretions can be very specific in their metabolic action (see chirality). This second process is where the absorbed food reaches the cells via the liver.
Most foods are composed of largely indigestible components, depending on the enzymes and effectiveness of an animal's digestive tract. The most well-known of these indigestible compounds is cellulose, the basic chemical polymer in the makeup of plant cell walls. Most animals, however, do not produce cellulase, the enzyme needed to digest cellulose. However, some animal species have developed symbiotic relationships with cellulase-producing bacteria (see termites and metamonads). This allows termites to use the energy-dense cellulose carbohydrate. Other such enzymes are known to significantly improve bio-assimilation of nutrients. Because of the use of bacterial derivatives, enzymatic dietary supplements now contain such enzymes as amylase, glucoamylase, protease, invertase, peptidase, lipase, lactase, phytase, and cellulase.
Examples of biological assimilation
Photosynthesis, a process whereby carbon dioxide and water are transformed into a number of organic molecules in plant cells.
Nitrogen fixation from the soil into organic molecules by symbiotic bacteria which live in the roots of certain plants, such as Leguminosae.
Magnesium supplements orotate, oxide, sulfate, citrate, and glycerate are all structurally similar. However, oxide and sulfate are not water-soluble
|
https://en.wikipedia.org/wiki/THE%20multiprogramming%20system
|
The THE multiprogramming system or THE OS was a computer operating system designed by a team led by Edsger W. Dijkstra, described in monographs in 1965-66 and published in 1968.
Dijkstra never named the system; "THE" is simply the abbreviation of "Technische Hogeschool Eindhoven", then the name (in Dutch) of the Eindhoven University of Technology of the Netherlands. The THE system was primarily a batch system that supported multitasking; it was not designed as a multi-user operating system. It was much like the SDS 940, but "the set of processes in the THE system was static".
The THE system apparently introduced the first forms of software-based paged virtual memory (the Electrologica X8 did not support hardware-based memory management), freeing programs from being forced to use physical locations on the drum memory. It did this by using a modified ALGOL compiler (the only programming language supported by Dijkstra's system) to "automatically generate calls to system routines, which made sure the requested information was in memory, swapping if necessary". Paged virtual memory was also used for buffering input/output (I/O) device data, and for a significant portion of the operating system code, and nearly all the ALGOL 60 compiler. In this system, semaphores were used as a programming construct for the first time.
Design
The design of the THE multiprogramming system is significant for its use of a layered structure, in which "higher" layers depend on "lower" layers only:
Layer 0 was responsible for the multiprogramming aspects of the operating system. It decided which process was allocated to the central processing unit (CPU), and accounted for processes that were blocked on semaphores. It dealt with interrupts and performed the context switches when a process change was needed. This is the lowest level. In modern terms, this was the scheduler.
Layer 1 was concerned with allocating memory to processes. In modern terms, this was the pager.
Layer 2 dealt with co
|
https://en.wikipedia.org/wiki/Rodrigues%27%20rotation%20formula
|
In the theory of three-dimensional rotation, Rodrigues' rotation formula, named after Olinde Rodrigues, is an efficient algorithm for rotating a vector in space, given an axis and angle of rotation. By extension, this can be used to transform all three basis vectors to compute a rotation matrix in SO(3), the group of all rotation matrices, from an axis–angle representation. In terms of Lie theory, Rodrigues' formula provides an algorithm to compute the exponential map from the Lie algebra so(3) to its Lie group SO(3).
This formula is variously credited to Leonhard Euler, Olinde Rodrigues, or a combination of the two. A detailed historical analysis in 1989 concluded that the formula should be attributed to Euler, and recommended calling it "Euler's finite rotation formula." This proposal has received notable support, but some others have viewed the formula as just one of many variations of the Euler–Rodrigues formula, thereby crediting both.
Statement
If v is a vector in ℝ³ and k is a unit vector describing an axis of rotation about which v rotates by an angle θ according to the right-hand rule, the Rodrigues formula for the rotated vector v_rot is

v_rot = v cos θ + (k × v) sin θ + k (k · v)(1 − cos θ).
The intuition of the above formula is that the first term scales the vector down, while the second skews it (via vector addition) toward the new rotational position. The third term re-adds the height (relative to ) that was lost by the first term.
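As a sketch, the standard form v_rot = v cos θ + (k × v) sin θ + k(k · v)(1 − cos θ) can be implemented directly with the standard library (the function name is ours):

```python
import math

def rodrigues_rotate(v, k, theta):
    """Rotate v about the unit axis k by angle theta (right-hand rule):
    v*cos(t) + (k x v)*sin(t) + k*(k . v)*(1 - cos(t))."""
    c, s = math.cos(theta), math.sin(theta)
    dot = sum(ki * vi for ki, vi in zip(k, v))      # k . v
    cross = (k[1] * v[2] - k[2] * v[1],             # k x v
             k[2] * v[0] - k[0] * v[2],
             k[0] * v[1] - k[1] * v[0])
    return tuple(vi * c + cri * s + ki * dot * (1.0 - c)
                 for vi, cri, ki in zip(v, cross, k))

# A quarter turn of the x-axis about the z-axis gives the y-axis.
v_rot = rodrigues_rotate((1.0, 0.0, 0.0), (0.0, 0.0, 1.0), math.pi / 2)
```

Since the rotation is rigid, the length of any input vector is preserved, which makes a convenient sanity check.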
An alternative statement is to write the axis vector as a cross product of any two nonzero vectors and which define the plane of rotation, and the sense of the angle is measured away from and towards . Letting denote the angle between these vectors, the two angles and are not necessarily equal, but they are measured in the same sense. Then the unit axis vector can be written
This form may be more useful when two vectors defining a plane are involved. An example in physics is the Thomas precession which includes the rotation given by Rodrigues' formula, in terms of two non-collinear boost
|
https://en.wikipedia.org/wiki/Plasmogamy
|
Plasmogamy is a stage in the sexual reproduction of fungi, in which the protoplasm of two parent cells (usually from the mycelia) fuse without the fusion of nuclei, effectively bringing two haploid nuclei close together in the same cell. This state is followed by karyogamy, where the two nuclei fuse and then undergo meiosis to produce spores.
The dikaryotic state that comes after plasmogamy will often persist for many generations before the fungus undergoes karyogamy. In lower fungi, however, plasmogamy is usually immediately followed by karyogamy. A comparative genomic study indicated the presence of the machinery for plasmogamy, karyogamy and meiosis in the Amoebozoa.
References
Mycology
|
https://en.wikipedia.org/wiki/Heterokaryon
|
A heterokaryon is a multinucleate cell that contains genetically different nuclei. Heterokaryotic and heterokaryosis are derived terms. This is a special type of syncytium. This can occur naturally, such as in the mycelium of fungi during sexual reproduction, or artificially as formed by the experimental fusion of two genetically different cells, as e.g., in hybridoma technology.
Etymology
Heterokaryon is from neo-classic Greek hetero, meaning different, and karyon, meaning kernel or in this case nucleus.
The term was coined in 1965, independently by B. Ephrussi and M. Weiss, by H. Harris and J. F. Watkins, and by Y. Okada and F. Murayama.
Occurrence
Heterokaryons are found in the life cycle of yeasts, for example Saccharomyces cerevisiae, a genetic model organism. The heterokaryon stage is produced from the fusion of two haploid cells. This transient heterokaryon can produce further haploid buds, or cell nuclei can fuse and produce a diploid cell, which can then undergo mitosis.
Ciliate protozoans
The term was first used for ciliate protozoans such as Tetrahymena. This has two types of cell nuclei, a large, somatic macronucleus and a small, germline micronucleus. Both exist in a single cell at the same time and carry out different functions with distinct cytological and biochemical properties.
True fungi
Many fungi (notably the arbuscular mycorrhizal fungi) exhibit heterokaryosis. The haploid nuclei within a mycelium may differ from one another not merely by accumulating mutations, but by the non-sexual fusion of genetically distinct fungal hyphae, although a self / non-self recognition system exists in Fungi and usually prevents fusions with non-self.
Heterokaryosis is also common upon mating, as in Dikarya (Ascomycota and Basidiomycota). Mating requires the encounter of two haploid nuclei of compatible mating types. These nuclei do not immediately fuse, and remain haploid in a n+n state until the very onset of meiosis: this phenomenon is called delayed karyo
|
https://en.wikipedia.org/wiki/Clairaut%27s%20theorem%20%28gravity%29
|
Clairaut's theorem characterizes the surface gravity on a viscous rotating ellipsoid in hydrostatic equilibrium under the action of its gravitational field and centrifugal force. It was published in 1743 by Alexis Claude Clairaut in a treatise which synthesized physical and geodetic evidence that the Earth is an oblate rotational ellipsoid. It was initially used to relate the gravity at any point on the Earth's surface to the position of that point, allowing the ellipticity of the Earth to be calculated from measurements of gravity at different latitudes. Today it has been largely supplanted by the Somigliana equation.
History
Although it had been known since antiquity that the Earth was spherical, by the 17th century evidence was accumulating that it was not a perfect sphere. In 1672 Jean Richer found the first evidence that gravity was not constant over the Earth (as it would be if the Earth were a sphere); he took a pendulum clock to Cayenne, French Guiana and found that it lost minutes per day compared to its rate at Paris. This indicated the acceleration of gravity was less at Cayenne than at Paris. Pendulum gravimeters began to be taken on voyages to remote parts of the world, and it was slowly discovered that gravity increases smoothly with increasing latitude, gravitational acceleration being about 0.5% greater at the poles than at the equator.
British physicist Isaac Newton explained this in his Principia Mathematica (1687) in which he outlined his theory and calculations on the shape of the Earth. Newton theorized correctly that the Earth was not precisely a sphere but had an oblate ellipsoidal shape, slightly flattened at the poles due to the centrifugal force of its rotation. Since the surface of the Earth is closer to its center at the poles than at the equator, gravity is stronger there. Using geometric calculations, he gave a concrete argument as to the hypothetical ellipsoid shape of the Earth.
The goal of Principia was not to provide e
|
https://en.wikipedia.org/wiki/Implicit%20surface
|
In mathematics, an implicit surface is a surface in Euclidean space defined by an equation

F(x, y, z) = 0.

An implicit surface is the set of zeros of a function of three variables. Implicit means that the equation is not solved for x or y or z.
The graph of a function is usually described by an equation z = f(x, y) and is called an explicit representation. The third essential description of a surface is the parametric one:
(x(s, t), y(s, t), z(s, t)), where the x-, y- and z-coordinates of surface points are represented by three functions x(s, t), y(s, t), z(s, t) depending on common parameters s, t. Generally the change of representations is simple only when the explicit representation z = f(x, y) is given: F(x, y, z) = z − f(x, y) (implicit), (x, y, f(x, y)) (parametric).
Examples:
The plane
The sphere
The torus
A surface of genus 2: (see diagram).
The surface of revolution (see diagram wineglass).
For a plane, a sphere, and a torus there exist simple parametric representations. This is not true for the fourth example.
The implicit function theorem describes conditions under which an equation F(x, y, z) = 0 can be solved (at least implicitly) for x, y or z. But in general the solution may not be made explicit. This theorem is the key to the computation of essential geometric features of a surface: tangent planes, surface normals, curvatures (see below). But implicit representations have an essential drawback: their visualization is difficult.
If F is polynomial in x, y and z, the surface is called algebraic. Example 5 is non-algebraic.
Despite difficulty of visualization, implicit surfaces provide relatively simple techniques to generate theoretically (e.g. Steiner surface) and practically (see below) interesting surfaces.
Formulas
Throughout the following considerations the implicit surface is represented by an equation

F(x, y, z) = 0,

where the function F meets the necessary conditions of differentiability. The partial derivatives of F
are F_x, F_y, F_z.
Tangent plane and normal vector
A surface point (x₀, y₀, z₀) is called regular if and only if the gradient of F at (x₀, y₀, z₀) is not the zero vector (0, 0, 0), meaning
(F_x(x₀, y₀, z₀), F_y(x₀, y₀, z₀), F_z(x₀, y₀, z₀)) ≠ (0, 0, 0).
If the surface point is not regular, it is called singular.
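As a numerical sketch, the gradient can be approximated by central differences to test regularity and obtain a normal direction at a surface point. The sphere example and function names below are ours, not from the article:

```python
def grad(F, p, h=1e-6):
    """Central-difference approximation of the gradient of F at point p."""
    g = []
    for i in range(3):
        step = [0.0, 0.0, 0.0]
        step[i] = h
        fwd = F(p[0] + step[0], p[1] + step[1], p[2] + step[2])
        bwd = F(p[0] - step[0], p[1] - step[1], p[2] - step[2])
        g.append((fwd - bwd) / (2.0 * h))
    return tuple(g)

# Sphere of radius 2:  F(x, y, z) = x^2 + y^2 + z^2 - 4
def F(x, y, z):
    return x * x + y * y + z * z - 4.0

p = (2.0, 0.0, 0.0)            # F(p) = 0, so p lies on the surface
n = grad(F, p)                 # normal direction, (4, 0, 0) up to rounding
is_regular = any(abs(c) > 1e-9 for c in n)   # nonzero gradient => regular point
```

At a regular point the (normalized) gradient serves as the surface normal; at a singular point the gradient vanishes and no normal is defined this way.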
Th
|
https://en.wikipedia.org/wiki/Omitted-variable%20bias
|
In statistics, omitted-variable bias (OVB) occurs when a statistical model leaves out one or more relevant variables. The bias results in the model attributing the effect of the missing variables to those that were included.
More specifically, OVB is the bias that appears in the estimates of parameters in a regression analysis, when the assumed specification is incorrect in that it omits an independent variable that is a determinant of the dependent variable and correlated with one or more of the included independent variables.
In linear regression
Intuition
Suppose the true cause-and-effect relationship is given by:

y = a + bx + cz + u

with parameters a, b, c, dependent variable y, independent variables x and z, and error term u. We wish to know the effect of x itself upon y (that is, we wish to obtain an estimate of b).
Two conditions must hold true for omitted-variable bias to exist in linear regression:
the omitted variable must be a determinant of the dependent variable (i.e., its true regression coefficient must not be zero); and
the omitted variable must be correlated with an independent variable specified in the regression (i.e., cov(z,x) must not equal zero).
Suppose we omit z from the regression, and suppose the relation between x and z is given by

z = d + fx + e

with parameters d, f and error term e. Substituting the second equation into the first gives

y = (a + cd) + (b + cf)x + (u + ce).
If a regression of y is conducted upon x only, this last equation is what is estimated, and the regression coefficient on x is actually an estimate of (b + cf ), giving not simply an estimate of the desired direct effect of x upon y (which is b), but rather of its sum with the indirect effect (the effect f of x on z times the effect c of z on y). Thus by omitting the variable z from the regression, we have estimated the total derivative of y with respect to x rather than its partial derivative with respect to x. These differ if both c and f are non-zero.
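The bias is easy to exhibit with a small simulation: regressing y on x alone recovers b + cf rather than b. The parameter values below are illustrative, and the OLS slope is computed by hand as cov(x, y)/var(x):

```python
import random

random.seed(0)

# True model:  y = a + b*x + c*z + u,  with  z = d + f*x + e,
# so the omitted variable z is correlated with x.
a, b, c, d, f = 1.0, 2.0, 3.0, 0.5, 0.8
n = 100_000
x = [random.gauss(0.0, 1.0) for _ in range(n)]
z = [d + f * xi + random.gauss(0.0, 1.0) for xi in x]
y = [a + b * xi + c * zi + random.gauss(0.0, 1.0)
     for xi, zi in zip(x, z)]

def ols_slope(xs, ys):
    """Slope of a simple regression of ys on xs: cov(x, y) / var(x)."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(xs, ys))
    var = sum((xi - mx) ** 2 for xi in xs)
    return cov / var

b_hat = ols_slope(x, y)   # close to b + c*f = 2 + 3*0.8 = 4.4, not b = 2
```

The estimate converges to the total derivative b + cf, illustrating that omitting z attributes z's effect on y to x.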
The direction and extent of the bias are both contained in cf, since the ef
|
https://en.wikipedia.org/wiki/Totally%20bounded%20space
|
In topology and related branches of mathematics, total-boundedness is a generalization of compactness for circumstances in which a set is not necessarily closed. A totally bounded set can be covered by finitely many subsets of every fixed “size” (where the meaning of “size” depends on the structure of the ambient space).
The term precompact (or pre-compact) is sometimes used with the same meaning, but precompact is also used to mean relatively compact. These definitions coincide for subsets of a complete metric space, but not in general.
In metric spaces
A metric space (M, d) is totally bounded if and only if for every real number ε > 0, there exists a finite collection of open balls of radius ε whose centers lie in M and whose union contains M. Equivalently, the metric space M is totally bounded if and only if for every ε > 0, there exists a finite cover such that the radius of each element of the cover is at most ε. This is equivalent to the existence of a finite ε-net. A metric space is totally bounded if and only if every sequence admits a Cauchy subsequence; in complete metric spaces, a set is compact if and only if it is closed and totally bounded.
Each totally bounded space is bounded (as the union of finitely many bounded sets is bounded). The reverse is true for subsets of Euclidean space (with the subspace topology), but not in general. For example, an infinite set equipped with the discrete metric is bounded but not totally bounded: every ball of radius less than 1 is a singleton, and no finite union of singletons can cover an infinite set.
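As a concrete illustration of the finite-cover definition, the following sketch constructs an explicit ε-net for the interval [0, 1]. The function names are ours, and it uses closed balls of radius ε, which is equivalent to the open-ball definition up to a change of ε:

```python
import math

def epsilon_net(a, b, eps):
    """Centres of finitely many closed balls of radius eps covering [a, b].

    Spacing centres 2*eps apart, ball i covers [c_i - eps, c_i + eps];
    ceil((b - a) / (2*eps)) balls suffice, so [a, b] is totally bounded.
    """
    n = max(1, math.ceil((b - a) / (2.0 * eps)))
    return [a + eps * (2 * i + 1) for i in range(n)]

def covered(p, centres, eps):
    """Is p within distance eps of some centre?"""
    return any(abs(p - c) <= eps for c in centres)

net = epsilon_net(0.0, 1.0, 0.1)    # 5 centres: 0.1, 0.3, 0.5, 0.7, 0.9
```

An unbounded set such as the whole real line fails this test: no finite set of radius-ε balls can cover it, matching the fact that total boundedness implies boundedness.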
Uniform (topological) spaces
A metric appears in the definition of total boundedness only to ensure that each element of the finite cover is of comparable size, and can be weakened to that of a uniform structure. A subset of a uniform space is totally bounded if and only if, for any entourage , there exists a finite cover of by subsets of each of whose Cartesian squares is a subset of . (In other words, replaces
|
https://en.wikipedia.org/wiki/Interactor
|
An interactor is a person who interacts with the members of the audience.
or
An interactor is an entity that natural selection acts upon.
Definition
Interactor is a concept commonly used in the field of evolutionary biology. A widely accepted theory of evolution is that of Charles Darwin. He states, in short, that in a population there is often variation in heritable traits among individuals, in which one form of a trait might be more beneficial than the other form(s). Due to this difference, the chance of producing offspring better adjusted to the environment is higher. The process describing the selection by the environment on the traits of organisms is called natural selection. Based on this idea, natural selection seems to act on traits of individuals, which evolutionary biologists like to call the interactor. So, stated in a different way: an interactor is defined as a part of an organism that natural selection acts upon.
Replicators and vehicles
Replicators
Other terms that are often mentioned in the same context as interactors, are replicators and vehicles. When replicators are mentioned, they mean things that pass on their entire structure through successive replications, like genes. This is not the same as an interactor, as interactors are things that interact with their environment and natural selection can act upon. Due to this interaction with the environment, interactors cause differential replication. However, some things (for example genes) can be both replicators and interactors.
Vehicles
Vehicles are often used as a synonym of interactors, only in a way that vehicles can "drive" natural selection, as if they have the behaviour to steer natural selection in a specific way. The term "vehicle" makes it look that way and therefore some people (like Hull) prefer the word "interactor" to "vehicle" for the same concept. An example of an interactor is the shell colour of snails (see below).
Research on common garden snails as illustration for na
|
https://en.wikipedia.org/wiki/Through-hole%20technology
|
In electronics, through-hole technology (also spelled "thru-hole") is a manufacturing scheme in which leads on the components are inserted through holes drilled in printed circuit boards (PCB) and soldered to pads on the opposite side, either by manual assembly (hand placement) or by the use of automated insertion mount machines.
History
Through-hole technology almost completely replaced earlier electronics assembly techniques such as point-to-point construction. From the second generation of computers in the 1950s until surface-mount technology (SMT) became popular in the mid 1980s, every component on a typical PCB was a through-hole component. PCBs initially had tracks printed on one side only, later both sides, then multi-layer boards were in use. Through holes became plated-through holes (PTH) in order for the components to make contact with the required conductive layers. Plated-through holes are no longer required with SMT boards for making the component connections, but are still used for making interconnections between the layers and in this role are more usually called vias.
Leads
Axial and radial leads
Components with wire leads are generally used on through-hole boards. Axial leads protrude from each end of a typically cylindrical or elongated box-shaped component, on the geometrical axis of symmetry. Axial-leaded components resemble wire jumpers in shape, and can be used to span short distances on a board, or even otherwise unsupported through an open space in point-to-point wiring. Axial components do not protrude much above the surface of a board, producing a low-profile or flat configuration when placed "lying down" or parallel to the board.
Radial leads project more or less in parallel from the same surface or aspect of a component package, rather than from opposite ends of the package. Originally, radial leads were defined as more-or-less following a radius of a cylindrical component (such as a ceramic disk capacitor). Over time, this definiti
|
https://en.wikipedia.org/wiki/Neighbourhood%20%28mathematics%29
|
In topology and related areas of mathematics, a neighbourhood (or neighborhood) is one of the basic concepts in a topological space. It is closely related to the concepts of open set and interior. Intuitively speaking, a neighbourhood of a point is a set of points containing that point where one can move some amount in any direction away from that point without leaving the set.
Definitions
Neighbourhood of a point
If X is a topological space and p is a point in X, then a neighbourhood of p is a subset V of X that includes an open set U containing p, so that p ∈ U ⊆ V ⊆ X.
This is also equivalent to the point p belonging to the topological interior of V in X.
The neighbourhood V need not be an open subset of X, but when V is open in X then it is called an open neighbourhood. Some authors have been known to require neighbourhoods to be open, so it is important to note conventions.
A set that is a neighbourhood of each of its points is open since it can be expressed as the union of open sets containing each of its points. A closed rectangle, as illustrated in the figure, is not a neighbourhood of all its points; points on the edges or corners of the rectangle are not contained in any open set that is contained within the rectangle.
The collection of all neighbourhoods of a point is called the neighbourhood system at the point.
Neighbourhood of a set
If S is a subset of a topological space X, then a neighbourhood of S is a set V that includes an open set U containing S, that is, S ⊆ U ⊆ V ⊆ X. It follows that a set V is a neighbourhood of S if and only if it is a neighbourhood of all the points in S. Furthermore, V is a neighbourhood of S if and only if S is a subset of the interior of V.
A neighbourhood of S that is also an open subset of X is called an open neighbourhood of S.
The neighbourhood of a point is just a special case of this definition.
In a metric space
In a metric space M = (X, d), a set V is a neighbourhood of a point p if there exists an open ball with center p and some radius r > 0,
B_r(p) = {x ∈ X : d(x, p) < r},
that is contained in V.
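As an illustration in ℝ with the usual metric, a set can be modelled as a finite union of open intervals (an assumption of this sketch); it is a neighbourhood of p exactly when some open ball around p fits inside it:

```python
def contains_ball(intervals, p, r):
    """True if the open ball (p - r, p + r) lies inside one of the open intervals."""
    return any(a < p - r and p + r < b for (a, b) in intervals)

def is_neighbourhood(intervals, p):
    """Search a shrinking grid of radii; a finite sketch of 'there exists r > 0'."""
    return any(contains_ball(intervals, p, 10 ** -k) for k in range(1, 12))

V = [(0.0, 1.0), (2.0, 3.0)]
print(is_neighbourhood(V, 0.5))   # interior point: some ball fits
print(is_neighbourhood(V, 1.0))   # boundary point: no ball fits
```

Note that the search over radii is a finite approximation of the existential quantifier; it cannot distinguish "not a neighbourhood" from "only neighbourhoods with extremely small radius".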
V is called a uniform neighbourhood of a set S if there exists a positive number r such that f
|
https://en.wikipedia.org/wiki/Zone%20Routing%20Protocol
|
Zone Routing Protocol, or ZRP is a hybrid wireless networking routing protocol that uses both proactive and reactive routing protocols when sending information over the network. ZRP was designed to speed up delivery and reduce processing overhead by selecting the most efficient type of protocol to use throughout the route.
How ZRP works
If a packet's destination is in the same zone as the origin, the proactive protocol using an already stored routing table is used to deliver the packet immediately.
If the route extends outside the packet's originating zone, a reactive protocol takes over to check each successive zone in the route to see whether the destination is inside that zone. This reduces the processing overhead for those routes. Once a zone is confirmed as containing the destination node, the proactive protocol, or stored route-listing table, is used to deliver the packet.
In this way packets with destinations within the same zone as the originating zone are delivered immediately using a stored routing table. Packets delivered to nodes outside the sending zone avoid the overhead of checking routing tables along the way by using the reactive protocol to check whether each zone encountered contains the destination node.
Thus ZRP reduces the control overhead for longer routes that would be necessary if using proactive routing protocols throughout the entire route, while eliminating the delays for routing within a zone that would be caused by the route-discovery processes of reactive routing protocols.
Details
What is called the Intra-zone Routing Protocol (IARP), or a proactive routing protocol, is used inside routing zones. What is called the Inter-zone Routing Protocol (IERP), or a reactive routing protocol, is used between routing zones. IARP uses a routing table. Since this table is already stored, this is considered a proactive protocol. IERP uses a reactive protocol.
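The forwarding decision can be sketched in a few lines (the table names and data structures below are illustrative, not taken from the ZRP specification):

```python
def next_hops(dest, iarp_table, border_nodes):
    """Return a route to dest: IARP table lookup inside the zone,
    otherwise a reactive IERP-style query through border nodes."""
    if dest in iarp_table:                 # destination inside our zone: proactive
        return iarp_table[dest]
    for border, peer_table in border_nodes.items():   # bordercast: ask peripheral nodes
        if dest in peer_table:
            return iarp_table[border] + peer_table[dest]
    return None                            # route discovery would continue outward

iarp = {"B": ["B"], "C": ["B", "C"]}       # routes stored within the local zone
borders = {"C": {"F": ["E", "F"]}}         # border node C with its own zone view
print(next_hops("B", iarp, borders))       # delivered via the stored table
print(next_hops("F", iarp, borders))       # reaches outside the zone via C
```

A real implementation would recurse through successive zones rather than consult a single layer of border nodes, but the two-phase lookup is the essential structure.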
Any route to a destination that is within the same local zone is quickly establish
|
https://en.wikipedia.org/wiki/NT%20%28cassette%29
|
NT (sometimes marketed under the name Scoopman) is a digital memo recording system introduced by Sony in 1992.
The NT system was introduced to compete with the Microcassette, introduced by Olympus, and the Mini-Cassette, by Philips.
Design
The system was an R-DAT based system which stored memos using helical scan on special microcassettes with a tape width of 2.5 mm and a recording capacity of up to 120 minutes, similar to Digital Audio Tape. The cassettes were offered in three versions, the Sony NTC-60, -90, and -120, each number giving the length of time (in minutes) the cassette can record.
NT stands for Non-Tracking, meaning the head does not precisely follow the tracks on the tape. Instead, the head moves over the tape at approximately the correct angle and speed, but performs more than one pass over each track. The data in each track is stored on the tape in blocks with addressing information that enables reconstruction in memory from several passes. This considerably reduced the required mechanical precision, reducing the complexity, size, and cost of the recorder.
Another feature of NT cassettes is Non-Loading, which means instead of having a mechanism to pull the tape out of the cassette and wrap it around the drum, the drum is pushed inside the cassette to achieve the same effect. This also significantly reduces the complexity, size, and cost of the mechanism.
Audio sampling is in stereo at 32 kHz with 12 bit nonlinear quantization, corresponding to 17 bit linear quantization. Data written to the tape is packed into data blocks and encoded with LDM-2 low deviation modulation.
Uses
The Sony NT-1 Digital Micro Recorder, introduced in 1992, features a real-time clock that records a time signal on the digital track along with the sound data, making it useful for journalism, police and legal work. Due to the machine's buffer memory, it is capable of automatically reversing the tape direction at the end of the reel without an interruption in
|
https://en.wikipedia.org/wiki/Mutation%20rate
|
In genetics, the mutation rate is the frequency of new mutations in a single gene or organism over time. Mutation rates are not constant and are not limited to a single type of mutation; there are many different types of mutations. Mutation rates are given for specific classes of mutations. Point mutations are a class of mutations which are changes to a single base. Missense and nonsense mutations are two subtypes of point mutations. The rate of these types of substitutions can be further subdivided into a mutation spectrum which describes the influence of the genetic context on the mutation rate.
There are several natural units of time for each of these rates, with rates being characterized either as mutations per base pair per cell division, per gene per generation, or per genome per generation. The mutation rate of an organism is an evolved characteristic and is strongly influenced by the genetics of each organism, in addition to strong influence from the environment. The upper and lower limits to which mutation rates can evolve are the subject of ongoing investigation. However, the mutation rate varies over the genome: across DNA, RNA, or even a single gene, mutation rates differ.
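The unit conversions between these rates are simple products and quotients. A sketch (the numerical values below are rough illustrative assumptions, not figures from this article):

```python
mu_per_bp = 1.1e-8        # assumed per-base-pair, per-generation point mutation rate
genome_bp = 6.4e9         # assumed diploid human genome size in base pairs

# per-genome, per-generation rate is the product
mu_per_genome = mu_per_bp * genome_bp
print(f"{mu_per_genome:.1f} new mutations per genome per generation")

# converting to a per-cell-division rate needs the number of germline
# divisions per generation (illustrative assumption)
divisions_per_generation = 40
mu_per_division = mu_per_bp / divisions_per_generation
```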
When the mutation rate in humans increases, certain health risks can arise, for example cancer and other hereditary diseases. Knowledge of mutation rates is vital to understanding the future of cancers and many hereditary diseases.
Background
Different genetic variants within a species are referred to as alleles, therefore a new mutation can create a new allele. In population genetics, each allele is characterized by a selection coefficient, which measures the expected change in an allele's frequency over time. The selection coefficient can either be negative, corresponding to an expected decrease, positive, corresponding to an expected increase, or zero, corresponding to no expected change. The distribution of fitness effects of new mutations is an important parameter in po
|
https://en.wikipedia.org/wiki/Banburismus
|
Banburismus was a cryptanalytic process developed by Alan Turing at Bletchley Park in Britain during the Second World War. It was used by Bletchley Park's Hut 8 to help break German Kriegsmarine (naval) messages enciphered on Enigma machines. The process used sequential conditional probability to infer information about the likely settings of the Enigma machine. It gave rise to Turing's invention of the ban as a measure of the weight of evidence in favour of a hypothesis. This concept was later applied in Turingery and all the other methods used for breaking the Lorenz cipher.
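The ban, and its more practical tenth the deciban, measures weight of evidence as a base-10 logarithm of a likelihood ratio; independent pieces of evidence then simply add. A small sketch of the idea (the probabilities are invented for illustration):

```python
import math

def decibans(p_obs_given_h, p_obs_given_not_h):
    """Weight of evidence for hypothesis H from one observation, in decibans."""
    return 10 * math.log10(p_obs_given_h / p_obs_given_not_h)

# An observation twice as likely under H contributes about 3 decibans;
# independent observations accumulate by addition.
e1 = decibans(0.5, 0.25)
e2 = decibans(0.6, 0.3)
total = e1 + e2
print(round(e1, 2), round(total, 2))
```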
Overview
The aim of Banburismus was to reduce the time required of the electromechanical Bombe machines by identifying the most likely right-hand and middle wheels of the Enigma. Hut 8 performed the procedure continuously for two years, stopping only in 1943 when sufficient bombe time became readily available. Banburismus was a development of the "clock method" invented by the Polish cryptanalyst Jerzy Różycki.
Hugh Alexander was regarded as the best of the Banburists. He and I. J. Good considered the process more an intellectual game than a job. It was "not easy enough to be trivial, but not difficult enough to cause a nervous breakdown".
History
In the first few months after arriving at Bletchley Park in September 1939, Alan Turing correctly deduced that the message-settings of Kriegsmarine Enigma signals were enciphered on a common Grundstellung (starting position of the rotors), and were then super-enciphered with a bigram and a trigram lookup table. These trigram tables were in a book called the Kenngruppenbuch (K book). However, without the bigram tables, Hut 8 were unable to start attacking the traffic. A breakthrough was achieved after the Narvik pinch in which the disguised armed trawler Polares, which was on its way to Narvik in Norway, was seized in the North Sea on 26 April 1940. The Germans did not have time to destroy all their cryptographic documents, and the captured mat
|
https://en.wikipedia.org/wiki/Planning%20Domain%20Definition%20Language
|
The Planning Domain Definition Language (PDDL) is an attempt to standardize Artificial Intelligence (AI) planning languages. It was first developed by Drew McDermott and his colleagues in 1998 (inspired by STRIPS and ADL among others) mainly to make the 1998/2000 International Planning Competition (IPC) possible, and then evolved with each competition. The standardization provided by PDDL has the benefit of making research more reusable and easily comparable, though at the cost of some expressive power, compared to domain-specific systems.
De facto official versions of PDDL
PDDL1.2
This was the official language of the 1st and 2nd IPC in 1998 and 2000 respectively.
It separated the model of the planning problem in two major parts: (1) domain description and (2) the related problem description. Such a division of the model allows for an intuitive separation of those elements which are (1) present in every specific problem of the problem-domain (these elements are contained in the domain description), and those elements which (2) determine the specific planning problem (these elements are contained in the problem description). Thus several problem descriptions may be connected to the same domain description, just as several instances may exist of a class in OOP (Object Oriented Programming) or in OWL (Web Ontology Language), for example. Thus a domain and a connecting problem description form the PDDL model of a planning problem, and eventually this is the input of a planner (usually a domain-independent AI planner), which aims to solve the given planning problem via some appropriate planning algorithm. The output of the planner is not specified by PDDL, but it is usually a totally or partially ordered plan (a sequence of actions, some of which may sometimes be executed in parallel). The contents of a PDDL1.2 domain and problem description, in general, are as follows: (1) The domain description consisted of a domain-name definition, definition
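A minimal illustrative domain/problem pair in PDDL1.2-style syntax (the domain name, predicates, and action here are invented for illustration, not from any competition benchmark):

```lisp
(define (domain briefcase)
  (:requirements :strips)
  (:constants bag)
  (:predicates (at ?obj ?loc))
  (:action move
    :parameters (?from ?to)
    :precondition (at bag ?from)
    :effect (and (at bag ?to)
                 (not (at bag ?from)))))

(define (problem briefcase-1)
  (:domain briefcase)
  (:objects home office)
  (:init (at bag home))
  (:goal (at bag office)))
```

Several problem files like `briefcase-1` can share the single `briefcase` domain file, which is exactly the domain/problem separation described above.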
|
https://en.wikipedia.org/wiki/Surface-conduction%20electron-emitter%20display
|
A surface-conduction electron-emitter display (SED) was a display technology for flat panel displays developed by a number of companies. SEDs used nanoscopic-scale electron emitters to energize colored phosphors and produce an image. In a general sense, a SED consists of a matrix of tiny cathode-ray tubes, each "tube" forming a single sub-pixel on the screen, grouped in threes to form red-green-blue (RGB) pixels. SEDs combine the advantages of CRTs, namely their high contrast ratios, wide viewing angles, and very fast response times, with the packaging advantages of LCD and other flat panel displays.
After considerable time and effort in the early and mid-2000s, SED efforts started winding down in 2009 as LCD became the dominant technology. In August 2010, Canon announced they were shutting down their joint effort to develop SEDs commercially, signaling the end of development efforts. SEDs were closely related to another developing display technology, the field emission display, or FED, differing primarily in the details of the electron emitters. Sony, the main backer of FED, has similarly backed off from their development efforts.
Description
A conventional cathode ray tube (CRT) is powered by an electron gun, essentially an open-ended vacuum tube. At one end of the gun, electrons are produced by "boiling" them off a metal filament, which requires relatively high currents and consumes a large proportion of the CRT's power. The electrons are then accelerated and focused into a fast-moving beam, flowing forward towards the screen. Electromagnets surrounding the gun end of the tube are used to steer the beam as it travels forward, allowing the beam to be scanned across the screen to produce a 2D display. When the fast-moving electrons strike the phosphor on the back of the screen, light is produced. Color images are produced by painting the screen with spots or stripes of three colored phosphors, each for red, green, and blue (RGB). When viewed from a distance, the
|
https://en.wikipedia.org/wiki/Wien%20bridge%20oscillator
|
A Wien bridge oscillator is a type of electronic oscillator that generates sine waves. It can generate a large range of frequencies. The oscillator is based on a bridge circuit originally developed by Max Wien in 1891 for the measurement of impedances.
The bridge comprises four resistors and two capacitors. The oscillator can also be viewed as a positive gain amplifier combined with a bandpass filter that provides positive feedback. Automatic gain control, intentional non-linearity and incidental non-linearity limit the output amplitude in various implementations of the oscillator.
The circuit shown to the right depicts a once-common implementation of the oscillator, with automatic gain control using an incandescent lamp. Under the condition that R1 = R2 = R and C1 = C2 = C, the frequency of oscillation is given by
f = 1 / (2πRC),
and stable oscillation requires the amplifier to provide a non-inverting gain of exactly 3, i.e. the feedback network must attenuate by the same factor of 3 that the Wien network does at the oscillation frequency.
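For example, computing the oscillation frequency for equal R and C (component values chosen purely for illustration):

```python
import math

def wien_frequency(R, C):
    """Oscillation frequency of a Wien bridge with R1 = R2 = R, C1 = C2 = C."""
    return 1.0 / (2.0 * math.pi * R * C)

# 1 kOhm with 1 uF gives roughly 159 Hz
f = wien_frequency(1e3, 1e-6)
print(f"{f:.2f} Hz")
```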
Background
There were several efforts to improve oscillators in the 1930s. Linearity was recognized as important. The "resistance-stabilized oscillator" had an adjustable feedback resistor; that resistor would be set so the oscillator just started (thus setting the loop gain to just over unity). The oscillations would build until the vacuum tube's grid would start conducting current, which would increase losses and limit the output amplitude. Automatic amplitude control was investigated. Frederick Terman states, "The frequency stability and wave-shape form of any common oscillator can be improved by using an automatic-amplitude-control arrangement to maintain the amplitude of oscillations constant under all conditions."
In 1937, Larned Meacham described using a filament lamp for automatic gain control in bridge oscillators.
Also in 1937, Hermon Hosmer Scott described audio oscillators based on various bridges including the Wien bridge.
Terman, at Stanford University, was interested in Harold Stephen Black's work on negative feedback, so he held a graduate seminar on negative feedback. Bill Hewlett atte
|
https://en.wikipedia.org/wiki/AOHell
|
AOHell was a Windows application that was used to simplify 'cracking' (computer hacking) using AOL. The program contained a very early use of the term phishing. It was created by a teenager under the pseudonym Da Chronic, whose expressed motivation was anger that child abuse took place on AOL without being curtailed by AOL administrators.
History
AOHell was the first of what would become thousands of programs designed for hackers created for use with AOL.
In 1994, seventeen-year-old hacker Koceilah Rekouche, from Pittsburgh, PA, known online as "Da Chronic", used Visual Basic to create a toolkit that provided: a new DLL for the AOL client, a credit card number generator, an email bomber, an IM bomber, a punter, and a basic set of instructions. It was billed as "an all-in-one nice convenient way to break federal fraud law, violate interstate trade regulations, and rack up a couple of good ol' telecommunications infractions in one fell swoop". When the program was loaded, it would play a short clip from Dr. Dre's 1993 song "Nuthin' but a G Thang".
Most notably, the program included a function for stealing the passwords of America Online users and, according to its creator, contains the first recorded mention of the term "phishing". AOHell provided a number of other utilities which ran on top of the America Online client software. Though most of these utilities simply manipulated the AOL interface, some were powerful enough to let almost any curious party anonymously cause havoc on AOL. The first version of the program was released in 1994 by hackers known as The Rizzer and The Squirrel.
Features
A fake account generator which would generate a new, fully functional AOL account for the user that lasted for about a month. This generator worked by exploiting the algorithm used by credit card companies known as the Luhn algorithm to dynamically generate apparently legitimate credit card numbers. The account would not be disabled until AOL first billed it (and discovered th
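The Luhn algorithm mentioned above is a simple public checksum; its verification side can be sketched in a few lines (shown here only as a checksum validator):

```python
def luhn_valid(number: str) -> bool:
    """Check a digit string against the Luhn checksum."""
    digits = [int(d) for d in number]
    # double every second digit from the right, subtracting 9 if the result exceeds 9
    for i in range(len(digits) - 2, -1, -2):
        digits[i] *= 2
        if digits[i] > 9:
            digits[i] -= 9
    return sum(digits) % 10 == 0

print(luhn_valid("4111111111111111"))  # a well-known Luhn-valid test number
```

A number that merely passes this checksum is not a real account, which is why the generated accounts survived only until AOL's first billing attempt.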
|
https://en.wikipedia.org/wiki/Phase%20problem
|
In physics, the phase problem is the problem of loss of information concerning the phase that can occur when making a physical measurement. The name comes from the field of X-ray crystallography, where the phase problem has to be solved for the determination of a structure from diffraction data. The phase problem is also met in the fields of imaging and signal processing. Various approaches of phase retrieval have been developed over the years.
Overview
Light detectors, such as photographic plates or CCDs, measure only the intensity of the light that hits them. This measurement is incomplete (even when neglecting other degrees of freedom such as polarization and angle of incidence) because a light wave has not only an amplitude (related to the intensity), but also a phase (related to the direction) and a polarization, which are systematically lost in a measurement. In diffraction or microscopy experiments, the phase part of the wave often contains valuable information on the studied specimen. The phase problem constitutes a fundamental limitation ultimately related to the nature of measurement in quantum mechanics.
In X-ray crystallography, the diffraction data when properly assembled gives the amplitude of the 3D Fourier transform of the molecule's electron density in the unit cell. If the phases are known, the electron density can be simply obtained by Fourier synthesis. This Fourier transform relation also holds for two-dimensional far-field diffraction patterns (also called Fraunhofer diffraction) giving rise to a similar type of phase problem.
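A small numerical sketch of the point: a detector records only the amplitudes of the Fourier coefficients, and Fourier synthesis needs the phases as well (a toy 1-D "density" stands in for the 3-D electron density):

```python
import cmath

def dft(x):
    N = len(x)
    return [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
            for k in range(N)]

def idft(X):
    N = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * n / N) for k in range(N)).real / N
            for n in range(N)]

density = [1.0, 0.0, 2.0, 0.5]
F = dft(density)
amplitudes = [abs(z) for z in F]          # what the detector measures
phases = [cmath.phase(z) for z in F]      # lost in the measurement

# with BOTH amplitude and phase, Fourier synthesis recovers the density exactly
recovered = idft([a * cmath.exp(1j * p) for a, p in zip(amplitudes, phases)])

# with amplitudes alone (phases arbitrarily set to zero), the result is wrong
wrong = idft([complex(a, 0.0) for a in amplitudes])
```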
Phase retrieval
There are several ways to retrieve the lost phases. The phase problem must be solved in X-ray crystallography, neutron crystallography, and electron crystallography.
Not all of the methods of phase retrieval work with every wavelength (X-ray, neutron, and electron) used in crystallography.
Direct (ab initio) methods
If the crystal diffracts to high resolution (<1.2 Å), the initial phases can be esti
|
https://en.wikipedia.org/wiki/Split%20exact%20sequence
|
In mathematics, a split exact sequence is a short exact sequence in which the middle term is built out of the two outer terms in the simplest possible way.
Equivalent characterizations
A short exact sequence of abelian groups or of modules over a fixed ring, or more generally of objects in an abelian category,
0 → A —a→ B —b→ C → 0,
is called split exact if it is isomorphic to the exact sequence where the middle term is the direct sum of the outer ones:
0 → A → A ⊕ C → C → 0.
The requirement that the sequence is isomorphic means that there is an isomorphism f : B → A ⊕ C such that the composite f ∘ a is the natural inclusion of A in A ⊕ C, and such that the composite of f with the natural projection A ⊕ C → C equals b. This can be summarized by a commutative diagram as:
The splitting lemma provides further equivalent characterizations of split exact sequences.
Examples
A trivial example of a split short exact sequence is
0 → M₁ → M₁ ⊕ M₂ → M₂ → 0,
where M₁, M₂ are R-modules, the first map is the canonical injection and the second is the canonical projection.
Any short exact sequence of vector spaces is split exact. This is a rephrasing of the fact that any set of linearly independent vectors in a vector space can be extended to a basis.
The exact sequence 0 → Z → Z → Z/2Z → 0 (where the first map is multiplication by 2) is not split exact: Z has no element of order 2, so the middle term Z cannot be isomorphic to Z ⊕ Z/2Z.
Related notions
Pure exact sequences can be characterized as the filtered colimits of split exact sequences.
References
Sources
Abstract algebra
|
https://en.wikipedia.org/wiki/Homeomorphism%20group
|
In mathematics, particularly topology, the homeomorphism group of a topological space is the group consisting of all homeomorphisms from the space to itself with function composition as the group operation. Homeomorphism groups are very important in the theory of topological spaces and in general are examples of automorphism groups. Homeomorphism groups are topological invariants in the sense that the homeomorphism groups of homeomorphic topological spaces are isomorphic as groups.
Properties and examples
There is a natural group action of the homeomorphism group of a space on that space. Let X be a topological space and denote the homeomorphism group of X by Homeo(X). The action Homeo(X) × X → X is defined as follows:
(φ, x) ↦ φ(x).
This is a group action since for all φ, ψ ∈ Homeo(X) and x ∈ X,
φ · (ψ · x) = φ(ψ(x)) = (φ ∘ ψ)(x) = (φ ∘ ψ) · x,
where · denotes the group action, and the identity element of Homeo(X) (which is the identity function on X) sends points to themselves. If this action is transitive, then the space is said to be homogeneous.
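For a finite discrete space every bijection is a homeomorphism, so the group-action axiom can be checked exhaustively (a toy verification under that assumption):

```python
from itertools import permutations

X = (0, 1, 2)
# homeomorphisms of a 3-point discrete space = all bijections of X
homeos = [dict(zip(X, p)) for p in permutations(X)]

def compose(f, g):
    """(f . g)(x) = f(g(x))"""
    return {x: f[g[x]] for x in X}

# action axiom: (f . g) acting on x equals f acting on (g acting on x)
ok = all(compose(f, g)[x] == f[g[x]]
         for f in homeos for g in homeos for x in X)
print(ok)
```

Here the action is also transitive (any point can be sent to any other by some bijection), so the discrete space is homogeneous.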
Topology
As with other sets of maps between topological spaces, the homeomorphism group can be given a topology, such as the compact-open topology.
In the case of regular, locally compact spaces the group multiplication is then continuous.
If the space is compact and Hausdorff, the inversion is continuous as well and Homeo(X) becomes a topological group.
If X is Hausdorff, locally compact and locally connected, this holds as well.
However there are locally compact separable metric spaces for which the inversion map is not continuous and therefore not a topological group.
In the category of topological spaces with homeomorphisms, group objects are exactly homeomorphism groups.
Mapping class group
In geometric topology especially, one considers the quotient group obtained by quotienting out by isotopy, called the mapping class group:
MCG(X) = Homeo(X) / Homeo₀(X),
where Homeo₀(X) is the identity component. The MCG can also be interpreted as the 0th homotopy group, MCG(X) = π₀(Homeo(X)).
This yields the short exact sequence:
1 → Homeo₀(X) → Homeo(X) → MCG(X) → 1.
In some applications, particularly surfaces, the homeomorphism group is studied via this short exact sequence, an
|
https://en.wikipedia.org/wiki/Accent%20kernel
|
Accent is an operating system kernel, most notable for being the predecessor to the Mach kernel. Originally developed at Carnegie Mellon University (CMU), Accent was influenced by the Aleph kernel developed at the University of Rochester. Accent improves upon the older kernel, fixing several problems and re-targeting hardware support for networks of workstation machines (specifically, the Three Rivers PERQ) instead of minicomputers. Accent was part of the SPICE Project at CMU which ran from 1981 to 1985. Development of Accent led directly to the introduction of Mach, used in NeXTSTEP, GNU Hurd, and modern Apple operating systems including Mac OS and iOS.
The original Aleph project used data copying to allow programs to communicate. Applications could open ports, which would allow them to receive data sent to them by other programs. The idea was to write a number of servers that would control resources on the machine, passing data along until it reached an end user. In this respect it was similar in concept to Unix, although the implementation was much different, using messages instead of memory. This turned out to have a number of problems, notably that copying memory on their Data General Eclipse was very expensive.
In 1979 one of the Aleph engineers, Richard Rashid, left for CMU and started work on a new version of Aleph that avoided its problems. In particular, Accent targeted workstation machines featuring a MMU, using the MMU to "copy" large blocks of memory via mapping, making the memory appear to be in two different places. Only data that was changed by one program or another would have to be physically copied, using the copy-on-write algorithm.
To understand the difference, consider two interacting programs, one feeding a file to another. Under Aleph the data from the provider would have to be copied 2kB at a time (due to features of the Eclipse) into the user process. Under Accent the data simply "appeared" in the user process for the cost of a few ins
|
https://en.wikipedia.org/wiki/General-purpose%20input/output
|
A general-purpose input/output (GPIO) is an uncommitted digital signal pin on an integrated circuit or electronic circuit (e.g. MCUs/MPUs) board which may be used as an input or output, or both, and is controllable by software.
GPIOs have no predefined purpose and are unused by default. If used, the purpose and behavior of a GPIO is defined and implemented by the designer of higher assembly-level circuitry: the circuit board designer in the case of integrated circuit GPIOs, or system integrator in the case of board-level GPIOs.
Integrated circuit GPIOs
Integrated circuit (IC) GPIOs are implemented in a variety of ways. Some ICs provide GPIOs as a primary function whereas others include GPIOs as a convenient "accessory" to some other primary function. Examples of the former include the Intel 8255, which interfaces 24 GPIOs to a parallel communication bus, and various GPIO expander ICs, which interface GPIOs to serial communication buses such as I²C and SMBus. An example of the latter is the Realtek ALC260 IC, which provides eight GPIOs along with its main function of audio codec.
Microcontroller ICs usually include GPIOs. Depending on the application, a microcontroller's GPIOs may comprise its primary interface to external circuitry or they may be just one type of I/O used among several, such as analog signal I/O, counter/timer, and serial communication.
In some ICs, particularly microcontrollers, a GPIO pin may be capable of other functions than GPIO. Often in such cases it is necessary to configure the pin to operate as a GPIO (vis-à-vis its other functions) in addition to configuring the GPIO's behavior. Some microcontroller devices (e.g., the Microchip dsPIC33 family) incorporate internal signal routing circuitry that allows GPIOs to be programmatically mapped to device pins. Field-programmable gate arrays (FPGA) extend this ability by allowing GPIO pin mapping, instantiation and architecture to be programmatically controlled.
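The register-level pattern behind most GPIO configuration (a direction register plus an output latch, manipulated with bit masks) can be sketched with a small software model; the register names and bit layout here are invented, not taken from any particular microcontroller:

```python
class GpioPort:
    """A toy model of a memory-mapped GPIO port."""

    def __init__(self, width=8):
        self.width = width
        self.dir = 0      # bit set -> pin configured as an output
        self.out = 0      # output latch

    def make_output(self, pin):
        self.dir |= 1 << pin

    def write(self, pin, value):
        if not (self.dir >> pin) & 1:
            raise ValueError("pin is not configured as an output")
        if value:
            self.out |= 1 << pin       # set the bit
        else:
            self.out &= ~(1 << pin)    # clear the bit

    def level(self, pin):
        return (self.out >> pin) & 1

port = GpioPort()
port.make_output(3)      # configure pin 3 as a GPIO output
port.write(3, 1)         # drive it high
print(port.level(3))
```

On real hardware the same `|=` / `&= ~mask` operations are applied to memory-mapped registers rather than object attributes.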
Board-level GPIOs
Many circuit boar
|
https://en.wikipedia.org/wiki/Electric%20form%20factor
|
The electric form factor is the Fourier transform of the electric charge distribution in a nucleon. Nucleons (protons and neutrons) are made of up and down quarks, which have electric charges associated with them (+2/3 and −1/3, respectively). The study of form factors falls within the regime of perturbative QCD.
The idea originated from young William Thomson.
See also
Form factor (disambiguation)
Electrodynamics
|
https://en.wikipedia.org/wiki/Rarita%E2%80%93Schwinger%20equation
|
In theoretical physics, the Rarita–Schwinger equation is the relativistic field equation of spin-3/2 fermions in a four-dimensional flat spacetime. It is similar to the Dirac equation for spin-1/2 fermions. This equation was first introduced by William Rarita and Julian Schwinger in 1941.
In modern notation it can be written as:
\epsilon^{\mu \kappa \rho \nu} \gamma^5 \gamma_\kappa \partial_\rho \psi_\nu + m \gamma^{\mu\nu} \psi_\nu = 0,
where \epsilon^{\mu \kappa \rho \nu} is the Levi-Civita symbol,
\gamma^5 and \gamma_\kappa are Dirac matrices,
m is the mass,
\gamma^{\mu\nu} \equiv \tfrac{i}{2} [\gamma^\mu, \gamma^\nu],
and \psi_\nu is a vector-valued spinor with additional components compared to the four-component spinor in the Dirac equation. It corresponds to the (1/2, 1/2) ⊗ ((1/2, 0) ⊕ (0, 1/2)) representation of the Lorentz group, or rather, its (1, 1/2) ⊕ (1/2, 1) part.
This field equation can be derived as the Euler–Lagrange equation corresponding to the Rarita–Schwinger Lagrangian:
\mathcal{L} = -\tfrac{1}{2}\, \bar\psi_\mu \left( \epsilon^{\mu \kappa \rho \nu} \gamma^5 \gamma_\kappa \partial_\rho + m \gamma^{\mu\nu} \right) \psi_\nu,
where the bar above \psi_\mu denotes the Dirac adjoint.
This equation controls the propagation of the wave function of composite objects such as the delta baryons (Δ) or for the conjectural gravitino. So far, no elementary particle with spin 3/2 has been found experimentally.
The massless Rarita–Schwinger equation has a fermionic gauge symmetry: it is invariant under the gauge transformation \psi_\mu \to \psi_\mu + \partial_\mu \epsilon, where \epsilon is an arbitrary spinor field. This is simply the local supersymmetry of supergravity, and the field must be a gravitino.
"Weyl" and "Majorana" versions of the Rarita–Schwinger equation also exist.
Equations of motion in the massless case
Consider a massless Rarita–Schwinger field described by the Lagrangian density
\mathcal{L} = -\tfrac{1}{2}\, \bar\psi_\mu \gamma^{\mu\nu\rho} \partial_\nu \psi_\rho,
where the sum over spin indices is implicit, the \psi_\mu are Majorana spinors, and
\gamma^{\mu\nu\rho} \equiv \gamma^{[\mu} \gamma^{\nu} \gamma^{\rho]}.
To obtain the equations of motion we vary the Lagrangian with respect to the fields \bar\psi_\mu, obtaining:
using the Majorana flip properties
we see that the second and first terms on the RHS are equal, concluding that
plus unimportant boundary terms.
Imposing \delta S = 0, we thus see that the equation of motion for a massless Majorana Rarita–Schwinger spinor reads:
\gamma^{\mu\nu\rho} \partial_\nu \psi_\rho = 0.
Drawbacks of the equation
The current description of massive, higher spin fields through either Rarita–Schwinger or Fierz–Pauli forma
|
https://en.wikipedia.org/wiki/Comparison%20of%20documentation%20generators
|
The following tables compare general and technical information for a number of documentation generators. Please see the individual products' articles for further information. Unless otherwise specified in footnotes, comparisons are based on the stable versions without any add-ons, extensions or external programs. Note that many of the generators listed are no longer maintained.
General information
Basic general information about the generators, including: creator or company, license, and price.
Supported formats
The output formats the generators can write.
Other features
See also
Code readability
Documentation generator
Literate programming
Self-documenting code
Notes
References
Documentation generators
|
https://en.wikipedia.org/wiki/Materials%20recovery%20facility
|
A materials recovery facility, materials reclamation facility, materials recycling facility or Multi re-use facility (MRF, pronounced "murf") is a specialized plant that receives, separates and prepares recyclable materials for marketing to end-user manufacturers. Generally, there are two different types: clean and dirty materials recovery facilities.
Industry and locations
In the United States, there are over 300 materials recovery facilities. The total market size is estimated at $6.6B as of 2019.
As of 2016, the top 75 were headed by Sims Municipal Recycling out of Brooklyn, New York. Waste Management operated 95 MRF facilities total, with 26 in the top 75. ReCommunity operated 6 in the top 75. Republic Services operated 6 in the top 75. Waste Connections operated 4 in the top 75.
Business economics
In 2018, a survey in the Northeast United States found that the processing cost per ton was $82, versus a value of around $45 per ton. Composition of the ton included 28% mixed paper and 24% old corrugated containers (OCC).
Prices for OCC declined into 2019. Three paper mill companies have announced initiatives to use more recycled fiber.
Glass recycling is expensive for these facilities, but a study estimated that costs could be cut significantly by investments in improved glass processing. In Texas, Austin and Houston have facilities which have invested in glass recycling, built and operated by Balcones Recycling and FCC Environment, respectively.
Robots have spread across the industry, helping with sorting.
Process
Waste enters a MRF when it is dumped onto the tipping floor by the collection trucks. The materials are then scooped up and placed onto conveyor belts, which transport them to the pre-sorting area. Here, human workers remove some items that are not recyclable, which will be sent either to a landfill or to an incinerator. Between 5 and 45% of "dirty" MRF material is recovered. Potential hazards are also removed, such as lithium batteries, propane tan
|
https://en.wikipedia.org/wiki/Equidistant
|
A point is said to be equidistant from a set of objects if the distances between that point and each object in the set are equal.
In two-dimensional Euclidean geometry, the locus of points equidistant from two given (different) points is their perpendicular bisector. In three dimensions, the locus of points equidistant from two given points is a plane, and generalising further, in n-dimensional space the locus of points equidistant from two points in n-space is an (n−1)-space.
For a triangle the circumcentre is a point equidistant from each of the three vertices. Every non-degenerate triangle has such a point. This result can be generalised to cyclic polygons: the circumcentre is equidistant from each of the vertices. Likewise, the incentre of a triangle or any other tangential polygon is equidistant from the points of tangency of the polygon's sides with the circle. Every point on a perpendicular bisector of the side of a triangle or other polygon is equidistant from the two vertices at the ends of that side. Every point on the bisector of an angle of any polygon is equidistant from the two sides that emanate from that angle.
The center of a rectangle is equidistant from all four vertices, and it is equidistant from two opposite sides and also equidistant from the other two opposite sides. A point on the axis of symmetry of a kite is equidistant between two sides.
The center of a circle is equidistant from every point on the circle. Likewise the center of a sphere is equidistant from every point on the sphere.
A parabola is the set of points in a plane equidistant from a fixed point (the focus) and a fixed line (the directrix), where distance from the directrix is measured along a line perpendicular to the directrix.
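This focus–directrix property can be checked numerically. The sketch below (the focal length f and the sample points are arbitrary choices for illustration) verifies that points on the parabola y = x²/(4f) are the same distance from the focus (0, f) as from the directrix y = −f:

```python
import math

def parabola_distances(x, f=2.0):
    """For the parabola y = x^2 / (4f): distance to the focus (0, f)
    and perpendicular distance to the directrix y = -f."""
    y = x * x / (4 * f)
    to_focus = math.hypot(x - 0.0, y - f)
    to_directrix = abs(y - (-f))  # vertical distance to the horizontal line
    return to_focus, to_directrix

# Equidistance holds at every sampled point on the curve
for x in (-3.0, 0.0, 1.5, 10.0):
    d_focus, d_line = parabola_distances(x)
    assert math.isclose(d_focus, d_line), (x, d_focus, d_line)
```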
In shape analysis, the topological skeleton or medial axis of a shape is a thin version of that shape that is equidistant from its boundaries.
In Euclidean geometry, parallel lines (lines that never intersect) are equidistant in the sense that t
|
https://en.wikipedia.org/wiki/A2%20%28operating%20system%29
|
A2 (formerly named Active Object System (AOS), and then Bluebottle) is a modular, object-oriented operating system with unconventional features including automatic garbage-collected memory management, and a zooming user interface. It was developed originally at ETH Zurich in 2002. It is free and open-source software under a BSD-like license.
History
A2 is the next generation of Native Oberon, the x86 PC version of Niklaus Wirth's operating system Oberon. It is small and fast, supports multiprocessor computers, and provides soft real-time operation. It is written entirely in an upward-compatible dialect of the programming language Oberon named Active Oberon. Both languages are members of the Pascal family, along with Modula-2.
A2's design allows developing efficient systems based on active objects which run directly on hardware, with no mediating interpreter or virtual machine. Active objects represent a combination of the traditional object-oriented programming (OOP) model of an object, combined with a thread that executes in the context of that object. In the Active Oberon implementation, an active object may include activity of its own, and of its ancestor objects.
Other differences between A2 and more mainstream operating systems are its very minimalist design, implemented completely in a type-safe language with automatic memory management, combined with a powerful and flexible set of primitives (at the level of the programming language and runtime system) for synchronising access to the internal properties of objects in competing execution contexts.
Above the kernel layer, A2 provides a flexible set of modules providing unified abstractions for devices and services, such as file systems, user interfaces, computer network connections, media codecs, etc.
User interface
Bluebottle replaced the older Oberon OS's unique text-based user interface (TUI) with a zooming user interface (ZUI), which is significantly more like a conventional graphical user interfac
|
https://en.wikipedia.org/wiki/List%20of%20geneticists
|
This is a list of people who have made notable contributions to genetics. The growth and development of genetics represents the work of many people. This list of geneticists is therefore by no means complete. Contributors of great distinction to genetics are not yet on the list.
A
Dagfinn Aarskog (1928–2014), Norwegian pediatrician and geneticist, described Aarskog–Scott syndrome
Jon Aase (born 1936), US dysmorphologist, described Aase syndrome, expert on fetal alcohol syndrome
John Abelson (born c. 1939), US biochemist, studies of machinery and mechanism of RNA splicing
Susan L. Ackerman, US neurogeneticist, genes controlling brain development and neuron survival
Jerry Adams (born 1940), US molecular biologist in Australia, hematopoietic genetics and cancer
Bruce Alberts (born 1938), US biochemist, phage worker, studied DNA replication and cell division
William Allan (1881–1943), US country doctor, pioneered human genetics
C. David Allis (born 1951), US biologist with a fascination for chromatin
Robin Allshire (born 1960), UK-based Irish molecular biologist/geneticist and expert in formation of heterochromatin and centromeres
Carl-Henry Alström (1907–1993), Swedish psychiatrist, described genetic disease: Alström syndrome
Frederick Alt, American geneticist known for research on maintenance of genome stability in the cells of the mammalian immunological system
Russ Altman, US geneticist and bioengineer known for his work in pharmacogenomics
Sidney Altman (1939–2022), Canadian-US biophysicist who won Nobel Prize for catalytic functions of RNA
Cecil A. Alport (1880–1959), UK internist, identified Alport syndrome (hereditary nephritis and deafness)
David Altshuler (born c. 1965), US endocrinologist and geneticist, the genetics of type 2 diabetes
Bruce Ames (born 1928), US molecular geneticist, created Ames test to screen chemicals for mutagenicity
D. Bernard Amos (1923–2003), UK-US immunologist who studied the genetics of individuality
Edgar Anderson (1897–1969),
|
https://en.wikipedia.org/wiki/Volatility%20smile
|
Volatility smiles are implied volatility patterns that arise in pricing financial options; the implied volatility is the parameter that must be adjusted for the Black–Scholes formula to fit market prices. In particular, for a given expiration, options whose strike price differs substantially from the underlying asset's price command higher prices (and thus implied volatilities) than standard option pricing models suggest. These options are said to be either deep in-the-money or out-of-the-money.
Graphing implied volatilities against strike prices for a given expiry produces a skewed "smile" instead of the expected flat line. The pattern differs across various markets. Equity options traded in American markets did not show a volatility smile before the Crash of 1987 but began showing one afterwards. It is believed that investor reassessments of the probabilities of fat-tail events have led to higher prices for out-of-the-money options. This anomaly implies deficiencies in the standard Black–Scholes option pricing model, which assumes constant volatility and log-normal distributions of underlying asset returns. Empirical asset return distributions, however, tend to exhibit fat tails (kurtosis) and skew. Modelling the volatility smile is an active area of research in quantitative finance, and better pricing models, such as stochastic volatility models, partially address this issue.
A related concept is that of term structure of volatility, which describes how (implied) volatility differs for related options with different maturities. An implied volatility surface is a 3-D plot that plots volatility smile and term structure of volatility in a consolidated three-dimensional surface for all options on a given underlying asset.
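Because the Black–Scholes price is monotonic in volatility, the implied volatility behind each point of the smile can be recovered numerically from an observed option price. The following sketch (parameter values are illustrative; production systems typically use faster root-finders such as Newton's method) inverts the Black–Scholes call price by bisection:

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call option."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-6, hi=5.0, tol=1e-10):
    """Bisection works because the call price is increasing in sigma."""
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Recover the volatility that generated a synthetic market price
p = bs_call(100, 110, 0.5, 0.01, 0.35)
assert abs(implied_vol(p, 100, 110, 0.5, 0.01) - 0.35) < 1e-6
```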
Implied volatility
In the Black–Scholes model, the theoretical value of a vanilla option is a monotonic increasing function of the volatility of the underlying asset. This means it is usually possible to compute a unique implie
|
https://en.wikipedia.org/wiki/Photoelasticity
|
In materials science, photoelasticity describes changes in the optical properties of a material under mechanical deformation. It is a property of all dielectric media and is often used to experimentally determine the stress distribution in a material.
History
The photoelastic phenomenon was first discovered by the Scottish physicist David Brewster, who immediately recognized it as stress-induced birefringence. That diagnosis was confirmed in a direct refraction experiment by Augustin-Jean Fresnel. Experimental frameworks were developed at the beginning of the twentieth century with the works of E. G. Coker and L. N. G. Filon of the University of London. Their book Treatise on Photoelasticity, published in 1930 by Cambridge Press, became a standard text on the subject. Between 1930 and 1940, many other books appeared on the subject, including books in Russian, German and French. Max M. Frocht published Photoelasticity, the classic two-volume work in the field. At the same time, much development occurred in the field: great improvements were achieved in technique, and the equipment was simplified. With refinements in the technology, photoelastic experiments were extended to determining three-dimensional states of stress. In parallel to developments in experimental technique, the first phenomenological description of photoelasticity was given in 1890 by Friedrich Pockels; however, this was proved inadequate almost a century later by Nelson and Lax, as Pockels's description considered only the effect of mechanical strain on the optical properties of the material.
With the advent of the digital polariscope – made possible by light-emitting diodes – continuous monitoring of structures under load became possible. This led to the development of dynamic photoelasticity, which has contributed greatly to the study of complex phenomena such as fracture of materials.
Applications
Photoelasticity has been used for a variety of stress analyses and even for routine use in desig
|
https://en.wikipedia.org/wiki/RISKS%20Digest
|
The RISKS Digest or Forum On Risks to the Public in Computers and Related Systems is an online periodical published since 1985 by the Committee on Computers and Public Policy of the Association for Computing Machinery. The editor is Peter G. Neumann.
It is a moderated forum concerned with the security and safety of computers, software, and technological systems. Security, and risk, here are taken broadly; RISKS is concerned not merely with so-called security holes in software, but with unintended consequences and hazards stemming from the design (or lack thereof) of automated systems. Other recurring subjects include cryptography and the effects of technically ill-considered public policies. RISKS also publishes announcements and Calls for Papers from various technical conferences, and technical book reviews (usually by Rob Slade, though occasionally by others).
Although RISKS is a forum of a computer science association, most contributions are readable and informative to anyone with an interest in the subject. It is heavily read by system administrators and computer security managers, as well as computer scientists and engineers.
The RISKS Digest is published on a frequent but irregular schedule through the moderated Usenet newsgroup comp.risks, which exists solely to carry the Digest.
Summaries of the forum appear as columns edited by Neumann in the ACM SIGSOFT Software Engineering Notes (SEN) and the Communications of the ACM (CACM).
References
External links
RISKS Digest web archive
RISKS Digest (Usenet newsgroup comp.risks)
Google groups interface to comp.risks
Risk
Safety engineering
Computer security procedures
Magazines established in 1985
Association for Computing Machinery magazines
Professional and trade magazines
SRI International
Engineering magazines
Irregularly published magazines published in the United States
1985 establishments in the United States
|
https://en.wikipedia.org/wiki/Planar%20Systems
|
Planar Systems, Inc. is an American digital display manufacturing corporation with a facility in Hillsboro, Oregon. Founded in 1983 as a spin-off from Tektronix, it was the first U.S. manufacturer of electroluminescent (EL) digital displays. Planar currently makes a variety of other specialty displays, and has been an independent subsidiary of Leyard Optoelectronic Co. since 2015. The headquarters, leadership team and employees remain in Hillsboro, Oregon.
History
1980s
Planar was founded on May 23, 1983 by Jim Hurd, Chris King, John Laney and others as a spin-off from the Solid State Research and Development Group of the Beaverton, Oregon, based Tektronix. In 1986, a division spun off from Planar to work on projection technology and formed InFocus.
1990s
In 1991, Planar purchased FinLux, a competitor in Espoo, Finland. This location now serves as the company's European headquarters. Planar's executives took the company public in 1993, listing the stock on the NASDAQ boards. Planar acquired Tektronix's avionics display business, creating the short-lived Planar Advance in 1994. Standish Industries, a manufacturer of flat panel LCDs in Lake Mills, Wisconsin, was sold to Planar in 1997. This plant was closed in 2002 as worldwide LCD manufacturing shifted to East Asian countries.
2000s
On April 23, 2002, DOME Imaging Systems was purchased by Planar and became the company's medical business unit. Planar acquired Clarity Visual Systems (founded by former InFocus employees) on September 12, 2006, now referred to as the Control Room and Signage business unit. On June 19, 2006, Planar acquired Runco International, a leading brand in the high-end, custom home theater market. On August 6, 2008, Planar sold its medical business unit to NDS Surgical Imaging.
2010s
In November 2012, Planar announced the sale of its electroluminescent business to Beneq Oy, a supplier of production and research equipment for thin film coatings. Under the terms of the transaction, consideratio
|
https://en.wikipedia.org/wiki/Motion%20estimation
|
In computer vision and image processing, motion estimation is the process of determining motion vectors that describe the transformation from one 2D image to another; usually from adjacent frames in a video sequence. It is an ill-posed problem as the motion happens in three dimensions (3D) but the images are a projection of the 3D scene onto a 2D plane. The motion vectors may relate to the whole image (global motion estimation) or specific parts, such as rectangular blocks, arbitrary shaped patches or even per pixel. The motion vectors may be represented by a translational model or many other models that can approximate the motion of a real video camera, such as rotation and translation in all three dimensions and zoom.
Related terms
More often than not, the terms motion estimation and optical flow are used interchangeably. Motion estimation is also related in concept to image registration and stereo correspondence. In fact, all of these terms refer to the process of finding corresponding points between two images or video frames. The points that correspond to each other in two views (images or frames) of a real scene or object are "usually" the same point in that scene or on that object. Before motion estimation can be performed, a measurement of correspondence must be defined, i.e., the matching metric, which measures how similar two image points are. There is no right or wrong here; the choice of matching metric is usually related to what the final estimated motion is used for, as well as the optimisation strategy in the estimation process.
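As an illustration of a matching metric, the sketch below uses the sum of absolute differences (SAD) with an exhaustive full search; the toy frame, block size and search strategy are illustrative choices, not a prescribed algorithm:

```python
def sad(frame, x, y, block):
    """Sum of absolute differences between `block` and the same-sized
    region of `frame` with top-left corner (x, y)."""
    h, w = len(block), len(block[0])
    return sum(abs(frame[y + j][x + i] - block[j][i])
               for j in range(h) for i in range(w))

def best_match(frame, block):
    """Exhaustive (full) search: return the (x, y) minimising the SAD."""
    h, w = len(block), len(block[0])
    positions = ((x, y)
                 for y in range(len(frame) - h + 1)
                 for x in range(len(frame[0]) - w + 1))
    return min(positions, key=lambda p: sad(frame, p[0], p[1], block))

# Toy 6x6 reference frame containing the 2x2 block at offset (3, 1)
frame = [[0] * 6 for _ in range(6)]
block = [[9, 8], [7, 6]]
for j in range(2):
    for i in range(2):
        frame[1 + j][3 + i] = block[j][i]

# The recovered displacement is the motion vector of this block
assert best_match(frame, block) == (3, 1)
```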
Each motion vector is used to represent a macroblock in a picture based on the position of this macroblock (or a similar one) in another picture, called the reference picture.
The H.264/MPEG-4 AVC standard defines motion vector as:
motion vector: a two-dimensional vector used for inter prediction that provides an offset from the coordinates in the decoded picture to the coordinates in a reference picture.
Algorithms
The methods f
|
https://en.wikipedia.org/wiki/Land%20development
|
Land development is the alteration of landscape in any number of ways such as:
Changing landforms from a natural or semi-natural state for a purpose such as agriculture or housing
Subdividing real estate into lots, typically for the purpose of building homes
Real estate development or changing its purpose, for example by converting an unused factory complex into a condominium.
Economic aspects
In an economic context, land development is also sometimes advertised as land improvement or land amelioration. It refers to investment that makes land more usable by humans. For accounting purposes it refers to any variety of projects that increase the value of the property. Most are depreciable, but some land improvements cannot be depreciated because a useful life cannot be determined. Home building and containment are two of the most common and the oldest types of development.
In an urban context, land development furthermore includes:
Road construction
Access roads, walkways, and parking lots
Bridges
Landscaping
Clearing, terracing, or land levelling
Land preparation (development) for gardens
Setup of fences and, to a lesser degree, hedges
Service connections to municipal services and public utilities
Drainage, canal systems
External lighting (street lamps etc.)
A landowner or developer of a project of any size will often want to maximise profits, minimise risk, and control cash flow. In practice, this means identifying and developing the best scheme for the local marketplace while satisfying the local planning process.
Development analysis puts development prospects and the development process itself under the microscope, identifying where enhancements and improvements can be introduced. These improvements aim to align with best design practice, political sensitivities, and the inevitable social requirements of a project, with the overarching objective of increasing land values and profit margins on behalf of the landowner or devel
|
https://en.wikipedia.org/wiki/Similarity%20invariance
|
In linear algebra, similarity invariance is a property exhibited by a function whose value is unchanged under similarities of its domain. That is, f is invariant under similarities if f(A) = f(B⁻¹AB), where B⁻¹AB is a matrix similar to A. Examples of such functions include the trace, determinant, characteristic polynomial, and the minimal polynomial.
A more colloquial phrase that means the same thing as similarity invariance is "basis independence", since a matrix can be regarded as a linear operator, written in a certain basis, and the same operator in a new basis is related to one in the old basis by the conjugation P⁻¹AP, where P is the transformation matrix to the new basis.
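As a quick numerical check of similarity invariance (a toy sketch, not part of the article), the trace and determinant of a 2 × 2 matrix are unchanged by conjugation with an invertible matrix P:

```python
def matmul(A, B):
    """2x2 matrix product."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(P):
    """Inverse of an invertible 2x2 matrix."""
    (a, b), (c, d) = P
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(M):
    return M[0][0] + M[1][1]

def det(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

A = [[2.0, 1.0], [0.0, 3.0]]
P = [[1.0, 2.0], [1.0, 1.0]]       # any invertible change-of-basis matrix

B = matmul(matmul(inv2(P), A), P)  # B = P^-1 A P, similar to A

# Similarity-invariant functions agree on A and B
assert abs(trace(B) - trace(A)) < 1e-12
assert abs(det(B) - det(A)) < 1e-12
```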
See also
Invariant (mathematics)
Gauge invariance
Trace diagram
Functions and mappings
|
https://en.wikipedia.org/wiki/Ammonium%20sulfate
|
Ammonium sulfate (American English and international scientific usage; ammonium sulphate in British English), (NH4)2SO4, is an inorganic salt with a number of commercial uses. The most common use is as a soil fertilizer. It contains 21% nitrogen and 24% sulfur.
Uses
The primary use of ammonium sulfate is as a fertilizer for alkaline soils. In the soil the ammonium ion is released and forms a small amount of acid, lowering the pH balance of the soil, while contributing essential nitrogen for plant growth. The main disadvantage to the use of ammonium sulfate is its low nitrogen content relative to ammonium nitrate, which elevates transportation costs.
It is also used as an agricultural spray adjuvant for water-soluble insecticides, herbicides, and fungicides. There, it functions to bind iron and calcium cations that are present in both well water and plant cells. It is particularly effective as an adjuvant for 2,4-D (amine), glyphosate, and glufosinate herbicides.
Laboratory use
Ammonium sulfate precipitation is a common method for protein purification by precipitation. As the ionic strength of a solution increases, the solubility of proteins in that solution decreases. Ammonium sulfate is extremely soluble in water due to its ionic nature, therefore it can "salt out" proteins by precipitation. Due to the high dielectric constant of water, the dissociated salt ions, cationic ammonium and anionic sulfate, are readily solvated within hydration shells of water molecules. The significance of this substance in the purification of compounds stems from its ability to become hydrated more readily than relatively nonpolar molecules, so the desirable nonpolar molecules coalesce and precipitate out of the solution in a concentrated form. This method is called salting out and necessitates the use of high salt concentrations that can reliably dissolve in the aqueous mixture. The percentage of the salt used is in comparison to the maximal concentration of the sal
|
https://en.wikipedia.org/wiki/Military%20Message%20Handling%20System
|
Military Message Handling System (MMHS) is a profile and set of extensions to X.400 for messaging in military environments. It is NATO standard STANAG 4406 and CCEB standard ACP 123. It adds to standard X.400 email support for military requirements such as mandatory access control (i.e. Classified/Secret/Top Secret messages and users, etc.). In particular it defines a new message format, P772 that is used in place of X.400's interpersonal message formats P2 (1984 standard) and P22 (1988 standard).
MMHS specifications are implemented by several X.400 vendors, particularly those located in Europe, such as Raytheon UK, Boldon James, Deep-Secure, Thales Group, Nexor, Cassidian and Isode.
Several RFCs are supported:
Implementations
See also
Defense Message System
Automated Message Handling System
References
Email
|
https://en.wikipedia.org/wiki/Proximity%20space
|
In topology, a proximity space, also called a nearness space, is an axiomatization of the intuitive notion of "nearness" that holds set-to-set, as opposed to the better-known point-to-set notion that characterizes topological spaces.
The concept was described but ignored at the time. It was rediscovered and axiomatized by V. A. Efremovič in 1934 under the name of infinitesimal space, but not published until 1951. In the interim, a version of the same concept was discovered under the name of separation space.
Definition
A proximity space (X, δ) is a set X with a relation δ between subsets of X satisfying the following properties:
For all subsets A, B and C of X:
A δ B implies B δ A
A δ B implies A ≠ ∅
A ∩ B ≠ ∅ implies A δ B
A δ (B ∪ C) implies (A δ B or A δ C)
(For all E, A δ E or B δ (X ∖ E)) implies A δ B
Proximity without the first axiom is called quasi-proximity (but then axioms 2 and 4 must be stated in a two-sided fashion).
If A δ B we say A is near B or A and B are proximal; otherwise we say A and B are apart. We say B is a proximal neighborhood or δ-neighborhood of A, written A ≪ B, if and only if A and X ∖ B are apart.
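On a finite set, the overlap relation A δ B ⇔ A ∩ B ≠ ∅ is the standard discrete proximity. The brute-force sketch below (a toy verification on a three-element universe, not from the article) checks the proximity axioms for this relation:

```python
from itertools import chain, combinations

Xs = frozenset({0, 1, 2})
subsets = [frozenset(c) for c in chain.from_iterable(
    combinations(sorted(Xs), r) for r in range(len(Xs) + 1))]

def near(A, B):
    """Discrete proximity: A and B are near iff they intersect."""
    return bool(A & B)

for A in subsets:
    for B in subsets:
        assert near(A, B) == near(B, A)        # symmetry
        if near(A, B):
            assert A != frozenset()            # near sets are non-empty
        if A & B:
            assert near(A, B)                  # overlapping sets are near
        for C in subsets:
            # additivity (here it even holds as an equivalence)
            assert near(A, B | C) == (near(A, B) or near(A, C))
        # fifth axiom: if every E is near A or has complement near B, A is near B
        if all(near(A, E) or near(B, Xs - E) for E in subsets):
            assert near(A, B)
```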
The main properties of this set neighborhood relation, listed below, provide an alternative axiomatic characterization of proximity spaces.
For all subsets A, B, C and D of X:
A ≪ B implies A ⊆ B
A ⊆ B ≪ C ⊆ D implies A ≪ D
(A ≪ B and A ≪ C) implies A ≪ B ∩ C
A ≪ B implies X ∖ B ≪ X ∖ A
A ≪ B implies that there exists some E such that A ≪ E ≪ B
A proximity space is called separated if {x} δ {y} implies x = y.
A proximity map or proximal map is one that preserves nearness, that is, given f : X → Y, if A δ B in X, then f[A] δ f[B] in Y. Equivalently, a map is proximal if the inverse map preserves proximal neighborhoodness. In the same notation, this means that if C ≪ D holds in Y, then f⁻¹[C] ≪ f⁻¹[D] holds in X.
Properties
Given a proximity space, one can define a topology by letting the map A ↦ {x : {x} δ A} be a Kuratowski closure operator. If the proximity space is separated, the resulting topology is Hausdorff. Proximity maps will be continuous between the induced topologies.
The resulting topology is always completely regular. This can be proven by imitating the usual proofs of Urysohn's lemma, using the last property of proximal neighborhoods to create the infinite increasing chain used in proving the lemma.
Given a compact Hausdorff space, the
|
https://en.wikipedia.org/wiki/Kruskal%E2%80%93Katona%20theorem
|
In algebraic combinatorics, the Kruskal–Katona theorem gives a complete characterization of the f-vectors of abstract simplicial complexes. It includes as a special case the Erdős–Ko–Rado theorem and can be restated in terms of uniform hypergraphs. It is named after Joseph Kruskal and Gyula O. H. Katona, but has been independently discovered by several others.
Statement
Given two positive integers N and i, there is a unique way to expand N as a sum of binomial coefficients as follows:
N = C(n_i, i) + C(n_{i−1}, i − 1) + … + C(n_j, j),   n_i > n_{i−1} > … > n_j ≥ j ≥ 1.
This expansion can be constructed by applying the greedy algorithm: set n_i to be the maximal n such that C(n, i) ≤ N, replace N with the difference, i with i − 1, and repeat until the difference becomes zero. Define
N^(i) = C(n_i, i + 1) + C(n_{i−1}, i) + … + C(n_j, j + 1).
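The greedy construction of this expansion is easy to sketch in code (function names are illustrative):

```python
from math import comb

def cascade(N, i):
    """Greedy binomial expansion of N in the i-cascade form:
    N = C(n_i, i) + C(n_{i-1}, i-1) + ... with n_i > n_{i-1} > ..."""
    terms = []
    while N > 0 and i >= 1:
        n = i
        while comb(n + 1, i) <= N:   # maximal n with C(n, i) <= N
            n += 1
        terms.append((n, i))
        N -= comb(n, i)
        i -= 1
    return terms

def shifted(N, i):
    """N^(i): raise the lower index of every term in the expansion by one."""
    return sum(comb(n, k + 1) for n, k in cascade(N, i))

# 10 = C(5, 3), a single term
assert cascade(10, 3) == [(5, 3)]
# 8 = C(4, 3) + C(3, 2) + C(1, 1)
assert cascade(8, 3) == [(4, 3), (3, 2), (1, 1)]
```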
Statement for simplicial complexes
An integral vector (f_0, f_1, …, f_{d−1}) is the f-vector of some (d − 1)-dimensional simplicial complex if and only if 0 ≤ f_{i+1} ≤ f_i^(i+1) for 0 ≤ i ≤ d − 2.
Statement for uniform hypergraphs
Let A be a set consisting of N distinct i-element subsets of a fixed set U ("the universe") and B be the set of all (i − 1)-element subsets of the sets in A. Expand N as above. Then the cardinality of B is bounded below as follows:
|B| ≥ C(n_i, i − 1) + C(n_{i−1}, i − 2) + … + C(n_j, j − 1).
Lovász' simplified formulation
The following weaker but useful form is due to Lovász. Let A be a set of i-element subsets of a fixed set U ("the universe") and B be the set of all (i − 1)-element subsets of the sets in A. If |A| = C(x, i) then |B| ≥ C(x, i − 1).
In this formulation, x need not be an integer. The value of the binomial expression is C(x, i) = x(x − 1)⋯(x − i + 1)/i!.
Ingredients of the proof
For every positive i, list all i-element subsets a_1 < a_2 < … < a_i of the set ℕ of natural numbers in the colexicographical order. For example, for i = 3, the list begins
123, 124, 134, 234, 125, 135, 235, 145, 245, 345, …
Given a vector f = (f_0, f_1, …, f_{d−1}) with positive integer components, let Δ_f be the subset of the power set 2^ℕ consisting of the empty set together with the first f_{i−1} i-element subsets of ℕ in the list for i = 1, …, d. Then the following conditions are equivalent:
Vector f is the f-vector of a simplicial complex Δ.
Δf is a simplicial complex.
The difficult implication is 1 ⇒ 2.
History
The theorem is named
|
https://en.wikipedia.org/wiki/Intercellular%20adhesion%20molecule
|
In molecular biology, intercellular adhesion molecules (ICAMs) and vascular cell adhesion molecule-1 (VCAM-1) are part of the immunoglobulin superfamily. They are important in inflammation, immune responses and in intracellular signalling events. The ICAM family consists of five members, designated ICAM-1 to ICAM-5. They are known to bind to leucocyte integrins CD11/CD18 such as LFA-1 and Macrophage-1 antigen, during inflammation and in immune responses. In addition, ICAMs may exist in soluble forms in human plasma, due to activation and proteolysis mechanisms at cell surfaces.
Mammalian intercellular adhesion molecules include:
ICAM-1
ICAM2
ICAM3
ICAM4
ICAM5
References
Cell biology
Protein families
|
https://en.wikipedia.org/wiki/Descent%20direction
|
In optimization, a descent direction is a vector p ∈ ℝⁿ that points towards a local minimum x* of an objective function f : ℝⁿ → ℝ.
Computing x_{k+1} by an iterative method, such as line search, defines a descent direction p_k at the kth iterate to be any p_k such that ⟨p_k, ∇f(x_k)⟩ < 0, where ⟨·, ·⟩ denotes the inner product. The motivation for such an approach is that small steps along p_k guarantee that f is reduced, by Taylor's theorem.
Using this definition, the negative of a non-zero gradient is always a descent direction, as ⟨−∇f(x_k), ∇f(x_k)⟩ = −⟨∇f(x_k), ∇f(x_k)⟩ < 0.
Numerous methods exist to compute descent directions, all with differing merits, such as gradient descent or the conjugate gradient method.
More generally, if A is a positive definite matrix, then p_k = −A∇f(x_k) is a descent direction at x_k. This generality is used in preconditioned gradient descent methods.
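A small numerical sketch (the quadratic objective, step size and preconditioner are arbitrary illustrations) confirms both facts: the negative gradient, and the negative gradient premultiplied by a positive definite matrix, each have negative inner product with the gradient and reduce f for a small step:

```python
def f(x, y):
    """Convex quadratic objective used purely for illustration."""
    return x * x + 3 * y * y

def grad_f(x, y):
    """Gradient of f(x, y) = x^2 + 3y^2."""
    return (2 * x, 6 * y)

x, y = 2.0, -1.0
gx, gy = grad_f(x, y)

# Steepest descent: p = -grad f has negative inner product with the gradient...
p = (-gx, -gy)
assert p[0] * gx + p[1] * gy < 0

# ...so a small step along p reduces f
step = 0.01
assert f(x + step * p[0], y + step * p[1]) < f(x, y)

# Preconditioned direction p = -A grad f with A positive definite (diagonal here)
A = (0.5, 0.25)                       # diagonal entries of the preconditioner
p2 = (-A[0] * gx, -A[1] * gy)
assert p2[0] * gx + p2[1] * gy < 0
assert f(x + step * p2[0], y + step * p2[1]) < f(x, y)
```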
See also
Directional derivative
References
Mathematical optimization
|
https://en.wikipedia.org/wiki/Matrix%20chain%20multiplication
|
Matrix chain multiplication (or the matrix chain ordering problem) is an optimization problem concerning the most efficient way to multiply a given sequence of matrices. The problem is not actually to perform the multiplications, but merely to decide the sequence of the matrix multiplications involved. The problem may be solved using dynamic programming.
There are many options because matrix multiplication is associative. In other words, no matter how the product is parenthesized, the result obtained will remain the same. For example, for four matrices A, B, C, and D, there are five possible options:
((AB)C)D = (A(BC))D = (AB)(CD) = A((BC)D) = A(B(CD)).
Although it does not affect the product, the order in which the terms are parenthesized affects the number of simple arithmetic operations needed to compute the product, that is, the computational complexity. The straightforward multiplication of a matrix that is n × m by a matrix that is m × p requires nmp ordinary multiplications and n(m − 1)p ordinary additions. In this context, it is typical to use the number of ordinary multiplications as a measure of the runtime complexity.
If A is a 10 × 30 matrix, B is a 30 × 5 matrix, and C is a 5 × 60 matrix, then
computing (AB)C needs (10×30×5) + (10×5×60) = 1500 + 3000 = 4500 operations, while
computing A(BC) needs (30×5×60) + (10×30×60) = 9000 + 18000 = 27000 operations.
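These counts can be reproduced mechanically; the helper below (names are illustrative) multiplies dimension pairs and accumulates the scalar-multiplication cost of each parenthesization:

```python
def mult_cost(left, right):
    """Scalar multiplications needed to multiply an a*b matrix by a b*c
    matrix, returned together with the dimensions of the result."""
    (a, b), (b2, c) = left, right
    assert b == b2, "inner dimensions must agree"
    return a * b * c, (a, c)

A, B, C = (10, 30), (30, 5), (5, 60)

# (AB)C: 1500 + 3000 = 4500 operations
c1, AB = mult_cost(A, B)
c2, _ = mult_cost(AB, C)
assert c1 + c2 == 4500

# A(BC): 9000 + 18000 = 27000 operations
c3, BC = mult_cost(B, C)
c4, _ = mult_cost(A, BC)
assert c3 + c4 == 27000
```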
Clearly the first method is more efficient. With this information, the problem statement can be refined as "how to determine the optimal parenthesization of a product of n matrices?" The number of possible parenthesizations is given by the (n − 1)th Catalan number, which is O(4^n / n^(3/2)), so checking each possible parenthesization (brute force) would require a run-time that is exponential in the number of matrices, which is very slow and impractical for large n. A quicker solution to this problem can be achieved by breaking up the problem into a set of related subproblems.
A dynamic programming algorithm
To begin,
|
https://en.wikipedia.org/wiki/Business%20object
|
A business object is an entity within a multi-tiered software application that works in conjunction with the data access and business logic layers to transport data.
Business objects separate state from behaviour because they are communicated across the tiers in a multi-tiered system, while the real work of the application is done in the business tier and does not move across the tiers.
Function
Whereas a program may implement classes, which typically end up as objects managing or executing behaviours, a business object usually does nothing itself but holds a set of instance variables or properties, also known as attributes, and associations with other business objects, weaving a map of objects representing the business relationships.
A domain model where business objects do not have behaviour is called an anemic domain model.
Examples
For example, a "Manager" would be a business object where its attributes can be "Name", "Second name", "Age", "Area", "Country" and it could hold a 1-n association with its employees (a collection of "Employee" instances).
Another example would be a concept like "Process" having "Identifier", "Name", "Start date", "End date" and "Kind" attributes and holding an association with the "Employee" (the responsible) that started it.
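The two examples above can be sketched as behaviourless attribute holders, here using Python dataclasses (the language and field types are illustrative choices; the article's examples name only the attributes and associations):

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List, Optional

@dataclass
class Employee:
    name: str
    second_name: str

@dataclass
class Manager:
    # Pure state: attributes plus a 1-n association, no behaviour
    name: str
    second_name: str
    age: int
    area: str
    country: str
    employees: List[Employee] = field(default_factory=list)

@dataclass
class Process:
    identifier: str
    name: str
    start_date: date
    end_date: Optional[date]
    kind: str
    started_by: Optional[Employee] = None  # the responsible employee

boss = Manager("Ada", "Lovelace", 36, "Engineering", "UK")
boss.employees.append(Employee("Charles", "Babbage"))
assert boss.employees[0].second_name == "Babbage"
```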
See also
Active record pattern, design pattern that stores object data in memory in relational databases, with functions to insert, update, and delete records
Business intelligence, a field within information technology that provides decision support and business-critical information based on data
Data access object, design pattern that provides an interface to a type of database or other persistent mechanism, and offers data operations to application calls without exposing database details
Data transfer object, design pattern where an object carries aggregated data between processes to reduce the number of calls
References
Rockford Lhotka, Visual Basic 6.0 Business Objects,
Rockford Lhotka, Expert C# Bus
|
https://en.wikipedia.org/wiki/Syntactic%20foam
|
Syntactic foams are composite materials synthesized by filling a metal, polymer, cementitious or ceramic matrix with hollow spheres called microballoons or cenospheres or non-hollow spheres (e.g. perlite) as aggregates. In this context, "syntactic" means "put together." The presence of hollow particles results in lower density, higher specific strength (strength divided by density), lower coefficient of thermal expansion, and, in some cases, radar or sonar transparency.
History
The term was originally coined by the Bakelite Company, in 1955, for their lightweight composites made of hollow phenolic microspheres bonded to a matrix of phenolic, epoxy, or polyester.
These materials were developed in the early 1960s as improved buoyancy materials for marine applications. Other characteristics later led these materials to aerospace and ground transportation vehicle applications.
Research on syntactic foams has recently been advanced by Nikhil Gupta.
Characteristics
Tailorability is one of the biggest advantages of these materials. The matrix material can be selected from almost any metal, polymer, or ceramic. Microballoons are available in a variety of sizes and materials, including glass microspheres, cenospheres, carbon, and polymers. The most widely used and studied foams are glass microspheres (in epoxy or polymers), and cenospheres or ceramics (in aluminium). One can change the volume fraction of microballoons or use microballoons of different effective density, the latter depending on the average ratio between the inner and outer radii of the microballoons.
A manufacturing method for low density syntactic foams is based on the principle of buoyancy.
Strength
The compressive properties of syntactic foams, in most cases, strongly depend on the properties of the filler particle material. In general, the compressive strength of the material is proportional to its density. Cementitious syntactic foams are reported to achieve compressive strength values greater than while ma
|
https://en.wikipedia.org/wiki/International%20Karate
|
International Karate is a fighting game developed and published by System 3 for the ZX Spectrum in 1985 and ported to various home computers over the following years. In the United States it was published by Epyx in 1986 as World Karate Championship.
It was the first European-developed game to become a major hit in the United States, where it sold over 1.5 million copies. However, it drew controversy for its similarities to Karate Champ (1984), which led to Data East filing a lawsuit against Epyx. International Karate +, a successor which expanded the gameplay, was released in 1987.
Gameplay
The core game is a two-dimensional, one-on-one, versus fighting game. Players take on the roles of martial artists competing in a kumite tournament. Rather than wearing down an opponent's health, the goal is instead to score single solid hits. After each hit, combat stops and both combatants are returned to their starting positions. Depending on how well players hit their opponent, they score either a half-point or a full point. Matches can be quite brief, as only two full points are required to win, and a point can be quickly scored just seconds after a round begins.
In single-player mode, successive opponents increase in difficulty from novice white belts to master black belts. Play continues as long as the player continues to win matches. Between fights, bonus mini-games focusing on rhythm and timing appear, including one in which the player must break a number of stacked boards using the fighter's head. As in later games in the genre, starting specifically with Street Fighter, the fights take place against a variety of backdrops (eight in total) representing different locations around the world: Mount Fuji (Tokyo, Japan), Sydney Harbour (Sydney, Australia), the Statue of Liberty (New York, USA), the Forbidden City (Beijing, China), Christ the Redeemer (Rio de Janeiro, Brazil), the Palace of Westminster (London, England), the Parthenon (Athens, Greece), and the Gr
|
https://en.wikipedia.org/wiki/FIPS%20140
|
The 140 series of Federal Information Processing Standards (FIPS) are U.S. government computer security standards that specify requirements for cryptographic modules.
FIPS 140-2 and FIPS 140-3 are both accepted as current and active. FIPS 140-3 was approved on March 22, 2019 as the successor to FIPS 140-2 and became effective on September 22, 2019. FIPS 140-3 testing began on September 22, 2020, and a small number of validation certificates have been issued. FIPS 140-2 testing is still available until September 21, 2021 (later changed for applications already in progress to April 1, 2022), creating an overlapping transition period of one year. FIPS 140-2 test reports that remain in the CMVP queue will still be granted validations after that date, but all FIPS 140-2 validations will be moved to the Historical List on September 21, 2026, regardless of their actual final validation date.
Purpose of FIPS 140
The National Institute of Standards and Technology (NIST) issues the 140 Publication Series to coordinate the requirements and standards for cryptographic modules which include both hardware and software components for use by departments and agencies of the United States federal government. FIPS 140 does not purport to provide sufficient conditions to guarantee that a module conforming to its requirements is secure, still less that a system built using such modules is secure. The requirements cover not only the cryptographic modules themselves but also their documentation and (at the highest security level) some aspects of the comments contained in the source code.
User agencies desiring to implement cryptographic modules should confirm that the module they are using is covered by an existing validation certificate. FIPS 140-1 and FIPS 140-2 validation certificates specify the exact module name, hardware, software, firmware, and/or applet version numbers. For Levels 2 and higher, the operating platform upon which the validation is applicable is also listed. Ve
|
https://en.wikipedia.org/wiki/Reversible%20computing
|
Reversible computing is any model of computation where the computational process, to some extent, is time-reversible. In a model of computation that uses deterministic transitions from one state of the abstract machine to another, a necessary condition for reversibility is that the relation of the mapping from states to their successors must be one-to-one. Reversible computing is a form of unconventional computing.
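The one-to-one condition can be illustrated with the controlled-NOT (CNOT) gate, a standard reversible logic gate; the sketch below checks that its state mapping is a bijection and its own inverse:

```python
def cnot(control, target):
    # CNOT flips the target bit iff the control bit is 1,
    # leaving the control bit unchanged.
    return control, target ^ control

states = [(a, b) for a in (0, 1) for b in (0, 1)]
images = [cnot(*s) for s in states]

# The mapping on the four 2-bit states is a permutation (one-to-one),
# so every output state has a unique predecessor: no information is erased.
assert sorted(images) == sorted(states)

# Applying the gate twice recovers the input: CNOT is its own inverse.
assert all(cnot(*cnot(*s)) == s for s in states)
```

Contrast this with an irreversible gate such as AND, which maps three of the four input states to 0; knowing the output does not determine the input, so the mapping is not one-to-one.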
Due to the unitarity of quantum mechanics, quantum circuits are reversible, as long as they do not "collapse" the quantum states on which they operate.
Reversibility
There are two major, closely related types of reversibility that are of particular interest for this purpose: physical reversibility and logical reversibility.
A process is said to be physically reversible if it results in no increase in physical entropy; it is isentropic. There is a style of circuit design ideally exhibiting this property that is referred to as charge recovery logic, adiabatic circuits, or adiabatic computing (see Adiabatic process). Although in practice no nonstationary physical process can be exactly physically reversible or isentropic, there is no known limit to the closeness with which we can approach perfect reversibility, in systems that are sufficiently well isolated from interactions with unknown external environments, when the laws of physics describing the system's evolution are precisely known.
A motivation for the study of technologies aimed at implementing reversible computing is that they offer what is predicted to be the only potential way to improve the computational energy efficiency (i.e., useful operations performed per unit energy dissipated) of computers beyond the fundamental von Neumann–Landauer limit of energy dissipated per irreversible bit operation. Although the Landauer limit was millions of times below the energy consumption of computers in the 2000s and thousands of times less in the 2010s, proponents of reversible computing argue that this
|
https://en.wikipedia.org/wiki/XHTML%20Friends%20Network
|
XHTML Friends Network (XFN) is an HTML microformat developed by Global Multimedia Protocols Group that provides a simple way to represent human relationships using links. XFN enables web authors to indicate relationships to the people in their blogrolls by adding one or more keywords as the rel attribute to their links. XFN was the first microformat, introduced in December 2003.
Example
A friend of Jimmy Example could indicate that relationship by publishing a link on their site like this:
<a href="http://jimmy.example.com/" rel="friend">Jimmy Example</a>
Multiple values may be used, so if that friend has met Jimmy:
<a href="http://jimmy.example.com/" rel="friend met">Jimmy Example</a>
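XFN data like the links above can be harvested with ordinary HTML tooling; a minimal sketch using Python's standard-library parser (the class name is illustrative):

```python
from html.parser import HTMLParser

class XFNParser(HTMLParser):
    """Collect (href, rel-values) pairs from <a> tags carrying a rel attribute."""
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            d = dict(attrs)
            if "rel" in d and "href" in d:
                # rel is a space-separated list of relationship keywords.
                self.links.append((d["href"], d["rel"].split()))

p = XFNParser()
p.feed('<a href="http://jimmy.example.com/" rel="friend met">Jimmy Example</a>')
print(p.links)  # [('http://jimmy.example.com/', ['friend', 'met'])]
```

Splitting the rel attribute on whitespace is what allows multiple XFN values, such as "friend met", to coexist on one link.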
See also
FOAF
hCard
References
External links
XFN at the Global Multimedia Protocols Group
Microformats
Social networking services
XML-based standards
Semantic HTML
|
https://en.wikipedia.org/wiki/Sheet%20resistance
|
Sheet resistance is the resistance of a square piece of a thin material with contacts made to two opposite sides of the square. It is usually a measurement of electrical resistance of thin films that are uniform in thickness. It is commonly used to characterize materials made by semiconductor doping, metal deposition, resistive paste printing, and glass coating. Examples of these processes are: doped semiconductor regions (e.g., silicon or polysilicon), and the resistors that are screen printed onto the substrates of thick-film hybrid microcircuits.
The utility of sheet resistance as opposed to resistance or resistivity is that it is directly measured using a four-terminal sensing measurement (also known as a four-point probe measurement) or indirectly by using a non-contact eddy-current-based testing device. Sheet resistance is invariable under scaling of the film contact and therefore can be used to compare the electrical properties of devices that are significantly different in size.
Calculations
Sheet resistance is applicable to two-dimensional systems in which thin films are considered two-dimensional entities. When the term sheet resistance is used, it is implied that the current is along the plane of the sheet, not perpendicular to it.
In a regular three-dimensional conductor, the resistance can be written as
R = ρ L / A
where
ρ is the material resistivity,
L is the length,
A is the cross-sectional area, which can be split into:
width W,
thickness t.
Upon combining the resistivity with the thickness, the resistance can then be written as
R = ρ L / (W t) = (ρ / t) (L / W) = R_s (L / W)
where R_s = ρ / t is the sheet resistance. If the film thickness t is known, the bulk resistivity ρ (in Ω·m) can be calculated by multiplying the sheet resistance by the film thickness in m:
ρ = R_s · t
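These relationships are easy to check numerically; a short sketch (the copper resistivity figure is an assumed textbook value, not from the source):

```python
def sheet_resistance(resistivity_ohm_m, thickness_m):
    # R_s = rho / t, in ohms per square.
    return resistivity_ohm_m / thickness_m

def film_resistance(resistivity_ohm_m, length_m, width_m, thickness_m):
    # R = rho * L / (W * t) = R_s * (L / W): only the aspect ratio L/W
    # matters, not the absolute size of the square.
    return sheet_resistance(resistivity_ohm_m, thickness_m) * (length_m / width_m)

# A 100 nm copper film (rho ~ 1.7e-8 ohm-m, assumed):
rs = sheet_resistance(1.7e-8, 100e-9)
print(rs)  # 0.17 ohms per square

# A square of any side length has the same resistance -- the "per square" unit:
assert film_resistance(1.7e-8, 1e-3, 1e-3, 100e-9) == rs
assert film_resistance(1.7e-8, 5e-2, 5e-2, 100e-9) == rs
```

This scale invariance is exactly why sheet resistance, rather than raw resistance, is used to compare films deposited on devices of very different sizes.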
Units
Sheet resistance is a special case of resistivity for a uniform sheet thickness. Commonly, resistivity (also known as bulk resistivity, specific electrical resistivity, or volume resistivity) is in units of Ω·m, which is more completely stated in un
|
https://en.wikipedia.org/wiki/Suzhou%20numerals
|
The Suzhou numerals are a numeral system used in China before the introduction of Hindu numerals; the system has also been known under several other names.
History
The Suzhou numeral system is the only surviving variation of the rod numeral system. The rod numeral system is a positional numeral system used by the Chinese in mathematics. Suzhou numerals are a variation of the Southern Song rod numerals.
Suzhou numerals were used as shorthand in number-intensive areas of commerce such as accounting and bookkeeping. At the same time, standard Chinese numerals were used in formal writing, akin to spelling out the numbers in English. Suzhou numerals were once popular in Chinese marketplaces, such as those in Hong Kong, and in Chinese restaurants in Malaysia before the 1990s, but they have gradually been supplanted by Arabic numerals. This is similar to what happened with Roman numerals, which were used in ancient and medieval Europe for mathematics and commerce. Nowadays, the Suzhou numeral system is used only for displaying prices in Chinese markets or on traditional handwritten invoices.
Symbols
In the Suzhou numeral system, special symbols are used for digits instead of the Chinese characters. The digits of the Suzhou numerals are defined between U+3021 and U+3029 in Unicode. An additional three code points starting from U+3038 were added later.
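The code-point layout described above can be expressed directly; a sketch building the digit tables from the stated Unicode ranges (the interpretation of the three later code points as ten, twenty, and thirty is an assumption stated here, not in the text):

```python
# Suzhou (Hangzhou) numeral digits 1-9 occupy U+3021..U+3029,
# so digit d sits at code point 0x3020 + d.
digits = {d: chr(0x3020 + d) for d in range(1, 10)}

# The three code points added later at U+3038..U+303A are assumed here
# to encode ten, twenty, and thirty.
tens = {10 * (i + 1): chr(0x3038 + i) for i in range(3)}

print(digits[1], digits[9], tens[30])
```

Because the layout is contiguous, converting a single digit to its Suzhou glyph needs no lookup table at all, just the offset arithmetic shown in the comprehension.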
The symbols for 5 to 9 are derived from those for 0 to 4 by adding a vertical bar on top, which is similar to adding an upper bead which represents a value of 5 in an abacus. The resemblance makes the Suzhou numerals intuitive to use together with the abacus as the traditional calculation tool.
The numbers one, two, and three are all represented by vertical bars. This can cause confusion when they appear next to each other. Standard Chinese ideographs are often used in this situation to avoid ambiguity. For example, "21" is written as "" instead of "" which can be confused with "3" (). The first
|
https://en.wikipedia.org/wiki/Jacob%20Palis
|
Jacob Palis Jr. (born 15 March 1940) is a Brazilian mathematician and professor. Palis' research interests are mainly dynamical systems and differential equations. Some themes are global stability and hyperbolicity, bifurcations, attractors and chaotic systems.
Biography
Jacob Palis was born in Uberaba, Minas Gerais. His father was a Lebanese immigrant, and his mother was a Syrian immigrant. The couple had eight children (five men and three women), and Jacob was the youngest. His father was a merchant, owner of a large store, and supported and funded the studies of his children. Palis said that he already enjoyed mathematics in his childhood.
At 16, Palis moved to Rio de Janeiro to study engineering at the University of Brazil – now UFRJ. He placed first in the entrance exam, but was not old enough to be accepted; he then had to take the university's entry exam again a year later, at which he again obtained first place. He completed the course in 1962 with honours, receiving the award for best student.
In 1964, he moved to the United States. In 1966 he obtained his master's degree in mathematics under the guidance of Stephen Smale at the University of California, Berkeley, and in 1968 his PhD, with the thesis On Morse-Smale Diffeomorphisms, again with Smale as advisor.
In 1968, he returned to Brazil and became a researcher at the Instituto Nacional de Matemática Pura e Aplicada (IMPA) in Rio de Janeiro, Brazil.
Since 1973 he has held a permanent position as professor at IMPA, where he was director from 1993 until 2003. He was Secretary-General of the Third World Academy of Sciences from 2004 to 2006, was elected its president in 2006, and remained in the position until December 2012. He was also president of the International Mathematical Union from 1999 to 2002, and president of the Brazilian Academy of Sciences from 2007 to 2016. Palis has advised more than forty PhD students from more than ten countries.
Awards and honors
Palis ha
|
https://en.wikipedia.org/wiki/Comparison%20of%20DOS%20operating%20systems
|
This article details versions of MS-DOS, IBM PC DOS, and at least partially compatible disk operating systems. It does not include the many other operating systems called "DOS" which are unrelated to IBM PC compatibles.
Historical and licensing information
Originally MS-DOS was designed to be an operating system that could run on any computer with an 8086-family microprocessor. It competed with other operating systems written for such computers, such as CP/M-86 and UCSD Pascal. Each computer would have its own distinct hardware and its own version of MS-DOS, a situation similar to the one that existed for CP/M, with MS-DOS emulating the same solution as CP/M to adapt for different hardware platforms. So there were many different original equipment manufacturer (OEM) versions of MS-DOS for different hardware. But the greater speed attainable by direct control of hardware was of particular importance, especially when running computer games. So very soon an IBM-compatible architecture became the goal, and before long all 8086-family computers closely emulated IBM hardware, and a single version of MS-DOS for a fixed hardware platform was all that was needed for the market. This specific version of MS-DOS is the version that is discussed here, as all other versions of MS-DOS died out with their respective systems. One version of such a generic MS-DOS (Z-DOS) is mentioned here, but there were dozens more. All these were for personal computers that used an 8086-family microprocessor, but which were not fully IBM PC compatible.
Technical specifications
See also
Timeline of DOS operating systems
OS/2
List of operating systems
Comparison of Linux distributions
Comparison of operating systems
References
External links
Detailed timeline of DOS variants
DR-DOS version dates
Comparison of DOS
DOS
|
https://en.wikipedia.org/wiki/Translation%20management%20system
|
A translation management system (TMS), formerly globalization management system (GMS), is a type of software for automating many parts of the human language translation process and maximizing translator efficiency. The idea of a translation management system is to automate all repeatable and non-essential work that can be done by software/systems, leaving only the creative work of translation and review to be done by human beings. A translation management system generally includes at least two types of technology: process management technology to automate the flow of work, and linguistic technology to aid the translator.
In a typical TMS, process management technology is used to monitor source language content for changes and route the content to various translators and reviewers. These translators and reviewers may be located across the globe and typically access the TMS via the Internet.
Translation management systems are most commonly used today for managing various aspects of the translation business.
Naming
Although translation management systems (TMS) seems to be the currently favoured term in the language localisation industry, these solutions are also known as globalization management systems (GMS) or global content management systems (GCMS). They work with content management systems (CMS) as separate, but linked programs or as simple add-ons that can answer specific multilingual requirements.
Overview
A TMS typically connects to a CMS to manage foreign language content. It tends to address the following categories in different degrees, depending on each offering:
Business administration: project management, resource management, financial management. This category is traditionally related to enterprise resource planning (ERP) tools.
Business process management: workflow, collaboration, content connectors. This category is traditionally the domain of specialised project management tools.
Language management: integrated translation memory, webtop tran
|
https://en.wikipedia.org/wiki/DIY%20audio
|
DIY audio means "do it yourself" audio: rather than buying a piece of possibly expensive audio equipment, such as a high-end amplifier or speaker, the person practicing DIY audio makes it themselves. Alternatively, a DIYer may take an existing manufactured item of vintage era and update or modify it. The benefits of doing so include the satisfaction of creating something enjoyable, the possibility that the equipment made or updated is of higher quality than commercially available products, and the pleasure of owning a custom-made device for which no exact equivalent is marketed. Other motivations for DIY audio can include getting audio components at a lower cost, the entertainment of using the item, and being able to ensure the quality of workmanship.
History
Audio DIY came to prominence in the 1950s to 1960s, as audio reproduction was relatively new and the technology complex. Audio reproduction equipment, and in particular high performance equipment, was not generally offered at the retail level. Kits and designs were available for consumers to build their own equipment. Famous vacuum tube kits from Dynaco, Heathkit, and McIntosh, as well as solid state (transistor) kits from Hafler, allowed consumers to build their own high-fidelity systems. Books and magazines were published which explained new concepts regarding the design and operation of vacuum tube and (later) transistor circuits.
While audio equipment has become easily accessible in the current day and age, there still exists an interest in building and repairing one's own equipment including, but not limited to, pre-amplifiers, amplifiers, speakers, cables, CD players and turntables. Today, a network of companies, parts vendors, and on-line communities exists to foster this interest. DIY is especially active in loudspeakers and in tube amplification. Both are relatively simple to design and fabricate without access to sophisticated industrial equipment. Both enable the builder to pick and choose
|
https://en.wikipedia.org/wiki/Darboux%27s%20theorem
|
In differential geometry, a field in mathematics, Darboux's theorem is a theorem providing a normal form for special classes of differential 1-forms, partially generalizing the Frobenius integration theorem. It is named after Jean Gaston Darboux who established it as the solution of the Pfaff problem.
It is a foundational result in several fields, the chief among them being symplectic geometry. Indeed, one of its many consequences is that any two symplectic manifolds of the same dimension are locally symplectomorphic to one another. That is, every 2n-dimensional symplectic manifold can be made to look locally like the linear symplectic space R^(2n) with its canonical symplectic form.
There is also an analogous consequence of the theorem applied to contact geometry.
Statement
Suppose that θ is a differential 1-form on an n-dimensional manifold, such that dθ has constant rank p. Then
if θ ∧ (dθ)^p = 0 everywhere, then there is a local system of coordinates (x_1, ..., x_(n−p), y_1, ..., y_p) in which θ = x_1 dy_1 + ... + x_p dy_p;
if θ ∧ (dθ)^p ≠ 0 everywhere, then there is a local system of coordinates (x_1, ..., x_(n−p), y_1, ..., y_p) in which θ = x_1 dy_1 + ... + x_p dy_p + dx_(p+1).
Darboux's original proof used induction on p, and the theorem can be equivalently presented in terms of distributions or of differential ideals.
Frobenius' theorem
Darboux's theorem for p = 1 ensures that any 1-form θ ≠ 0 satisfying θ ∧ dθ = 0 can be written as θ = x_1 dy_1 in some coordinate system (x_1, ..., x_(n−1), y_1).
This recovers one of the formulations of Frobenius' theorem in terms of differential forms: if I is the differential ideal generated by θ, then θ ∧ dθ = 0 implies the existence of a coordinate system where I is actually generated by dy_1.
Darboux's theorem for symplectic manifolds
Suppose that ω is a symplectic 2-form on an n = 2m dimensional manifold M. In a neighborhood of each point p of M, by the Poincaré lemma, there is a 1-form θ with dθ = ω. Moreover, θ satisfies the first set of hypotheses in Darboux's theorem, so locally there is a coordinate chart U near p in which θ = x_1 dy_1 + ... + x_m dy_m.
Taking an exterior derivative now shows ω = dθ = dx_1 ∧ dy_1 + ... + dx_m ∧ dy_m.
The chart U is said to be a Darboux chart around p. The manifold M can be covered by such charts.
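For reference, the canonical Darboux chart can be written out explicitly; this is the standard normal form on R^(2m), stated here as a sketch:

```latex
% In a Darboux chart (x_1, \dots, x_m, y_1, \dots, y_m), the primitive
% 1-form \theta and the symplectic form \omega = d\theta read
\theta = \sum_{i=1}^{m} x_i \, dy_i,
\qquad
\omega = d\theta = \sum_{i=1}^{m} dx_i \wedge dy_i .
```

In particular, for m = 1 this is just ω = dx ∧ dy, the standard area form on the plane.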
To state this differ
|