https://en.wikipedia.org/wiki/The%20World%20Museum
|
The World Museum was a full-page illustrated feature in some American Sunday newspapers, running from May 9, 1937, until January 30, 1938. Devised and drawn by Holling Clancy Holling (1900–1973), it was also known as The World Museum Dioramas.
The Evening Star in Washington and the Baltimore American both published the dioramas. Publication in the Evening Star stopped abruptly in February 1938, even though the next diorama, Log Cabin Days (scheduled for February 6, 1938), had been announced in the January 30, 1938 issue along with Roman Gladiators. No announcement of the cancellation seems to have been made.
Format
Each diorama was published in the Sunday edition and occupied a full page in color. It featured all the parts to be cut out by the reader, with detailed instructions for cutting out the pictures and assembling them into a diorama, using wrapping paper to stiffen the structure. Each new diorama had a paragraph explaining the diorama itself from a historical or geographical point of view. Subjects included historical events, natural wonders and international ethnographic scenes.
List of Dioramas
These are the dioramas published from May 1937 to January 1938 in the Washington, D.C. Evening Star. The names are the titles as published in the newspapers. Names in [ ] are clarifying comments for titles that are not clear.
May 1937:
The Coronation of King George the Sixth and Queen Elizabeth
Treasure Hunters of the Sea
Castles in Spain
Dragons that Walked on the Water [Viking Ship]
June 1937:
The Creeping Wall [Glacier]
The Indian Buffalo Hunters
The Working Elephant
Making a Motion Picture
July 1937:
When the Liberty Bell Rang
When Dinosaurs Roamed the Earth
Life in a Dutch Village
Tibetan Devil Dancers
August 1937:
China Clipper [A Small Plane]
Fulton's First Steamboat
Whale Hunting
African Waterhole
September 1937:
An Indian School
Perry's Victory [War of 1812]
Polish Village Festival
Balboa at the Pacific
October 1937:
The Ohio M
|
https://en.wikipedia.org/wiki/Isothermal%E2%80%93isobaric%20ensemble
|
The isothermal–isobaric ensemble (constant temperature and constant pressure ensemble) is a statistical mechanical ensemble that maintains constant temperature $T$ and constant applied pressure $p$. It is also called the $NpT$-ensemble, where the number of particles $N$ is also kept as a constant. This ensemble plays an important role in chemistry as chemical reactions are usually carried out under constant pressure conditions. The NPT ensemble is also useful for measuring the equation of state of model systems whose virial expansion for pressure cannot be evaluated, or systems near first-order phase transitions.
In the ensemble, the probability of a microstate $i$ is $Z^{-1} e^{-\beta (E(i) + p V(i))}$, where $Z$ is the partition function, $E(i)$ is the internal energy of the system in microstate $i$, and $V(i)$ is the volume of the system in microstate $i$.
The probability of a macrostate is $Z^{-1} e^{-\beta (E + p V - T S)} = Z^{-1} e^{-\beta G}$, where $G$ is the Gibbs free energy.
Derivation of key properties
The partition function for the $NpT$-ensemble can be derived from statistical mechanics by beginning with a system of $N$ identical atoms described by a Hamiltonian of the form $\frac{\mathbf{p}^2}{2m} + U(\mathbf{r}^N)$ and contained within a box of volume $V = L^3$. This system is described by the partition function of the canonical ensemble in 3 dimensions:
$$Z^{\text{sys}}(N, V, T) = \frac{1}{\Lambda^{3N} N!} \int_0^L \cdots \int_0^L d\mathbf{r}^N \, e^{-\beta U(\mathbf{r}^N)},$$
where $\Lambda = \sqrt{h^2 \beta / (2 \pi m)}$, the thermal de Broglie wavelength ($\beta = 1/k_B T$ and $k_B$ is the Boltzmann constant), and the factor $1/N!$ (which accounts for indistinguishability of particles) both ensure normalization of entropy in the quasi-classical limit. It is convenient to adopt a new set of coordinates defined by $\mathbf{r}_i = L \mathbf{s}_i$ such that the partition function becomes
$$Z^{\text{sys}}(N, V, T) = \frac{V^N}{\Lambda^{3N} N!} \int_0^1 \cdots \int_0^1 d\mathbf{s}^N \, e^{-\beta U(\mathbf{s}^N)}.$$
If this system is then brought into contact with a bath of volume $V_0$ at constant temperature and pressure containing an ideal gas with total particle number $M$ such that $M - N \gg N$, the partition function of the whole system is simply the product of the partition functions of the subsystems:
$$Z^{\text{tot}}(N, M, V, V_0, T) = \frac{V^N (V_0 - V)^{M-N}}{\Lambda^{3M} \, N! \, (M-N)!} \int d\mathbf{s}^N \, e^{-\beta U(\mathbf{s}^N)}.$$
The integral over the scaled coordinates of the ideal-gas bath is simply $1$. In the limit that $V_0 \to \infty$, $M \to \infty$ while $M/V_0$ stays constant, a change in volume of the system under study will not change the pressure $p$ of the whole system
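A minimal sketch of how this ensemble is sampled in practice (my own illustration, not part of the derivation above): a Metropolis Monte Carlo volume move whose acceptance rule follows from the $V^N e^{-\beta(U + pV)}$ weight derived here. The toy potential is an assumption chosen only to make the script self-contained.

```python
import math
import random

# NPT Monte Carlo volume-move sketch for a hypothetical toy system.
# Acceptance rule: exp(-beta*(dU + p*dV) + N*log(V_new/V_old)),
# the N*log term coming from the V^N factor in the NPT partition function.

def potential_energy(V, N):
    # Toy mean-field repulsion ~ N^2/V (an assumption, standing in
    # for a real pair potential).
    return N * N / V

def volume_move(V, N, beta, p, dV_max=0.5):
    V_new = V + random.uniform(-dV_max, dV_max)
    if V_new <= 0:
        return V  # reject unphysical volumes
    dU = potential_energy(V_new, N) - potential_energy(V, N)
    # log of the Metropolis acceptance probability for an NPT volume move
    log_acc = -beta * (dU + p * (V_new - V)) + N * math.log(V_new / V)
    if math.log(random.random() + 1e-300) < min(0.0, log_acc):
        return V_new
    return V

beta, p, N = 1.0, 2.0, 50
V = 25.0
samples = []
for step in range(100_000):
    V = volume_move(V, N, beta, p)
    if step > 10_000:
        samples.append(V)
print("mean volume:", sum(samples) / len(samples))
```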
|
https://en.wikipedia.org/wiki/XOR%20gate
|
XOR gate (sometimes EOR, or EXOR and pronounced as Exclusive OR) is a digital logic gate that gives a true (1 or HIGH) output when the number of true inputs is odd. An XOR gate implements an exclusive or (⊕) from mathematical logic; that is, a true output results if one, and only one, of the inputs to the gate is true. If both inputs are false (0/LOW) or both are true, a false output results. XOR represents the inequality function, i.e., the output is true if the inputs are not alike, otherwise the output is false. A way to remember XOR is "must have one or the other but not both".
An XOR gate may serve as a "programmable inverter" in which one input determines whether to invert the other input, or to simply pass it along with no change. Hence it functions as an inverter (a NOT gate) which may be activated or deactivated by a switch.
XOR can also be viewed as addition modulo 2. As a result, XOR gates are used to implement binary addition in computers. A half adder consists of an XOR gate and an AND gate. The gate is also used in subtractors and comparators.
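As a quick illustration (my own sketch, not from the article), the view of XOR as addition modulo 2 gives the half adder directly: the XOR gate produces the sum bit and the AND gate the carry bit.

```python
# XOR as addition modulo 2, and a half adder built from XOR + AND.
def half_adder(a: int, b: int) -> tuple[int, int]:
    s = a ^ b      # sum bit: XOR (addition modulo 2)
    carry = a & b  # carry bit: AND
    return s, carry

for a in (0, 1):
    for b in (0, 1):
        print(a, b, half_adder(a, b))
# prints: 0 0 (0, 0) / 0 1 (1, 0) / 1 0 (1, 0) / 1 1 (0, 1)
```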
The algebraic expressions $A \cdot \overline{B} + \overline{A} \cdot B$, $(A + B) \cdot (\overline{A} + \overline{B})$, or $A \oplus B$ all represent the XOR gate with inputs A and B. The behavior of XOR is summarized in the truth table shown on the right.
Symbols
There are three schematic symbols for XOR gates: the traditional ANSI and DIN symbols and the IEC symbol. In some cases, the DIN symbol is used with ⊕ instead of ≢. For more information see Logic Gate Symbols.
The "=1" on the IEC symbol indicates that the output is activated by only one active input.
The logic symbols ⊕, Jpq, and ⊻ can be used to denote an XOR operation in algebraic expressions.
C-like languages use the caret symbol ^ to denote bitwise XOR. (Note that the caret does not denote logical conjunction (AND) in these languages, despite the similarity of symbol.)
Implementation
The XOR gate is most commonly implemented using MOSFET circuits. Some of those implementations include:
CMOS
The Complementary metal–oxide–semiconductor (CM
|
https://en.wikipedia.org/wiki/Holditch%27s%20theorem
|
In plane geometry, Holditch's theorem states that if a chord of fixed length is allowed to rotate inside a convex closed curve, then the locus of a point on the chord a distance p from one end and a distance q from the other is a closed curve whose enclosed area is less than that of the original curve by $\pi p q$. The theorem was published in 1858 by Rev. Hamnet Holditch. While not mentioned by Holditch, the proof of the theorem requires an assumption that the chord be short enough that the traced locus is a simple closed curve.
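A numerical sanity check (my own sketch, using the simplest instance): for a chord of length p + q sliding inside a circle of radius R, the tracked point stays at distance sqrt(R² − pq) from the center, so the enclosed areas differ by exactly πpq.

```python
import math

# Holditch's theorem checked on a circle of radius R: a chord of length
# p + q slides inside the circle; the point at distance p from one end
# traces a concentric circle of radius sqrt(R^2 - p*q).
R, p, q = 3.0, 0.8, 0.5
L = p + q
h = math.sqrt(R**2 - (L / 2)**2)   # distance from center to the chord
x, y = -L / 2 + p, h               # tracked point, chord placed horizontally
r_locus = math.hypot(x, y)         # radius of the traced locus

area_outer = math.pi * R**2
area_locus = math.pi * r_locus**2
print(area_outer - area_locus)     # ~ pi * p * q
print(math.pi * p * q)
```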
Observations
The theorem is included as one of Clifford Pickover's 250 milestones in the history of mathematics. Some peculiarities of the theorem include that the area formula is independent of both the shape and the size of the original curve, and that the area formula is the same as for that of the area of an ellipse with semi-axes p and q. The theorem's author was a president of Caius College, Cambridge.
Extensions
Broman gives a more precise statement of the theorem, along with a generalization. The generalization allows, for example, consideration of the case in which the outer curve is a triangle, so that the conditions of the precise statement of Holditch's theorem do not hold because the paths of the endpoints of the chord have retrograde portions (portions that retrace themselves) whenever an acute angle is traversed. Nevertheless, the generalization shows that if the chord is shorter than any of the triangle's altitudes, and is short enough that the traced locus is a simple curve, Holditch's formula for the in-between area is still correct (and remains so if the triangle is replaced by any convex polygon with a short enough chord). However, other cases result in different formulas.
|
https://en.wikipedia.org/wiki/Exhaustion%20by%20compact%20sets
|
In mathematics, especially general topology and analysis, an exhaustion by compact sets of a topological space $X$ is a nested sequence of compact subsets $K_1, K_2, \ldots$ of $X$ (i.e. $K_1 \subseteq K_2 \subseteq K_3 \subseteq \cdots$), such that each $K_n$ is contained in the interior of $K_{n+1}$, i.e. $K_n \subseteq \operatorname{int}(K_{n+1})$ for each $n$, and $X = \bigcup_{n=1}^{\infty} K_n$. A space admitting an exhaustion by compact sets is called exhaustible by compact sets.
For example, consider $X = \mathbb{R}^n$ and the sequence of closed balls $K_m = \{ x : |x| \le m \}$.
Occasionally some authors drop the requirement that $K_n$ is in the interior of $K_{n+1}$, but then the property becomes the same as the space being σ-compact, namely a countable union of compact subsets.
Properties
The following are equivalent for a topological space $X$:
$X$ is exhaustible by compact sets.
$X$ is σ-compact and weakly locally compact.
$X$ is Lindelöf and weakly locally compact.
(where weakly locally compact means locally compact in the weak sense that each point has a compact neighborhood).
The hemicompact property is intermediate between exhaustible by compact sets and σ-compact. Every space exhaustible by compact sets is hemicompact and every hemicompact space is σ-compact, but the reverse implications do not hold. For example, the Arens-Fort space and the Appert space are hemicompact, but not exhaustible by compact sets (because not weakly locally compact), and the set of rational numbers with the usual topology is σ-compact, but not hemicompact.
Every regular space exhaustible by compact sets is paracompact.
Notes
|
https://en.wikipedia.org/wiki/TOSEC
|
The Old School Emulation Center (TOSEC) is a retrocomputing initiative founded in February 2000, initially for the renaming and cataloging of software files intended for use in emulators, that later extended its work to the cataloging and preservation of applications, firmware, device drivers, games, operating systems, magazines and magazine cover disks, comic books, product box art, videos of advertisements and training, related TV series and more. The catalogs provide an overview and cryptographic identification of media, which allows for automatic integrity checking and renaming of files, checking for the completeness of software collections and more, using management utilities like ClrMamePro or ROMVault.
As the project grew in popularity it started to become a de facto standard for the management of retrocomputing and emulation resources. In 2013 many TOSEC catalogued files started to be included in the Internet Archive after the work quality and attention to detail put into the catalogs was praised by some of their archivists.
TOSEC usually makes two releases per year.
As of release 2023-01-23, TOSEC catalogs span ~195 unique brands and hundreds of unique computing platforms and continue to grow. As of this time the project had identified and cataloged more than 1.2 million different software images and sets (more than half of that for Commodore systems), describing a source set of about 8 TB of software and resources.
See also
Digital preservation
MobyGames - Video game cataloging project
|
https://en.wikipedia.org/wiki/NAND%20logic
|
The NAND Boolean function has the property of functional completeness. This means that any Boolean expression can be re-expressed by an equivalent expression utilizing only NAND operations. For example, the function NOT(x) may be equivalently expressed as NAND(x,x). In the field of digital electronic circuits, this implies that it is possible to implement any Boolean function using just NAND gates.
The mathematical proof for this was published by Henry M. Sheffer in 1913 in the Transactions of the American Mathematical Society (Sheffer 1913). A similar case applies to the NOR function, and this is referred to as NOR logic.
NAND
A NAND gate is an inverted AND gate. It has the following truth table:
In CMOS logic, if both of the A and B inputs are high, then both the NMOS transistors (bottom half of the diagram) will conduct, neither of the PMOS transistors (top half) will conduct, and a conductive path will be established between the output and Vss (ground), bringing the output low. If both of the A and B inputs are low, then neither of the NMOS transistors will conduct, while both of the PMOS transistors will conduct, establishing a conductive path between the output and Vdd (voltage source), bringing the output high. If either of the A or B inputs is low, one of the NMOS transistors will not conduct, one of the PMOS transistors will, and a conductive path will be established between the output and Vdd (voltage source), bringing the output high. As the only configuration of the two inputs that results in a low output is when both are high, this circuit implements a NAND (NOT AND) logic gate.
Making other gates by using NAND gates
A NAND gate is a universal gate, meaning that any other gate can be represented as a combination of NAND gates.
NOT
A NOT gate is made by joining the inputs of a NAND gate together. Since a NAND gate is equivalent to an AND gate followed by a NOT gate, joining the inputs of a NAND gate leaves only the NOT gate.
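A compact way to see this universality (my own sketch, with gates modeled as plain functions) is to build NOT, AND, and OR from NAND alone; the AND and OR constructions here anticipate the sections that follow.

```python
# Building other gates from NAND alone (functional completeness).
def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a):        # NOT(x) = NAND(x, x): joined inputs
    return nand(a, a)

def and_(a, b):     # AND = NOT(NAND): invert the NAND output
    return not_(nand(a, b))

def or_(a, b):      # OR via De Morgan: NAND of the inverted inputs
    return nand(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
    assert not_(a) == (1 - a)
print("NAND-built NOT/AND/OR match the truth tables")
```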
AND
An AND gate is m
|
https://en.wikipedia.org/wiki/XNOR%20gate
|
The XNOR gate (sometimes ENOR, EXNOR, NXOR, XAND and pronounced as Exclusive NOR) is a digital logic gate whose function is the logical complement of the Exclusive OR (XOR) gate. It is equivalent to the logical connective () from mathematical logic, also known as the material biconditional. The two-input version implements logical equality, behaving according to the truth table to the right, and hence the gate is sometimes called an "equivalence gate". A high output (1) results if both of the inputs to the gate are the same. If one but not both inputs are high (1), a low output (0) results.
The algebraic notation used to represent the XNOR operation is $S = A \odot B$. The algebraic expressions $A \cdot B + \overline{A} \cdot \overline{B}$ and $\overline{A \oplus B}$ both represent the XNOR gate with inputs A and B.
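A minimal sketch (mine, not from the article) of the two-input XNOR as a logical-equality test:

```python
# Two-input XNOR (logical equality), defined as NOT(XOR).
def xnor(a: int, b: int) -> int:
    return 1 - (a ^ b)

for a in (0, 1):
    for b in (0, 1):
        print(a, b, xnor(a, b))   # output is 1 exactly when a == b
```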
Symbols
There are two symbols for XNOR gates: one with distinctive shape and one with rectangular shape and label. Both symbols for the XNOR gate are that of the XOR gate with an added inversion bubble.
Hardware description
XNOR gates are represented in most TTL and CMOS IC families. The standard 4000 series CMOS IC is the 4077, and the TTL IC is the 74266 (although an open-collector implementation). Both include four independent, two-input, XNOR gates. The (now obsolete) 74S135 implemented four two-input XOR/XNOR gates or two three-input XNOR gates.
Both the TTL 74LS implementation, the 74LS266, and the CMOS gates (CD4077, 74HC4077, 74HC266 and so on) are available from most semiconductor manufacturers such as Texas Instruments or NXP. They are usually available in both through-hole DIP and SOIC formats (SOIC-14, SOC-14 or TSSOP-14).
Datasheets are readily available in most datasheet databases and suppliers.
Pinout
Both the 4077 and 74x266 devices (SN74LS266, 74HC266, 74266, etc.) have the same pinout diagram, as follows:
Pinout diagram of the 74HC266N, 74LS266 and CD4077 quad XNOR plastic dual in-line package 14-pin package (PDIP-14) ICs.
Alternatives
If a specific type of gate is not available, a circuit t
|
https://en.wikipedia.org/wiki/Askaryan%20radiation
|
The Askaryan radiation also known as Askaryan effect is the phenomenon whereby a particle traveling faster than the phase velocity of light in a dense dielectric (such as salt, ice or the lunar regolith) produces a shower of secondary charged particles which contains a charge anisotropy and emits a cone of coherent radiation in the radio or microwave part of the electromagnetic spectrum. The signal is a result of the Cherenkov radiation from individual particles in the shower. Wavelengths greater than the extent of the shower interfere constructively and thus create a radio or microwave signal which is strongest at the Cherenkov angle. The effect is named after Gurgen Askaryan, a Soviet-Armenian physicist who postulated it in 1962.
The radiation was first observed experimentally in 2000, 38 years after its theoretical prediction. So far the effect has been observed in silica sand, rock salt, ice, and Earth's atmosphere.
The effect is of primary interest in using bulk matter to detect ultra-high energy neutrinos. The Antarctic Impulse Transient Antenna (ANITA) experiment uses antennas attached to a balloon flying over Antarctica to detect the Askaryan radiation produced as cosmic neutrinos travel through the ice. Several experiments have also used the Moon as a neutrino detector based on detection of the Askaryan radiation.
See also
Cherenkov radiation
|
https://en.wikipedia.org/wiki/List%20of%20synchrotron%20radiation%20facilities
|
This is a table of synchrotrons and storage rings used as synchrotron radiation sources, and free electron lasers.
|
https://en.wikipedia.org/wiki/Distributed%20parameter%20system
|
In control theory, a distributed-parameter system (as opposed to a lumped-parameter system) is a system whose state space is infinite-dimensional. Such systems are therefore also known as infinite-dimensional systems. Typical examples are systems described by partial differential equations or by delay differential equations.
Linear time-invariant distributed-parameter systems
Abstract evolution equations
Discrete-time
With $U$, $X$ and $Y$ Hilbert spaces and $A \in L(X)$, $B \in L(U, X)$, $C \in L(X, Y)$ and $D \in L(U, Y)$ the following difference equations determine a discrete-time linear time-invariant system:
$$x(k+1) = A x(k) + B u(k),$$
$$y(k) = C x(k) + D u(k),$$
with $x$ (the state) a sequence with values in $X$, $u$ (the input or control) a sequence with values in $U$ and $y$ (the output) a sequence with values in $Y$.
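A finite-dimensional toy version of these difference equations (my own sketch; in the article $U$, $X$ and $Y$ may be infinite-dimensional Hilbert spaces, here they are just small vector spaces and the operators are matrices):

```python
import numpy as np

# Toy instance of x(k+1) = A x(k) + B u(k), y(k) = C x(k) + D u(k).
A = np.array([[0.9, 0.1], [0.0, 0.8]])   # state operator, assumed stable
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.0]])
D = np.array([[0.0]])

x = np.zeros((2, 1))                      # initial state
for k in range(5):
    u = np.array([[1.0]])                 # constant input sequence
    y = C @ x + D @ u
    print(f"k={k}  y={y.ravel()}")
    x = A @ x + B @ u                     # state update
```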
Continuous-time
The continuous-time case is similar to the discrete-time case but now one considers differential equations instead of difference equations:
$$\dot{x}(t) = A x(t) + B u(t),$$
$$y(t) = C x(t) + D u(t).$$
An added complication now however is that to include interesting physical examples such as partial differential equations and delay differential equations into this abstract framework, one is forced to consider unbounded operators. Usually A is assumed to generate a strongly continuous semigroup on the state space X. Assuming B, C and D to be bounded operators then already allows for the inclusion of many interesting physical examples, but the inclusion of many other interesting physical examples forces unboundedness of B and C as well.
Example: a partial differential equation
The partial differential equation with $t \ge 0$ and $\xi \in [0, 1]$ given by
fits into the abstract evolution equation framework described above as follows. The input space U and the output space Y are both chosen to be the set of complex numbers. The state space X is chosen to be L2(0, 1). The operator A is defined as
It can be shown that A generates a strongly continuous semigroup on X. The bounded operators B, C and D are defined as
Example: a delay differential equation
The delay differential equation
fits into th
|
https://en.wikipedia.org/wiki/Service%20Assurance%20Agent
|
IP SLA (Internet Protocol Service Level Agreement) is an active computer network measurement technology initially developed by Cisco Systems. IP SLA was previously known as Service Assurance Agent (SAA) or Response Time Reporter (RTR). IP SLA is used to track network performance metrics such as latency, ping response, and jitter; it also helps to verify the quality of service.
Functions
Routers and switches enabled with IP SLA perform periodic network tests or measurements such as
Hypertext Transfer Protocol (HTTP) GET
File Transfer Protocol (FTP) downloads
Domain Name System (DNS) lookups
User Datagram Protocol (UDP) echo, for VoIP jitter and mean opinion score (MOS)
Data-Link Switching (DLSw) (Systems Network Architecture (SNA) tunneling protocol)
Dynamic Host Configuration Protocol (DHCP) lease requests
Transmission Control Protocol (TCP) connect
Internet Control Message Protocol (ICMP) echo (remote ping)
The exact number and types of available measurements depends on the IOS version. IP SLA is very widely used in service provider networks to generate time-based performance data. It is also used together with Simple Network Management Protocol (SNMP) and NetFlow, which generate volume-based data.
Usage considerations
For IP SLA tests, devices with IP SLA support are required. IP SLA is supported on Cisco routers and switches since IOS version 12.1. Other vendors like Juniper Networks or Enterasys Networks support IP SLA on some of their devices.
IP SLA tests and data collection can be configured either via a console (command-line interface) or via SNMP.
When using SNMP, both read and write community strings are needed.
The IP SLA voice quality feature was added starting with IOS version 12.3(4)T. All versions after this, including 12.4 mainline, contain the MOS and ICPIF voice quality calculation for the UDP jitter measurement.
|
https://en.wikipedia.org/wiki/Piezoresistive%20effect
|
The piezoresistive effect is a change in the electrical resistivity of a semiconductor or metal when mechanical strain is applied. In contrast to the piezoelectric effect, the piezoresistive effect causes a change only in electrical resistance, not in electric potential.
History
The change of electrical resistance in metal devices due to an applied mechanical load was first discovered in 1856 by Lord Kelvin. With single crystal silicon becoming the material of choice for the design of analog and digital circuits, the large piezoresistive effect in silicon and germanium was first discovered in 1954 (Smith 1954).
Mechanism
In conducting and semi-conducting materials, changes in inter-atomic spacing resulting from strain affect the bandgaps, making it easier (or harder, depending on the material and strain) for electrons to be raised into the conduction band. This results in a change in resistivity of the material. Within a certain range of strain this relationship is linear, so that the piezoresistive coefficient
$$\rho_\sigma = \frac{\left(\frac{\partial \rho}{\rho}\right)}{\varepsilon}$$
where
∂ρ = Change in resistivity
ρ = Original resistivity
ε = Strain
is constant.
Piezoresistivity in metals
Usually the resistance change in metals is mostly due to the change of geometry resulting from applied mechanical stress. However, even though the piezoresistive effect is small in those cases, it is often not negligible. In cases where it is, it can be calculated using the simple resistance equation derived from Ohm's law:
$$R = \rho \frac{\ell}{A},$$
where
$\ell$ = conductor length [m]
$A$ = cross-sectional area of the current flow [m²]
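A small worked example (my own sketch): combining the geometric term implied by $R = \rho \ell / A$ with a resistivity term gives the usual strain-gauge decomposition $dR/R = (1 + 2\nu)\varepsilon + d\rho/\rho$, a standard result not spelled out in this excerpt; the numbers below are assumed, typical values.

```python
# Relative resistance change of a strained wire, R = rho * L / A.
# Standard strain-gauge decomposition (an assumption beyond the excerpt):
# dR/R = (1 + 2*nu)*eps  (geometry)  +  drho/rho  (piezoresistivity).
nu = 0.3           # Poisson's ratio, typical metal (assumed)
eps = 1e-3         # applied axial strain
drho_rho = 0.3e-3  # resistivity change, assumed small as in metals

geometric = (1 + 2 * nu) * eps
total = geometric + drho_rho
gauge_factor = total / eps   # sensitivity dR/R per unit strain
print(f"dR/R = {total:.2e}, gauge factor = {gauge_factor:.2f}")
```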
Some metals display piezoresistivity that is much larger than the resistance change due to geometry. In platinum alloys, for instance, piezoresistivity is more than a factor of two larger, combining with the geometry effects to give a strain gauge sensitivity more than three times that due to geometry effects alone. Pure nickel's piezoresistivity is -13 times larger, completely dwarfing and even reversing the sign of the geometry-ind
|
https://en.wikipedia.org/wiki/NOR%20gate
|
The NOR gate is a digital logic gate that implements logical NOR - it behaves according to the truth table to the right. A HIGH output (1) results if both inputs to the gate are LOW (0); if one or both inputs are HIGH (1), a LOW output (0) results. NOR is the result of the negation of the OR operator. It can also in some senses be seen as the inverse of an AND gate. NOR is a functionally complete operation—NOR gates can be combined to generate any other logical function. It shares this property with the NAND gate. By contrast, the OR operator is monotonic, as it can only change LOW to HIGH but not vice versa.
In most, but not all, circuit implementations, the negation comes for free—including CMOS and TTL. In such logic families, OR is the more complicated operation; it may use a NOR followed by a NOT. A significant exception is some forms of the domino logic family.
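The functional completeness claim can be checked directly (my own sketch, mirroring the NAND case): NOT, OR, and AND built from NOR alone, including the "NOR followed by a NOT" construction of OR mentioned above.

```python
# NOR is functionally complete: NOT, OR and AND from NOR alone.
def nor(a: int, b: int) -> int:
    return 0 if (a or b) else 1

def not_(a):       # NOT(x) = NOR(x, x)
    return nor(a, a)

def or_(a, b):     # OR = NOT(NOR): a NOR followed by an inverter
    return not_(nor(a, b))

def and_(a, b):    # AND via De Morgan: NOR of the inverted inputs
    return nor(not_(a), not_(b))

for a in (0, 1):
    for b in (0, 1):
        assert or_(a, b) == (a | b) and and_(a, b) == (a & b)
print("NOR-built gates match the truth tables")
```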
Symbols
There are three symbols for NOR gates: the American (ANSI or 'military') symbol and the IEC ('European' or 'rectangular') symbol, as well as the deprecated DIN symbol. For more information see Logic Gate Symbols. The ANSI symbol for the NOR gate is a standard OR gate with an inversion bubble connected.
The bubble indicates that the function of the OR gate has been inverted.
Hardware description and pinout
NOR Gates are basic logic gates, and as such they are recognised in TTL and CMOS ICs. The standard, 4000 series, CMOS IC is the 4001, which includes four independent, two-input, NOR gates. The pinout diagram is as follows:
Availability
These devices are available from most semiconductor manufacturers such as Fairchild Semiconductor, Philips or Texas Instruments. These are usually available in both through-hole DIP and SOIC format. Datasheets are readily available in most datasheet databases.
In the popular CMOS and TTL logic families, NOR gates with up to 8 inputs are available:
CMOS
4001: Quad 2-input NOR gate
4025: Triple 3-input NOR gate
4002: Dual 4-input NOR gate
4078: Single
|
https://en.wikipedia.org/wiki/DriveSpace
|
DriveSpace (initially known as DoubleSpace) is a disk compression utility supplied with MS-DOS starting from version 6.0 in 1993 and ending in 2000 with the release of Windows Me. The purpose of DriveSpace is to increase the amount of data the user could store on disks by transparently compressing and decompressing data on-the-fly. It is primarily intended for use with hard drives, but use for floppy disks is also supported. This feature was removed in Windows XP and later.
Overview
In the most common usage scenario, the user would have one hard drive in the computer, with all the space allocated to one partition (usually as drive C:). The software would compress the entire partition contents into one large file in the root directory. On booting the system, the driver would allocate this large file as drive C:, enabling files to be accessed as normal.
Microsoft's decision to add disk compression to MS-DOS 6.0 was influenced by the fact that the competing DR DOS had included disk compression software since its version 6.0, released in 1991.
Instead of developing its own product from scratch, Microsoft licensed the technology for the DoubleDisk product developed by Vertisoft and adapted it to become DoubleSpace. For instance, the loading of the driver controlling the compression/decompression (DBLSPACE.BIN) became more deeply integrated into the operating system (being loaded through the undocumented pre-load API even before the CONFIG.SYS file).
Microsoft had originally sought to license the technology from Stac Electronics, which had a similar product called Stacker, but these negotiations had failed. Microsoft was later successfully sued for patent infringement by Stac Electronics for violating some of its compression patents. During the court case Stac Electronics claimed that Microsoft had refused to pay any money when it attempted to license Stacker, offering only the possibility for Stac Electronics to develop enhancement products.
Consumption and co
|
https://en.wikipedia.org/wiki/Dressed%20to%20Kill%20%281946%20film%29
|
Dressed to Kill is a 1946 American mystery film directed by Roy William Neill. Released by Universal Pictures, it is the last of fourteen films starring Basil Rathbone as Sherlock Holmes and Nigel Bruce as Doctor Watson. It is also known by the alternative titles Prelude to Murder (working title) and Sherlock Holmes and the Secret Code in the United Kingdom.
The film has an original story, but combines elements of the short stories "The Adventure of the Six Napoleons" and "A Scandal in Bohemia." It is one of four films in the series which are in the public domain.
Plot
John Davidson, a convicted thief in Dartmoor prison (played by an uncredited Cyril Delevanti), embeds code revealing the hidden location of extremely valuable stolen Bank of England currency printing plates in the melody notes of three music boxes that he crafts to be sold at auction. Each box plays a subtly different version of an Australian tune, "The Swagman". At the auction each is purchased by a different buyer.
Dr. Watson's friend Julian Emery, a music box collector, pays him and Sherlock Holmes a visit and tells them of an attempted burglary at his house the previous night: the thief went after a plain, cheap box (similar to the one he bought at auction) while leaving other, much more valuable ones. Holmes and Watson ask to see and are shown Emery's collection. After they leave, Emery welcomes a female acquaintance, Hilda Courtney, who tries unsuccessfully to buy the auctioned box. When Emery declines, a male friend of Courtney's who has sneaked in murders Emery.
After this murder Holmes becomes even more curious and learns to whom else the boxes were auctioned off. Holmes and Watson arrive at the house of the person who bought the second one, just as a strange maid (Courtney in disguise) is on her way "to go shopping". They later realize it was not a maid: she had locked a child in a closet in order to steal the box from the child.
Holmes is able to buy the third box, and upon examination discovers that its vari
|
https://en.wikipedia.org/wiki/Hauppauge%20MediaMVP
|
The Hauppauge MediaMVP is a network media player. It consists of a hardware unit with remote control, along with software for a Windows PC. Out of the box, it is capable of playing video and audio, displaying pictures, and "tuning in" to Internet radio stations. Alternative software is also available to extend its capabilities. It can be used as a front-end for various PVR projects.
The MediaMVP is popular with some PVR enthusiasts because it is inexpensive and relatively easy to modify.
Capabilities
The MediaMVP can stream audio and video content from a host PC running Windows. It can display photos stored on the host PC. It can stream Internet radio via the host PC as well. It can display live TV with full PVR features using SageTV PVR software for Windows or Linux.
The capabilities listed below refer to the official software and firmware supplied by Hauppauge.
Video
The MediaMVP supports the MPEG (MPEG-1 and MPEG-2) video format (and only that format). However, depending on the MediaMVP host software running on the host computer, the host software may be able to seamlessly transcode other video file formats before sending them to the MediaMVP in the MPEG format. The maximum un-transcoded playable video size is SDTV (480i). HDTV mpeg streams (e.g. 720p) need to be transcoded in real-time on the computer to SD format. Note: transcoding video can tax some slower computers.
With a hardware MPEG decoder as part of its PowerPC processor, it renders moving video images more smoothly than many software PVR implementations.
Audio
Supported audio file formats include MP3 and WMA. Playlist formats supported include M3U, PLS, ASX and B4S.
See also Internet radio below.
Photos
Supported image file formats include JPG and GIF.
Slideshows are supported. Listening to music (including streaming Internet radio) during slideshows is supported as well.
Internet radio
Supports streaming Internet radio stations via the host PC.
Other capabilities
Can schedule recordi
|
https://en.wikipedia.org/wiki/Human%20back
|
The human back, also called the dorsum (plural: dorsa), is the large posterior area of the human body, rising from the top of the buttocks to the back of the neck. It is the surface of the body opposite from the chest and the abdomen. The vertebral column runs the length of the back and creates a central area of recession. The breadth of the back is created by the shoulders at the top and the pelvis at the bottom.
Back pain is a common medical condition, generally benign in origin.
Structure
The central feature of the human back is the vertebral column, specifically the length from the top of the thoracic vertebrae to the bottom of the lumbar vertebrae, which houses the spinal cord in its spinal canal, and which generally has some curvature that gives shape to the back. The ribcage extends from the spine at the top of the back (with the top of the ribcage corresponding to the T1 vertebra), more than halfway down the length of the back, leaving an area with less protection between the bottom of the ribcage and the hips. The width of the back at the top is defined by the scapula, the broad, flat bones of the shoulders.
Muscles
The muscles of the back can be divided into three distinct groups: a superficial group, an intermediate group and a deep group.
Superficial group
The superficial group, also known as the appendicular group, is primarily associated with movement of the appendicular skeleton. It is composed of trapezius, latissimus dorsi, rhomboid major, rhomboid minor and levator scapulae. It is innervated by anterior rami of spinal nerves, reflecting its embryological origin outside the back.
Intermediate group
The intermediate group is also known as respiratory group as it may serve a respiratory function. It is composed of serratus posterior superior and serratus posterior inferior. Like the superficial group, it is innervated by anterior rami of spinal nerves.
Deep group
The deep group, also known as the intrinsic group due to its embryological origin in
|
https://en.wikipedia.org/wiki/Particle%20statistics
|
Particle statistics is a particular description of multiple particles in statistical mechanics. A key prerequisite concept is that of a statistical ensemble (an idealization comprising the state space of possible states of a system, each labeled with a probability) that emphasizes properties of a large system as a whole at the expense of knowledge about parameters of separate particles. When an ensemble describes a system of particles with similar properties, their number is called the particle number and usually denoted by N.
Classical statistics
In classical mechanics, all particles (fundamental and composite particles, atoms, molecules, electrons, etc.) in the system are considered distinguishable. This means that individual particles in a system can be tracked. As a consequence, switching the positions of any pair of particles in the system leads to a different configuration of the system. Furthermore, there is no restriction on placing more than one particle in any given state accessible to the system. These characteristics of classical particles are described by Maxwell–Boltzmann statistics.
Quantum statistics
The fundamental feature of quantum mechanics that distinguishes it from classical mechanics is that particles of a particular type are indistinguishable from one another. This means that in an ensemble of similar particles, interchanging any two particles does not lead to a new configuration of the system. In the language of quantum mechanics this means that the wave function of the system is invariant up to a phase with respect to the interchange of the constituent particles. In the case of a system consisting of particles of different kinds (for example, electrons and protons), the wave function of the system is invariant up to a phase separately for both assemblies of particles.
The applicable definition of a particle does not require it to be elementary or even "microscopic", but it requires that all its degrees of freedom (or internal states) that are
|
https://en.wikipedia.org/wiki/Triangular%20function
|
A triangular function (also known as a triangle function, hat function, or tent function) is a function whose graph takes the shape of a triangle. Often this is an isosceles triangle of height 1 and base 2 in which case it is referred to as the triangular function. Triangular functions are useful in signal processing and communication systems engineering as representations of idealized signals, and the triangular function specifically as an integral transform kernel function from which more realistic signals can be derived, for example in kernel density estimation. It also has applications in pulse-code modulation as a pulse shape for transmitting digital signals and as a matched filter for receiving the signals. It is also used to define the triangular window sometimes called the Bartlett window.
Definitions
The most common definition is as a piecewise function:
$$\operatorname{tri}(x) = \Lambda(x) = \begin{cases} 1 - |x|, & |x| < 1; \\ 0, & \text{otherwise.} \end{cases}$$
Equivalently, it may be defined as the convolution of two identical unit rectangular functions:
$$\operatorname{tri}(x) = \operatorname{rect}(x) * \operatorname{rect}(x).$$
The triangular function can also be represented as the product of the rectangular and absolute value functions:
$$\operatorname{tri}(x) = \operatorname{rect}\left(\frac{x}{2}\right) \bigl(1 - |x|\bigr).$$
Note that some authors instead define the triangle function to have a base of width 1 instead of width 2:
$$\operatorname{tri}(2x) = \Lambda(2x) = \begin{cases} 1 - 2|x|, & |x| < \tfrac{1}{2}; \\ 0, & \text{otherwise.} \end{cases}$$
In its most general form a triangular function is any linear B-spline:
$$\operatorname{tri}_j(x) = \begin{cases} (x - x_{j-1})/(x_j - x_{j-1}), & x_{j-1} \le x < x_j; \\ (x_{j+1} - x)/(x_{j+1} - x_j), & x_j \le x < x_{j+1}; \\ 0, & \text{otherwise.} \end{cases}$$
Whereas the definition at the top is a special case
where $x_{j-1} = -1$, $x_j = 0$, and $x_{j+1} = 1$.
A linear B-spline is the same as a continuous piecewise linear function $f(x)$, and this general triangle function is useful to formally define $f(x)$ as
$$f(x) = \sum_j y_j \cdot \operatorname{tri}_j(x),$$
where $x_j < x_{j+1}$ for all integer $j$.
The piecewise linear function passes through every point expressed as coordinates with ordered pair $(x_j, y_j)$, that is,
$$f(x_j) = y_j.$$
Scaling
For any parameter $a \ne 0$:
$$\operatorname{tri}\left(\frac{x}{a}\right) = \begin{cases} 1 - \left|\frac{x}{a}\right|, & |x| < |a|; \\ 0, & \text{otherwise.} \end{cases}$$
Fourier transform
The transform is easily determined using the convolution property of Fourier transforms and the Fourier transform of the rectangular function:
$$\mathcal{F}\{\operatorname{tri}(x)\}(f) = \operatorname{sinc}^2(f),$$
where $\operatorname{sinc}(f) = \sin(\pi f)/(\pi f)$ is the normalized sinc function.
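A short numerical check (my own sketch) that the transform of the unit triangular function matches sinc²:

```python
import numpy as np

def tri(x):
    # Unit triangular function: height 1, base 2.
    return np.maximum(0.0, 1.0 - np.abs(x))

# Numerical Fourier transform of tri, compared with sinc^2
# (numpy's np.sinc is the normalized sinc: sin(pi f)/(pi f)).
x = np.linspace(-4, 4, 4001)
dx = x[1] - x[0]
for f in (0.25, 0.5, 1.5):
    ft = np.trapz(tri(x) * np.exp(-2j * np.pi * f * x), dx=dx)
    print(f, ft.real, np.sinc(f) ** 2)   # the two columns agree
```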
See also
Källén function, also known as triangle function
Tent map
Triangular distribution
Triangle wave, a piecewise linear periodic function
|
https://en.wikipedia.org/wiki/Right-to-left%20script
|
In a right-to-left script (commonly shortened to right to left or abbreviated RTL, RL-TB or R2L), writing starts from the right of the page and continues backwards to the left, proceeding from top to bottom for new lines. Arabic, Hebrew, Persian, Urdu, Kashmiri, Pashto, Uighur, Sorani Kurdish, and Sindhi are the most widespread RTL writing systems in modern times.
Right-to-left can also refer to top-to-bottom, right-to-left (TB-RL or vertical) scripts of East Asian tradition, such as Chinese, Japanese, and Korean, though in modern times they are also commonly written horizontally, left to right (with lines going from top to bottom). Books designed for predominantly vertical TBRL text open in the same direction as those for RTL horizontal text: the spine is on the right and pages are numbered from right to left.
These scripts can be contrasted with many common modern writing systems, where writing starts from the left of the page and continues to the right.
The Arabic script is mostly but not exclusively right-to-left; mathematical expressions, numeric dates and numbers bearing units are embedded from left to right.
Uses
Hebrew, Arabic, and Persian are the most widespread RTL writing systems in modern times. As usage of the Arabic script spread, the repertoire of 28 characters used to write the Arabic language was supplemented to accommodate the sounds of many other languages such as Kashmiri, Pashto, etc. While the Hebrew alphabet is used to write the Hebrew language, it is also used to write other Jewish languages such as Yiddish and Judaeo-Spanish.
Syriac and Mandaean (Mandaic) scripts are derived from Aramaic and are written RTL. Samaritan is similar, but developed from Proto-Hebrew rather than Aramaic. Many other ancient and historic scripts derived from Aramaic inherited its right-to-left direction.
Several languages have both Arabic RTL and non-Arabic LTR writing systems. For example, Sindhi is commonly written in Arabic and Devanagari scripts, and a number of others have been used. Kurdish may be written in the Arabic or Latin s
|
https://en.wikipedia.org/wiki/Lithotroph
|
Lithotrophs are a diverse group of organisms using an inorganic substrate (usually of mineral origin) to obtain reducing equivalents for use in biosynthesis (e.g., carbon dioxide fixation) or energy conservation (i.e., ATP production) via aerobic or anaerobic respiration. While lithotrophs in the broader sense include photolithotrophs like plants, chemolithotrophs are exclusively microorganisms; no known macrofauna possesses the ability to use inorganic compounds as electron sources. Macrofauna and lithotrophs can form symbiotic relationships, in which case the lithotrophs are called "prokaryotic symbionts". An example of this is chemolithotrophic bacteria in giant tube worms or plastids, which are organelles within plant cells that may have evolved from photolithotrophic cyanobacteria-like organisms. Chemolithotrophs belong to the domains Bacteria and Archaea. The term "lithotroph" was created from the Greek terms 'lithos' (rock) and 'troph' (consumer), meaning "eaters of rock". Many but not all lithoautotrophs are extremophiles.
The last universal common ancestor of life is thought to be a chemolithotroph (due to its presence in the prokaryotes). Different from a lithotroph is an organotroph, an organism which obtains its reducing agents from the catabolism of organic compounds.
History
The term was suggested in 1946 by Lwoff and collaborators.
Biochemistry
Lithotrophs consume reduced inorganic compounds (electron donors).
Chemolithotrophs
A chemolithotroph is able to use inorganic reduced compounds in its energy-producing reactions. This process involves the oxidation of inorganic compounds coupled to ATP synthesis. The majority of chemolithotrophs are chemolithoautotrophs, able to fix carbon dioxide (CO2) through the Calvin cycle, a metabolic pathway in which CO2 is converted to glucose. This group of organisms includes sulfur oxidizers, nitrifying bacteria, iron oxidizers, and hydrogen oxidizers.
The term "chemolithotrophy" refers to a cell's acquisitio
|
https://en.wikipedia.org/wiki/Central%20charge
|
In theoretical physics, a central charge is an operator Z that commutes with all the other symmetry operators. The adjective "central" refers to the center of the symmetry group—the subgroup of elements that commute with all other elements of the original group—often embedded within a Lie algebra. In some cases, such as two-dimensional conformal field theory, a central charge may also commute with all of the other operators, including operators that are not symmetry generators.
Overview
More precisely, the central charge is the charge that corresponds, by Noether's theorem, to the center of the central extension of the symmetry group.
In theories with supersymmetry, this definition can be generalized to include supergroups and Lie superalgebras. A central charge is any operator which commutes with all the other supersymmetry generators. Theories with extended supersymmetry typically have many operators of this kind. In string theory, in the first quantized formalism, these operators also have the interpretation of winding numbers (topological quantum numbers) of various strings and branes.
In conformal field theory, the central charge is a c-number (commutes with every other operator) term that appears in the commutator of two components of the stress–energy tensor. As a result, conformal field theory is characterized by a representation of the Virasoro algebra with central charge $c$.
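For reference, the commutator in question takes the standard form (standard convention, not quoted in this excerpt):

$$[L_m, L_n] = (m - n)\, L_{m+n} + \frac{c}{12}\, m \,(m^2 - 1)\, \delta_{m+n,0},$$

where the $L_m$ are the modes of the stress–energy tensor and the term proportional to the central charge $c$ is the c-number piece that commutes with every other operator.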
See also
Charge (physics)
Conformal anomaly
Two-dimensional conformal field theory
Vertex operator algebra
W-algebra
Virasoro algebra
Lie algebra extension#Projective representation
Group extension
Representation theory of the Galilean group
Non-critical string theory#The critical dimension and central charge
|
https://en.wikipedia.org/wiki/Organotroph
|
An organotroph is an organism that obtains hydrogen or electrons from organic substrates. This term is used in microbiology to classify and describe organisms based on how they obtain electrons for their respiration processes. Some organotrophs, such as animals and many bacteria, are also heterotrophs. Organotrophs can be either anaerobic or aerobic.
Antonym: Lithotroph. Adjective: Organotrophic.
History
The term was suggested in 1946 by Lwoff and collaborators.
See also
Autotroph
Chemoorganotroph
Primary nutritional groups
|
https://en.wikipedia.org/wiki/Thermal%20quantum%20field%20theory
|
In theoretical physics, thermal quantum field theory (thermal field theory for short) or finite temperature field theory is a set of methods to calculate expectation values of physical observables of a quantum field theory at finite temperature.
In the Matsubara formalism, the basic idea (due to Felix Bloch) is that the expectation values of operators in a canonical ensemble
$$\langle A \rangle = \frac{\operatorname{Tr}\left[ e^{-\beta H} A \right]}{\operatorname{Tr}\left[ e^{-\beta H} \right]}$$
may be written as expectation values in ordinary quantum field theory where the configuration is evolved by an imaginary time $\tau = it$ with $0 \le \tau \le \beta$. One can therefore switch to a spacetime with Euclidean signature, where the above trace (Tr) leads to the requirement that all bosonic and fermionic fields be periodic and antiperiodic, respectively, with respect to the Euclidean time direction with periodicity $\beta = 1/(k_B T)$ (we are assuming natural units $\hbar = 1$). This allows one to perform calculations with the same tools as in ordinary quantum field theory, such as functional integrals and Feynman diagrams, but with compact Euclidean time. Note that the definition of normal ordering has to be altered.
In momentum space, this leads to the replacement of continuous frequencies by discrete imaginary (Matsubara) frequencies $\omega_n = 2\pi n / \beta$ for bosons and $\omega_n = (2n+1)\pi / \beta$ for fermions and, through the de Broglie relation, to a discretized thermal energy spectrum $E_n = \hbar \omega_n$. This has been shown to be a useful tool in studying the behavior of quantum field theories at finite temperature.
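A quick numerical illustration (my own check, using a standard identity that the excerpt does not state): the bosonic Matsubara sum of a free propagator resums to the familiar coth factor, $T \sum_n (\omega_n^2 + E^2)^{-1} = \coth(E/2T)/(2E)$.

```python
import math

# Bosonic Matsubara frequencies: w_n = 2*pi*n*T.
# Check: T * sum_n 1/(w_n^2 + E^2) == coth(E/(2T)) / (2E).
T, E = 1.0, 3.0
s = sum(1.0 / ((2 * math.pi * n * T) ** 2 + E ** 2)
        for n in range(-200_000, 200_001))
print(T * s)                                    # truncated Matsubara sum
print(1.0 / (2 * E) / math.tanh(E / (2 * T)))   # closed-form coth result
```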
It has been generalized to theories with gauge invariance and was a central tool in the study of a conjectured deconfining phase transition of Yang–Mills theory.
In this Euclidean field theory, real-time observables can be retrieved by analytic continuation. The Feynman rules for gauge theories in the Euclidean time formalism, were derived by C. W. Bernard.
The Matsubara formalism, also referred to as imaginary time formalism, can be extended to systems with thermal variations. In this approach, the variation in the temperature is recast as a variation in the Euclidean metric. Analysis of the partition function leads
|
https://en.wikipedia.org/wiki/Seiberg%E2%80%93Witten%20theory
|
In theoretical physics, Seiberg–Witten theory is an $\mathcal{N} = 2$ supersymmetric gauge theory with an exact low-energy effective action (for massless degrees of freedom), of which the kinetic part coincides with the Kähler potential of the moduli space of vacua. Before taking the low-energy effective action, the theory is known as $\mathcal{N} = 2$ supersymmetric Yang–Mills theory, as the field content is a single vector supermultiplet, analogous to the field content of Yang–Mills theory being a single vector gauge field (in particle theory language) or connection (in geometric language).
The theory was studied in detail by Nathan Seiberg and Edward Witten (1994).
Seiberg–Witten curves
In general, effective Lagrangians of supersymmetric gauge theories are largely determined by their holomorphic (really, meromorphic) properties and their behavior near the singularities. In a gauge theory with $\mathcal{N} = 2$ extended supersymmetry, the moduli space of vacua is a special Kähler manifold and its Kähler potential is constrained by the above conditions.
In the original approach, by Seiberg and Witten, holomorphy and electric-magnetic duality constraints are strong enough to almost uniquely
constrain the prepotential (a holomorphic function which defines the theory), and therefore the metric of the moduli space of vacua, for theories with SU(2) gauge group.
More generally, consider the example with gauge group SU(n). The classical potential is
$$V(x) = \frac{1}{g^2} \operatorname{Tr}\, [\phi, \phi^\dagger]^2,$$
where $\phi$ is a scalar field appearing in an expansion of superfields in the theory. The potential must vanish on the moduli space of vacua by definition, but the $\phi$ need not. The vacuum expectation value of $\phi$ can be gauge rotated into the Cartan subalgebra, making it a traceless diagonal complex matrix $a$.
Because the fields no longer have vanishing vacuum expectation value, other fields become massive due to the Higgs mechanism (spontaneous symmetry breaking). They are integrated out in order to find the effective U(1) gauge theory. Its two-derivative, four-fermions low-energy act
|
https://en.wikipedia.org/wiki/Van%20der%20Waerden%20notation
|
In theoretical physics, Van der Waerden notation refers to the usage of two-component spinors (Weyl spinors) in four spacetime dimensions. This is standard in twistor theory and supersymmetry. It is named after Bartel Leendert van der Waerden.
Dotted indices
Undotted indices (chiral indices)
Spinors with lower undotted indices have a left-handed chirality, and are called chiral indices.
Dotted indices (anti-chiral indices)
Spinors with raised dotted indices, plus an overbar on the symbol (not index), are right-handed, and called anti-chiral indices.
Without the indices, i.e. "index free notation", an overbar is retained on right-handed spinor, since ambiguity arises between chirality when no index is indicated.
Hatted indices
Indices which have hats are called Dirac indices, and are the set of dotted and undotted, or chiral and anti-chiral, indices. For example, if
$$\hat{\alpha} = (\alpha, \dot{\alpha}),$$
then a spinor in the chiral basis is represented as
$$\psi_{\hat{\alpha}} = \begin{pmatrix} \psi_\alpha \\ \bar{\chi}^{\dot{\alpha}} \end{pmatrix},$$
where $\psi_\alpha$ carries a chiral index and $\bar{\chi}^{\dot{\alpha}}$ an anti-chiral index.
In this notation the Dirac adjoint (also called the Dirac conjugate) is
$$\bar{\psi}^{\hat{\alpha}} = \left( \chi^\alpha,\ \bar{\psi}_{\dot{\alpha}} \right).$$
See also
Dirac equation
Infeld–Van der Waerden symbols
Lorentz transformation
Pauli equation
Ricci calculus
Notes
|
https://en.wikipedia.org/wiki/Source%20field
|
In theoretical physics, a source field is a background field $J$ coupled to the original field $\phi$ as
$$S_{\text{source}} = \int J(x)\, \phi(x)\, d^4x.$$
This term appears in the action in Feynman's path integral formulation and is responsible for the theory's interactions. In Schwinger's formulation the source is responsible for creating or destroying (detecting) particles. In a collision reaction a source could be the other particles in the collision. Therefore, the source appears in the vacuum amplitude acting from both sides on the Green's function correlator of the theory.
Schwinger's source theory stems from Schwinger's quantum action principle and can be related to the path integral formulation, as the variation with respect to the source per se corresponds to the field $\phi$, i.e.
$$-i \frac{\delta}{\delta J(x)} \ln Z[J] = \langle \phi(x) \rangle_J.$$
Also, a source acts effectively in a region of spacetime. As one sees in the examples below, the source field appears on the right-hand side of the equations of motion (usually second-order partial differential equations) for $\phi$. When the field $\phi$ is the electromagnetic potential or the metric tensor, the source field is the electric current or the stress–energy tensor, respectively.
In terms of statistical and non-relativistic applications, Schwinger's source formulation plays a crucial role in understanding many non-equilibrium systems. Source theory is theoretically significant as it requires neither divergence regularization nor renormalization.
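A classical toy version of "the source on the right-hand side of the equations of motion" (my own sketch; the frequency and pulse shape below are assumptions): solving $\ddot{x} + \omega^2 x = J(t)$ by convolving the source with the retarded Green's function $G(t) = \sin(\omega t)/\omega$ for $t > 0$.

```python
import numpy as np

# Solve x'' + w^2 x = J(t) with zero initial conditions via the
# retarded Green's function G(t) = sin(w t)/w (t > 0).
w = 2.0
t = np.linspace(0, 10, 2001)
dt = t[1] - t[0]
J = np.exp(-(t - 3.0) ** 2)             # localized source pulse (assumed)

G = np.sin(w * t) / w                   # retarded Green's function on t >= 0
x = np.convolve(J, G)[: t.size] * dt    # x(t) = (G * J)(t)

# Residual check: x'' + w^2 x should reproduce J away from the endpoints.
xpp = np.gradient(np.gradient(x, dt), dt)
print(np.max(np.abs((xpp + w**2 * x - J)[50:-50])))  # small residual
```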
Relation between path integral formulation and source formulation
In Feynman's path integral formulation with normalization $Z[J = 0] = 1$, the partition function
$$Z[J] = \int \mathcal{D}\phi \; e^{\, i S[\phi] + i \int d^4x \, J(x) \phi(x)}$$
generates Green's functions (correlators)
$$G(x_1, \ldots, x_N) = (-i)^N \left. \frac{\delta^N Z[J]}{\delta J(x_1) \cdots \delta J(x_N)} \right|_{J=0}.$$
One implements the quantum variational methodology to realize that $J$ is an external driving source of $\phi$. From the perspective of probability theory, $Z[J]$ can be seen as the expectation value of the function $e^{i \int d^4x \, J \phi}$. This motivates considering the Hamiltonian of the forced harmonic oscillator as a toy model,
$$\hat{H} = \omega\, \hat{a}^\dagger \hat{a} - J^*(t)\, \hat{a} - J(t)\, \hat{a}^\dagger,$$
where $\hat{a}^\dagger$ and $\hat{a}$ are the raising and lowering operators and $\omega$ is the oscillator frequency. In fact, the current is real, that is $J = J^*$, and the Lagrangian is the corresponding Legendre transform of $\hat{H}$. From now on we drop the hat and the asteris
|
https://en.wikipedia.org/wiki/Incomplete%20markets
|
In economics, incomplete markets are markets in which there does not exist an Arrow–Debreu security for every possible state of nature. In contrast with complete markets, this shortage of securities will likely restrict individuals from transferring the desired level of wealth among states.
An Arrow security purchased or sold at date t is a contract promising to deliver one unit of income in one of the possible contingencies which can occur at date t + 1. If at each date-event there exists a complete set of such contracts, one for each contingency that can occur at the following date, individuals will trade these contracts in order to insure against future risks, targeting a desirable and budget feasible level of consumption in each state (i.e. consumption smoothing). In most setups, when these contracts are not available, optimal risk sharing between agents will not be possible. In this scenario, agents (homeowners, workers, firms, investors, etc.) lack the instruments to insure against future risks such as employment status, health, labor income, and prices.
Markets, securities and market incompleteness
In a competitive market, each agent makes intertemporal choices in a stochastic environment. Their attitudes toward risk, the production possibility set, and the set of available trades determine the equilibrium quantities and prices of assets that are traded. In an "idealized" representation agents are assumed to have costless contractual enforcement and perfect knowledge of future states and their likelihood. With a complete set of state contingent claims (also known as Arrow–Debreu securities) agents can trade these securities to hedge against undesirable or bad outcomes.
When a market is incomplete, it typically fails to make the optimal allocation of assets. That is, the First Welfare Theorem no longer holds. The competitive equilibrium in an Incomplete Market is generally constrained suboptimal. The notion of constrained suboptimality wa
|
https://en.wikipedia.org/wiki/International%20Safe%20Harbor%20Privacy%20Principles
|
The International Safe Harbor Privacy Principles or Safe Harbour Privacy Principles were principles developed between 1998 and 2000 in order to prevent private organizations within the European Union or United States which store customer data from accidentally disclosing or losing personal information. They enabled some US companies to comply with privacy laws protecting European Union and Swiss citizens: US companies storing customer data could self-certify that they adhered to 7 principles, to comply with the EU Data Protection Directive and with Swiss requirements. The principles were overturned on October 6, 2015, by the European Court of Justice (ECJ). The US Department of Commerce had developed the privacy frameworks in conjunction with both the European Union and the Federal Data Protection and Information Commissioner of Switzerland.
Within the context of a series of decisions on the adequacy of the protection of personal data transferred to other countries, the European Commission made a decision in 2000 that the United States' principles did comply with the EU Directive – the so-called "Safe Harbour decision". However, after a customer complained that his Facebook data were insufficiently protected, the ECJ declared in October 2015 that the Safe Harbour decision was invalid, leading to further talks being held by the Commission with the US authorities towards "a renewed and sound framework for transatlantic data flows".
The European Commission and the United States agreed to establish a new framework for transatlantic data flows on 2 February 2016, known as the "EU–US Privacy Shield", which was closely followed by the Swiss-US Privacy Shield Framework.
Background history
In 1980, the OECD issued recommendations for protection of personal data in the form of eight principles. These were non-binding and in 1995, the European Union (EU) enacted a more binding form of governance, i.e. legislation, to protect personal data privacy in the form of the Data Protection Directive
|
https://en.wikipedia.org/wiki/Restaurant%20rating
|
Restaurant ratings identify restaurants according to their quality, using notations such as stars or other symbols, or numbers. Stars are a familiar and popular symbol, with scales of one to three or five stars commonly used. Ratings appear in guide books as well as in the media, typically in newspapers, lifestyle magazines and webzines. Websites featuring consumer-written reviews and ratings are increasingly popular, but are far less reliable.
In addition, there are ratings given by public health agencies rating the level of sanitation practiced by an establishment.
Restaurant guides
One of the best known guides is the Michelin series which award one to three stars to restaurants they perceive to be of high culinary merit. One star indicates a "very good restaurant"; two stars indicate a place "worth a detour"; three stars means "exceptional cuisine, worth a special journey".
Several bigger newspapers employ restaurant critics and publish online dining guides for the cities they serve, such as the Irish Independent for Irish restaurants.
List of notable restaurant guides
Europe (original working area)
The Americas
Asia
Internet restaurant review sites have empowered regular people to generate non-expert reviews. This has sparked criticism from restaurant establishments about the non-editorial, non-professional critiques. Those reviews can be falsified or faked.
Rating criteria
The different guides have their own criteria. Not every guide looks behind the scenes or at decorum. Others look particularly sharply at value for money. This is why a restaurant can be missing from one guide while mentioned in another. Because the guides work independently, simultaneous multiple recognitions are possible.
Ratings impact
A top restaurant rating can mean success or failure for a restaurant, particularly when bestowed by an influential source such as Michelin. Still, a good rating is not enough for economic success and many Michelin-starred and/or highly ra
|
https://en.wikipedia.org/wiki/MODFLOW
|
MODFLOW is the U.S. Geological Survey modular finite-difference flow model, which is a computer code that solves the groundwater flow equation. The program is used by hydrogeologists to simulate the flow of groundwater through aquifers. The source code is free public domain software, written primarily in Fortran, and can compile and run on Microsoft Windows or Unix-like operating systems.
Since its original development in the early 1980s, the USGS has made six major releases, and is now considered to be the de facto standard code for aquifer simulation. There are several actively developed commercial and non-commercial graphical user interfaces for MODFLOW.
MODFLOW was constructed in what was, in the 1980s, called a modular design. This means it has many of the attributes of what came to be called object-oriented programming. For example, capabilities (called "packages") that simulate subsidence, lakes, or streams can easily be turned on and off, and the execution time and storage requirements of those packages then go away entirely. If a programmer wants to change something in MODFLOW, the clean organization makes it easy. Indeed, this kind of innovation is exactly what was anticipated when MODFLOW was designed.
Importantly, the modularity of MODFLOW makes it possible for different Packages to be written that are intended to address the same simulation goal in different ways. This allows differences of opinion about how system processes function to be tested. Such testing is an important part of multi-modeling, or alternative hypothesis testing. Models like MODFLOW make this kind of testing more definitive and controlled. This results because other aspects of the program remain the same. Tests become more definitive because they become less prone to being influenced unknowingly by other numerical and programming differences.
Groundwater flow equation
The governing partial differential equation for a confined aquifer used in MODFLOW is:
$$\frac{\partial}{\partial x}\left(K_{xx}\frac{\partial h}{\partial x}\right) + \frac{\partial}{\partial y}\left(K_{yy}\frac{\partial h}{\partial y}\right) + \frac{\partial}{\partial z}\left(K_{zz}\frac{\partial h}{\partial z}\right) + W = S_s\frac{\partial h}{\partial t}$$
where
$K_{xx}$, $K_{yy}$, and $K_{zz}$ are the values of hydraulic conductivity along the x, y, and z coordinate axes, $h$ is the potentiometric head, $W$ is a volumetric flux per unit volume representing sources and/or sinks of water, $S_s$ is the specific storage, and $t$ is time.
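To make the finite-difference idea concrete, here is a minimal sketch (ours, not USGS code) of the approach reduced to the steady-state one-dimensional analogue $K\,\mathrm{d}^2h/\mathrm{d}x^2 + W = 0$ with fixed-head boundaries; the grid size, parameter values, and function names are all illustrative assumptions:

```python
import numpy as np

def solve_1d_head(K, W, h_left, h_right, dx):
    """Solve K*h'' + W = 0 for head h at n interior nodes by central
    finite differences: h[i-1] - 2*h[i] + h[i+1] = -W[i]*dx**2/K."""
    n = len(W)
    A = np.zeros((n, n))
    b = -np.asarray(W, dtype=float) * dx**2 / K
    for i in range(n):
        A[i, i] = -2.0
        if i > 0:
            A[i, i - 1] = 1.0
        if i < n - 1:
            A[i, i + 1] = 1.0
    # Known boundary heads move to the right-hand side.
    b[0] -= h_left
    b[-1] -= h_right
    return np.linalg.solve(A, b)

# 100 m domain, 9 interior nodes, no sources/sinks: head drops linearly.
h = solve_1d_head(K=1e-4, W=np.zeros(9), h_left=10.0, h_right=5.0, dx=10.0)
print(h)  # ~ [9.5, 9.0, ..., 5.5]
```

MODFLOW itself assembles the analogous (much larger) linear system in three dimensions, with separate packages contributing terms for wells, rivers, recharge, and so on.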
|
https://en.wikipedia.org/wiki/United%20States%20Fish%20Commission
|
The United States Fish Commission, formally known as the United States Commission of Fish and Fisheries, was an agency of the United States government created in 1871 to investigate, promote, and preserve the fisheries of the United States. In 1903, it was reorganized as the United States Bureau of Fisheries, sometimes referred to as the United States Fisheries Service, which operated until 1940. In 1940, the Bureau of Fisheries was abolished when its personnel and facilities became part of the newly created Fish and Wildlife Service, under the United States Department of the Interior.
Organizational history
U.S. Fish Commission (1871–1903)
By the 1860s, increasing human pressure on the fish and game resources of the United States had become apparent to the United States Government, and fisheries became the first aspect of the problem to receive U.S. Government attention when Robert Barnwell Roosevelt, a Democratic congressman from New York's 4th congressional district, originated a bill in the United States House of Representatives to create the U.S. Fish Commission. It was established by a joint resolution (16 Stat. 593) of the United States Congress on February 9, 1871, as an independent agency of the U.S. Government with a mandate to investigate the causes for the decrease of commercial fish and other aquatic animals in the coastal and inland waters of the United States, to recommend remedies to the U.S. Congress and the states, and to oversee restoration efforts. With a budget of US$5,000, it began operations in 1871, organized to engage in scientific, statistical, and economic investigations of U.S. fisheries to study the "decrease of the food fishes of the seacoasts and to suggest remedial measures."
An expansion of the Fish Commission's mission followed quickly, when insistence by the American Fish Culturalist Association spurred the Congress in 1872 to add fish culture to the Fish Commission's responsibilities, with an appropriation of US$15,000 to establ
|
https://en.wikipedia.org/wiki/Intel%20HEX
|
Intel hexadecimal object file format, Intel hex format or Intellec Hex is a file format that conveys binary information in ASCII text form, making it possible to store on non-binary media such as paper tape, punch cards, etc., to display on text terminals or be printed on line-oriented printers. The format is commonly used for programming microcontrollers, EPROMs, and other types of programmable logic devices and hardware emulators. In a typical application, a compiler or assembler converts a program's source code (such as in C or assembly language) to machine code and outputs it into a HEX file. Some also use it as a container format holding packets of stream data. Common file extensions used for the resulting files are .HEX or .H86. The HEX file is then read by a programmer to write the machine code into a PROM or is transferred to the target system for loading and execution.
History
The Intel hex format was originally designed for Intel's Intellec Microcomputer Development Systems (MDS) in 1973 in order to load and execute programs from paper tape. It was also used to specify memory contents to Intel for ROM production, which previously had to be encoded in the much less efficient BNPF (Begin-Negative-Positive-Finish) format. In 1973, Intel's "software group" consisted only of Bill Byerly and Ken Burget, with Gary Kildall as an external consultant doing business as Microcomputer Applications Associates (MAA), who founded Digital Research in 1974. Beginning in 1975, the format was utilized by Intellec Series II ISIS-II systems supporting diskette drives, with files using the file extension HEX. Many PROM and EPROM programming devices accept this format.
Format
Intel HEX consists of lines of ASCII text that are separated by line feed or carriage return characters or both. Each text line contains uppercase hexadecimal characters that encode multiple binary numbers. The binary numbers may represent data, memory addresses, or other values, depending on their position
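As an illustration of the record layout, here is a minimal parser sketch of ours (not a reference implementation): each record is ":" followed by a byte count, a 16-bit big-endian address, a record type, the data bytes, and a checksum chosen so that all bytes of the record sum to zero modulo 256.

```python
def parse_record(line: str):
    """Decode one Intel HEX record ':LLAAAATTDD...CC' into its fields."""
    assert line.startswith(":"), "records begin with a colon"
    raw = bytes.fromhex(line[1:].strip())
    count = raw[0]
    addr = int.from_bytes(raw[1:3], "big")
    rtype = raw[3]
    data = raw[4:4 + count]
    # The checksum byte makes the sum of all bytes 0 modulo 256.
    if sum(raw) % 256 != 0:
        raise ValueError("checksum mismatch")
    return count, addr, rtype, data

# A data record: 16 bytes to be loaded at address 0x0100 (record type 00).
print(parse_record(":10010000214601360121470136007EFE09D2190140"))
```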
|
https://en.wikipedia.org/wiki/Bernstein%27s%20theorem%20on%20monotone%20functions
|
In real analysis, a branch of mathematics, Bernstein's theorem states that every real-valued function on the half-line that is totally monotone is a mixture of exponential functions. In one important special case the mixture is a weighted average, or expected value.
Total monotonicity (sometimes also complete monotonicity) of a function $f$ means that $f$ is continuous on $[0, \infty)$, infinitely differentiable on $(0, \infty)$, and satisfies
$$(-1)^n \frac{d^n f}{dt^n}(t) \ge 0$$
for all nonnegative integers $n$ and for all $t > 0$. Another convention puts the opposite inequality in the above definition.
The "weighted average" statement can be characterized thus: there is a non-negative finite Borel measure on with cumulative distribution function such that
the integral being a Riemann–Stieltjes integral.
In more abstract language, the theorem characterises Laplace transforms of positive Borel measures on . In this form it is known as the Bernstein–Widder theorem, or Hausdorff–Bernstein–Widder theorem. Felix Hausdorff had earlier characterised completely monotone sequences. These are the sequences occurring in the Hausdorff moment problem.
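As a concrete illustration (an example of ours, not from the text above), the function $f(t) = 1/(1+t)$ is totally monotone, and its representation exhibits it as the Laplace transform of the exponential density:

```latex
% Worked example (ours): f(t) = 1/(1+t) is totally monotone on [0, \infty):
%   f^{(n)}(t) = (-1)^n \, n! \, (1+t)^{-(n+1)},
% so (-1)^n f^{(n)}(t) \ge 0 for every n \ge 0 and every t > 0, and
\[
  \frac{1}{1+t} \;=\; \int_0^\infty e^{-tx}\, e^{-x}\, dx ,
\]
% exhibiting f as the Laplace transform of the measure e^{-x}\,dx.
```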
Bernstein functions
Nonnegative functions whose derivative is completely monotone are called Bernstein functions. Every Bernstein function $f$ has the Lévy–Khintchine representation:
$$f(t) = a + bt + \int_0^\infty \left(1 - e^{-tx}\right) \mu(dx),$$
where $a, b \ge 0$ and $\mu$ is a measure on the positive real half-line such that
$$\int_0^\infty (1 \wedge x) \, \mu(dx) < \infty.$$
|
https://en.wikipedia.org/wiki/Lymphatic%20pump
|
The lymphatic pump is a method of manipulation used by physicians who practice manual medicine (primarily osteopathic physicians).
Manual lymphatic drainage techniques remain a clinical art founded upon hypotheses, theory, and preliminary evidence.
History
The term lymphatic pump was invented by Earl Miller, D.O. to describe what was formerly known in osteopathic medicine as the thoracic pump technique.
Technique
The technique is applied to a person lying down by holding their ankle and applying gentle pressure repeatedly using the leg as a "lever" to rock the pelvis.
Relative contraindications
While no firmly established absolute contraindications exist for lymphatic techniques, the following cases are examples of relative contraindications: bone fractures, bacterial infections with fever, abscesses, and cancer.
|
https://en.wikipedia.org/wiki/Royal%20College%20of%20Pathologists%20of%20Australasia
|
The Royal College of Pathologists of Australasia, more commonly known by its acronym RCPA, is a medical organization that promotes the science and practice of pathology. The RCPA is a leading organisation representing pathologists and other senior scientists in Australasia.
History
The College of Pathologists of Australia was incorporated on 10 April 1956. In 1970, the college was granted Royal assent, and became the Royal College of Pathologists of Australia. With the increasing number of Fellows in New Zealand, the college changed its name to the Royal College of Pathologists of Australasia in January 1980. Since 1986, the college has occupied Durham Hall, a heritage listed building in Sydney's Surry Hills and the adjacent 203-205 Albion Street, Surry Hills cottages.
Programmes
Training and examinations
The college conducts training and examinations in several sub-disciplines, including:
Anatomical Pathology
Chemical Pathology
Forensic Pathology
General Pathology
Genetics
Haematology
Immunopathology
Microbiology
The college accredits laboratories for training, approves supervised training in accredited laboratories, and conducts examinations leading to Fellowship of the college (FRCPA).
Continuing Professional Development
Since its inception, the college has contributed to the continual development of knowledge and skills of its Fellows, and has established a formal Continuing Professional Development Program.
Professional Practice Standards
The college collaborated with the Commonwealth Government to establish the National Pathology Accreditation Advisory Council (NPAAC) in 1979. NPAAC advises the Commonwealth, State and Territory Health Ministers on matters relating to the accreditation of pathology laboratories, plays a key role in ensuring the quality of Australian pathology services and is responsible for the development and maintenance of standards and guidelines for pathology practices.
While NPAAC provides the standards for laboratory pra
|
https://en.wikipedia.org/wiki/Sudan%20Red%20G
|
Sudan Red G is a yellowish red lysochrome azo dye. It has the appearance of an odorless reddish-orange powder with melting point 225 °C. It is soluble in fats and used for coloring of fats, oils, and waxes, including the waxes used in turpentine-based polishes. It is also used in polystyrene, cellulose, and synthetic lacquers. It is insoluble in water. It is stable to temperatures of about 100–110 °C. It was formerly used as a food dye, but still appears to be used for this purpose in China. It is used in some temporary tattoos, where it can cause contact dermatitis. It is also used in hair dyes. It is a component of some newer formulas for red smoke signals and smoke-screens, together with Disperse Red 11.
Other Names
There are various names for Sudan Red G, including Brilliant Fat Scarlet R, C.I. Food Red 16, C.I. Solvent Red 1, C.I. 12150, Ceres Red G, Fat Red BG, Fat Red G, Lacquer Red V2G, Oil Pink, Oil Scarlet 389, Oil Vermilion, Oil Red G, Oleal Red G, Plastoresin Red FR, Red GD, Resinol Red G, Silotras Red TG, Solvent Red 1, Sudan R, and methoxybenzeneazo-β-naphthol (MBN).
Toxicity & Safety Issues
According to the European Food Safety Authority, Sudan Red G is considered genotoxic and/or carcinogenic.
|
https://en.wikipedia.org/wiki/Ensoniq%20ES-5506%20OTTO
|
The Ensoniq ES-5506 "OTTO" is a chip used in implementations of sample-based synthesis. Musical instruments and IBM PC compatible sound cards were the most popular applications.
OTTO is capable of altering the pitch and timbre of a digital recording and is capable of operating with up to 32 channels at once. Each channel can have several parameters altered, such as pitch, volume, waveform, and filtering. The chip is a VLSI device designed to be manufactured on a 1.5 micrometre double-metal CMOS process. It consists of approximately 80,000 transistors. It was part of the fourth generation of Ensoniq audio technology.
Major features
Real-time digital filters
Frequency interpolation
32 independent voices
Loop start and stop positions for each voice (bidirectional and reverse looping)
Motorola 68000 compatibility for asynchronous bus communication
Separate host and sound memory interface
At least 18-bit accuracy
6-channel stereo serial communication port
Programmable clocks for defining a serial protocol
Internal volume multiplication and stereo panning
ADC input for pots and wheels
Hardware support for envelopes
Support for dual OTTO systems
Optional compressed data format for sample data
Up to 16 MHz operation
Implementations
Taito Cybercore/F3 System
Seta SSV System
Ensoniq TS10/TS12 Synthesizers
Ensoniq Soundscape S-2000
Ensoniq Soundscape Elite
Ensoniq SoundscapeDB daughterboard
Gravis Ultrasound Gravis GF1 chip (Ensoniq based)
Westacott Organs DRE (Digital Rank Emulator)
QRS Pianomation Chili and Sonata MIDI players 1998
Boom Theory Corp 0.0 Drum Module Interface
|
https://en.wikipedia.org/wiki/Pseudospectrum
|
In mathematics, the pseudospectrum of an operator is a set containing the spectrum of the operator and the numbers that are "almost" eigenvalues. Knowledge of the pseudospectrum can be particularly useful for understanding non-normal operators and their eigenfunctions.
The ε-pseudospectrum of a matrix A consists of all eigenvalues of matrices which are ε-close to A:
$$\sigma_\varepsilon(A) = \{\lambda \in \mathbb{C} \mid \exists E \colon \|E\| \le \varepsilon,\ \lambda \in \sigma(A + E)\}.$$
Numerical algorithms which calculate the eigenvalues of a matrix give only approximate results due to rounding and other errors. These errors can be described with the matrix E.
More generally, for Banach spaces $X$ and operators $A \colon X \to X$, one can define the $\varepsilon$-pseudospectrum of $A$ (typically denoted by $\sigma_\varepsilon(A)$) in the following way
$$\sigma_\varepsilon(A) = \{\lambda \in \mathbb{C} \mid \|(\lambda I - A)^{-1}\| \ge 1/\varepsilon\},$$
where we use the convention that $\|(\lambda I - A)^{-1}\| = \infty$ if $\lambda I - A$ is not invertible.
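In computations one uses the equivalent characterization via singular values: $\lambda$ lies in $\sigma_\varepsilon(A)$ exactly when $\sigma_{\min}(\lambda I - A) \le \varepsilon$. A minimal sketch of ours, with an illustrative non-normal matrix:

```python
import numpy as np

def smin_grid(A, re, im):
    """Evaluate sigma_min(z*I - A) on the grid z = re + i*im; the contour
    lines smin = eps then outline the eps-pseudospectra."""
    n = A.shape[0]
    smin = np.empty((len(im), len(re)))
    for i, y in enumerate(im):
        for j, x in enumerate(re):
            M = complex(x, y) * np.eye(n) - A
            smin[i, j] = np.linalg.svd(M, compute_uv=False)[-1]
    return smin

# Non-normal example: both eigenvalues are 0, yet sigma_min stays small
# far from the origin, so even small-eps pseudospectra are large regions.
A = np.array([[0.0, 100.0], [0.0, 0.0]])
grid = np.linspace(-2.0, 2.0, 81)
print(smin_grid(A, grid, grid).min())
```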
Notes
Bibliography
Lloyd N. Trefethen and Mark Embree: "Spectra And Pseudospectra: The Behavior of Nonnormal Matrices And Operators", Princeton Univ. Press, (2005).
External links
Pseudospectra Gateway by Embree and Trefethen
Numerical linear algebra
Spectral theory
|
https://en.wikipedia.org/wiki/List%20of%20animals%20representing%20first-level%20administrative%20country%20subdivisions
|
This is a list of animals that represent first-level administrative country subdivisions.
List by country
Australia
Brazil
See also List of Brazilian state birds
Canada
People's Republic of China
India
Indonesia
Italy
Japan
Netherlands
Malaysia
Mexico
Pakistan
Philippines
Romania
South Korea
Spain
Sweden
United Kingdom
United States
See also
Floral emblem
List of national birds
National emblem
|
https://en.wikipedia.org/wiki/Social%20disruption
|
Social disruption is a term used in sociology to describe the alteration, dysfunction or breakdown of social life, often in a community setting. Social disruption implies a radical transformation, in which the old certainties of modern society are falling away and something quite new is emerging. Social disruption might be caused by natural disasters, massive human displacements, and rapid economic, technological and demographic change, but also by controversial policy-making.
Examples of social disruption include rising sea levels that create new landscapes, drawing new world maps whose key lines are not traditional boundaries between nation-states but elevations above sea level. On the local level, an example would be the closing of a community grocery store, which might cause social disruption in a community by removing a "meeting ground" for community members to develop interpersonal relationships and community solidarity.
Results of social disruption
"We are wandering aimlessly and dispassionately, arguing for and against, but the one statement on which we are, beyond all differences and over many continents, to be able to agree on, is: "I can no longer understand the world".
Social disruptions often lead to five social symptoms: Frustration, Democratic Disconnection, Fragmentation, Polarization and Escalation. Studies from the last decade show that our societies have become more fragmented and less coherent (e.g. Bishop 2008), with neighbourhoods turning into little states, organizing themselves to defend local politics and culture against outsiders (Walzer 1983; Bauman 2017) and increasingly identifying through ways of voting, lifestyle or wellbeing (e.g. Schäfer 2015). People on the more extreme right and left of the political spectrum are especially likely to say it is important to them to live in a place where most people share their political views and have similar interests (Pew 2014). Hence, citizens become alienated from democratic consensus (Foa and
|
https://en.wikipedia.org/wiki/Mathers%20table
|
The Mathers table of Hebrew and "Chaldee" (Aramaic) letters is a tabular display of the pronunciation, appearance, numerical values, transliteration, names, and symbolism of the twenty-two letters of the Hebrew alphabet appearing in The Kabbalah Unveiled, S.L. MacGregor Mathers' late 19th century English translation of Kabbala Denudata, itself a Latin translation by Christian Knorr von Rosenroth of the Zohar, a primary Kabbalistic text.
This table has been used as a primary reference for a basic understanding of the Hebrew alphabet as it applies to the Kabbalah, generally outside of traditional Jewish mysticism, by many modern Hermeticists and students of the occult, including members of the Hermetic Order of the Golden Dawn and other magical organizations deriving from it. It has been reproduced and adapted in many books published from the early 20th century to the present.
See also
Gematria
Hebrew language
Kabbalah
Mysticism
Notaricon
|
https://en.wikipedia.org/wiki/Integrated%20Encryption%20Scheme
|
Integrated Encryption Scheme (IES) is a hybrid encryption scheme which provides semantic security against an adversary who is able to use chosen-plaintext or chosen-ciphertext attacks. The security of the scheme is based on the computational Diffie–Hellman problem.
Two variants of IES are specified: Discrete Logarithm Integrated Encryption Scheme (DLIES) and Elliptic Curve Integrated Encryption Scheme (ECIES), which is also known as the Elliptic Curve Augmented Encryption Scheme or simply the Elliptic Curve Encryption Scheme. These two variants are identical up to the change of an underlying group.
Informal description of DLIES
As a brief and informal description and overview of how IES works, a Discrete Logarithm Integrated Encryption Scheme (DLIES) is used, focusing on illuminating the reader's understanding, rather than precise technical details.
Alice learns Bob's public key $g^x$ through a public key infrastructure or some other distribution method. Bob knows his own private key $x$.
Alice generates a fresh, ephemeral value $y$, and its associated public value $g^y$.
Alice then computes a symmetric key $k$ using this information and a key derivation function (KDF) as follows: $k = \mathrm{KDF}(g^{xy})$.
Alice computes her ciphertext $c$ from her actual message $m$ (by symmetric encryption of $m$) encrypted with the key $k$ (using an authenticated encryption scheme) as follows: $c = E(k; m)$.
Alice transmits (in a single message) both the public ephemeral $g^y$ and the ciphertext $c$.
Bob, knowing $x$ and $g^y$, can now compute $k = \mathrm{KDF}(g^{xy})$ and decrypt $m$ from $c$.
Note that the scheme does not provide Bob with any assurance as to who really sent the message: This scheme does nothing to stop anyone from pretending to be Alice.
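The following is a minimal sketch of this flow in Python, substituting concrete primitives (X25519 for the group exponentiation, HKDF as the KDF, AES-GCM as the authenticated cipher); these primitive choices and all names are our illustrative assumptions, not part of any DLIES/ECIES specification:

```python
import os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric.x25519 import (
    X25519PrivateKey, X25519PublicKey)
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def kdf(shared: bytes) -> bytes:
    # Derive a 256-bit symmetric key k from the shared group element.
    return HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                info=b"dlies-demo").derive(shared)

# Bob's long-term key pair (x, g^x).
bob_priv = X25519PrivateKey.generate()
bob_pub = bob_priv.public_key()

# Alice: fresh ephemeral (y, g^y); k = KDF(g^xy); authenticated encryption.
alice_eph = X25519PrivateKey.generate()
k = kdf(alice_eph.exchange(bob_pub))
nonce = os.urandom(12)
ct = AESGCM(k).encrypt(nonce, b"attack at dawn", None)
eph_bytes = alice_eph.public_key().public_bytes(
    serialization.Encoding.Raw, serialization.PublicFormat.Raw)

# Bob: recompute k from (x, g^y) and decrypt.
k2 = kdf(bob_priv.exchange(X25519PublicKey.from_public_bytes(eph_bytes)))
print(AESGCM(k2).decrypt(nonce, ct, None))  # b'attack at dawn'
```

As the note above says, nothing here authenticates Alice to Bob; a signature or static-key variant would be needed for sender authentication.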
Formal description of ECIES
Required information
To send an encrypted message to Bob using ECIES, Alice needs the following information:
The cryptography suite to be used, including a key derivation function (e.g., ANSI-X9.63-KDF with SHA-1 option), a message authentication code (e.g., HMAC-SHA-1-160 with 160-bit keys or HMAC-SHA-1
|
https://en.wikipedia.org/wiki/Discontinuous%20linear%20map
|
In mathematics, linear maps form an important class of "simple" functions which preserve the algebraic structure of linear spaces and are often used as approximations to more general functions (see linear approximation). If the spaces involved are also topological spaces (that is, topological vector spaces), then it makes sense to ask whether all linear maps are continuous. It turns out that for maps defined on infinite-dimensional topological vector spaces (e.g., infinite-dimensional normed spaces), the answer is generally no: there exist discontinuous linear maps. If the domain of definition is complete, it is trickier; such maps can be proven to exist, but the proof relies on the axiom of choice and does not provide an explicit example.
A linear map from a finite-dimensional space is always continuous
Let $X$ and $Y$ be two normed spaces and $f$ a linear map from $X$ to $Y$. If $X$ is finite-dimensional, choose a basis $(e_1, e_2, \ldots, e_n)$ in $X$ which may be taken to be unit vectors. Then,
$$f(x) = \sum_{i=1}^n x_i f(e_i),$$
and so by the triangle inequality,
$$\|f(x)\| = \left\|\sum_{i=1}^n x_i f(e_i)\right\| \le \sum_{i=1}^n |x_i| \, \|f(e_i)\|.$$
Letting
$$M = \sup_i \|f(e_i)\|,$$
and using the fact that
$$\sum_{i=1}^n |x_i| \le C \|x\|$$
for some $C > 0$, which follows from the fact that any two norms on a finite-dimensional space are equivalent, one finds
$$\|f(x)\| \le C M \|x\|.$$
Thus, $f$ is a bounded linear operator and so is continuous. In fact, to see this, simply note that $f$ is linear,
and therefore $\|f(x) - f(y)\| = \|f(x - y)\| \le K \|x - y\|$ for some universal constant $K$. Thus for any $\varepsilon > 0$,
we can choose $\delta \le \varepsilon / K$ so that $f(B(x, \delta)) \subseteq B(f(x), \varepsilon)$ ($B(x, \delta)$ and $B(f(x), \varepsilon)$
are the normed balls around $x$ and $f(x)$), which gives continuity.
If X is infinite-dimensional, this proof will fail as there is no guarantee that the supremum M exists. If Y is the zero space {0}, the only map between X and Y is the zero map which is trivially continuous. In all other cases, when X is infinite-dimensional and Y is not the zero space, one can find a discontinuous map from X to Y.
A concrete example
Examples of discontinuous linear maps are easy to construct in spaces that are not complete; on any Cauchy sequence of linearly independent vectors which does not have a limit, there is a linear operator such
|
https://en.wikipedia.org/wiki/Metaman
|
Metaman: The Merging of Humans and Machines into a Global Superorganism () is a 1993 book by author Gregory Stock. The title refers to a superorganism comprising humanity and its technology.
While many people have had ideas about a global brain, they have tended to suppose that this can be improved or altered by humans according to their will. Metaman can be seen as a development that directs humanity's will to its own ends, whether it likes it or not, through the operation of market forces. While it is difficult to think of making a life-form based on metals that can mine its own 'food', it is possible to imagine a superorganism that incorporates humans as its "cells" and entices them to sustain it (communalness), just as our cells interwork to sustain us.
External links
Review of Metaman, by Hans Moravec
Review of Metaman, by Patric Hedlund
Systems theory
Cybernetics
Superorganisms
Futurology books
1993 non-fiction books
|
https://en.wikipedia.org/wiki/Phycobiliprotein
|
Phycobiliproteins are water-soluble proteins present in cyanobacteria and certain algae (rhodophytes, cryptomonads, glaucocystophytes). They capture light energy, which is then passed on to chlorophylls during photosynthesis. Phycobiliproteins are formed of a complex between proteins and covalently bound phycobilins that act as chromophores (the light-capturing part). They are most important constituents of the phycobilisomes.
Major phycobiliproteins
Characteristics
Phycobiliproteins demonstrate superior fluorescent properties compared to small organic fluorophores, especially when high sensitivity or multicolor detection is required:
Broad and high absorption of light suits many light sources
Very intense emission of light: 10-20 times brighter than small organic fluorophores
Relatively large Stokes shift gives low background and allows multicolor detection.
Excitation and emission spectra do not overlap compared to conventional organic dyes.
Can be used in tandem (simultaneous use by FRET) with conventional chromophores (i.e. PE and FITC, or APC and SR101 with the same light source).
Longer fluorescence retention period.
High water solubility
Applications
Phycobiliproteins allow very high detection sensitivity, and can be used in various fluorescence-based techniques such as fluorimetric microplate assays, FISH, and multicolor detection.
They are under development for use in artificial photosynthesis, limited by the relatively low conversion efficiency of 4-5%.
|
https://en.wikipedia.org/wiki/Lipoteichoic%20acid
|
Lipoteichoic acid (LTA) is a major constituent of the cell wall of gram-positive bacteria. These organisms have an inner (or cytoplasmic) membrane and, external to it, a thick (up to 80 nanometer) peptidoglycan layer. The structure of LTA varies between the different species of Gram-positive bacteria and may contain long chains of ribitol or glycerol phosphate. LTA is anchored to the cell membrane via a diacylglycerol. It acts as regulator of autolytic wall enzymes (muramidases). It has antigenic properties being able to stimulate specific immune response.
LTA may bind to target cells non-specifically through membrane phospholipids, or specifically to CD14 and to Toll-like receptors. Binding to TLR-2 has been shown to induce expression of NF-κB (a central transcription factor), elevating expression of both pro- and anti-apoptotic genes. Its activation also induces mitogen-activated protein kinases (MAPK) activation along with phosphoinositide 3-kinase activation.
Studies
LTA's molecular structure has been found to have the strongest hydrophobic bonds of an entire bacterium.
Said et al. showed that LTA causes an IL-10-dependent inhibition of CD4 T-cell expansion and function by up-regulating PD-1 levels on monocytes which leads to IL-10 production by monocytes after binding of PD-1 by PD-L.
Lipoteichoic acid (LTA) from Gram-positive bacteria exerts different immune effects depending on the bacterial source from which it is isolated. For example, LTA from Enterococcus faecalis is a virulence factor positively correlating to inflammatory damage to teeth during acute infection. On the other hand, a study reported Lacticaseibacillus rhamnosus GG LTA (LGG-LTA) oral administration reduces UVB-induced immunosuppression and skin tumor development in mice. In animal studies, specific bacterial LTA has been correlated with induction of arthritis, nephritis, uveitis, encephalomyelitis, meningeal inflammation, and periodontal lesions, and also triggered cascades resulting in
|
https://en.wikipedia.org/wiki/Alexander%20Razborov
|
Aleksandr Aleksandrovich Razborov (; born February 16, 1963), sometimes known as Sasha Razborov, is a Soviet and Russian mathematician and computational theorist. He is Andrew McLeish Distinguished Service Professor at the University of Chicago.
Research
In his best known work, joint with Steven Rudich, he introduced the notion of natural proofs, a class of strategies used to prove fundamental lower bounds in computational complexity. In particular, Razborov and Rudich showed that, under the assumption that certain kinds of one-way functions exist, such proofs cannot give a resolution of the P = NP problem, so new techniques will be required in order to solve this question.
Awards
Nevanlinna Prize (1990) for introducing the "approximation method" in proving Boolean circuit lower bounds of some essential algorithmic problems,
Erdős Lecturer, Hebrew University of Jerusalem, 1998.
Corresponding member of the Russian Academy of Sciences (2000)
Gödel Prize (2007, with Steven Rudich) for the paper "Natural Proofs."
David P. Robbins Prize for the paper "On the minimal density of triangles in graphs" (Combinatorics, Probability and Computing 17 (2008), no. 4, 603–618), and for introducing a new powerful method, flag algebras, to solve problems in extremal combinatorics
Gödel Lecturer (2010) with the lecture titled Complexity of Propositional Proofs.
Andrew MacLeish Distinguished Service Professor (2008) in the Department of Computer Science, University of Chicago.
Fellow of the American Academy of Arts and Sciences (AAAS) (2020).
Bibliography
(PhD thesis. 32.56MB)
(Survey paper for JACM's 50th anniversary)
See also
Avi Wigderson
Circuit complexity
Free group
Natural proofs
One-way function
Pseudorandom function family
Resolution (logic)
Notes
External links
.
Alexander Razborov's Home Page.
All-Russian Mathematical Portal: Persons: Razborov Alexander Alexandrovich.
Biography sketch in the Toyota Technological Institute at Chicago.
Curricula Vitae a
|
https://en.wikipedia.org/wiki/Glufosinate
|
Glufosinate (also known as phosphinothricin and often sold as an ammonium salt) is a naturally occurring broad-spectrum herbicide produced by several species of Streptomyces soil bacteria. Glufosinate is a non-selective, contact herbicide, with some systemic action. Plants may also metabolize bialaphos and phosalacine, other naturally occurring herbicides, directly into glufosinate. The compound irreversibly inhibits glutamine synthetase, an enzyme necessary for the production of glutamine and for ammonia detoxification, giving it antibacterial, antifungal and herbicidal properties. Application of glufosinate to plants leads to reduced glutamine and elevated ammonia levels in tissues, halting photosynthesis and resulting in plant death.
Discovery
In the 1960s and early 1970s, scientists at University of Tübingen and at the Meiji Seika Kaisha Company independently discovered that species of Streptomyces bacteria produce a tripeptide they called bialaphos that inhibits bacteria; it consists of two alanine residues and a unique amino acid that is an analog of glutamate that they named "phosphinothricin". They determined that phosphinothricin irreversibly inhibits glutamine synthetase. Phosphinothricin was first synthesized by scientists at Hoechst in the 1970s as a racemic mixture; this racemic mixture is called glufosinate and is the commercially relevant version of the chemical.
In the late 1980s scientists discovered enzymes in these Streptomyces species that selectively inactivate free phosphinothricin; the gene encoding the enzyme that was isolated from Streptomyces hygroscopicus was called the "bialaphos resistance" or "bar" gene, and the gene encoding the enzyme in Streptomyces viridochromogenes was called "phosphinothricin acetyltransferase" or "pat". The two genes and their proteins have 80% homology on the DNA level and 86% amino acid homology, and are each 158 amino acids long.
Use
Glufosinate is a broad-spectrum herbicide that is used to control impor
|
https://en.wikipedia.org/wiki/Beniamino%20Segre
|
Beniamino Segre (16 February 1903 – 2 October 1977) was an Italian mathematician who is remembered today as a major contributor to algebraic geometry and one of the founders of finite geometry.
Life and career
He was born and studied in Turin. Corrado Segre, his uncle, also served as his doctoral advisor.
Among his main contributions to algebraic geometry are studies of birational invariants of algebraic varieties, singularities and algebraic surfaces. His work was in the style of the old Italian School, although he also appreciated the greater rigour of modern algebraic geometry.
Segre was a pioneer in finite geometry, in particular projective geometry based on vector spaces over a finite field. In a well-known paper he proved the following theorem: In a Desarguesian plane of odd order, the ovals are exactly the irreducible conics. In 1959 he authored a survey "Le geometrie di Galois" on Galois geometry. According to J. W. P. Hirschfeld, it "gave a comprehensive list of results and methods, and is to my mind the seminal paper in the subject."
Some critics felt that his work was no longer geometry, but today it is recognized as a separate sub-discipline: finite geometry or combinatorial geometry. According to Hirschfeld, "He published the most as well as the deepest papers in the subject. His enormous knowledge of classical algebraic geometry enabled him to identify those results which could be applied to finite spaces. His theorem on the characterization of conics (Segre's theorem) not only stimulated a great deal of research but also made many mathematicians realize that finite spaces were worth studying."
In 1938 he lost his professorship at the University of Bologna, as a result of the anti-Jewish laws enacted under Benito Mussolini's government. He spent the next 8 years in Great Britain (mostly at the University of Manchester), then returned to Italy to resume his academic career.
Selected publications
.
. The second volume was never published: howeve
|
https://en.wikipedia.org/wiki/The%20Sword%20of%20Damocles%20%28virtual%20reality%29
|
The Sword of Damocles was the name for an early virtual reality (VR) head-mounted display and tracking system. It is widely considered to be the first augmented reality HMD system, although Morton Heilig had already created a stereoscopic head-mounted viewing apparatus without head tracking (known as "Stereoscopic-Television Apparatus for Individual Use" or "Telesphere Mask") earlier, patented in 1960.
The Sword of Damocles was created in 1968 by computer scientist Ivan Sutherland with the help of his students Bob Sproull, Quintin Foster, and Danny Cohen. Before he began working toward what he termed "the ultimate display", Ivan Sutherland was already well respected for his accomplishments in computer graphics (see Sketchpad). At MIT's Lincoln Laboratory beginning in 1966, Sutherland and his colleagues performed what are widely believed to be the first experiments with head-mounted displays of different kinds.
Features
The device was primitive both in terms of user interface and realism, and the graphics comprising the virtual environment were simple wireframe rooms. Sutherland's system displayed output from a computer program in the stereoscopic display. The perspective that the software showed the user would depend on the position of the user's gaze – which is why head tracking was necessary. The HMD had to be attached to a mechanical arm suspended from the ceiling of the lab partially due to its weight, and primarily to track head movements via linkages. The formidable appearance of the mechanism inspired its name. While using The Sword of Damocles, a user had to have his or her head securely fastened into the device to perform the experiments. At this time, the various components being tested were not fully integrated with one another.
Development
When Sutherland moved to the University of Utah in the late 1960s, work on integrating the various components into a single HMD system was begun. By the end of the decade, the first fully functional inte
|
https://en.wikipedia.org/wiki/Bahariasaurus
|
Bahariasaurus (meaning "Bahariya lizard") is an enigmatic genus of large theropod dinosaur. Bahariasaurus is known to have included at least one species, Bahariasaurus ingens, which was found in North African rock layers dating to the Cenomanian and Turonian ages of the Late Cretaceous. The only fossils confidently assigned to Bahariasaurus were found in the Bahariya Formation of the Bahariya (Arabic: الواحة البحرية, meaning the "northern oasis") oasis in Egypt by Ernst Stromer but were destroyed during a World War II bombing raid; the same raid destroyed the holotypes of Spinosaurus and Aegyptosaurus, among other animals found in the Bahariya Formation. While there have been more fossils assigned to the genus, such as some from the Farak Formation of Niger, these remains are referred to with much less certainty. Bahariasaurus is, by most estimations, one of the largest theropods, approaching the height and length of other large-bodied theropods such as Tyrannosaurus rex and the contemporaneous Carcharodontosaurus. The aforementioned estimations tend to put it at around 11–12 metres (36–39 ft) in length and 4 tonnes in overall weight.
History
Bahariasaurus was found during the 1910s in an expedition to Egypt's Bahariya Formation led by Markgraf and Stromer; the holotype, specimen 1912 VIII 62, was discovered in 1911. The type species, B. ingens, was described by Ernst Stromer in 1934. This specimen was destroyed in an air raid during World War II on the night of 23/24 April 1944.
The questionable remains of Bahariasaurus from the Farak Formation of Niger, which consist of a proximal caudal centrum (65 mm), two mid caudal centra and three mid caudal
|
https://en.wikipedia.org/wiki/Hjelmslev%20transformation
|
In mathematics, the Hjelmslev transformation is an effective method for mapping an entire hyperbolic plane into a circle with a finite radius. The transformation was invented by Danish mathematician Johannes Hjelmslev. It utilizes Nikolai Ivanovich Lobachevsky's 23rd theorem from his work Geometrical Investigations on the Theory of Parallels.
Lobachevsky observes, using a combination of his 16th and 23rd theorems, that it is a fundamental characteristic of hyperbolic geometry that there must exist a distinct angle of parallelism for any given line length. Let us say for the length AE, its angle of parallelism is angle BAF. This being the case, line AH and EJ will be hyperparallel, and therefore will never meet. Consequently, any line drawn perpendicular to base AE between A and E must necessarily cross line AH at some finite distance. Johannes Hjelmslev discovered from this a method of compressing an entire hyperbolic plane into a finite circle. The method is as follows: for any angle of parallelism, draw from its line AE a perpendicular to the other ray; using that cutoff length, e.g., AH, as the radius of a circle, "map" the point H onto the line AE. This point H thus mapped must fall between A and E. By applying this process for every line within the plane, the infinite hyperbolic space thus becomes contained and planar. Hjelmslev's transformation does not yield a proper circle however. The circumference of the circle created does not have a corresponding location within the plane, and therefore, the product of a Hjelmslev transformation is more aptly called a Hjelmslev Disk. Likewise, when this transformation is extended in all three dimensions, it is referred to as a Hjelmslev Ball.
There are a few properties that are retained through the transformation which enable valuable information to be ascertained therefrom, namely:
The image of a circle sharing the center of the transformation will be a circle about this same center.
As a result, the images of all th
|
https://en.wikipedia.org/wiki/Cayenne%20nightjar
|
The Cayenne nightjar (Setopagis maculosa) is a species of bird in the nightjar family only known from a single specimen, a male taken on the Fleuve Mana, French Guiana, in 1917. However, a possible female was caught at the Saül airstrip, French Guiana, in 1982.
Taxonomy and systematics
The Cayenne nightjar was originally described in 1920 as Nyctipolus maculosus and was later lumped into genus Caprimulgus. Some authors contended that it is not a species, but a subspecies of blackish nightjar (Nyctipolus nigrescens). By the early 2000s it was generally recognized as a species and has been placed in its current genus Setopagis since the early 2010s. Nevertheless, the South American Classification Committee of the American Ornithological Society (AOS) states, "Its placement in Setopagis is entirely tentative", and the Cornell Lab of Ornithology's Birds of the World agrees.
Description
The Cayenne nightjar is known with certainty only from the holotype, a male. The specimen was measured by two researchers as long. The upperparts are grayish-brown with cinnamon spots and broad blackish brown streaks. The face is mostly chestnut. The hindneck has a narrow, indistinct tawny collar with brown bars. The wing coverts are grayish brown, heavily spotted with buff and cinnamon; the scapulars are blackish brown, broadly edged with buff. The tail is brown with a few small white spots and is otherwise mottled with grayish brown and barred with blackish brown. The chin, throat, and upper breast are buff with a chestnut tinge and brown bars. The belly and flanks are buff with brown bars. The individual captured in 1982 differed in minor ways and could have been either a female or an immature male. It was not photographed so there is no permanent record of its appearance.
Distribution and habitat
The only positively identified Cayenne nightjar was collected at Tamanoir, French Guiana, in 1917. The 1982 putative female was captured approximately southeast of that site, and addit
|
https://en.wikipedia.org/wiki/Albright%27s%20hereditary%20osteodystrophy
|
Albright's hereditary osteodystrophy is a form of osteodystrophy, and is classified as the phenotype of pseudohypoparathyroidism type 1A; this is a condition in which the body does not respond to parathyroid hormone.
Signs and symptoms
The disorder is characterized by the following:
Hypogonadism
Brachydactyly syndrome
Choroid plexus calcification
Hypoplasia of dental enamel
Full cheeks
Hypocalcemic tetany
Individuals with Albright hereditary osteodystrophy exhibit short stature, characteristically shortened fourth and fifth metacarpals, rounded facies, and often mild intellectual deficiency.
Albright hereditary osteodystrophy is commonly known as pseudohypoparathyroidism because the kidney responds as if parathyroid hormone were absent. Blood levels of parathyroid hormone are elevated in pseudohypoparathyroidism due to the hypocalcemia.
Genetics
This condition is associated with genetic imprinting. It is thought to be inherited in an autosomal dominant pattern, and seems to be associated with a Gs alpha subunit deficiency.
Mechanism
The mechanism of this condition is a decrease in Gs signaling for hormones that rely on signal transduction, the process by which a signal from outside the cell causes a change in function within the cell. Renal tubule cells express only the maternal allele (one variant form of the gene).
Diagnosis
The diagnosis of Albright's hereditary osteodystrophy is based on the following exams:
Clinical features
Serum calcium, phosphorus, PTH
Urine test for cAMP and phosphorus
Treatment
Treatment consists of maintaining normal levels of calcium, phosphorus, and vitamin D. Phosphate binders, supplementary calcium and vitamin D will be used as required.
History
The disorder bears the name of Fuller Albright, who characterized it in 1942. He was also responsible for naming it "Sebright bantam syndrome," after the Sebright bantam chicken, which demonstrates an analogous hormone insensitivity. Much less commonly, the term
|
https://en.wikipedia.org/wiki/Binet%E2%80%93Cauchy%20identity
|
In algebra, the Binet–Cauchy identity, named after Jacques Philippe Marie Binet and Augustin-Louis Cauchy, states that
$$\left(\sum_{i=1}^n a_i c_i\right)\left(\sum_{j=1}^n b_j d_j\right) = \left(\sum_{i=1}^n a_i d_i\right)\left(\sum_{j=1}^n b_j c_j\right) + \sum_{1 \le i < j \le n} (a_i b_j - a_j b_i)(c_i d_j - c_j d_i)$$
for every choice of real or complex numbers (or more generally, elements of a commutative ring).
Setting $c_i = a_i$ and $d_j = b_j$, it gives Lagrange's identity, which is a stronger version of the Cauchy–Schwarz inequality for the Euclidean space $\mathbb{R}^n$. The Binet–Cauchy identity is a special case of the Cauchy–Binet formula for matrix determinants.
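A quick numerical sanity check of the identity (our illustration, using random real vectors):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
a, b, c, d = rng.standard_normal((4, n))

# Left- and right-hand sides of the Binet-Cauchy identity.
lhs = (a @ c) * (b @ d)
rhs = (a @ d) * (b @ c) + sum(
    (a[i] * b[j] - a[j] * b[i]) * (c[i] * d[j] - c[j] * d[i])
    for i in range(n) for j in range(i + 1, n))
print(np.isclose(lhs, rhs))  # True
```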
The Binet–Cauchy identity and exterior algebra
When $n = 3$, the first and second terms on the right hand side become the squared magnitudes of dot and cross products respectively; in $n$ dimensions these become the magnitudes of the dot and wedge products. We may write it
$$(a \cdot c)(b \cdot d) = (a \cdot d)(b \cdot c) + (a \wedge b) \cdot (c \wedge d),$$
where $a$, $b$, $c$, and $d$ are vectors. It may also be written as a formula giving the dot product of two wedge products, as
$$(a \wedge b) \cdot (c \wedge d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c),$$
which can be written as
$$(a \times b) \cdot (c \times d) = (a \cdot c)(b \cdot d) - (a \cdot d)(b \cdot c)$$
in the $n = 3$ case.
In the special case $a = c$ and $b = d$, the formula yields
$$|a \wedge b|^2 = |a|^2 |b|^2 - |a \cdot b|^2.$$
When both $a$ and $b$ are unit vectors, we obtain the usual relation
$$\sin^2 \varphi = 1 - \cos^2 \varphi,$$
where $\varphi$ is the angle between the vectors.
This is a special case of the inner product on the exterior algebra of a vector space, which is defined on wedge-decomposable elements as the Gram determinant of their components.
Einstein notation
A relationship between the Levi–Civita symbols and the generalized Kronecker delta is
The form of the Binet–Cauchy identity can be written as
Proof
Expanding the last term,
where the second and fourth terms are the same and artificially added to complete the sums as follows:
This completes the proof after factoring out the terms indexed by i.
Generalization
A general form, also known as the Cauchy–Binet formula, states the following:
Suppose A is an m×n matrix and B is an n×m matrix. If S is a subset of {1, ..., n} with m elements, we write AS for the m×m matrix whose columns are those columns of A that have indices from S. Similarly, we write BS for the m×m matrix whose rows are those rows of B that have indices from S.
Then the determinant of the matrix prod
|
https://en.wikipedia.org/wiki/Radner%20equilibrium
|
Radner equilibrium is an economic concept defined by economist Roy Radner in the context of general equilibrium. The concept is an extension of the Arrow–Debreu equilibrium and the base for the first consistent incomplete markets framework.
The concept departs from the Arrow-Debreu framework in two ways:
Uncertainty is explicitly modeled through a tree structure (or equivalent filtration) rendering passage of time and resolution of uncertainty explicit.
Budget feasibility is no longer defined as affordability but through explicit trading of financial instruments. Financial instruments are used to allow insurance and inter-temporal wealth transfers across spot markets at each nodes of the tree. Economic agents face a sequence of budget sets, one at each date-state.
Item (2) introduces the concept of incomplete markets. Formulated in terms of net trades, the budget set is contained in a half space intersecting the positive cone of contingent goods at zero net trade only (this is called absence of arbitrage). This is because, without transaction costs, agents would demand an infinite amount of any trade promising positive consumption in some state with no negative net trade against it in any other good and state. This half space, containing the budget set and separating it from the free-lunch cone, corresponds to a half line of positive prices. However, if not enough instruments are present, the full half space may not be spanned by trading the instruments and the budget set may be strictly smaller. In such a configuration markets are said to be incomplete, and there are several ways to separate the budget set from the positive cone (sometimes called the free-lunch cone). This means that several price systems become admissible.
At a Radner equilibrium, as in the Arrow–Debreu equilibrium under uncertainty, perfect consensual foresight is assumed. It is what is called a rational expectations model.
Further reading
External links
Radner equilibrium
Eponyms
|
https://en.wikipedia.org/wiki/Essential%20infimum%20and%20essential%20supremum
|
In mathematics, the concepts of essential infimum and essential supremum are related to the notions of infimum and supremum, but adapted to measure theory and functional analysis, where one often deals with statements that are not valid for all elements in a set, but rather almost everywhere, that is, except on a set of measure zero.
While the exact definition is not immediately straightforward, intuitively the essential supremum of a function is the smallest value that is greater than or equal to the function values everywhere while ignoring what the function does at a set of points of measure zero. For example, if one takes the function $f(x)$ that is equal to zero everywhere except at $x = 0$ where $f(0) = 1$, then the supremum of the function equals one. However, its essential supremum is zero because we are allowed to ignore what the function does at the single point where $f$ is peculiar. The essential infimum is defined in a similar way.
Definition
As is often the case in measure-theoretic questions, the definition of essential supremum and infimum does not start by asking what a function does at points (that is, the image of ), but rather by asking for the set of points where equals a specific value (that is, the preimage of under ).
Let $f : X \to \mathbb{R}$ be a real valued function defined on a set $X$. The supremum of a function $f$ is characterized by the following property: $f(x) \le \sup f$ for all $x \in X$ and if for some $a$ we have $f(x) \le a$ for all $x \in X$ then $\sup f \le a$.
More concretely, a real number $a$ is called an upper bound for $f$ if $f(x) \le a$ for all $x \in X$; that is, if the set
$$f^{-1}((a, \infty)) = \{x \in X : f(x) > a\}$$
is empty. Let
$$U_f = \{a \in \mathbb{R} : f^{-1}((a, \infty)) = \varnothing\}$$
be the set of upper bounds of $f$ and define the infimum of the empty set by $\inf \varnothing = +\infty$. Then the supremum of $f$ is
$$\sup f = \inf U_f$$
if the set of upper bounds $U_f$ is nonempty, and $\sup f = +\infty$ otherwise.
Now assume in addition that $(X, \Sigma, \mu)$ is a measure space and, for simplicity, assume that the function $f$ is measurable. Similar to the supremum, the essential supremum of a function is characterised by the following property: $f(x) \le \operatorname{ess\,sup} f$ for $\mu$-almost all $x \in X$ and if for some $a$ we have $f(x) \le a$ for $\mu$-almost all $x \in X$ then $\operatorname{ess\,sup} f \le a$. More concretely, a number
|
https://en.wikipedia.org/wiki/E4M
|
Encryption for the Masses (E4M) is free disk encryption software for the Windows NT and Windows 9x families of operating systems. E4M is discontinued; it is no longer maintained. Its author, Paul Le Roux, who later became a criminal cartel boss, joined Shaun Hollingworth (the author of Scramdisk) to produce the commercial encryption product DriveCrypt for the security company SecurStar.
The popular source-available freeware program TrueCrypt is based on E4M's source code. However, TrueCrypt uses a different container format than E4M, which makes it impossible to use one of these programs to access an encrypted volume created by the other.
Allegation of stolen source code
Shortly after TrueCrypt version 1.0 was released in February 2004, the TrueCrypt Team reported receiving emails from Wilfried Hafner, manager of SecurStar, claiming that Paul Le Roux had stolen the source code of E4M from SecurStar as an employee. According to the TrueCrypt Team, the emails stated that Le Roux illegally distributed E4M, and authored an illegal license permitting anyone to base derivative work on E4M and distribute it freely, which Hafner alleges Le Roux did not have any right to do, claiming that all versions of E4M always belonged only to SecurStar. For a time, this led the TrueCrypt Team to stop developing and distributing TrueCrypt.
See also
On-the-fly encryption (OTFE)
Disk encryption
Disk encryption software
Comparison of disk encryption software
|
https://en.wikipedia.org/wiki/Decoy%20effect
|
In marketing, the decoy effect (or attraction effect or asymmetric dominance effect) is the phenomenon whereby consumers will tend to have a specific change in preference between two options when also presented with a third option that is asymmetrically dominated. An option is asymmetrically dominated when it is inferior in all respects to one option; but, in comparison to the other option, it is inferior in some respects and superior in others. In other words, in terms of specific attributes determining preferences, it is completely dominated by (i.e., inferior to) one option and only partially dominated by the other. When the asymmetrically dominated option is present, a higher percentage of consumers will prefer the dominating option than when the asymmetrically dominated option is absent. The asymmetrically dominated option is therefore a decoy serving to increase preference for the dominating option. The decoy effect is also an example of the violation of the independence of irrelevant alternatives axiom of decision theory. More simply, when deciding between two options, an unattractive third option can change the perceived preference between the other two.
The decoy effect is considered particularly important in choice theory because it is a violation of the assumption of "regularity" present in all axiomatic choice models, for example in a Luce model of choice. Regularity means that it should not be possible for the market share of any alternative to increase when another alternative is added to the choice set. The new alternative should reduce, or at best leave unchanged, the choice share of existing alternatives. Regularity is violated in the example shown below where a new alternative C not only changes the relative shares of A and B but actually increases the share of A in absolute terms. Similarly, the introduction of a new alternative D increases the share of B in absolute terms.
Examples
Suppose there is a consideration set (options to choose from
|
https://en.wikipedia.org/wiki/Suspension%20%28mechanics%29
|
In mechanics, suspension is a system of components allowing a machine (normally a vehicle) to move smoothly with reduced shock.
Types may include:
car suspension, four-wheeled motor vehicle suspension
motorcycle suspension, two-wheeled motor vehicle suspension
Motorcycle fork, a component of motorcycle suspension system
bicycle suspension
Related concepts include:
Shock absorber
Shock mount
Vibration isolation
Magnetic suspension
Electrodynamic suspension
Electromagnetic suspension
See also
Cardan suspension
Seismic base isolation
Mechanics
|
https://en.wikipedia.org/wiki/Approximate%20string%20matching
|
In computer science, approximate string matching (often colloquially referred to as fuzzy string searching) is the technique of finding strings that match a pattern approximately (rather than exactly). The problem of approximate string matching is typically divided into two sub-problems: finding approximate substring matches inside a given string and finding dictionary strings that match the pattern approximately.
Overview
The closeness of a match is measured in terms of the number of primitive operations necessary to convert the string into an exact match. This number is called the edit distance between the string and the pattern. The usual primitive operations are:
insertion: cot → coat
deletion: coat → cot
substitution: coat → cost
These three operations may be generalized as forms of substitution by adding a NULL character (here symbolized by *) wherever a character has been deleted or inserted:
insertion: co*t → coat
deletion: coat → co*t
substitution: coat → cost
Some approximate matchers also treat transposition, in which the positions of two letters in the string are swapped, as a primitive operation.
transposition: cost → cots
Different approximate matchers impose different constraints. Some matchers use a single global unweighted cost, that is, the total number of primitive operations necessary to convert the match to the pattern. For example, if the pattern is coil, foil differs by one substitution, coils by one insertion, oil by one deletion, and foal by two substitutions. If all operations count as a single unit of cost and the limit is set to one, foil, coils, and oil will count as matches while foal will not.
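A standard dynamic-programming computation of this unweighted edit distance reproduces those counts; the following sketch is one common implementation, not an algorithm prescribed by the text:

```python
def edit_distance(s: str, t: str) -> int:
    """Levenshtein distance with unit costs, using two rolling rows."""
    prev = list(range(len(t) + 1))
    for i, sc in enumerate(s, 1):
        curr = [i]
        for j, tc in enumerate(t, 1):
            curr.append(min(prev[j] + 1,                # deletion
                            curr[j - 1] + 1,            # insertion
                            prev[j - 1] + (sc != tc)))  # substitution
        prev = curr
    return prev[-1]

# The article's examples against the pattern "coil":
for w in ("foil", "coils", "oil", "foal"):
    print(w, edit_distance("coil", w))  # 1, 1, 1, 2
```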
Other matchers specify the number of operations of each type separately, while still others set a total cost but allow different weights to be assigned to different operations. Some matchers permit separate assignments of limits and weights to individual groups in the pattern.
Problem formulation and algorithms
One possible
|
https://en.wikipedia.org/wiki/Raster%20interrupt
|
A raster interrupt (also called a horizontal blank interrupt) is an interrupt signal in a legacy computer system which is used for display timing. It is usually, though not always, generated by a system's graphics chip as the scan lines of a frame are being readied to send to the monitor for display. The most basic implementation of a raster interrupt is the vertical blank interrupt.
Such an interrupt provides a mechanism for graphics registers to be changed mid-frame, so they have different values above and below the interrupt point. This allows a single-color object such as the background or the screen border to have multiple horizontal color bands, for example. Or, for a hardware sprite to be repositioned to give the illusion that there are more sprites than a system supports. The limitation is that changes only affect the portion of the display below the interrupt. They don't allow more colors or more sprites on a single scan line.
Modern protected-mode operating systems generally do not support raster interrupts, as access to hardware interrupts by unprivileged user programs could compromise system stability. As their most important use case, the multiplexing of hardware sprites, is no longer relevant, there is no modern successor to raster interrupts.
Systems supporting raster interrupts
Several popular home computers and video game consoles included graphics chips supporting raster interrupts or had features that could be combined to work like raster interrupts. The following list is not exhaustive.
Astrocade (two custom chips, 1977)
The Bally Astrocade supported a horizontal blank interrupt to select the four screen colors from a palette of 256 colors. The Astrocade did not support hardware sprites.
Atari 8-bit family (ANTIC chip, 1979)
The ANTIC chip used by the Atari 8-bit family includes display list interrupts (DLIs), which are triggered as the display is being drawn. The ANTIC chip itself is considerably powerful and inherently ca
|
https://en.wikipedia.org/wiki/Sphaerobacter
|
Sphaerobacter is a genus of bacteria. When originally described it was placed in its own subclass (Sphaerobacteridae) within the class Actinomycetota. Subsequent phylogenetic studies have placed it in its own order Sphaerobacterales within the phylum Thermomicrobiota. To date only one species of this genus is known (Sphaerobacter thermophilus). The closest related cultivated organism to S. thermophilus is Thermomicrobium roseum, at 87% sequence similarity, which indicates that S. thermophilus is one of the most phylogenetically isolated bacterial species.[4]
|
https://en.wikipedia.org/wiki/Nomina%20Anatomica
|
Nomina Anatomica (NA) was the international standard on human anatomic terminology from 1895 until it was replaced by Terminologia Anatomica in 1998.
In the late nineteenth century some 30,000 terms for various body parts were in use. The same structures were described by different names, depending (among other things) on the anatomist's school and national tradition. Vernacular translations of Latin and Greek, as well as various eponymous terms, were barriers to effective international communication. There was disagreement and confusion among anatomists regarding anatomical terminology.
Editions
The first and last entries in the following table are not NA editions, but they are included for the sake of continuity. Although these early editions were authorized by different bodies, they are sometimes considered part of the same series.
Before these codes of terminology, approved at anatomists congresses, the usage of anatomical terms was based on authoritative works of scholars like Galen, Berengario da Carpi, Gaspard Bauhin, Henle, Hyrtl, etc.
The IANC and the FCAT
Twelfth congress
Around the time of the Twelfth Congress (London, 1985), a dispute arose over the editorial independence of the IANC. The IANC did not believe that their work should be subject to the approval of IFAA Member Associations.
The types of discussion underlying this dispute are illustrated in an article by Roger Warwick, then Honorary Secretary of the IANC:
An aura of scholasticism, erudition and, unfortunately, pedantry has therefore often impeded attempts to rationalize and simplify anatomical nomenclature, and such obstruction still persists. The preservation of archaic terms such as Lien, Ventriculus, Epiploon and Syndesmologia, in a world which uses and continues to use Splen, Gaster, Omentum and Arthrologia (and their numerous derivatives) provides an example of such pedantry.
We have inherited a number of archaic and now somewhat irrational terms which are confusing to the
|
https://en.wikipedia.org/wiki/Terminologia%20Anatomica
|
Terminologia Anatomica (commonly abbreviated TA) is the international standard for human anatomical terminology. It is developed by the Federative International Programme on Anatomical Terminology, a program of the International Federation of Associations of Anatomists (IFAA).
History
The sixth edition of the previous standard, Nomina Anatomica, was released in 1989. The first edition of Terminologia Anatomica, superseding Nomina Anatomica, was developed by the Federative Committee on Anatomical Terminology (FCAT) and the International Federation of Associations of Anatomists (IFAA) and released in 1998. In April 2011, this edition was published online by the Federative International Programme on Anatomical Terminologies (FIPAT), the successor of FCAT. The first edition contained 7635 Latin items.
The second edition was released online by FIPAT in 2019 and approved and adopted by the IFAA General Assembly in 2020. The latest errata list is dated August 2021. The edition contains a total of 7112 numbered terms (numbered 1–7113, skipping 2590), with some terms repeated.
Adoption and reception
A 2014 survey of the American Association of Clinical Anatomists found that the TA preferred term had the highest frequency of usage in only 53% of the 25 anatomical terms surveyed, and was highest or second-highest for 92% of terms. 75% of respondents were unfamiliar with FIPAT and TA.
In a panel at the 2022 International Federation of Associations of Anatomists Congress, one author stated "the Terminologia Anatomica generally receives no attention in medical terminology courses", but stressed its importance. The TA is not well established in other languages, such as French. The English equivalent names are often inconsistent if viewed as translations of the accompanying Latin phrases.
The Terminologia Anatomica specifically excludes eponyms, as they were determined to "give absolutely no anatomical information about the named structure, and vary considerably between countries and cultures".
|
https://en.wikipedia.org/wiki/Van%20der%20Corput%20sequence
|
A van der Corput sequence is an example of the simplest one-dimensional low-discrepancy sequence over the unit interval; it was first described in 1935 by the Dutch mathematician J. G. van der Corput. It is constructed by reversing the base-n representation of the sequence of natural numbers (1, 2, 3, …).
The $b$-ary representation of the positive integer $n \ge 1$ is

$n = \sum_{k=0}^{L-1} d_k(n)\, b^k,$

where $b$ is the base in which the number $n$ is represented, and $0 \le d_k(n) < b$, that is, $d_k(n)$ is the $k$-th digit in the $b$-ary expansion of $n$.

The $n$-th number in the van der Corput sequence is

$g_b(n) = \sum_{k=0}^{L-1} d_k(n)\, b^{-k-1}.$
Examples
For example, to get the decimal van der Corput sequence, we start by dividing the numbers 1 to 9 in tenths (), then we change the denominator to 100 to begin dividing in hundredths (). In terms of numerator, we begin with all two-digit numbers from 10 to 99, but in backwards order of digits. Consequently, we will get the numerators grouped by the end digit. Firstly, all two-digit numerators that end with 1, so the next numerators are 01, 11, 21, 31, 41, 51, 61, 71, 81, 91. Then the numerators ending with 2, so they are 02, 12, 22, 32, 42, 52, 62, 72, 82, 92. And after that, the numerators ending in 3: 03, 13, 23 and so on...
Thus, the sequence begins

1/10, 2/10, 3/10, 4/10, 5/10, 6/10, 7/10, 8/10, 9/10, 1/100, 11/100, 21/100, 31/100, 41/100, 51/100, 61/100, 71/100, 81/100, 91/100, 2/100, 12/100, 22/100, 32/100, …,

or in decimal representation:
0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9, 0.01, 0.11, 0.21, 0.31, 0.41, 0.51, 0.61, 0.71, 0.81, 0.91, 0.02, 0.12, 0.22, 0.32, …,
The same can be done for the binary numeral system, and the binary van der Corput sequence is
0.1₂, 0.01₂, 0.11₂, 0.001₂, 0.101₂, 0.011₂, 0.111₂, 0.0001₂, 0.1001₂, 0.0101₂, 0.1101₂, 0.0011₂, 0.1011₂, 0.0111₂, 0.1111₂, …
or, equivalently,

1/2, 1/4, 3/4, 1/8, 5/8, 3/8, 7/8, 1/16, 9/16, 5/16, 13/16, 3/16, 11/16, 7/16, 15/16, …
The elements of the van der Corput sequence (in any base) form a dense set in the unit interval; that is, for any real number in , there exists a subsequence of the van der Corput sequence that converges to that number. They are also equidistributed over the unit interval.
C implementation
double corput(int n, int base){
    double q = 0, bk = (double)1 / base;
    while (n > 0) {
        q += (n % base) * bk;  /* emit the next digit after the radix point */
        n /= base;
        bk /= base;
    }
    return q;
}
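For example, corput(4, 2) returns 0.125 (that is, 0.001₂, the fourth element of the binary sequence above), and corput(11, 10) returns 0.11.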
|
https://en.wikipedia.org/wiki/SDS%20940
|
The SDS 940 was Scientific Data Systems' (SDS) first machine designed to directly support time-sharing. The 940 was based on the SDS 930's 24-bit CPU, with additional circuitry to provide protected memory and virtual memory.
It was announced in February 1966 and shipped in April, becoming a major part of Tymshare's expansion during the 1960s. The influential Stanford Research Institute "oN-Line System" (NLS) was demonstrated on the system. This machine was later used to run Community Memory, the first bulletin board system.
After SDS was acquired by Xerox in 1969 and became Xerox Data Systems, the SDS 940 was renamed as the XDS 940.
History
The design was originally created by the University of California, Berkeley as part of their Project Genie, which ran between 1964 and 1969. Genie added memory management and controller logic to an existing SDS 930 computer to give it page-mapped virtual memory, which would be heavily copied by other designs. The 940 was simply a commercialized version of the Genie design and remained backward compatible with SDS's earlier models, with the exception of the 12-bit SDS 92.
Like most systems of the era, the machine was built with a bank of core memory as the primary storage, allowing between 16 and 64 kilowords. Words were 24 bits plus a parity bit. This was backed up by a variety of secondary storage devices, including a 1376 kword drum in Genie, or hard disks in the SDS models in the form of a drum-like 2097 kword "fixed-head" disk or a traditional "floating-head" model. The SDS machines also included a paper tape punch and reader, line printer, and a real-time clock. They bootstrapped from paper tape.
File storage of 96 MB was also attached. The line printer used was a Potter Model HSP-3502 chain printer with 96 printing characters and a speed of about 230 lines per minute.
Software system
The operating system developed at Project Genie was the Berkeley Timesharing System.
By August 1968 a version 2.0 was announced th
|
https://en.wikipedia.org/wiki/Ratchet%20effect
|
A ratchet effect is an instance of the restrained ability of human processes to be reversed once a specific thing has happened, analogous with the mechanical ratchet that holds the spring tight as a clock is wound up. It is related to the phenomena of featuritis and scope creep in the manufacture of various consumer goods, and of mission creep in military planning.
In sociology, "ratchet effects refer to the tendency for central controllers to base next year's targets on last year's performance, meaning that managers who expect still to be in place in the next target period have a perverse incentive not to exceed targets even if they could easily do so".
Examples
Famine cycle
Garrett Hardin, a biologist and environmentalist, used the phrase to describe how food aid keeps people alive who would otherwise die in a famine. They live and multiply in better times, making another bigger crisis inevitable, since the supply of food has not been increased.
The ratchet effect first came to light in Alan Peacock and Jack Wiseman's work, The Growth of Public Expenditure in the United Kingdom. Peacock and Wiseman found that public spending increases like a ratchet following periods of crisis. The term was later used by American historian Robert Higgs to highlight Peacock and Wiseman's research in his book, "Crisis and Leviathan". Similarly, governments have difficulty in rolling back huge bureaucratic organizations created initially for temporary needs, e.g., at times of war, natural or economic crisis. The effect may likewise afflict large business corporations with myriad layers of bureaucracy which resist reform or dismantling.
Production strategy
Jean Tirole used the concept in his pioneering work on regulation and monopolies. The ratchet effect can denote an economic strategy arising in an environment where incentive depends on both current and past production, such as in a competitive industry employing piece rates. The producers observe that since incentive is readju
|
https://en.wikipedia.org/wiki/Discourse%20representation%20theory
|
In formal linguistics, discourse representation theory (DRT) is a framework for exploring meaning under a formal semantics approach. One of the main differences between DRT-style approaches and traditional Montagovian approaches is that DRT includes a level of abstract mental representations (discourse representation structures, DRS) within its formalism, which gives it an intrinsic ability to handle meaning across sentence boundaries. DRT was created by Hans Kamp in 1981. A very similar theory was developed independently by Irene Heim in 1982, under the name of File Change Semantics (FCS). Discourse representation theories have been used to implement semantic parsers and natural language understanding systems.
Discourse representation structures
DRT uses discourse representation structures (DRS) to represent a hearer's mental representation of a discourse as it unfolds over time. There are two critical components to a DRS:
A set of discourse referents representing entities that are under discussion.
A set of DRS conditions representing information that has been given about discourse referents.
Consider Sentence (1) below:
(1) A farmer owns a donkey.
The DRS of (1) can be notated as (2) below:
(2) [x,y: farmer(x), donkey(y), owns(x,y)]
What (2) says is that there are two discourse referents, x and y, and three discourse conditions farmer, donkey, and owns, such that the condition farmer holds of x, donkey holds of y, and owns holds of the pair x and y.
Informally, the DRS in (2) is true in a given model of evaluation if and only if there are entities in that model that satisfy the conditions. So, if a model contains two individuals, and one is a farmer, the other is a donkey, and the first owns the second, the DRS in (2) is true in that model.
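Purely as an illustration of these two components, the DRS in (2) can be rendered as a toy data structure and printed back in bracket notation; every name and type here is invented for the sketch and is not part of any DRT formalism or tool.

#include <stdio.h>

/* A toy DRS condition: a predicate applied to one or two referents. */
typedef struct {
    const char *pred;
    const char *args[2];   /* second slot is NULL for unary conditions */
} condition;

int main(void)
{
    /* The DRS in (2): referents x, y and conditions farmer, donkey, owns. */
    const char *referents[] = { "x", "y" };
    condition conditions[] = {
        { "farmer", { "x", NULL } },
        { "donkey", { "y", NULL } },
        { "owns",   { "x", "y"  } },
    };

    printf("[%s,%s: ", referents[0], referents[1]);
    for (int i = 0; i < 3; i++) {
        printf("%s(%s", conditions[i].pred, conditions[i].args[0]);
        if (conditions[i].args[1])
            printf(",%s", conditions[i].args[1]);
        printf(")%s", i < 2 ? ", " : "");
    }
    printf("]\n");   /* prints: [x,y: farmer(x), donkey(y), owns(x,y)] */
    return 0;
}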
Uttering subsequent sentences results in the existing DRS being updated.
(3) He beats it.
Uttering (3) after (1) results in the DRS in (2) being updated as follows, in (4) (assuming a way to disambiguate which pr
|
https://en.wikipedia.org/wiki/Trusted%20Execution%20Technology
|
Intel Trusted Execution Technology (Intel TXT, formerly known as LaGrande Technology) is a computer hardware technology of which the primary goals are:
Attestation of the authenticity of a platform and its operating system.
Assuring that an authentic operating system starts in a trusted environment, which can then be considered trusted.
Provision of a trusted operating system with additional security capabilities not available to an unproven one.
Intel TXT uses a Trusted Platform Module (TPM) and cryptographic techniques to provide measurements of software and platform components so that system software as well as local and remote management applications may use those measurements to make trust decisions. It complements Intel Management Engine. This technology is based on an industry initiative by the Trusted Computing Group (TCG) to promote safer computing. It defends against software-based attacks aimed at stealing sensitive information by corrupting system or BIOS code, or modifying the platform's configuration.
Details
The Trusted Platform Module (TPM) as specified by the TCG provides many security functions including special registers (called Platform Configuration Registers – PCRs) which hold various measurements in a shielded location in a manner that prevents spoofing. Measurements consist of a cryptographic hash using a Secure Hashing Algorithm (SHA); the TPM v1.0 specification uses the SHA-1 hashing algorithm. More recent TPM versions (v2.0+) call for SHA-2.
A desired characteristic of a cryptographic hash algorithm is that (for all practical purposes) the hash result (referred to as a hash digest or a hash) of any two modules will produce the same hash value only if the modules are identical.
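The way PCRs accumulate such measurements can be sketched as an "extend" operation: a PCR is never written directly, but is replaced by the hash of its old value concatenated with the new measurement, so the final value depends on every measurement and on their order. The sketch below is illustrative only; it uses a toy FNV-1a hash in place of the SHA-1/SHA-2 digests that real TPMs use, and the buffer size is an arbitrary choice.

#include <stdint.h>
#include <string.h>

/* Toy 64-bit FNV-1a hash, standing in for SHA-1/SHA-2 in this sketch. */
static uint64_t fnv1a(const uint8_t *data, size_t len)
{
    uint64_t h = 0xcbf29ce484222325ULL;
    for (size_t i = 0; i < len; i++) {
        h ^= data[i];
        h *= 0x100000001b3ULL;
    }
    return h;
}

/* PCR extend: new PCR = H(old PCR || measurement), for measurements of
   up to 64 bytes in this sketch. */
static uint64_t pcr_extend(uint64_t pcr, const uint8_t *m, size_t len)
{
    uint8_t buf[8 + 64];
    memcpy(buf, &pcr, sizeof pcr);     /* old PCR value first */
    memcpy(buf + sizeof pcr, m, len);  /* then the new measurement */
    return fnv1a(buf, sizeof pcr + len);
}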
Measurements
Measurements can be of code, data structures, configuration, information, or anything that can be loaded into memory. TCG requires that code not be executed until after it has been measured. To ensure a particular sequence of measurements, has
|
https://en.wikipedia.org/wiki/Linea%20aspera
|
The linea aspera is a ridge of roughened surface on the posterior surface of the shaft of the femur. It is the site of attachments of muscles and the intermuscular septum.
Its margins diverge above and below.
The linea aspera is a prominent longitudinal ridge or crest on the middle third of the bone, presenting a medial and a lateral lip and a narrow, rough intermediate line. It is an important insertion point for the adductors and for the lateral and medial intermuscular septa that divide the thigh into three compartments.
Structure
Above
Above, the linea aspera is prolonged by three ridges.
The lateral ridge is very rough, and runs almost vertically upward to the base of the greater trochanter. It is termed the gluteal tuberosity, and gives attachment to part of the gluteus maximus: its upper part is often elongated into a roughened crest, on which a more or less well-marked, rounded tubercle, the third trochanter, is occasionally developed.
The intermediate ridge or pectineal line is continued to the base of the lesser trochanter and gives attachment to the pectineus muscle;
the medial ridge is lost in the intertrochanteric line; between the intermediate and medial ridges a portion of the iliacus muscle is inserted.
Below
Below, the linea aspera is prolonged into two ridges, enclosing between them a triangular area, the popliteal surface, upon which the popliteal artery rests.
Of these two ridges, the lateral is the more prominent, and descends to the summit of the lateral condyle.
The medial is less marked, especially at its upper part, where it is crossed by the femoral artery. It ends below at the summit of the medial condyle, in a small tubercle, the adductor tubercle, which affords insertion to the tendon of the adductor magnus.
Development
The tension generated by muscle attached to the bones is responsible for the formation of the ridges.
|
https://en.wikipedia.org/wiki/OGDL
|
OGDL (Ordered Graph Data Language), is a "structured textual format that represents information in the form of graphs, where the nodes are strings and the arcs or edges are spaces or indentation."
Like XML, but unlike JSON and YAML, OGDL includes a schema notation and path traversal notation. There is also a binary representation.
Example
network
  eth0
    ip   192.168.0.10
    mask 255.255.255.0
hostname crispin
See also
Comparison of data serialization formats
|
https://en.wikipedia.org/wiki/Tell-tale%20%28automotive%29
|
A tell-tale, sometimes called an idiot light or warning light, is an indicator of malfunction or operation of a system, indicated by a binary (on/off) illuminated light, symbol or text legend.
The "idiot light" terminology arises from popular frustration with automakers' use of lights for crucial functions which could previously be monitored by gauges, so a troublesome condition could be detected and corrected early. Such early detection of problems with, for example, engine temperature or oil pressure or charging system operation is not possible via an idiot light, which lights only when a fault has already occurred – thus providing no advance warnings or details of the malfunction's extent. The Hudson automobile company was the first to use lights instead of gauges for oil pressure and the voltmeter, starting in the mid-1930s.
Regulation
Automotive tell-tales are regulated by automobile safety standards worldwide. In the United States, National Highway Traffic Safety Administration Federal Motor Vehicle Safety Standard 101 includes tell-tales in its specifications for vehicle controls and displays. In Canada, the analogous Canada Motor Vehicle Safety Standard 101 applies. In Europe and throughout most of the rest of the world, ECE Regulations specify various types of tell-tales.
Types
Different tell-tales can convey different kinds of information. One type lights or blinks to indicate a failure (as of oil pressure, engine temperature control, charging current, etc.); lighting and blinking indicate progression from warning to failure indication. Another type lights to alert the need for specific service after a certain amount of time or distance has elapsed (e.g., to change the oil).
Colour may also communicate information about the nature of the tell-tale; for example, red may signify that the vehicle cannot continue driving (e.g. oil pressure). Many older vehicles used schemes which were specific to the manufacturer, e.g. some British Fords of the 1960s used
|
https://en.wikipedia.org/wiki/AT%26T%20Technologies
|
AT&T Technologies, Inc., was created by AT&T in 1983 in preparation for the breakup of the Bell System, which became effective as of January 1, 1984. It assumed the corporate charter of Western Electric Co., Inc.
History
Creation
AT&T (originally American Telephone and Telegraph Company), after divesting ownership of the Bell System, restructured its remaining companies into three core units. American Bell, Bell Labs and Western Electric were fully absorbed into AT&T, and divided up as an umbrella of several specifically focused companies held by AT&T Technologies, including:
AT&T Bell Laboratories - R&D functions
AT&T Consumer Products - Consumer telephone equipment sales
AT&T International - International ventures
AT&T Network Systems International
Goldstar Semiconductor
AT&T Taiwan
AT&T Microelectronica de Espana
Lycom
AT&T Ricoh
AT&T Network Systems Espana
AT&T Network Systems - Large Business/Corporate equipment
AT&T Technology Systems - Computer-focused R&D
Telephone production
From January 1, 1984, until mid-1986, AT&T Technologies continued to manufacture telephones that had been made before 1984 by Western Electric under the Western Electric marking. "Bell System Property - Not For Sale" markings were eliminated from all telephones, replaced with "AT&T" in the plastic housing and "Western Electric" in the metal telephone bases.
Bell logos contained on the bottom of Trimline bases were filled in, leaving a giant lump next to "Western Electric".
Telephone changes
Toward the end of the Bell System, Western Electric telephones contained much more computer technology and more plastic over metal, since advances in electronics and manufacturing processes made it possible, and there was no longer the need to produce heavy duty, long-lasting telephones. In 1985, the 2220 Trimline was heavily modified, including a touch-tone/pulse dial switch, eliminating the need for the 220 rotary phone, foreshadowing what was to come for other AT&T telephone
|
https://en.wikipedia.org/wiki/Infradian%20rhythm
|
In chronobiology, an infradian rhythm is a rhythm with a period longer than the period of a circadian rhythm, i.e., with a frequency of less than one cycle in 24 hours. Some examples of infradian rhythms in mammals include menstruation, breeding, migration, hibernation, molting and fur or hair growth, and tidal or seasonal rhythms. In contrast, ultradian rhythms have periods shorter than the period of a circadian rhythm. Several infradian rhythms are known to be caused by hormone stimulation or exogenous factors. For example, seasonal depression, an example of an infradian rhythm occurring once a year, can be caused by the systematic lowering of light levels during the winter.
See also
Photoperiodicity
|
https://en.wikipedia.org/wiki/Lecher%20line
|
In electronics, a Lecher line or Lecher wires is a pair of parallel wires or rods that were used to measure the wavelength of radio waves, mainly at VHF, UHF and microwave frequencies. They form a short length of balanced transmission line (a resonant stub). When attached to a source of radio-frequency power such as a radio transmitter, the radio waves form standing waves along their length. By sliding a conductive bar that bridges the two wires along their length, the length of the waves can be physically measured. Austrian physicist Ernst Lecher, improving on techniques used by Oliver Lodge and Heinrich Hertz, developed this method of measuring wavelength around 1888. Lecher lines were used as frequency measuring devices until frequency counters became available after World War 2. They were also used as components, often called "resonant stubs", in VHF, UHF and microwave radio equipment such as transmitters, radar sets, and television sets, serving as tank circuits, filters, and impedance-matching devices. They are used at frequencies between HF/VHF, where lumped components are used, and UHF/SHF, where resonant cavities are more practical.
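Because successive standing-wave minima along the line are half a wavelength apart, a measured node spacing directly yields the wavelength and hence the frequency. A minimal C sketch of that arithmetic (the function name is illustrative), assuming the wave travels at essentially the speed of light, as the article notes for these lines:

/* Frequency (Hz) from the measured distance d (meters) between successive
   standing-wave minima on the line: lambda = 2*d, f = c / lambda. */
double lecher_frequency(double d)
{
    const double c = 299792458.0;  /* speed of light in m/s */
    return c / (2.0 * d);
}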
Wavelength measurement
A Lecher line is a pair of parallel uninsulated wires or rods held a precise distance apart. The separation is not critical but should be a small fraction of the wavelength; it ranges from less than a centimeter to over 10 cm. The length of the wires depends on the wavelength involved; lines used for measurement are generally several wavelengths long. The uniform spacing of the wires makes them a transmission line, conducting waves at a constant speed very close to the speed of light. One end of the rods is connected to the source of RF power, such as the output of a radio transmitter. At the other end the rods are connected together with a conductive bar between them. This short circuiting termination reflects the waves. The waves reflected from the short-circuited end interfere with the
|
https://en.wikipedia.org/wiki/Second-order%20arithmetic
|
In mathematical logic, second-order arithmetic is a collection of axiomatic systems that formalize the natural numbers and their subsets. It is an alternative to axiomatic set theory as a foundation for much, but not all, of mathematics.
A precursor to second-order arithmetic that involves third-order parameters was introduced by David Hilbert and Paul Bernays in their book Grundlagen der Mathematik. The standard axiomatization of second-order arithmetic is denoted by Z2.
Second-order arithmetic includes, but is significantly stronger than, its first-order counterpart Peano arithmetic. Unlike Peano arithmetic, second-order arithmetic allows quantification over sets of natural numbers as well as numbers themselves. Because real numbers can be represented as (infinite) sets of natural numbers in well-known ways, and because second-order arithmetic allows quantification over such sets, it is possible to formalize the real numbers in second-order arithmetic. For this reason, second-order arithmetic is sometimes called "analysis".
Second-order arithmetic can also be seen as a weak version of set theory in which every element is either a natural number or a set of natural numbers. Although it is much weaker than Zermelo–Fraenkel set theory, second-order arithmetic can prove essentially all of the results of classical mathematics expressible in its language.
A subsystem of second-order arithmetic is a theory in the language of second-order arithmetic each axiom of which is a theorem of full second-order arithmetic (Z2). Such subsystems are essential to reverse mathematics, a research program investigating how much of classical mathematics can be derived in certain weak subsystems of varying strength. Much of core mathematics can be formalized in these weak subsystems, some of which are defined below. Reverse mathematics also clarifies the extent and manner in which classical mathematics is nonconstructive.
Definition
Syntax
The language of second-order arithmetic is
|
https://en.wikipedia.org/wiki/Serous%20fluid
|
In physiology, serous fluid or serosal fluid (originating from the Medieval Latin word serosus, from Latin serum) is any of various body fluids resembling serum, that are typically pale yellow or transparent and of a benign nature. The fluid fills the inside of body cavities. Serous fluid originates from serous glands, with secretions enriched with proteins and water. Serous fluid may also originate from mixed glands, which contain both mucous and serous cells. A common trait of serous fluids is their role in assisting digestion, excretion, and respiration.
In medical fields, especially cytopathology, serous fluid is a synonym for effusion fluids from various body cavities. Examples of effusion fluid are pleural effusion and pericardial effusion. There are many causes of effusions which include involvement of the cavity by cancer. Cancer in a serous cavity is called a serous carcinoma. Cytopathology evaluation is recommended to evaluate the causes of effusions in these cavities.
Examples
Saliva consists of mucus and serous fluid; the serous fluid contains the enzyme amylase, which is important for the digestion of carbohydrates. The minor salivary glands of von Ebner, present on the tongue, secrete lipase. The parotid gland produces purely serous saliva. The other major salivary glands produce mixed (serous and mucous) saliva.
Another type of serous fluid is secreted by the serous membranes (serosa), two-layered membranes which line the body cavities. Serous membrane fluid collects on microvilli on the outer layer and acts as a lubricant and reduces friction from muscle movement. This can be seen in the lungs, with the pleural cavity.
Pericardial fluid is a serous fluid secreted by the serous layer of the pericardium into the pericardial cavity. The pericardium consists of two layers, an outer fibrous layer and the inner serous layer. This serous layer has two membranes which enclose the pericardial cavity into which is secreted the pericardial fluid.
Blood serum
|
https://en.wikipedia.org/wiki/Zen%20Cart
|
Zen Cart is an online store management system. It is PHP-based, using a MySQL database and HTML components. Support is provided for numerous languages and currencies, and it is freely available under the GNU General Public License.
History
Zen Cart is a software fork that branched from osCommerce in 2003. Beyond some aesthetic changes, the major differences between the two systems come from Zen Cart's architectural changes (for example, a template system) and additional included features in the core. The release of the 1.3.x series further differentiated Zen Cart by moving the template system from its historic tables-based layout approach to one that is largely CSS-based.
Plugins
As official support for Zen Cart has declined in recent years, many third-party companies have created Zen Cart plugins and modules that help users solve problems such as installing reCAPTCHA v3.
See also
Comparison of shopping cart software
|
https://en.wikipedia.org/wiki/Kato%27s%20conjecture
|
Kato's conjecture is a mathematical problem named after mathematician Tosio Kato, of the University of California, Berkeley. Kato initially posed the problem in 1953.
Kato asked whether the square roots of certain elliptic operators, defined via functional calculus, are analytic. The full statement of the conjecture as given by Auscher et al. is: "the domain of the square root of a uniformly complex elliptic operator with bounded measurable coefficients in Rn is the Sobolev space H1(Rn) in any dimension with the estimate $\|\sqrt{L}\, f\|_2 \sim \|\nabla f\|_2$".
The problem remained unresolved for nearly a half-century, until in 2001 it was jointly solved in the affirmative by Pascal Auscher, Steve Hofmann, Michael Lacey, Alan McIntosh, and Philippe Tchamitchian.
|
https://en.wikipedia.org/wiki/Business%20process%20interoperability
|
Business process interoperability (BPI) is a property referring to the ability of diverse business processes to work together, or "inter-operate". It is a state that exists when a business process can meet a specific objective automatically, utilizing essential human labor only. Typically, BPI is present when a process conforms to standards that enable it to achieve its objective regardless of ownership, location, make, version or design of the computer systems used.
Overview
The main attraction of BPI is that a business process can start and finish at any point worldwide regardless of the types of hardware and software required to automate it. Because of its capacity to offload human "mind" labor, BPI is considered by many as the final stage in the evolution of business computing. BPI's twin criteria of specific objective and essential human labor are both subjective.
The objectives of BPI vary, but tend to fall into the following categories:
Enable end-to-end straight-through processing ("STP") by interconnecting data and procedures trapped in information silos
Let systems and products work with other systems or products without special effort on the part of the customer
Increase productivity by automating human labor
Eliminate redundant business processes and data replications
Minimize errors inherent in manual processes
Introduce mainstream enterprise software-as-a-service
Give top managers a practical means of overseeing processes used to run business operations
Encourage development of innovative Internet-based business processes
Place emphasis on business processes rather than on the systems required to operate them
Strengthen security by eliminating gaps among proprietary software systems
Improve privacy by giving users complete control over their data
Enable realtime enterprise scenarios and forecasts
Business process interoperability is limited to enterprise software systems in which functions are designed to work together, such
|
https://en.wikipedia.org/wiki/Wood%20ear
|
Wood-ear or tree ear (Korean: 목이 버섯), also translated as wood jellyfish, can refer to a few similar-looking edible fungi used primarily in Chinese cuisine; these are commonly sold in Asian markets shredded and dried.
Auricularia heimuer (黑木耳, black ear fungus), previously misdetermined as Auricularia auricula-judae
Auricularia cornea (毛木耳, cloud ear fungus), also called Auricularia polytricha
Tremella fuciformis (银耳, white/silver ear fungus)
The black and cloud ear fungi are black in appearance and closely related. The white ear fungus is superficially similar but has important ecological, taxonomical, and culinary differences.
|
https://en.wikipedia.org/wiki/Particle%20number%20operator
|
In quantum mechanics, for systems where the total number of particles may not be preserved, the number operator is the observable that counts the number of particles.
The following is in bra–ket notation: The number operator acts on Fock space. Let

$|\Psi\rangle_\nu = |\phi_1, \phi_2, \cdots, \phi_n\rangle_\nu$

be a Fock state, composed of single-particle states $|\phi_i\rangle$ drawn from a basis of the underlying Hilbert space of the Fock space. Given the corresponding creation and annihilation operators $a^{\dagger}(\phi_i)$ and $a(\phi_i)$ we define the number operator by

$\hat{N} \stackrel{\mathrm{def}}{=} \sum_i a^{\dagger}(\phi_i)\, a(\phi_i)$

and we have

$\hat{N}\, |\Psi\rangle_\nu = N\, |\Psi\rangle_\nu,$

where $N$ is the number of particles in state $|\Psi\rangle_\nu$. The above equality can be proven by noting that

$a^{\dagger}(\phi_i)\, a(\phi_i)\, |\phi_1, \phi_2, \cdots, \phi_n\rangle_\nu = n_i\, |\phi_1, \phi_2, \cdots, \phi_n\rangle_\nu,$

where $n_i$ is the occupation number of state $|\phi_i\rangle$; then

$\hat{N}\, |\Psi\rangle_\nu = \sum_i n_i\, |\Psi\rangle_\nu = N\, |\Psi\rangle_\nu.$
See also
Harmonic oscillator
Quantum harmonic oscillator
Second quantization
Quantum field theory
Thermodynamics
Fermion number operator
(−1)^F
|
https://en.wikipedia.org/wiki/Tacheometry
|
Tacheometry (from Greek for "quick measure") is a system of rapid surveying, by which the horizontal and vertical positions of points on the earth's surface relative to one another are determined without using a chain or tape, or a separate levelling instrument.
Instead of the pole normally employed to mark a point, a staff similar to a level staff is used. This is marked with heights from the base or foot, and is graduated according to the form of tacheometer in use.
The horizontal distance S is inferred from the vertical angle subtended between two well-defined points on the staff and the known distance 2L between them. Alternatively, also by readings of the staff indicated by two fixed stadia wires in the diaphragm (reticle) of the telescope. The difference of height Δh is computed from the angle of depression z or angle of elevation α of a fixed point on the staff and the horizontal distance S already obtained.
The azimuth angle is determined as normally. Thus, all the measurements requisite to locate a point both vertically and horizontally with reference to the point where the tacheometer is centred are determined by an observer at the instrument without any assistance beyond that of a person to hold the level staff.
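As a rough numerical sketch of the relations just described (the function names are illustrative): if two well-defined points a known distance 2L apart on the staff subtend an angle δ at the instrument, the horizontal distance follows as S = L / tan(δ/2), and the height difference as Δh = S · tan(α).

#include <math.h>

/* Horizontal distance from the angle delta (radians) subtended between
   two staff targets a known distance 2L apart: S = L / tan(delta/2). */
double subtense_distance(double L, double delta)
{
    return L / tan(delta / 2.0);
}

/* Height difference from the elevation angle alpha (radians) of a fixed
   point on the staff and the horizontal distance S already obtained. */
double height_difference(double S, double alpha)
{
    return S * tan(alpha);
}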
The ordinary methods of surveying with a theodolite, chain, and levelling instrument are fairly satisfactory when the ground is relatively clear of obstructions and not very precipitous, but it becomes extremely cumbersome when the ground is covered with bush, or broken up by ravines. Chain measurements then become slow and liable to considerable error; the levelling, too, is carried on at great disadvantage in point of speed, though without serious loss of accuracy. These difficulties led to the introduction of tacheometry.
In western countries, tacheometry is primarily of historical interest in surveying, as professional measurement nowadays is usually carried out using total stations and recorded using data collectors. Location positions
|
https://en.wikipedia.org/wiki/Reactor%20%28video%20game%29
|
Reactor is an arcade video game released in 1982 by Gottlieb. The object of the game is to cool down the core of a nuclear reactor without being pushed into its walls by swarms of subatomic particles. Reactor was developed by Tim Skelly, who previously designed and programmed a series of vector graphics arcade games for Cinematronics, including Rip Off. It was the first arcade game to credit the developer on the title screen. Reactor was ported to the Atari 2600 by Charlie Heath and published by Parker Brothers the same year as the original.
Gameplay
Controls consist of a trackball and two buttons, Energy and Decoy. The player controls a ship that can move freely within a nuclear reactor, seen from the top down. Swarms of particles follow the player and bounce off each other, the player's ship, and the reactor core. Any object touching the outer "kill wall" of the reactor is destroyed. Pressing the Energy button during a collision with a particle will cause it to bounce away at a higher speed.
While touching the core is not harmful, it continually grows in size, restricting the available space for movement. Two sets of control rods protrude from the kill wall; if the player knocks particles into all the rods of one set, the core shrinks to its minimum size before starting to grow again. The player starts the game with a limited number of decoys, which can be deployed by pressing the Decoy button in order to lure particles toward the kill wall, control rods, or either of two small "bonus chambers" at opposite corners of the screen. An extra decoy is earned by knocking out both sets of rods.
The player earns points for destroying particles or luring/knocking them into the bonus chambers so that they bounce off the walls. A set number of particles must be destroyed in order to complete each level.
In later levels, the core is replaced by a slowly expanding vortex that can attract the player's ship and destroy it on contact. One or both of the bonus chambers may be
|
https://en.wikipedia.org/wiki/LAN%20messenger
|
A LAN Messenger is an instant messaging program for computers designed for use within a single local area network (LAN).
Many LAN messengers offer basic functionality for sending private messages, file transfer, chatrooms and graphical smileys. The advantage of using a simple LAN messenger over a normal instant messenger is that no active Internet connection or central server is required, and only people inside the firewall will have access to the system.
History
A precursor of LAN Messengers is the Unix talk command, and similar facilities on earlier systems, which enabled multiple users on one host system to directly talk with each other. At the time, computers were usually shared between multiple users, who accessed them through serial or telephone lines.
Novell NetWare featured a trivial person-to-person chat program for DOS, which used the IPX/SPX protocol suite. NetWare for Windows also included broadcast and targeted messages similar to WinPopup and the Windows Messenger service.
On Windows, WinPopup was a small utility included with Windows 3.11. WinPopup uses SMB/NetBIOS protocol and was intended to receive and send short text messages.
Windows NT/2000/XP improved upon this with the Windows Messenger service, a Windows service compatible with WinPopup. On systems where this service is running, received messages "pop up" as simple message boxes. Any software compatible with WinPopup, such as the console utility NET SEND, can send such messages. However, due to security concerns, the messenger service is off by default in Windows XP SP2 and blocked by Windows XP's firewall.
On Apple's macOS-based computers, the iChat program has allowed LAN messaging over the Bonjour protocol since 2005. The multi-protocol messenger Pidgin has support for the Bonjour protocol, including on Windows.
See also
Comparison of instant messaging protocols
Comparison of cross-platform instant messaging clients
Comparison of LAN messengers
Friend-to-friend
IRC on LANs
Talker
|
https://en.wikipedia.org/wiki/Charge%20amplifier
|
A charge amplifier is an electronic current integrator that produces a voltage output proportional to the integrated value of the input current, or the total charge injected.
The amplifier offsets the input current using a feedback reference capacitor, and produces an output voltage inversely proportional to the value of the reference capacitor but proportional to the total input charge flowing during the specified time period.
The circuit therefore acts as a charge-to-voltage converter. The gain of the circuit depends on the values of the feedback capacitor.
The charge amplifier was invented by Walter Kistler in 1950.
Design
Charge amplifiers are usually constructed using an operational amplifier or other high gain semiconductor circuit with a negative feedback capacitor Cf.
Into the inverting node flow the input charge signal $q_{in}$ and the feedback charge $q_f$ from the output. According to Kirchhoff's circuit laws they compensate each other, $q_{in} + q_f = 0$, which with $q_f = C_f\, v_{out}$ gives

$v_{out} = -\frac{q_{in}}{C_f}.$

The input charge and the output voltage are proportional with inverted sign. The feedback capacitor Cf sets the amplification.
The input impedance of the circuit is almost zero because of the Miller effect. Hence all the stray capacitances (the cable capacitance, the amplifier input capacitance, etc.) are virtually grounded and they have no influence on the output signal.
The feedback resistor Rf discharges the capacitor. Without Rf the DC gain would be very high so that even the tiny DC input offset current of the operational amplifier would appear highly amplified at the output. Rf and Cf set the lower frequency limit of the charge amplifier.
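A minimal numerical sketch of these two relations, assuming an ideal amplifier (the function names are illustrative):

/* Ideal charge-to-voltage conversion: vout = -qin / Cf. */
double charge_amp_vout(double q_in, double c_f)
{
    return -q_in / c_f;
}

/* Lower frequency limit set by the discharge resistor Rf together with
   the feedback capacitor Cf: f_low = 1 / (2 * pi * Rf * Cf). */
double charge_amp_f_low(double r_f, double c_f)
{
    const double pi = 3.14159265358979323846;
    return 1.0 / (2.0 * pi * r_f * c_f);
}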
Due to the described DC effects and the finite isolation resistances in practical charge amplifiers the circuit is not suitable for the measurement of static charges. High quality charge amplifiers allow, however, quasistatic measurements at frequencies below 0.1 Hz. Some manufacturers also use a reset switch instead of Rf to manually discharge Cf before a measurement.
Practic
|
https://en.wikipedia.org/wiki/VisSim
|
VisSim is a visual block diagram program for simulation of dynamical systems and model-based design of embedded systems, with its own visual language. It was developed by Visual Solutions of Westford, Massachusetts. Visual Solutions was acquired by Altair in August 2014 and its products have been rebranded as Altair Embed, a part of Altair's Model Based Development Suite. Embed can be used to develop virtual prototypes of dynamic systems. Models are built by sliding blocks into the work area and wiring them together with the mouse. Embed automatically converts the control diagrams into C code ready to be downloaded to the target hardware.
VisSim or now Altair Embed uses a graphical data flow paradigm to implement dynamic systems based on differential equations. Version 8 adds interactive UML OMG 2 compliant state chart graphs that are placed in VisSim diagrams. This allows the modeling of state based systems such as startup sequencing of process plants or serial protocol decoding.
Applications
VisSim/Altair Embed is used in control system design and digital signal processing for multidomain simulation and design. It includes blocks for arithmetic, Boolean, and transcendental functions, as well as digital filters, transfer functions, numerical integration and interactive plotting. The most commonly modeled systems are aeronautical, biological/medical, digital power, electric motor, electrical, hydraulic, mechanical, process, thermal/HVAC and econometric.
Distributing VisSim models
A read-only version of the software, VisSim Viewer, is available free of charge and provides a way for people not licensed to use VisSim to run VisSim models. This program is intended to allow models to be more widely shared while preserving the model in its published form. The viewer will execute any VisSim model, and only allows changes to block and simulation parameters to illustrate different design scenarios. Sliders and buttons may be activated if included in the model.
Code gene
|
https://en.wikipedia.org/wiki/Probability%20current
|
In quantum mechanics, the probability current (sometimes called probability flux) is a mathematical quantity describing the flow of probability. Specifically, if one thinks of probability as a heterogeneous fluid, then the probability current is the rate of flow of this fluid. It is a real vector that changes with space and time. Probability currents are analogous to mass currents in hydrodynamics and electric currents in electromagnetism. As in those fields, the probability current (i.e. the probability current density) is related to the probability density function via a continuity equation. The probability current is invariant under gauge transformation.
The concept of probability current is also used outside of quantum mechanics, when dealing with probability density functions that change over time, for instance in Brownian motion and the Fokker–Planck equation.
Definition (non-relativistic 3-current)
Free spin-0 particle
In non-relativistic quantum mechanics, the probability current $j$ of the wave function $\Psi$ of a particle of mass $m$ in one dimension is defined as

$j = \frac{\hbar}{2mi} \left( \Psi^* \frac{\partial \Psi}{\partial x} - \Psi \frac{\partial \Psi^*}{\partial x} \right) = \frac{\hbar}{m} \operatorname{Im} \left( \Psi^* \frac{\partial \Psi}{\partial x} \right),$

where

$\hbar$ is the reduced Planck constant;
$\Psi^*$ denotes the complex conjugate of the wave function;
$\operatorname{Re}$ denotes the real part;
$\operatorname{Im}$ denotes the imaginary part.

Note that the probability current is proportional to a Wronskian $W(\Psi, \Psi^*)$.

In three dimensions, this generalizes to

$\mathbf{j} = \frac{\hbar}{m} \operatorname{Im} \left( \Psi^* \nabla \Psi \right),$

where $\nabla$ denotes the del or gradient operator. This can be simplified in terms of the kinetic momentum operator,

$\hat{\mathbf{p}} = -i\hbar \nabla,$

to obtain

$\mathbf{j} = \frac{1}{m} \operatorname{Re} \left( \Psi^* \hat{\mathbf{p}} \Psi \right).$
These definitions use the position basis (i.e. for a wavefunction in position space), but momentum space is possible.
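As a quick numerical check of the one-dimensional definition (a sketch, with all names and the finite-difference step invented for illustration), the derivative can be approximated by a central difference; for the plane wave Ψ(x) = e^{2ix} the function below returns approximately ħk/m with k = 2.

#include <complex.h>
#include <math.h>

#define HBAR 1.0  /* natural units for the sketch */
#define MASS 1.0

/* j(x) = (hbar/m) * Im( conj(psi(x)) * dpsi/dx ), with dpsi/dx estimated
   by a central finite difference of step h. */
double probability_current(double complex (*psi)(double), double x, double h)
{
    double complex dpsi = (psi(x + h) - psi(x - h)) / (2.0 * h);
    return (HBAR / MASS) * cimag(conj(psi(x)) * dpsi);
}

/* Example wave function: a plane wave with k = 2, so j should be about 2. */
double complex plane_wave(double x)
{
    return cexp(I * 2.0 * x);
}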
Spin-0 particle in an electromagnetic field
The above definition should be modified for a system in an external electromagnetic field. In SI units, the probability current of a charged particle of mass $m$ and electric charge $q$ includes a term due to the interaction with the electromagnetic field:

$\mathbf{j} = \frac{1}{m} \left[ \operatorname{Re} \left( \Psi^* \hat{\mathbf{p}} \Psi \right) - q \mathbf{A} |\Psi|^2 \right],$

where $\mathbf{A}$ is the magnetic vector potential. The term $q\mathbf{A}$ has dimensions of momentum. Note that $\hat{\mathbf{p}} = -i\hbar\nabla$ used here is the canonical momentum and is not gauge invariant.
|
https://en.wikipedia.org/wiki/Aortic%20arch
|
The aortic arch, arch of the aorta, or transverse aortic arch is the part of the aorta between the ascending and descending aorta. The arch travels backward, so that it ultimately runs to the left of the trachea.
Structure
The aorta begins at the level of the upper border of the second/third sternocostal articulation of the right side, behind the ventricular outflow tract and pulmonary trunk. The right atrial appendage overlaps it. The first few centimeters of the ascending aorta and the pulmonary trunk lie in the same pericardial sheath. The aorta runs at first upward, arches over the pulmonary trunk, right pulmonary artery, and right main bronchus to lie behind the right second costal cartilage. The right lung and sternum lie anterior to the aorta at this point. The aorta then passes posteriorly and to the left, anterior to the trachea, arches over the left main bronchus and left pulmonary artery, and reaches the left side of the T4 vertebral body. Apart from the T4 vertebral body, other structures such as the trachea, oesophagus, and thoracic duct (from front to back) also lie to the left of the aorta. Inferiorly, the arch of the aorta is connected to the ligamentum arteriosum, while superiorly it gives rise to three main branches. The arch of the aorta continues as the descending aorta after the T4 vertebral body.
The aortic arch has three main branches on its superior aspect. The first, and largest, branch of the arch of the aorta is the brachiocephalic trunk, which is to the right and slightly anterior to the other two branches and originates behind the manubrium of the sternum. Next, the left common carotid artery originates from the aortic arch to the left of the brachiocephalic trunk, then ascends along the left side of the trachea and through the superior mediastinum. Finally, the left subclavian artery comes off of the aortic arch to the left of the left common carotid artery and ascends, with the left common carotid, through the superior mediastinum and along the left side of t
|
https://en.wikipedia.org/wiki/Space%20blanket
|
A space blanket (also known as a Mylar blanket, emergency blanket, first aid blanket, safety blanket, thermal blanket, weather blanket, heat sheet, foil blanket, or shock blanket) is an especially low-weight, low-bulk blanket made of heat-reflective, thin, plastic sheeting. They are used on the exterior surfaces of spacecraft for thermal control, as well as by people. Their design reduces the heat loss in a person's body, which would otherwise occur due to thermal radiation, water evaporation, or convection. Their low weight and compact size before unfurling make them ideal when space or weight are at a premium. They may be included in first aid kits and with camping equipment. Lost campers and hikers have an additional possible benefit: the shiny surface flashes in the sun, allowing its use as an improvised distress beacon for searchers and as a method of signalling over long distances to other people.
Manufacturing
First developed by NASA's Marshall Space Flight Center in 1964 for the US space program,
the material comprises a thin sheet of plastic (often PET film) that is coated with a metallic, reflecting agent, making it metallized polyethylene terephthalate (MPET) that is usually gold or silver in color, which reflects up to 97% of radiated heat.
For use in space, polyimide (e.g. Kapton, UPILEX) substrate is usually chosen due to its resistance to the hostile space environment, large temperature range (cryogenic to −260 °C and for short excursions over 480 °C), low outgassing (making it suitable for vacuum use), and resistance to ultraviolet radiation. Aluminized Kapton, with foil thickness of 50 and 125 µm, was used on the Apollo Lunar Module. The polyimide gives the foils their distinctive amber-gold color.
Space blankets are made by vacuum-depositing a very precise amount of pure aluminum vapor onto a very thin, durable film substrate.
Usage
In their principal usage, space blankets are included in many emergency, first aid, and survival kits because t
|
https://en.wikipedia.org/wiki/Le%20Chat
|
Le Chat (Le Cat in English-language editions) is a Belgian comic strip created by Philippe Geluck. First published daily in the newspaper Le Soir from March 22, 1983, until March 23, 2013, it has since been issued directly as complete albums.
Right from the start it became one of the bestselling Franco-Belgian comics series and the mascot of Le Soir. While virtually an icon in Francophone Belgium, the character is far less well known in Flanders.
Concept
Le Chat is an adult, human-sized obese, anthropomorphic cat who typically wears a suit. He always has the same physical expression. He often comes up with elaborate reasonings which lead to hilariously absurd conclusions e.g. by taking metaphors literally or by adding increasingly unlikely what-ifs to ordinary situations.
One page in length, it appeared weekly in the "Victor" supplement of Belgian newspaper Le Soir. For Le Chat's 20th anniversary in 2003, Le Soir allowed Geluck to illustrate that day's entire newspaper. An exhibition of Le Chat's history (and that of his creator), "Le Chat s'expose", was first held at the Autoworld Motor Museum in Brussels in Spring 2004, and has since toured Europe. In March–October 2006 it appeared at Les Champs Libres in Rennes.
In popular culture
As part of the Brussels' Comic Book Route a wall in the Zuidlaan/Boulevard du Midi in Brussels was dedicated to "Le Chat" in August 1993.
On October 11, 2008, Le Chat received his own market place in Hotton in the Belgian province Luxembourg. A statue of him, sculpted by François Deboucq, was placed in the center, depicting him holding an umbrella which rains water down from inside.
In 2015, "Le Chat" received his own museum.
Bibliography
Le Chat, 2001
Le Retour du Chat, 2001
La Vengeance du Chat, 2002
Le Quatrième Chat, 2002
Le Chat au Congo, 2003
Ma langue au Chat, 2004
Le Chat à Malibu, 2005
Le Chat 1999,9999, 1999
L'Avenir du Chat, 1999
Le Chat est content, 2000
L'Affaire le Chat, 2001 (Available in English as "The
|
https://en.wikipedia.org/wiki/Domestic%20pigeon
|
The domestic pigeon (Columba livia domestica or Columba livia forma domestica) is a pigeon subspecies that was derived from the rock dove (also called the rock pigeon). The rock pigeon is the world's oldest domesticated bird. Mesopotamian cuneiform tablets mention the domestication of pigeons more than 5,000 years ago, as do Egyptian hieroglyphics. Research suggests that domestication of pigeons occurred as early as 10,000 years ago.
Pigeons have held historical importance to humans as food, pets, holy animals, and messengers. Due to their homing ability, pigeons have been used to deliver messages, including during the world wars. Despite this, city pigeons today are seen as pests, mainly due to their droppings. Feral pigeons are considered invasive in many parts of the world, though they have a positive impact on wild bird populations, serving as an important prey species for birds of prey.
History of domestication
The earliest recorded mention of pigeons comes from Mesopotamia some 5,000 years ago. Pigeon Valley in Cappadocia has rock formations that were carved into dovecotes.
Despite the long history of pigeons, little is known about the specifics of their initial domestication. Which subspecies of C. livia was the progenitor of domestics, exactly when, how many times, where and how they were domesticated, and how they spread, remains unknown. Their fragile bones and similarity to wild birds make the fossil record a poor tool for their study. Thus most of what is known comes from written accounts, which almost certainly do not cover the first stages of domestication. There is strong evidence that some divergences in appearance between the wild-type rock dove and domestic pigeons, such as checkered wing patterns and red/brown coloration, may be due to introgression by cross-breeding with the speckled pigeon.
Ancient Egyptians kept vast quantities of them, and would sacrifice tens of thousands at a time for ritual purposes. Akbar the Great traveled with a co
|
https://en.wikipedia.org/wiki/Hybrid%20automatic%20repeat%20request
|
Hybrid automatic repeat request (hybrid ARQ or HARQ) is a combination of high-rate forward error correction (FEC) and automatic repeat request (ARQ) error-control. In standard ARQ, redundant bits are added to data to be transmitted using an error-detecting (ED) code such as a cyclic redundancy check (CRC). Receivers detecting a corrupted message will request a new message from the sender. In Hybrid ARQ, the original data is encoded with an FEC code, and the parity bits are either immediately sent along with the message or only transmitted upon request when a receiver detects an erroneous message. The ED code may be omitted when a code is used that can perform both forward error correction (FEC) in addition to error detection, such as a Reed–Solomon code. The FEC code is chosen to correct an expected subset of all errors that may occur, while the ARQ method is used as a fall-back to correct errors that are uncorrectable using only the redundancy sent in the initial transmission. As a result, hybrid ARQ performs better than ordinary ARQ in poor signal conditions, but in its simplest form this comes at the expense of significantly lower throughput in good signal conditions. There is typically a signal quality cross-over point below which simple hybrid ARQ is better, and above which basic ARQ is better.
Simple Hybrid ARQ
The simplest version of HARQ, Type I HARQ, adds both ED and FEC information to each message prior to transmission. When the coded data block is received, the receiver first decodes the error-correction code. If the channel quality is good enough, all transmission errors should be correctable, and the receiver can obtain the correct data block. If the channel quality is bad, and not all transmission errors can be corrected, the receiver will detect this situation using the error-detection code, then the received coded data block is rejected and a re-transmission is requested by the receiver, similar to ARQ.
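The Type I receiver logic just described can be sketched as follows; the function names and return conventions are invented for illustration, with fec_decode() and crc_ok() standing in for whatever FEC and error-detecting codes the system actually uses.

#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

/* Stand-ins (assumed, not real APIs) for the system's FEC decoder and
   error-detection check. fec_decode returns false if it cannot correct. */
bool fec_decode(const uint8_t *coded, size_t n, uint8_t *out, size_t *out_n);
bool crc_ok(const uint8_t *data, size_t n);

typedef enum { HARQ_ACK, HARQ_NAK } harq_reply;

/* Type I HARQ receiver step: try to FEC-decode the block, accept it only
   if the error-detecting code then passes, otherwise request a
   retransmission. */
harq_reply harq_type1_receive(const uint8_t *coded, size_t coded_len,
                              uint8_t *data, size_t *data_len)
{
    if (!fec_decode(coded, coded_len, data, data_len))
        return HARQ_NAK;  /* too many errors to correct */
    if (!crc_ok(data, *data_len))
        return HARQ_NAK;  /* residual errors detected */
    return HARQ_ACK;      /* block accepted */
}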
In a more sophisticated form, Type II HARQ,
|
https://en.wikipedia.org/wiki/Punctured%20code
|
In coding theory, puncturing is the process of removing some of the parity bits after encoding with an error-correction code. This has the same effect as encoding with an error-correction code with a higher rate, or less redundancy. However, with puncturing the same decoder can be used regardless of how many bits have been punctured, thus puncturing considerably increases the flexibility of the system without significantly increasing its complexity.
In some cases, a pre-defined puncturing pattern is used in the encoder; the decoder then implements the inverse operation, known as depuncturing.
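A minimal sketch of puncturing and depuncturing with a fixed pattern follows; it is an illustration, not any standard's pattern. Assuming a rate-1/2 mother code, the hypothetical pattern below keeps 3 of every 4 coded bits, raising the effective rate to 2/3, and depuncturing re-inserts erasures so the same decoder runs regardless of how many bits were dropped.

```python
# Hedged sketch: puncture with a fixed pattern, then depuncture by
# marking the removed positions as erasures (None) for the decoder.

PATTERN = [1, 1, 1, 0]  # 1 = transmit the coded bit, 0 = puncture (drop) it

def puncture(coded_bits, pattern=PATTERN):
    """Drop the coded bits whose pattern position is 0."""
    return [b for i, b in enumerate(coded_bits) if pattern[i % len(pattern)]]

def depuncture(received_bits, n_coded, pattern=PATTERN):
    """Re-insert erasures where bits were punctured, restoring the
    mother-code frame length the decoder expects."""
    out, it = [], iter(received_bits)
    for i in range(n_coded):
        out.append(next(it) if pattern[i % len(pattern)] else None)
    return out

coded = [1, 0, 1, 1, 0, 0, 1, 0]   # 8 bits from the rate-1/2 mother code
tx = puncture(coded)                # 6 bits actually transmitted
rx = depuncture(tx, len(coded))     # [1, 0, 1, None, 0, 0, 1, None]
```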
Puncturing is used in UMTS during the rate matching process. It is also used in Wi-Fi, Wi-SUN, GPRS, EDGE, DVB-T and DAB, as well as in the DRM Standards.
Puncturing is often used with the Viterbi algorithm in coding systems.
During the Radio Resource Control (RRC) connection setup procedure, the NBAP radio link setup message sent to the Node B carries the uplink puncturing limit, along with the uplink spreading factor and uplink scrambling code.
Puncturing was introduced by Gustave Solomon and J. J. Stiffler in 1964.
See also
Singleton bound, an upper bound in coding theory
|
https://en.wikipedia.org/wiki/Basel%20Institute%20for%20Immunology
|
The Basel Institute for Immunology (BII) was founded in 1969 as a basic research institute in immunology, located at 487 Grenzacherstrasse in Basel, Switzerland, on the Rhine River, down the street from the main Hoffmann-La Roche campus near the Swiss-German border. The institute opened its doors in 1971.
Description
It was a unique concept in the history of funding mechanisms for basic science and of the relationship between basic science and industry. Through the influence of Paul Sacher, the Swiss conductor and patron of the arts and sciences, the drug company Hoffmann-La Roche committed unrestricted support of $24 million per year and gave its founding director, Niels K. Jerne, complete freedom in the design of the institute. Jerne retired in 1980 and was succeeded by Fritz Melchers, who generally maintained Jerne's themes and vision.
Research groups
The institute was constructed to house about 50 scientists in interactive research groups of 3 to 5 researchers, supported by technical staff; all scientists held the single title of "member", with renewable contracts of 2 to 5 years. Interaction was facilitated by laboratories split across two floors connected by a spiral staircase surrounding a central gathering room. Famously, Charley Steinberg presided over casual meetings in the cafeteria. Scientists from beginning postdoctoral researchers to senior professors were given complete freedom of research design, without the pressures of individual fundraising, proposal writing, politicking, or fitting their research to popular demands and funding sources. The institute's administrative structure was minimal. Continuous visits by distinguished scientists from around the world, for periods of a day to several months, enriched the environment.
Culture and achievements
Establishment of the BII coincided with a convergence of a critical mass of young and energetic scientists from around the world in Basel to staff three startup research ventures to exploit the newly breaking technologies related to
|
https://en.wikipedia.org/wiki/Field%20stain
|
Field stain is a histological method for staining blood smears. It is used for staining thick blood films in order to detect malarial parasites. Field's stain is a version of a Romanowsky stain, used for rapid processing of specimens.
Field's stain consists of two parts: Field's stain A is methylene blue and Azure 1 dissolved in a phosphate buffer solution; Field's stain B is Eosin Y in a buffer solution. Field stain is named after the physician John William Field, who developed it in 1941.
|
https://en.wikipedia.org/wiki/Radio%20transmitter%20design
|
A radio transmitter, or just transmitter, is an electronic device which produces radio waves with an antenna. Radio waves are electromagnetic waves with frequencies between about 30 Hz and 300 GHz. The transmitter itself generates a radio-frequency alternating current, which is applied to the antenna. When excited by this alternating current, the antenna radiates radio waves. Transmitters are necessary parts of all systems that use radio: radio and television broadcasting, cell phones, wireless networks, radar, two-way radios like walkie-talkies, radio navigation systems like GPS, and remote entry systems, among numerous other uses.
A transmitter can be a separate piece of equipment, or an electronic circuit within another device. Most transmitters consist of an electronic oscillator which generates an oscillating carrier wave, a modulator which impresses an information-bearing modulation signal on the carrier, and an amplifier which increases the power of the signal. To prevent interference between different users of the radio spectrum, transmitters are strictly regulated by national radio laws and are restricted to certain frequencies and power levels, depending on use. The design must usually be type-approved before sale. An important legal requirement is that the circuit does not radiate significant radio wave power outside its assigned frequency band, called spurious emission.
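As a rough illustration of the oscillator–modulator–amplifier chain, the sketch below simulates the three stages in baseband Python. All values (sample rate, frequencies, modulation index, gain) are hypothetical and chosen only for readability; a real transmitter implements these stages in analog or RF hardware, and amplitude modulation is just one of many possible modulator choices.

```python
# Hedged sketch of the three classic transmitter stages in simulation:
# oscillator -> amplitude modulator -> power amplifier (a simple gain here).
import numpy as np

fs = 1_000_000          # sample rate, Hz (hypothetical)
fc = 100_000            # carrier frequency, Hz
fm = 1_000              # message tone, Hz
t = np.arange(0, 0.01, 1 / fs)

carrier = np.cos(2 * np.pi * fc * t)      # oscillator: sinusoidal carrier
message = np.cos(2 * np.pi * fm * t)      # information-bearing signal
m = 0.5                                    # modulation index
am_signal = (1 + m * message) * carrier    # modulator: impress message on carrier
output = 10.0 * am_signal                  # amplifier: raise power toward the antenna
```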
Design issues
A radio transmitter design has to meet certain requirements. These include the frequency of operation, the type of modulation, the stability and purity of the resulting signal, the efficiency of power use, and the power level required to meet the system design objectives. High-power transmitters may have additional constraints with respect to radiation safety, generation of X-rays, and protection from high voltages.
Typically a transmitter design includes generation of a carrier signal, which is normally sinusoidal, optionally one or more frequency multiplication stages
|