source | text
---|---
https://en.wikipedia.org/wiki/KXVO
|
KXVO (channel 15) is a television station in Omaha, Nebraska, United States, airing programming from the digital multicast network TBD. It is owned by Mitts Telecasting Company LLC, which maintains a local marketing agreement (LMA) with the Sinclair Broadcast Group, owner of dual Fox/CW affiliate KPTM (channel 42), for the provision of certain services. Both stations share studios on Farnam Street in Omaha, while KXVO's transmitter is located on Pflug Road, south of Gretna and I-80.
History
KXVO signed on the air on June 10, 1995 as an affiliate of The WB, which had debuted nationally almost five months earlier, on January 11 of that year; the station was originally owned by Cocola Broadcasting but was operated by Pappas Telecasting under a local marketing agreement. In the intervening months, Omaha had access to The WB via cable and satellite providers through Chicago-based national superstation WGN. Cocola sold the station to Mitts Telecasting Company in 2000, which retained the LMA with Pappas.
On January 24, 2006, CBS Corporation and Time Warner announced that The WB and UPN would cease broadcasting that September and merge their programming to form a new "fifth" network called The CW. The letters represent the first initials of its corporate parents, CBS (the parent company of UPN) and the Warner Bros. Entertainment unit of Time Warner. In April 2006, KXVO announced an affiliation agreement with The CW, which began airing on the station when the network launched on September 18 of that year.
On January 16, 2009, it was announced that several Pappas stations, including sister station KPTM, would be sold to New World TV Group, after the sale received United States bankruptcy court approval. The LMA between KXVO and KPTM continued after the deal was finalized.
Titan TV Broadcast Group announced the sale of most of its stations, including KPTM and the LMA with KXVO (which remained under Mitts Telecasting ownership after the sale), to the Sinclair B
|
https://en.wikipedia.org/wiki/Subsolar%20point
|
The subsolar point on a planet is the point at which its Sun is perceived to be directly overhead (at the zenith); that is, where the Sun's rays strike the planet exactly perpendicular to its surface. It can also mean the point closest to the Sun on an astronomical object, even though the Sun might not be visible.
To an observer on a planet with an orientation and rotation similar to those of Earth, the subsolar point will appear to move westward with a speed of 1600 km/h, completing one circuit around the globe each day, approximately moving along the equator. However, it will also move north and south between the tropics over the course of a year, so will appear to spiral like a helix.
The subsolar point contacts the Tropic of Cancer on the June solstice and the Tropic of Capricorn on the December solstice. The subsolar point crosses the Equator on the March and September equinoxes.
Coordinates of the subsolar point
The subsolar point moves constantly on the surface of the Earth, but for any given time its coordinates, or latitude and longitude, can be calculated as follows:

φ_ss = δ_☉
λ_ss = −15° × (T_GMT − 12 + E_min/60)

where
φ_ss is the latitude of the subsolar point in degrees,
λ_ss is the longitude of the subsolar point in degrees,
δ_☉ is the declination of the Sun in degrees,
T_GMT is the Greenwich Mean Time or UTC, in decimal hours since 00:00:00 UTC on the relevant date,
E_min is the equation of time in minutes.
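A minimal sketch of these formulas in Python (the function and variable names are illustrative; the Sun's declination and the equation of time must be supplied from elsewhere, such as an almanac or an astronomy library):

def subsolar_point(declination_deg, utc_decimal_hours, equation_of_time_min):
    # returns (latitude, longitude) of the subsolar point in degrees
    latitude = declination_deg  # the latitude simply equals the Sun's declination
    longitude = -15.0 * (utc_decimal_hours - 12.0 + equation_of_time_min / 60.0)
    longitude = (longitude + 180.0) % 360.0 - 180.0  # wrap into [-180, 180)
    return latitude, longitude

# Example: June solstice around 12:00 UTC, equation of time about -1.6 minutes
print(subsolar_point(23.44, 12.0, -1.6))  # approximately (23.44, 0.4)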
Observation in specific locations
Qibla observation by shadows: twice a year the subsolar point passes through the Kaaba in Mecca, Saudi Arabia, allowing the Muslim sacred direction to be found by observing shadows.
When the point passes through Hawaii, which is the only U.S. state in which this happens, the event is known as Lahaina Noon.
See also
Subsatellite point
References
External links
Day and Night World Map (shows location of subsolar point for any user-specified time)
Spherical astronomy
Earth
Astronomical coordinate systems
Solstices
Point (geometry)
|
https://en.wikipedia.org/wiki/Safe%40Office
|
Safe@Office is a line of firewall and virtual private network (VPN) appliances developed by SofaWare Technologies, a Check Point company.
The Check Point Safe@Office product line is targeted at the small and medium business segment, and includes the 500 and 500W (with Wi-Fi) series of internet security appliances. The old S-Box, Safe@Home, 100 series, 200 series, and 400W series are discontinued.
The appliances are licensed according to the number of protected IP addresses (referred to as users), in tiers of 5, 25 or unlimited users. There is also a variant with a built-in asymmetric digital subscriber line (ADSL) modem.
See also
VPN-1 UTM Edge — similar appliance with possibility of being managed from the Check Point SmartCenter.
References
External links
Safe@Office Product Page At The Check Point Website
Safe@Office Product Page At The SofaWare Website
Computer network security
|
https://en.wikipedia.org/wiki/Theorem%20on%20friends%20and%20strangers
|
The theorem on friends and strangers is a mathematical theorem in an area of mathematics called Ramsey theory.
Statement
Suppose a party has six people. Consider any two of them. They might be meeting for the first time—in which case we will call them mutual strangers; or they might have met before—in which case we will call them mutual acquaintances. The theorem says:
In any party of six people, at least three of them are (pairwise) mutual strangers or mutual acquaintances.
Conversion to a graph-theoretic setting
A proof of the theorem requires nothing more than three steps of simple logic. It is convenient to phrase the problem in graph-theoretic language.
Suppose a graph has 6 vertices and every pair of (distinct) vertices is joined by an edge. Such a graph is called a complete graph (because there cannot be any more edges). A complete graph on n vertices is denoted by the symbol K_n.
Now take a K_6. It has 15 edges in all. Let the 6 vertices stand for the 6 people in our party. Let the edges be coloured red or blue depending on whether the two people represented by the vertices connected by the edge are mutual strangers or mutual acquaintances, respectively. The theorem now asserts:
No matter how you colour the 15 edges of a K_6 with red and blue, you cannot avoid having either a red triangle—that is, a triangle all of whose three sides are red, representing three pairs of mutual strangers—or a blue triangle, representing three pairs of mutual acquaintances. In other words, whatever colours you use, there will always be at least one monochromatic triangle (that is, a triangle all of whose edges have the same colour).
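The assertion is small enough to confirm exhaustively by machine. A brief illustrative Python script (not part of the original article) that checks every red/blue colouring of the 15 edges of K_6 for a monochromatic triangle:

from itertools import combinations, product

vertices = range(6)
edges = list(combinations(vertices, 2))      # the 15 edges of K_6
triangles = list(combinations(vertices, 3))  # the 20 triangles of K_6

def has_monochromatic_triangle(colouring):
    colour = dict(zip(edges, colouring))
    return any(colour[(a, b)] == colour[(a, c)] == colour[(b, c)]
               for a, b, c in triangles)

# All 2^15 = 32768 colourings contain a monochromatic triangle, so this prints True.
print(all(has_monochromatic_triangle(c) for c in product("RB", repeat=15)))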
Proof
Choose any one vertex; call it P. There are five edges leaving P. They are each coloured red or blue. The pigeonhole principle says that at least three of them must be of the same colour; for if there are fewer than three of one colour, say red, then there are at least three that are blue.
Let A, B, C be the other ends of these three edges, all of the same
|
https://en.wikipedia.org/wiki/Verifiable%20random%20function
|
In cryptography, a verifiable random function (VRF) is a public-key pseudorandom function that provides proofs that its outputs were calculated correctly. The owner of the secret key can compute the function value as well as an associated proof for any input value. Everyone else, using the proof and the associated public key (or verification key), can check that this value was indeed calculated correctly, yet this information cannot be used to find the secret key.
A verifiable random function can be viewed as a public-key analogue of a keyed cryptographic hash and as a cryptographic commitment to an exponentially large number of seemingly random bits. The concept of a verifiable random function is closely related to that of a verifiable unpredictable function (VUF), whose outputs are hard to predict but do not necessarily seem random.
The concept of a VRF was introduced by Micali, Rabin, and Vadhan in 1999. Since then, verifiable random functions have found widespread use in cryptocurrencies, as well as in proposals for protocol design and cybersecurity.
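The prove/verify relationship described above can be sketched schematically. The following toy Python code mimics the shape of an RSA-based "full-domain-hash" VRF with tiny, insecure parameters; it only illustrates the interface (evaluate-and-prove with the secret key, verify with the public key) and is not a secure or standardized construction:

import hashlib

# Toy RSA parameters -- far too small to be secure, for illustration only
p, q = 10007, 10009
n, e = p * q, 65537
d = pow(e, -1, (p - 1) * (q - 1))  # secret exponent

def h(data):
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % n

def vrf_prove(secret_d, x):
    proof = pow(h(x), secret_d, n)  # only the secret-key holder can compute this
    output = hashlib.sha256(proof.to_bytes(32, "big")).hexdigest()
    return output, proof

def vrf_verify(public_e, x, output, proof):
    if pow(proof, public_e, n) != h(x):  # check the proof against the input
        return False
    return output == hashlib.sha256(proof.to_bytes(32, "big")).hexdigest()

out, pi = vrf_prove(d, b"hello")
print(vrf_verify(e, b"hello", out, pi))  # True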
Constructions
In 1999, Micali, Rabin, and Vadhan introduced the concept of a VRF and proposed the first such one. The original construction was rather inefficient: it first produces a verifiable unpredictable function, then uses a hard-core bit to transform it into a VRF; moreover, the inputs have to be mapped to primes in a complicated manner: namely, by using a prime sequence generator that generates primes with overwhelming probability using a probabilistic primality test. The verifiable unpredictable function thus proposed, which is provably secure if a variant of the RSA problem is hard, is defined as follows: The public key PK is , where m is the product of two random primes, r is a number randomly selected from , coins is a randomly selected set of bits, and Q a function selected randomly from all polynomials of degree over the field . The secret key is . Given an input x and a secret key SK, the VUF use
|
https://en.wikipedia.org/wiki/ACPI
|
Advanced Configuration and Power Interface (ACPI) is an open standard that operating systems can use to discover and configure computer hardware components, to perform power management (e.g. putting unused hardware components to sleep), auto configuration (e.g. Plug and Play and hot swapping), and status monitoring. First released in December 1996, ACPI aims to replace Advanced Power Management (APM), the MultiProcessor Specification, and the Plug and Play BIOS (PnP) Specification. ACPI brings power management under the control of the operating system, as opposed to the previous BIOS-centric system that relied on platform-specific firmware to determine power management and configuration policies. The specification is central to the Operating System-directed configuration and Power Management (OSPM) system. ACPI defines hardware abstraction interfaces between the device's firmware (e.g. BIOS, UEFI), the computer hardware components, and the operating systems.
Internally, ACPI advertises the available components and their functions to the operating system kernel using instruction lists ("methods") provided through the system firmware (UEFI or BIOS), which the kernel parses. ACPI then executes the desired operations written in ACPI Machine Language (such as the initialization of hardware components) using an embedded minimal virtual machine.
Intel, Microsoft and Toshiba originally developed the standard, while HP, Huawei and Phoenix also participated later. In October 2013, the ACPI Special Interest Group (ACPI SIG), the original developers of the ACPI standard, agreed to transfer all assets to the UEFI Forum, where all future development takes place. The latest version of the standard, 6.5, was released in August 2022.
Architecture
The firmware-level ACPI has three main components: the ACPI tables, the ACPI BIOS, and the ACPI registers. The ACPI BIOS generates ACPI tables and loads ACPI tables into main memory. Much of the firmware ACPI functionality is provided in bytecode of ACP
|
https://en.wikipedia.org/wiki/Bouillon%20cube
|
A bouillon cube (Canada and US), stock cube (Australia, Ireland, New Zealand, South Africa and UK), or broth cube (Asia) is dehydrated broth or stock formed into a small cube or other cuboid shape. The most common format is a small cube. It is typically made from dehydrated vegetables or meat stock, a small portion of fat, MSG, salt, and seasonings, shaped into a small cube. Vegetarian and vegan types are also made. Bouillon is also available in granular, powdered, liquid, and paste forms.
History
Dehydrated meat stock, in the form of tablets, was known in the 17th century to English food writer Anne Blencowe, who died in 1718, and elsewhere as early as 1735. Various French cooks in the early 19th century (Lefesse, Massué, and Martin) tried to patent bouillon cubes and tablets, but were turned down for lack of originality. Nicolas Appert also proposed such dehydrated bouillon in 1831.
Portable soup was a kind of dehydrated food used in the 18th and 19th centuries. It was a precursor of meat extract and bouillon cubes, and of industrially dehydrated food. It is also known as pocket soup or veal glue. It is a cousin of the glace de viande of French cooking. It was long a staple of seamen and explorers, for it would keep for many months or even years. In this context, it was a filling and nutritious dish. Portable soup of less extended vintage was, according to the 1881 Household Cyclopedia, "exceedingly convenient for private families, for by putting one of the cakes in a saucepan with about a quart of water, and a little salt, a basin of good broth may be made in a few minutes."
In the mid-19th century, German chemist Justus von Liebig developed meat extract, but it was more expensive than bouillon cubes.
The invention of the bouillon cube is also attributed to Auguste Escoffier, one of the most accomplished French chefs of his time, who also pioneered many other advances in food preservation, such as the canning of tomatoes and vegetables.
Industrially
|
https://en.wikipedia.org/wiki/Variable-frequency%20transformer
|
A variable-frequency transformer (VFT) is used to transmit electricity between two (asynchronous or synchronous) alternating current frequency domains. The VFT is a relatively recent development. Most asynchronous grid inter-ties use high-voltage direct current converters, while synchronous grid inter-ties are connected by lines and "ordinary" transformers, either without the ability to control power flow between the systems or with a phase-shifting transformer that provides some flow control.
It can be thought of as a very high power synchro, or a rotary converter acting as a frequency changer, which is more efficient than a motor–generator of the same rating.
Construction and operation
A variable-frequency transformer is a doubly fed electric machine resembling a vertical shaft hydroelectric generator with a three-phase wound rotor, connected by slip rings to one external power circuit. The stator is connected to the other. With no applied torque, the shaft rotates due to the difference in frequency between the networks connected to the rotor and stator. A direct-current torque motor is mounted on the same shaft; changing the direction of torque applied to the shaft changes the direction of power flow.
The variable-frequency transformer behaves as a continuously adjustable phase-shifting transformer. It allows control of the power flow between two networks. Unlike power electronics solutions such as back-to-back HVDC, the variable frequency transformer does not demand harmonic filters and reactive power compensation. Limitations of the concept are the current-carrying capacity of the slip rings for the rotor winding.
Projects
Five small variable-frequency transformers with a total power rating of 25 MVA were in use at Neuhof Substation, Bad Sachsa, Germany, coupling the power grids of former East and West Germany between 1985 and 1990.
Langlois Substation in Québec, Canada, installed a 100 MW variable-frequency transformer in 2004 to connect the asynchronous grids in Québe
|
https://en.wikipedia.org/wiki/Preferred%20frame
|
In theoretical physics, a preferred frame or privileged frame is usually a special hypothetical frame of reference in which the laws of physics might appear to be identifiably different (simpler) from those in other frames.
In theories that apply the principle of relativity to inertial motion, physics is the same in all inertial frames, and is even the same in all frames under the principle of general relativity.
Preferred frame in aether theory
In theories that presume that light travels at a fixed speed relative to an unmodifiable and detectable luminiferous aether, a preferred frame would be a frame in which this aether would be stationary. In 1887, Michelson and Morley tried to identify the state of motion of the aether. To do so, they assumed Galilean relativity to be satisfied by clocks and rulers; that is, that the length of rulers and periods of clocks are invariant under any Galilean frame change. Under such an hypothesis, the aether should have been observed.
By comparing measurements made in different directions and looking for an effect due to the Earth's orbital speed, their experiment famously produced a null result. As a consequence, within Lorentz ether theory the Galilean transformation was replaced by the Lorentz transformation. However, in Lorentz aether theory the existence of an undetectable aether is assumed and the relativity principle holds. The theory was quickly replaced by special relativity, which gave similar formulas without the existence of an unobservable aether. All inertial frames are physically equivalent, in both theories. More precisely, provided that no phenomenon violates the principle of relativity of motion, there is no means to measure the velocity of an inertial observer with regard to a possible medium of propagation of quantum waves.
Inertial frames preferred above noninertial frames
Although all inertial frames are equivalent under classical mechanics and special relativity, the set of all inertial frames is priv
|
https://en.wikipedia.org/wiki/Comparison%20of%20Unicode%20encodings
|
This article compares Unicode encodings. Two situations are considered: 8-bit-clean environments (which can be assumed), and environments that forbid use of byte values that have the high bit set. Originally such prohibitions were to allow for links that used only seven data bits, but they remain in some standards and so some standard-conforming software must generate messages that comply with the restrictions. Standard Compression Scheme for Unicode and Binary Ordered Compression for Unicode are excluded from the comparison tables because it is difficult to simply quantify their size.
Compatibility issues
A UTF-8 file that contains only ASCII characters is identical to an ASCII file. Legacy programs can generally handle UTF-8 encoded files, even if they contain non-ASCII characters. For instance, the C printf function can print a UTF-8 string, as it only looks for the ASCII '%' character to define a formatting string and prints all other bytes unchanged; thus non-ASCII characters are output unchanged.
UTF-16 and UTF-32 are incompatible with ASCII files, and thus require Unicode-aware programs to display, print and manipulate them, even if the file is known to contain only characters in the ASCII subset. Because they contain many zero bytes, the strings cannot be manipulated by normal null-terminated string handling for even simple operations such as copy.
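A quick illustration of both points in Python (the -le variants are used so no byte-order mark is added):

s = "Hi"
print(s.encode("utf-8"))      # b'Hi' -- byte-identical to the ASCII encoding
print(s.encode("utf-16-le"))  # b'H\x00i\x00' -- embedded zero bytes
print(s.encode("utf-32-le"))  # b'H\x00\x00\x00i\x00\x00\x00'
print("€".encode("utf-8"))    # b'\xe2\x82\xac' -- non-ASCII characters take 2-4 bytes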
Therefore, even on most UTF-16 systems such as Windows and Java, UTF-16 text files are not common; older 8-bit encodings such as ASCII or ISO-8859-1 are still used, forgoing Unicode support; or UTF-8 is used for Unicode. One rare counter-example is the "strings" file used by macOS (Mac OS X 10.3 Panther and later) applications for lookup of internationalized versions of messages which defaults to UTF-16, with "files encoded using UTF-8 ... not guaranteed to work."
XML is, by convention, encoded as UTF-8, and all XML processors must at least support UTF-8 (including US-ASCII by definition) and UTF-16.
Ef
|
https://en.wikipedia.org/wiki/MP6
|
The Rise mP6 was a superpipelined and superscalar microprocessor designed by Rise Technology to compete with the Intel Pentium line.
History
Rise Technology had spent five years developing an x86-compatible microprocessor, and finally introduced it in November 1998 as a low-cost, low-power alternative for the Super Socket 7 platform, which allowed higher front-side bus speeds than the previous Socket 7 and made it possible for other CPU manufacturers to keep competing against Intel, which had moved to the Slot 1 platform.
Design
The mP6 made use of the MMX instruction set and had three MMX pipelines which allowed the CPU to execute up to three MMX instructions in a single cycle. Its three integer units made it possible to execute three integer instructions in a single cycle as well and the fully pipelined floating point unit could execute up to two floating-point instructions per cycle. To further improve the performance the core utilized branch prediction and a number of techniques to resolve data dependency conflicts.
According to Rise, the mP6 should perform almost as fast as Intel Pentium II at the same frequencies.
Performance
Despite its innovative features, the real-life performance of the mP6 proved disappointing. This was mainly due to the small L1 cache. Another reason was that the Rise mP6's PR 266 rating was based on the old Intel Pentium MMX, while its main competitors were the Intel Celeron 266, the IDT WinChip 2-266 and the AMD K6-2 266, all of which delivered more performance in most benchmarks and applications. The Celeron and the K6-2 actually ran at 266 MHz, and the WinChip 2's PR rating was based on the performance of its AMD opponent.
Use
Announced in 1998, the chip never achieved widespread use, and Rise quietly exited the market in December of the following year.
Like competitors Cyrix and IDT, Rise found it was unable to compete with Intel and AMD.
Legacy
Silicon Integrated Systems (SiS) licensed the mP6 technology, and used it
|
https://en.wikipedia.org/wiki/List%20of%20Puerto%20Rico%20state%20forests
|
Puerto Rico state forests (Spanish: Bosques estatales de Puerto Rico), sometimes referred to as Puerto Rico Commonwealth forests in English, are protected forest reserves managed by the government of Puerto Rico, particularly by the Puerto Rico Department of Natural and Environmental Resources. In addition to their function as protected forest reserves, many of the forests are analogous to state parks in other states and territories of the United States, as they also function as management units that cater to recreational, educational and cultural activities. Additionally, state forests in Puerto Rico can contain units with additional protection designations within their boundaries, as is the case of La Parguera Natural Reserve within Boquerón State Forest, for example. There are currently 20 units in the Puerto Rico state forest system.
History
The first protected forests in Puerto Rico were designated not for their ecological value but for their industrial timber utility in the form of Spanish Crown Lands under the Inspección de Montes, the equivalent of the Spanish Colonial Forest service. El Yunque, for example, was the first forested area to receive this designation in Puerto Rico.
Puerto Rico became a territory of the United States in the aftermath of the Spanish-American War in 1898 and, in 1903, President Theodore Roosevelt set aside these former timberlands to proclaim the Luquillo Forest Reserve, the first ecologically protected area on the island. Meanwhile, at the state level, colonial governor Arthur Yager set aside mangrove forests along the coasts of Puerto Rico for their ecological value between 1918 and 1919: Aguirre, Boquerón, Ceiba, and Guánica. The latter had its boundaries extended in order to protect a large tract of dry forest, a type of ecosystem that was once common in the Caribbean but had by then almost completely disappeared. These became the first state- or territorial-level protected forests on the island and, in December of 1919, owne
|
https://en.wikipedia.org/wiki/Local%20reference%20frame
|
In theoretical physics, a local reference frame (local frame) refers to a coordinate system or frame of reference that is only expected to function over a small region or a restricted region of space or spacetime.
The term is most often used in the context of the application of local inertial frames to small regions of a gravitational field. Although gravitational tidal forces will cause the background geometry to become noticeably non-Euclidean over larger regions, if we restrict ourselves to a sufficiently small region containing a cluster of objects falling together in an effectively uniform gravitational field, their physics can be described as the physics of that cluster in a space free from explicit background gravitational effects.
Equivalence principle
When constructing his general theory of relativity, Einstein made the following observation: a freely falling object in a gravitational field will not be able to detect the existence of the field by making local measurements ("a falling man feels no gravity"). Einstein was then able to complete his general theory by arguing that the physics of curved spacetime must reduce over small regions to the physics of simple inertial mechanics (in this case special relativity) for small freefalling regions.
Einstein referred to this as "the happiest idea of my life".
Laboratory frame
In physics, the laboratory frame of reference, or lab frame for short, is a frame of reference centered on the laboratory in which the experiment (either real or thought experiment) is done. This is the reference frame in which the laboratory is at rest. Also, this is usually the frame of reference in which measurements are made, since they are presumed (unless stated otherwise) to be made by laboratory instruments. An example of instruments in a lab frame, would be the particle detectors at the detection facility of a particle accelerator.
See also
Breit frame
Center-of-mass frame
Frame bundle
Inertial frame of reference
Local coo
|
https://en.wikipedia.org/wiki/Non-inertial%20reference%20frame
|
A non-inertial reference frame (also known as an accelerated reference frame) is a frame of reference that undergoes acceleration with respect to an inertial frame. An accelerometer at rest in a non-inertial frame will, in general, detect a non-zero acceleration. While the laws of motion are the same in all inertial frames, in non-inertial frames, they vary from frame to frame depending on the acceleration.
In classical mechanics it is often possible to explain the motion of bodies in non-inertial reference frames by introducing additional fictitious forces (also called inertial forces, pseudo-forces and d'Alembert forces) to Newton's second law. Common examples of this include the Coriolis force and the centrifugal force. In general, the expression for any fictitious force can be derived from the acceleration of the non-inertial frame. As stated by Goodman and Warner, "One might say that F = ma holds in any coordinate system provided the term 'force' is redefined to include the so-called 'reversed effective forces' or 'inertia forces'."
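For the common special case of a frame rotating at constant angular velocity ω, the fictitious forces on a particle of mass m at position r moving with velocity v (both measured in the rotating frame) are the centrifugal term −m ω × (ω × r) and the Coriolis term −2m ω × v. A small numeric sketch in Python (the parameter values are made up for illustration):

import numpy as np

m = 1.0                                 # kg
omega = np.array([0.0, 0.0, 7.29e-5])   # rad/s, roughly Earth's rotation rate
r = np.array([6.37e6, 0.0, 0.0])        # m, position in the rotating frame
v = np.array([0.0, 100.0, 0.0])         # m/s, velocity in the rotating frame

centrifugal = -m * np.cross(omega, np.cross(omega, r))
coriolis = -2.0 * m * np.cross(omega, v)

print(centrifugal)  # about [0.034, 0, 0] N, directed outward
print(coriolis)     # about [0.015, 0, 0] N for this eastward velocity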
In the theory of general relativity, the curvature of spacetime causes frames to be locally inertial, but globally non-inertial. Due to the non-Euclidean geometry of curved space-time, there are no global inertial reference frames in general relativity. More specifically, the fictitious force which appears in general relativity is the force of gravity.
Avoiding fictitious forces in calculations
In flat spacetime, the use of non-inertial frames can be avoided if desired. Measurements with respect to non-inertial reference frames can always be transformed to an inertial frame by directly incorporating the acceleration of the non-inertial frame as seen from the inertial frame. This approach avoids use of fictitious forces (it is based on an inertial frame, where fictitious forces are absent, by definition) but it may be less convenient from an intuitive, observational, and even a calculational viewpoint. As point
|
https://en.wikipedia.org/wiki/Chicken%20gun
|
A chicken gun or flight impact simulator is a large-diameter, compressed-air gun used to fire bird carcasses at aircraft components in order to simulate high-speed bird strikes during the aircraft's flight. Jet engines and aircraft windshields are particularly vulnerable to damage from such strikes, and are the most common target in such tests. Although various species of bird are used in aircraft testing and certification, the device acquired the common name of "chicken gun" as chickens are the most commonly used 'ammunition' owing to their ready availability.
Context
Bird strikes are a significant hazard to flight safety, particularly around takeoff and landing where crew workload is highest and there is scant time for recovery before a potential impact with the ground. The speeds involved in a collision between a jet aircraft and a bird can be considerable, resulting in a large transfer of kinetic energy. A bird colliding with an aircraft windshield could penetrate or shatter it, injuring the flight crew or impairing their ability to see. At high altitudes such an event could cause uncontrolled decompression. A bird ingested by a jet engine can break the engine's compressor blades, potentially causing catastrophic damage.
Multiple measures are used to prevent bird strikes, such as the use of deterrent systems at airports to prevent birds from gathering, population control using birds of prey or firearms, and recently avian radar systems that track flocks of birds and give warnings to pilots and air traffic controllers.
Despite this, the risk of bird strikes is impossible to eliminate and therefore most government certification authorities such as the US Federal Aviation Administration and the European Aviation Safety Agency require that aircraft engines and airframes be resilient against bird strikes to a certain degree as part of the airworthiness certification process. In general, an engine should not suffer an uncontained failure (an even
|
https://en.wikipedia.org/wiki/Memory%20map
|
In computer science, a memory map is a structure of data (which usually resides in memory itself) that indicates how memory is laid out. The term "memory map" has different meanings in different contexts.
The term can also refer to the fastest and most flexible cache organization, associative mapping, in which an associative memory stores both the address and the content of each memory word.
In the boot process of some computers, a memory map may be passed on from the firmware to instruct an operating system kernel about memory layout. It contains the information regarding the size of total memory, any reserved regions and may also provide other details specific to the architecture.
In virtual memory implementations and memory management units, a memory map refers to page tables or hardware registers, which store the mapping between a certain process's virtual memory layout and how that space relates to physical memory addresses.
In native debugger programs, a memory map refers to the mapping between loaded executable or library files and memory regions. These memory maps are used to resolve memory addresses (such as function pointers) to actual symbols.
PC BIOS memory map
BIOS for the IBM Personal Computer and compatibles provides a set of routines that can be used by operating system or applications to get the memory layout. Some of the available routines are:
BIOS Function: INT 0x15, AX=0xE801:
This BIOS interrupt call is used to get the memory size for configurations with more than 64 MB of RAM. It is supported by AMI BIOSes dated August 23, 1994 or later. The caller sets AX to 0xE801 and then executes INT 0x15. If an error occurs, the routine returns with CF (carry flag) set to 1. If there is no error, the routine returns with CF clear and the registers set as follows:
BIOS Function: INT 0x15, AX=0xE820 - GET SYSTEM MEMORY MAP:
Input:
SMAP buffer structure:
How used: The operating system shall allocate an SMAP buffer in memory (20 bytes buffer). Then set registers as specified in "I
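Each 20-byte SMAP entry returned by this call packs a 64-bit base address, a 64-bit region length and a 32-bit region type, all little-endian. A small Python sketch of that layout (real callers are 16-bit boot code, not Python; the entry values below are invented for illustration):

import struct

raw = struct.pack("<QQI", 0x0000000000100000, 0x000000003FF00000, 1)  # example entry
base, length, region_type = struct.unpack("<QQI", raw)
types = {1: "usable RAM", 2: "reserved", 3: "ACPI reclaimable", 4: "ACPI NVS"}
print(f"base=0x{base:X} length=0x{length:X} type={types.get(region_type, 'undefined')}")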
|
https://en.wikipedia.org/wiki/SAPHO%20syndrome
|
SAPHO syndrome includes a variety of inflammatory bone disorders that may be associated with skin changes. These diseases share some clinical, radiologic, and pathologic characteristics.
An entity initially known as chronic recurrent multifocal osteomyelitis was first described in 1972. Subsequently, in 1978, several cases of this condition were associated with blisters on the palms and soles (palmoplantar pustulosis). Since then, a number of associations between skin conditions and osteoarticular disorders have been reported under a variety of names, including sternocostoclavicular hyperostosis, pustulotic arthro-osteitis, and acne-associated spondyloarthropathy. The term SAPHO (an acronym for synovitis, acne, pustulosis, hyperostosis, osteitis) was coined in 1987 to represent this spectrum of inflammatory bone disorders that may or may not be associated with dermatologic pathology.
Diagnosis
Radiologic findings
Anterior chest wall (most common site, 65–90% of patients): Hyperostosis, sclerosis and bone hypertrophy especially involving the sternoclavicular joint, often with a soft tissue component.
Spine (33% of patients): Segmental, usually involving the thoracic spine. The four main presentations include spondylodiscitis, osteosclerosis, paravertebral ossifications, and sacroiliac joint involvement.
Long bones (30% of patients): usually metadiaphyseal and located in the distal femur and proximal tibia. It looks like chronic osteomyelitis but will not have a sequestrum or abscess.
Flat bones (10% of patients): mandible and ilium.
Peripheral arthritis has been reported in 92% of cases of SAPHO as well.
In children, the SAPHO syndrome is most likely to affect the metaphysis of long bones in the legs (tibia, femur, fibula), followed by clavicles and spine.
Treatment
Bisphosphonate therapy has been suggested as a first-line therapeutic option in many case reports and series.
Treatment with tumor necrosis factor alpha antagonists (TNF inhibitors) has been tried in a few
|
https://en.wikipedia.org/wiki/Fictitious%20play
|
In game theory, fictitious play is a learning rule first introduced by George W. Brown. In it, each player presumes that the opponents are playing stationary (possibly mixed) strategies. At each round, each player thus best responds to the empirical frequency of play of their opponent. Such a method is of course adequate if the opponent indeed uses a stationary strategy, while it is flawed if the opponent's strategy is non-stationary. The opponent's strategy may for example be conditioned on the fictitious player's last move.
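A minimal sketch of this learning rule in Python, for a two-player game given by payoff matrices A (row player) and B (column player); the example game, Matching Pennies, and the prior counts are chosen purely for illustration:

import numpy as np

A = np.array([[1, -1], [-1, 1]])   # row player's payoffs
B = -A                             # column player's payoffs (zero-sum)

counts_row = np.ones(2)            # empirical counts of the row player's actions
counts_col = np.ones(2)            # empirical counts of the column player's actions

for _ in range(10000):
    # each player best-responds to the opponent's empirical mixed strategy
    row_action = np.argmax(A @ (counts_col / counts_col.sum()))
    col_action = np.argmax((counts_row / counts_row.sum()) @ B)
    counts_row[row_action] += 1
    counts_col[col_action] += 1

# for this zero-sum game the empirical frequencies approach the (1/2, 1/2) equilibrium
print(counts_row / counts_row.sum(), counts_col / counts_col.sum())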
History
Brown first introduced fictitious play as an explanation for Nash equilibrium play. He imagined that a player would "simulate" play of the game in their mind and update their future play based on this simulation; hence the name fictitious play. In terms of current use, the name is a bit of a misnomer, since each play of the game actually occurs. The play is not exactly fictitious.
Convergence properties
In fictitious play, strict Nash equilibria are absorbing states. That is, if at any time period all the players play a Nash equilibrium, then they will do so for all subsequent rounds. (Fudenberg and Levine 1998, Proposition 2.1) In addition, if fictitious play converges to any distribution, those probabilities correspond to a Nash equilibrium of the underlying game. (Proposition 2.2)
Therefore, the interesting question is, under what circumstances does fictitious play converge? The process will converge for a 2-person game if:
Both players have only a finite number of strategies and the game is zero sum (Robinson 1951)
The game is solvable by iterated elimination of strictly dominated strategies (Nachbar 1990)
The game is a potential game (Monderer and Shapley 1996-a,1996-b)
The game has generic payoffs and is 2 × N (Berger 2005)
Fictitious play does not always converge, however. Shapley (1964) proved that in the game pictured here (a nonzero-sum version of Rock, Paper, Scissors), if the players start by choosing (
|
https://en.wikipedia.org/wiki/Spin%20coating
|
Spin coating is a procedure used to deposit uniform thin films onto flat substrates. Usually a small amount of coating material is applied on the center of the substrate, which is either spinning at low speed or not spinning at all. The substrate is then rotated at speeds up to 10,000 rpm to spread the coating material by centrifugal force. A machine used for spin coating is called a spin coater, or simply spinner.
Rotation is continued while the fluid spins off the edges of the substrate, until the desired thickness of the film is achieved. The applied solvent is usually volatile, and simultaneously evaporates. The higher the angular speed of spinning, the thinner the film. The thickness of the film also depends on the viscosity and concentration of the solution, and the solvent. Pioneering theoretical analysis of spin coating was undertaken by Emslie et al., and has been extended by many subsequent authors (including Wilson et al., who studied the rate of spreading in spin coating; and Danglad-Flores et al., who found a universal description to predict the deposited film thickness).
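As a rough quantitative illustration of the dependence on spin speed and viscosity, the Emslie et al. analysis cited above leads, for an ideal Newtonian fluid with no evaporation, to a thinning law of the form dh/dt = -2*K*h^3 with K = rho*omega^2/(3*eta), so h(t) = h0 / sqrt(1 + 4*K*h0^2*t). A short Python sketch (all parameter values are invented for illustration):

import math

rho = 1000.0                       # fluid density, kg/m^3
eta = 0.01                         # viscosity, Pa*s
omega = 3000 * 2 * math.pi / 60    # 3000 rpm in rad/s
h0 = 50e-6                         # initial film thickness, m

K = rho * omega**2 / (3 * eta)
for t in (0.1, 1.0, 10.0, 30.0):   # seconds
    h = h0 / math.sqrt(1 + 4 * K * h0**2 * t)
    print(f"t = {t:5.1f} s  ->  h = {h * 1e6:.2f} um")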
Spin coating is widely used in microfabrication of functional oxide layers on glass or single crystal substrates using sol-gel precursors, where it can be used to create uniform thin films with nanoscale thicknesses. It is used intensively in photolithography, to deposit layers of photoresist about 1 micrometre thick. Photoresist is typically spun at 20 to 80 revolutions per second for 30 to 60 seconds. It is also widely used for the fabrication of planar photonic structures made of polymers.
One advantage of spin coating thin films is the uniformity of the film thickness. Owing to self-leveling, thicknesses do not vary by more than 1%. The thickness of films produced in this manner may also affect the optical properties of such materials. This is important for electrochemical testing, specifically when recording absorbance readings from ultraviolet–visible spectroscopy, since thicker
|
https://en.wikipedia.org/wiki/Dubna%2048K
|
The Dubna 48K (Дубна 48К) is a Soviet clone of the ZX Spectrum home computer, launched in 1991. It was based on an analogue of the Zilog Z80 microprocessor. Its name comes from Dubna, a town near Moscow where it was produced at the "TENSOR" instrument factory, and "48K" stands for its 48 KB of RAM.
Overview
According to the manual, this computer was intended for:
studying the principles of PC operation
various kinds of calculations
"intellectual games"
By the time this computer was released (1991), there were already much more powerful x86 CPUs and commercially available advanced operating systems, such as Unix, DOS and Windows. The Dubna 48K had only a built-in BASIC interpreter, and loaded its programs from a cassette recorder, so it couldn't run any of the modern operating systems. However, the Dubna 48K and many other Z80 clones, though outdated by that time, were introduced in high schools of the Soviet Union. Many of the games for the Z80-based machine were ported from games already available for Nintendo's 8-bit game console, marketed in Russia under the brand Dendy.
The machine comes in two versions: in a metal case for the initial 1991 model, and in a plastic case for the 1992 model.
Included items
The Dubna 48K was shipped with the following units:
Main unit ("data processing unit", as stated on its back side), with mainboard and built-in keyboard
External power unit
Video adapter for connecting the computer to the TV set
BASIC programming manual
Reference book, including complete schematic circuit
Additionally, there were some optional items:
Joystick
32 cm (12") colour monitor
The computer could also connect to a ZX Microdrive, but such a device was never included.
Technical details
CPU: 8-bit MME 80A at 1.875 MHz (half the speed of the original ZX Spectrum)
RAM: 48 KB (16× КР565РУ5Г chips)
ROM: 16 KB (2× К573РФ4А)
Resolution: 256 x 192 pixels, or 24 rows of 32 characters each
Number of colours: 8 colours in either normal or bright mode,
|
https://en.wikipedia.org/wiki/Rope%20stretcher
|
In ancient Egypt, a rope stretcher (or harpedonaptai) was a surveyor who measured real property demarcations and foundations using knotted cords, stretched so the rope did not sag. The practice is depicted in tomb paintings of the Theban Necropolis. Rope stretchers used 3-4-5 triangles and the plummet, which are still in use by modern surveyors.
The commissioning of a new sacred building was a solemn occasion in which pharaohs and other high-ranking officials personally stretched ropes to define the foundation. This important ceremony, and therefore rope-stretching itself, are attested over 3000 years from the early dynastic period to the Ptolemaic kingdom.
Rope stretching technology spread to ancient Greece and India, where it stimulated the development of geometry and mathematics.
See also
Gromatici
Surveying
Trigonometry
References
External links
surveying instruments
proportions "The knowledge of pleasing proportions of the rope stretchers was incorporated by the Greeks"
Sangaku and The Egyptian Triangle
Ancient Egypt
Ancient Egyptian technology
Surveying
Egyptian inventions
|
https://en.wikipedia.org/wiki/Architectural%20technologist
|
The architectural technologist, also known as a building technologist, provides technical building design services and is trained in architectural technology, building technical design and construction.
Architectural technologists apply the science of architecture and typically concentrate on the technology of building, design technology and construction. The training of an architectural technologist concentrates on the increasingly complex technical aspects of a building project, but matters of aesthetics, space, light and circulation are also involved in the technical design, leading the professional to make decisions that are not purely technical. They may negotiate the construction project and manage the process from conception through to completion, typically focusing on the technical aspects of a building project.
Most architectural technologists are employed in architectural and engineering firms, or with municipal authorities; but many provide independent professional services directly to clients, although restricted by law in some countries. Others work in product development or sales with manufacturers.
In Britain, Ireland, Sweden, Denmark, Hong Kong (Chartered Architectural Technologist), Canada (Architectural Technologist or Registered Building Technologist), Argentina (M.M.O. Maestro Mayor de Obras / Chartered Architecture & Building Science Technologist) and other nations, architectural technologists have many skills that make them well suited to working alongside architects, engineers and other professionals, as the training of a technologist provides skills in building and architectural technology. It is an important role in the current building climate. Architectural technologists may be directors or shareholders of an architectural firm (where permitted by the jurisdiction and legal structure). To become an Architectural Technologist, a four-year degree (or equivalent) in Architectural Technology (in Canada normally a thre
|
https://en.wikipedia.org/wiki/Solar%20conjunction
|
Solar conjunction generally occurs when a planet or other Solar System object is on the opposite side of the Sun from the Earth. From an Earth reference, the Sun will pass between the Earth and the object. Communication with any spacecraft in solar conjunction will be severely limited due to the Sun's interference on radio transmissions from the spacecraft.
The term can also refer to the passage of the line of sight to an interior planet (Mercury or Venus) or comet being very close to the solar disk. If the planet passes directly in front of the Sun, a solar transit occurs.
Spacecraft-related issues
There is also a risk that an antenna equipped with auto-tracking will begin following the Sun's movement instead of the satellite once they are no longer in line with each other. This is because the Sun acts as a large electromagnetic noise source, creating a signal much stronger than the satellite's tracking signal.
One example of limitations caused by solar conjunction occurred when the NASA-JPL team put the Curiosity rover on Mars' surface in autonomous operation mode for 25 days during the conjunction. In autonomous mode Curiosity suspends all movement and active science operations but retains communication-independent experiments (e.g. recording atmospheric and radiation data). A more recent example occurred with the Mars rover Perseverance in October 2021.
See also
Conjunction (astronomy and astrology)
List of conjunctions (astronomy)
Opposition (astronomy)
References
Astrological aspects
Conjunctions (astronomy and astrology)
Satellite broadcasting
Spaceflight concepts
|
https://en.wikipedia.org/wiki/Line%202%20%28Shanghai%20Metro%29
|
Line 2 is an east–west line in the Shanghai Metro network. With a length of nearly , it is the second longest line in the metro system after line 11. Line 2 runs from in the west to in the east, passing Hongqiao Airport, the Huangpu river, and the Lujiazui Financial District in Pudong. With a daily ridership of over 1.9 million, it is the busiest line on the Shanghai Metro. The eastern portion of the line, from to Pudong International Airport, was operated almost independently from the main segment until April 19, 2019, when through service began. The line is colored light green on system maps.
History
The first section of line 2 was opened on October 28, 1999, from to . This section, which included 12 stations, totaled . A year later coinciding with the tenth anniversary of the development and opening up of Pudong, marking the official opening of the line, was added to the eastern part of the line, adding . Four new stations, located west of the Zhongshan Park station, opened in December 2006, extending the line to . This section added to the line. Four years later, in preparation for the 2010 Shanghai World Expo, the line was significantly expanded. In February, the Zhangjiang Hi-Tech Park station was rebuilt. In addition, another eastern segment took line 2 to . A month later, the line was extended westward to , adding to the line including a stop at . On April 8, an eastward extension added 8 stations to the line, totaling and taking line 2 to . On July 1, opens to the public with the opening of the railway station of the same name.
In October 2006, it was decided to rename three stations on line 2 by the end of the year, adopting a new naming scheme: metro stations, unlike bus stops, are no longer supposed to be named after nearby cross streets, but after famous streets and sights in the vicinity, making it easier for visitors to find these places. The renamed stations are Century Avenue (formerly Dongfang Road), East Nanjing Road (formerly Middle
|
https://en.wikipedia.org/wiki/Hybrid%20integrated%20circuit
|
A hybrid integrated circuit (HIC), hybrid microcircuit, hybrid circuit or simply hybrid is a miniaturized electronic circuit constructed of individual devices, such as semiconductor devices (e.g. transistors, diodes or monolithic ICs) and passive components (e.g. resistors, inductors, transformers, and capacitors), bonded to a substrate or printed circuit board (PCB). A PCB having components on a Printed Wiring Board (PWB) is not considered a true hybrid circuit according to the definition of MIL-PRF-38534.
Overview
"Integrated circuit" as the term is currently used refers to a monolithic IC which differs notably from a HIC in that a HIC is fabricated by inter-connecting a number of components on a substrate whereas an IC's (monolithic) components are fabricated in a series of steps entirely on a single wafer which is then diced into chips. Some hybrid circuits may contain monolithic ICs, particularly Multi-chip module (MCM) hybrid circuits.
Hybrid circuits could be encapsulated in epoxy, as shown in the photo, or in military and space applications, a lid was soldered onto the package. A hybrid circuit serves as a component on a PCB in the same way as a monolithic integrated circuit; the difference between the two types of devices is in how they are constructed and manufactured. The advantage of hybrid circuits is that components which cannot be included in a monolithic IC can be used, e.g., capacitors of large value, wound components, crystals, inductors. In military and space applications, numerous integrated circuits, transistors and diodes, in their die form, would be placed on either a ceramic or beryllium substrate. Either gold or aluminum wire would be bonded from the pads of the IC, transistor, or diode to the substrate.
Thick film technology is often used as the interconnecting medium for hybrid integrated circuits. The use of screen printed thick film interconnect provides advantages of versatility over thin film although feature sizes may be larg
|
https://en.wikipedia.org/wiki/Property%20list
|
In the macOS, iOS, NeXTSTEP, and GNUstep programming frameworks, property list files are files that store serialized objects. Property list files use the filename extension .plist, and thus are often referred to as p-list files.
Property list files are often used to store a user's settings. They are also used to store information about bundles and applications, a task served by the resource fork in the old Mac OS.
Property lists are also used for localization strings during development. These files use the .strings or .stringsdict extensions. The former is a "reduced" old-style plist containing only one dictionary without the braces (see below), while the latter is a fully fledged plist. Xcode also uses a .pbxproj extension for old-style plists used as project files.
Representations
Since the data represented by property lists is somewhat abstract, the underlying file format can be implemented many ways. Namely, NeXTSTEP used one format to represent a property list, and the subsequent GNUstep and macOS frameworks introduced differing formats.
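For a quick sense of how the same data looks in two of the later formats (the XML and binary encodings used by macOS), Python's standard-library plistlib can serialize one dictionary both ways; the dictionary contents here are made up for illustration:

import plistlib

settings = {"Name": "Example", "Count": 3, "Enabled": True}

xml_bytes = plistlib.dumps(settings, fmt=plistlib.FMT_XML)
bin_bytes = plistlib.dumps(settings, fmt=plistlib.FMT_BINARY)

print(xml_bytes.decode())                     # human-readable XML representation
print(plistlib.loads(bin_bytes) == settings)  # True: the same data round-trips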
NeXTSTEP
Under NeXTSTEP, property lists were designed to be human-readable and edited by hand, serialized to ASCII in a syntax somewhat like a programming language. This same format was used by OPENSTEP.
Strings are represented in C literal style, e.g. "This is a plist string"; simpler, unquoted strings are allowed as long as they consist of alphanumeric characters and certain punctuation marks.
Binary data are represented as: < [hexadecimal codes in ASCII] >. Spaces and comments between paired hex codes are ignored.
Arrays are represented as: ( "1", "2", "3" ). Trailing commas are tolerated.
Dictionaries are represented as: { "key" = "value"; }. The left-hand side must be a string, but it can be unquoted.
Comments are allowed as: /* This is a comment */ and // This is a line comment.
As in C, whitespace is generally insignificant to the syntax. Value statements are terminated by a semicolon.
One limitation of the original NeXT property list format is that it could not represent an NSValue (number, boolean, etc.) object. As a result, these values would have to be
|
https://en.wikipedia.org/wiki/Sister%20group
|
In phylogenetics, a sister group or sister taxon, also called an adelphotaxon, comprises the closest relative(s) of another given unit in an evolutionary tree.
Definition
The expression is most easily illustrated by a cladogram:
Taxon A and taxon B are sister groups to each other. Taxa A and B, together with any other extant or extinct descendants of their most recent common ancestor (MRCA), form a monophyletic group, the clade AB. Clade AB and taxon C are also sister groups. Taxa A, B, and C, together with all other descendants of their MRCA form the clade ABC.
The whole clade ABC is itself a subtree of a larger tree which offers yet more sister group relationships, both among the leaves and among larger, more deeply rooted clades. The tree structure shown connects through its root to the rest of the universal tree of life.
In cladistic standards, taxa A, B, and C may represent specimens, species, genera, or any other taxonomic units. If A and B are at the same taxonomic level, terminology such as sister species or sister genera can be used.
Example
The term sister group is used in phylogenetic analysis; however, only groups identified in the analysis are labeled as "sister groups".
An example is birds, whose commonly cited living sister group is the crocodiles, but that is true only when discussing extant organisms; when other, extinct groups are considered, the relationship between birds and crocodiles appears distant.
Although the bird family tree is rooted in the dinosaurs, there were a number of other, earlier groups, such as the pterosaurs, that branched off of the line leading to the dinosaurs after the last common ancestor of birds and crocodiles.
The term sister group must thus be seen as a relative term, with the caveat that the sister group is only the closest relative among the groups/species/specimens that are included in the analysis.
Notes
References
Evolutionary biology
Phylogenetics
Biological classification
taxa
|
https://en.wikipedia.org/wiki/Longest%20uncrossed%20knight%27s%20path
|
The longest uncrossed (or nonintersecting) knight's path is a mathematical problem involving a knight on the standard 8×8 chessboard or, more generally, on a square n×n board. The problem is to find the longest path the knight can take on the given board, such that the path does not intersect itself. A further distinction can be made between a closed path, which ends on the same field as where it begins, and an open path, which ends on a different field from where it begins.
Known solutions
The longest open paths on an n×n board are known only for n ≤ 9. Their lengths for n = 1, 2, …, 9 are:
0, 0, 2, 5, 10, 17, 24, 35, 47
The longest closed paths are known only for n ≤ 10. Their lengths for n = 1, 2, …, 10 are:
0, 0, 0, 4, 8, 12, 24, 32, 42, 54
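For the smallest boards, the open-path values above can be reproduced by a naive depth-first search. A short illustrative Python sketch (assuming path segments join square centres and that a new move is rejected if its segment properly crosses any earlier, non-adjacent segment; practical only for tiny boards):

def longest_uncrossed_knight_path(n):
    moves = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

    def orient(a, b, c):
        return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

    def crosses(p1, p2, p3, p4):
        # proper-crossing test; knight segments on the lattice cannot merely touch
        d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
        d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
        return (d1 > 0) != (d2 > 0) and (d3 > 0) != (d4 > 0)

    best = 0

    def dfs(path, segments):
        nonlocal best
        best = max(best, len(path) - 1)
        x, y = path[-1]
        for dx, dy in moves:
            nxt = (x + dx, y + dy)
            if not (0 <= nxt[0] < n and 0 <= nxt[1] < n) or nxt in path:
                continue
            new_seg = (path[-1], nxt)
            # the immediately preceding segment shares an endpoint and never crosses
            if any(crosses(*new_seg, *seg) for seg in segments[:-1]):
                continue
            dfs(path + [nxt], segments + [new_seg])

    for sx in range(n):
        for sy in range(n):
            dfs([(sx, sy)], [])
    return best

print([longest_uncrossed_knight_path(n) for n in range(1, 5)])  # [0, 0, 2, 5]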
Generalizations
The problem can be further generalized to rectangular n×m boards, or even to boards in the shape of any polyomino. The problem for n×m boards, where n does not exceed 8 and m may be very large, was posed at the 2018 ICPC World Finals. The solution used dynamic programming and exploits the fact that the optimal solution exhibits cyclic behavior.
Other standard chess pieces than the knight are less interesting, but fairy chess pieces like the camel ((3,1)-leaper), giraffe ((4,1)-leaper) and zebra ((3,2)-leaper) lead to problems of comparable complexity.
See also
A knight's tour is a self-intersecting knight's path visiting all fields of the board.
TwixT, a board game based on uncrossed knight's paths.
References
George Jelliss, Non-Intersecting Paths
Non-crossing knight tours
2018 ICPC World Finals solutions (Problem J)
External links
Uncrossed knight's tours
Mathematical chess problems
Computational problems in graph theory
|
https://en.wikipedia.org/wiki/Stuck-at%20fault
|
A stuck-at fault is a particular fault model used by fault simulators and automatic test pattern generation (ATPG) tools to mimic a manufacturing defect within an integrated circuit. Individual signals and pins are assumed to be stuck at Logical '1', '0' and 'X'. For example, an input is tied to a logical 1 state during test generation to assure that a manufacturing defect with that type of behavior can be found with a specific test pattern. Likewise the input could be tied to a logical 0 to model the behavior of a defective circuit that cannot switch its output pin.
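A toy illustration of the idea in Python (the circuit, node names and fault below are invented for illustration): a small gate-level circuit is simulated fault-free and with one internal node forced to a constant value, and any input pattern on which the two outputs differ is a test pattern that detects that stuck-at fault.

from itertools import product

def simulate(a, b, c, stuck_at=None):
    # circuit: n1 = a AND b; n2 = NOT c; out = n1 OR n2
    nodes = {"n1": a & b, "n2": 1 - c}
    if stuck_at is not None:            # inject a single stuck-at fault
        node, value = stuck_at
        nodes[node] = value
    return nodes["n1"] | nodes["n2"]

fault = ("n1", 0)                        # node n1 stuck-at-0
tests = [(a, b, c) for a, b, c in product((0, 1), repeat=3)
         if simulate(a, b, c) != simulate(a, b, c, stuck_at=fault)]
print(tests)  # [(1, 1, 1)]: a = b = 1 activates the fault and c = 1 propagates it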
Not all faults can be analyzed using the stuck-at fault model. Compensation for static hazards, namely branching signals, can render a circuit untestable using this model. Also, redundant circuits cannot be tested using this model, since by design there is no change in any output as a result of a single fault.
Single stuck at line
Single stuck line is a fault model used in digital circuits. It is used for post manufacturing testing, not design testing. The model assumes one line or node in the digital circuit is stuck at logic high or logic low. When a line is stuck it is called a fault.
Digital circuits can be divided into:
Gate level or combinational circuits which contain no storage (latches and/or flip flops) but only gates like NAND, OR, XOR, etc.
Sequential circuits which contain storage.
This fault model applies to gate level circuits, or a block of a sequential circuit which can be separated from the storage elements.
Ideally a gate-level circuit would be completely tested by applying all possible inputs and checking that they gave the right outputs, but this is completely impractical: an adder to add two 32-bit numbers would require 2^64 ≈ 1.8×10^19 tests, taking 58 years at 0.1 ns per test.
The stuck-at fault model assumes that only one input on one gate will be faulty at a time, on the assumption that if more are faulty, a test set that can detect any single fault should easily find multiple faults.
|
https://en.wikipedia.org/wiki/Aristotle%27s%20wheel%20paradox
|
Aristotle's wheel paradox is a paradox or problem appearing in the pseudo-Aristotelian Greek work Mechanica. It states as follows: A wheel is depicted in two-dimensional space as two circles. Its larger, outer circle is tangential to a horizontal surface (e.g. a road that it rolls on), while the smaller, inner one has the same center and is rigidly affixed to the larger. (The smaller circle could be the bead of a tire, the rim it is mounted upon, or the axle.) Assuming the larger circle rolls without slipping (or skidding) for one full revolution, the distances moved by both circles' circumferences are the same. The distance travelled by the larger circle is equal to its circumference, but for the smaller it is greater than its circumference, thereby creating a paradox.
The paradox is not limited to wheels: other things depicted in two dimensions display the same behavior such as a roll of tape, or a typical round bottle or jar rolled on its side (the smaller circle would be the mouth or neck of the jar or bottle).
In an alternative version of the problem, the smaller circle, rather than the larger one, is in contact with the horizontal surface. Examples include a typical train wheel, which has a flange, or a barbell straddling a bench. American educator and philosopher Israel Drabkin called these Case II versions of the paradox, and a similar, but not identical, analysis applies.
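A short kinematic check makes the source of the extra distance visible. The relations below are standard rolling-without-slipping kinematics; the symbols R, r and θ are introduced here and are not taken from the Mechanica.

```latex
% Outer radius R rolls without slipping; the inner circle of radius r < R is rigid with it.
% After the wheel turns through angle \theta, the common centre has advanced
x_c(\theta) = R\,\theta, \qquad x_c(2\pi) = 2\pi R .
% Both circles therefore translate by 2\pi R per revolution, but the inner circle has
% only 2\pi r of circumference, so it must slip along its own tangent line by
2\pi R - 2\pi r \;=\; 2\pi\,(R - r) .
```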
History of the paradox
In antiquity
In antiquity, the wheel problem was described in the Greek work Mechanica, traditionally attributed to Aristotle, but widely believed to have been written by a later member of his school. (Thomas Winter has made the alternative proposal that it was written by Archytas.) It also appears in the Mechanica of Hero of Alexandria. In the Aristotelian version it appears as "Problem 24", where the description of the wheel is given as follows:
For let there be a larger circle ΔZΓ a smaller EHB, and A at the centre of both; let ZI be the line which the greater
|
https://en.wikipedia.org/wiki/Focal%20adhesion
|
In cell biology, focal adhesions (also cell–matrix adhesions or FAs) are large macromolecular assemblies through which mechanical force and regulatory signals are transmitted between the extracellular matrix (ECM) and an interacting cell. More precisely, focal adhesions are the sub-cellular structures that mediate the regulatory effects (i.e., signaling events) of a cell in response to ECM adhesion.
Focal adhesions serve as the mechanical linkages to the ECM, and as a biochemical signaling hub to concentrate and direct numerous signaling proteins at sites of integrin binding and clustering.
Structure and function
Focal adhesions are integrin-containing, multi-protein structures that form mechanical links between intracellular actin bundles and the extracellular substrate in many cell types. Focal adhesions are large, dynamic protein complexes through which the cytoskeleton of a cell connects to the ECM. They are limited to clearly defined regions of the cell, at which the plasma membrane closes to within 15 nm of the ECM substrate. Focal adhesions are in a state of constant flux: proteins associate and disassociate with them continually as signals are transmitted to other parts of the cell, relating to anything from cell motility to cell cycle. Focal adhesions can contain over 100 different proteins, which suggests considerable functional diversity. More than anchoring the cell, they function as signal carriers (sensors), which inform the cell about the condition of the ECM and thus affect its behavior. In sessile cells, focal adhesions are quite stable under normal conditions, while in moving cells their stability is diminished: this is because in motile cells, focal adhesions are being constantly assembled and disassembled as the cell establishes new contacts at the leading edge, and breaks old contacts at the trailing edge of the cell. One example of their important role is in the immune system, in which white blood cells migrate along the connective endotheli
|
https://en.wikipedia.org/wiki/Medical%20geology
|
Medical geology is an interdisciplinary scientific field studying the relationship between natural geological factors and their effects on human and animal health. The Commission on Geological Sciences for Environmental Planning defines medical geology as "the science dealing with the influence of ordinary environmental factors on the geographical distribution of health problems in man and animals."
In its broadest sense, medical geology studies exposure to or deficiency of trace elements and minerals; inhalation of ambient and anthropogenic mineral dusts and volcanic emissions; transportation, modification and concentration of organic compounds; and exposure to radionuclides, microbes and pathogens.
History
Many have deemed medical geology a new field, when in actuality it is re-emerging. Hippocrates and Aristotle first recognized the relationship between human diseases and the earth's elements. The field ultimately depends on a number of different disciplines working together to solve some of the earth's mysteries. The scientific term for this field is hydrobiogeochemoepidemiopathoecology; however, it is more commonly known as medical geology. It was established in 1990 by the International Union of Geological Sciences. Paracelsus, the "father of pharmacology" (1493–1541), stated that "all substances are poisons, there is none which is not a poison. The right dosage differentiates a poison and a remedy." This passage sums up the idea of medical geology. The goal of this field is to find the right balance and intake of elements/minerals in order to improve and maintain health.
Examples of research in medical geology include:
Studies on the impact of contaminant mobility as a result of extreme weather events such as flooding.
Lead and other heavy metal exposure resulting from dust and other particulates
Asbestos exposure such as amphibole asbestos dusts in Libby, Montana
Fungal infection resulting from airborne dust, such as Valley Fever or coccidio
|
https://en.wikipedia.org/wiki/Variable%20envelope%20return%20path
|
Variable envelope return path (VERP) is a technique used by some electronic mailing list software to enable automatic detection and removal of undeliverable e-mail addresses. It works by using a different return path (also called "envelope sender") for each recipient of a message.
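As an illustration of the per-recipient envelope sender, the sketch below uses one common encoding convention (the recipient folded into the local part with "+" and "="); the layout, list name, and addresses are examples, not a universal standard, and real implementations vary.

```python
# A sketch of VERP address handling: the subscriber's address is encoded in the
# envelope sender, so the address a bounce is delivered to identifies exactly
# which subscriber could not be reached. Domains and addresses are examples.

def verp_return_path(list_local: str, list_domain: str, recipient: str) -> str:
    """Build a per-recipient envelope sender such as
    'news-bounces+alice=example.com@lists.example.org'."""
    rcpt_local, rcpt_domain = recipient.rsplit("@", 1)
    return f"{list_local}-bounces+{rcpt_local}={rcpt_domain}@{list_domain}"

def verp_decode(bounced_to: str) -> str:
    """Recover the failing subscriber address from the address a bounce arrived at
    (inverse of verp_return_path)."""
    local, _ = bounced_to.rsplit("@", 1)
    encoded = local.split("-bounces+", 1)[1]
    rcpt_local, rcpt_domain = encoded.rsplit("=", 1)
    return f"{rcpt_local}@{rcpt_domain}"

sender = verp_return_path("news", "lists.example.org", "alice@example.com")
assert sender == "news-bounces+alice=example.com@lists.example.org"
assert verp_decode(sender) == "alice@example.com"
```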
Motivation
Any long-lived mailing list eventually contains addresses that can't be reached. Addresses that were once valid can become unusable because the person receiving the mail switched to a different provider. In another scenario, the address may still exist but be abandoned, with unread mail accumulating until there is not enough room left to accept any more.
When a message is sent to a mailing list, the mailing list software re-sends it to all of the addresses on the list. The presence of invalid addresses in the list results in bounce messages being sent to the owner of the list. If the mailing list is small, the owner can read the bounce messages and manually remove the invalid addresses from the list. With a larger mailing list, this is a tedious, unpleasant job, so it is desirable to automate the process.
However, most bounce messages have historically been designed to be read by human users, not automatically handled by software. They all convey the same basic idea ("the message from X to Y could not be delivered because of reason Z") but with so many variations that it would be nearly impossible to write a program to reliably interpret the meaning of every bounce message. RFC 1894 (obsoleted by RFC 3464) defines a standard format to fix this problem, but support for the standard is far from universal. However, there are several common formats (e.g., RFC 3464, qmail's qsbmf, and Microsoft's DSN format for Exchange) that cover a large proportion of bounces.
Microsoft Exchange can sometimes bounce a message without providing any indication of the address to which the original message was sent. When Exchange knows the intended recipient, but is not willing to accept email for t
|
https://en.wikipedia.org/wiki/B-Method
|
The B method is a method of software development based on B, a tool-supported formal method based on an abstract machine notation, used in the development of computer software.
Overview
B was originally developed in the 1980s by Jean-Raymond Abrial in France and the UK. B is related to the Z notation (also originated by Abrial) and supports development of programming language code from specifications. B has been used in major safety-critical system applications in Europe (such as the automatic Paris Métro lines 14 and 1 and the Ariane 5 rocket). It has robust, commercially available tool support for specification, design, proof and code generation.
Compared to Z, B is slightly more low-level and more focused on refinement to code rather than just formal specification — hence it is easier to correctly implement a specification written in B than one in Z. In particular, there is good tool support for this.
The same language is used in specification, design and programming.
Mechanisms include encapsulation and data locality.
Event-B
Subsequently, another formal method called Event-B has been developed based on the B-Method, supported by the Rodin Platform. Event-B is a formal method aimed at system-level modelling and analysis. Features of Event-B are the use of set theory for modelling, the use of refinement to represent systems at different levels of abstraction, and the use of mathematical proof for verifying consistency between these refinement levels.
The main components
The B notation depends on set theory and first order logic in order to specify different versions of software that covers the complete cycle of project development.
Abstract machine
In the first and the most abstract version, which is called Abstract Machine, the designer should specify the goal of the design.
Refinement
Then, during a refinement step, they may enrich the specification in order to clarify the goal or to make the abstract machine more concrete by adding details about data structu
|
https://en.wikipedia.org/wiki/Jean-Raymond%20Abrial
|
Jean-Raymond Abrial (born 6 November 1938) is a French computer scientist and inventor of the Z and B formal methods.
Abrial was a student at the École Polytechnique (class of 1958).
Abrial's 1974 paper Data Semantics laid the foundation for a formal approach to Data Models; although not adopted directly by practitioners, it directly influenced all subsequent models from the Entity-Relationship Model through to RDF.
J.-R. Abrial is the father of the Z notation (typically used for formal specification of software), during his time at the Programming Research Group under Prof. Tony Hoare within the Oxford University Computing Laboratory (now Oxford University Department of Computer Science), arriving in 1979 and sharing an office and collaborating with Cliff Jones. He later initiated the B-Method, with better tool-based software development support for refinement from a high-level specification to an executable program, including the Rodin tool. These are two important formal methods approaches for software engineering. He is the author of The B-Book: Assigning Programs to Meanings. For much of his career he has been an independent consultant. He was an invited professor at ETH Zurich from 2004 to 2009.
Abrial was elected to be a Member of the Academia Europaea in 2006.
See also
Rodin tool
References
External links
by Jonathan Bowen
Managing the Construction of Large Computerized Systems — article
Have we learned from the Wasa disaster (video) — talk by Jean-Raymond Abrial
1938 births
Living people
École Polytechnique alumni
French computer scientists
Members of the Department of Computer Science, University of Oxford
Formal methods people
Z notation
Computer science writers
Software engineers
Software engineering researchers
Academic staff of ETH Zurich
Members of Academia Europaea
|
https://en.wikipedia.org/wiki/Hessian%20%28Web%20service%20protocol%29
|
Hessian is a binary Web service protocol that makes Web services usable without requiring a large framework, and without learning a new set of protocols. Because it is a binary protocol, it is well-suited to sending binary data without any need to extend the protocol with attachments.
Hessian was developed by Caucho Technology, Inc. The company has released Java, Python and ActionScript for Adobe Flash implementations of Hessian under an open source license (the Apache license). Third-party implementations in several other languages (C++, C#, JavaScript, Perl, PHP, Ruby, Objective-C, D, and Erlang) are also available as open-source.
Adaptations
Although Hessian is primarily intended for Web services, it can be adapted for TCP traffic by using the HessianInput and HessianOutput classes in Caucho's Java implementation.
Implementations
Cotton (Erlang)
HessDroid (Android)
Hessian (on Rubyforge) (Ruby)
Hessian.js (JavaScript)
Hessian4J (Java)
HessianC# (C#)
HessianCPP (C++)
HessianD (D)
HessianKit (Objective-C 2.0)
HessianObjC (Objective-C)
HessianPHP (PHP)
HessianPy (Python)
HessianRuby (Ruby)
Hessian-Translator (Perl)
See also
Abstract Syntax Notation One
SDXF
Apache Thrift
Etch (protocol)
Protocol Buffers
Internet Communications Engine
References
External links
Web services
|
https://en.wikipedia.org/wiki/Deep%20reactive-ion%20etching
|
Deep reactive-ion etching (DRIE) is a highly anisotropic etch process used to create deep penetration, steep-sided holes and trenches in wafers/substrates, typically with high aspect ratios. It was developed for microelectromechanical systems (MEMS), which require these features, but is also used to excavate trenches for high-density capacitors for DRAM and more recently for creating through silicon vias (TSVs) in advanced 3D wafer level packaging technology. In DRIE, the substrate is placed inside a reactor, and several gases are introduced. A plasma is struck in the gas mixture, which breaks the gas molecules into ions. The ions are accelerated towards, and react with, the surface of the material being etched, forming another gaseous product. This is known as the chemical part of reactive ion etching. There is also a physical part: if ions have enough energy, they can knock atoms out of the material to be etched without a chemical reaction.
DRIE is a special subclass of RIE.
There are two main technologies for high-rate DRIE: cryogenic and Bosch, although the Bosch process is the only recognised production technique. Both Bosch and cryo processes can fabricate 90° (truly vertical) walls, but often the walls are slightly tapered, e.g. 88° ("reentrant") or 92° ("retrograde").
Another mechanism is sidewall passivation: SiOxFy functional groups (which originate from sulphur hexafluoride and oxygen etch gases) condense on the sidewalls, and protect them from lateral etching. As a combination of these processes deep vertical structures can be made.
Cryogenic process
In cryogenic-DRIE, the wafer is chilled to −110 °C (163 K). The low temperature slows down the chemical reaction that produces isotropic etching. However, ions continue to bombard upward-facing surfaces and etch them away. This process produces trenches with highly vertical sidewalls. The primary issue with cryo-DRIE is that the standard masks on substrates crack under the extreme cold, plus etch by-p
|
https://en.wikipedia.org/wiki/Formulario%20mathematico
|
Formulario Mathematico (Latino sine flexione: Formulary for Mathematics) is a book by Giuseppe Peano which expresses fundamental theorems of mathematics in a symbolic language developed by Peano. The author was assisted by Giovanni Vailati, Mario Pieri, Alessandro Padoa, Giovanni Vacca, Vincenzo Vivanti, Gino Fano and Cesare Burali-Forti.
The Formulario was first published in 1894. The fifth and last edition was published in 1908.
Hubert Kennedy wrote "the development and use of mathematical logic is the guiding motif of the project". He also explains the variety of Peano's publication under the title:
the five editions of the Formulario [are not] editions in the usual sense of the word. Each is essentially a new elaboration, although much material is repeated. Moreover, the title and language varied: the first three, titled Formulaire de Mathématiques, and the fourth, titled, Formulaire Mathématique, were written in French, while Latino sine flexione, Peano's own invention, was used for the fifth edition, titled Formulario Mathematico. ... Ugo Cassina lists no less than twenty separately published items as being parts of the 'complete' Formulario!
Peano believed that students needed only precise statement of their lessons. He wrote:
Each professor will be able to adopt this Formulario as a textbook, for it ought to contain all theorems and all methods. His teaching will be reduced to showing how to read the formulas, and to indicating to the students the theorems that he wishes to explain in his course.
Such a dismissal of the oral tradition in lectures at universities was the undoing of Peano's own teaching career.
Notes
References
Ivor Grattan-Guinness (2000) The Search for Mathematical Roots 1870-1940. Princeton University Press.
1895 non-fiction books
1908 non-fiction books
Mathematics books
Mathematical terminology
Mathematical logic
Mathematical symbols
|
https://en.wikipedia.org/wiki/Maemo
|
Maemo is a software platform originally developed by Nokia, now developed by the community, for smartphones and Internet tablets. The platform comprises both the Maemo operating system and SDK. Maemo played a key role in Nokia's strategy to compete with Apple and Android, and that strategy failed for complex, institutional and strategic reasons.
Maemo is mostly based on open-source code and has been developed by Maemo Devices within Nokia in collaboration with many open-source projects such as the Linux kernel, Debian, and GNOME. Maemo is based on Debian and draws much of its GUI, frameworks, and libraries from the GNOME project. It uses the Matchbox window manager and the GTK-based Hildon framework as its GUI and application framework.
The user interface in Maemo 4 is similar to many hand-held interfaces and features a "home" screen, from which all applications and settings are accessed. The home screen is divided into areas for launching applications, a menu bar, and a large customizable area that can display information such as an RSS reader, Internet radio player, and Google search box. The Maemo 5 user interface is slightly different; the menu bar and info area are consolidated to the top of the display, and the four desktops can be customized with shortcuts and widgets.
At the Mobile World Congress in February 2010, it was announced that the Maemo project would be merging with Moblin to create the MeeGo mobile software platform. Despite that, the Maemo community continued to be active, and in late 2012 Nokia began transferring Maemo ownership to the Hildon Foundation, which was replaced by a German association Maemo Community e.V.
Since 2017, a new release called Maemo Leste is in development which is based on Devuan.
User interface
OS2005–OS2008
Up to Maemo 4 (AKA OS2008), the default screen is the "Home" screen — the central point from which all applications and settings are accessed. The Home Screen is divided into the following areas:
Vertically d
|
https://en.wikipedia.org/wiki/Broad%20Institute
|
The Eli and Edythe L. Broad Institute of MIT and Harvard, often referred to as the Broad Institute, is a biomedical and genomic research center located in Cambridge, Massachusetts, United States. The institute is independently governed and supported as a 501(c)(3) nonprofit research organization under the name Broad Institute Inc., and it partners with the Massachusetts Institute of Technology, Harvard University, and the five Harvard teaching hospitals.
History
The Broad Institute evolved from a decade of research collaborations among MIT and Harvard scientists. One cornerstone was the Center for Genome Research of Whitehead Institute at MIT. Founded in 1982, the Whitehead became a major center for genomics and the Human Genome Project. As early as 1995, scientists at the Whitehead started pilot projects in genomic medicine, forming an unofficial collaborative network among young scientists interested in genomic approaches to cancer and human genetics. Another cornerstone was the Institute of Chemistry and Cell Biology established by Harvard Medical School in 1998 to pursue chemical genetics as an academic discipline. Its screening facility was one of the first high-throughput resources opened in an academic setting. It facilitated small molecule screening projects for more than 80 research groups worldwide.
To create a new organization that was open, collaborative, cross-disciplinary and able to organize projects at any scale, planning took place in 2002–2003 among philanthropists Eli and Edythe Broad, MIT, the Whitehead Institute, Harvard and the Harvard-affiliated hospitals (in particular, the Beth Israel Deaconess Medical Center, Brigham and Women's Hospital, Children's Hospital Boston, the Dana–Farber Cancer Institute and the Massachusetts General Hospital).
The Broads made a founding gift of $100 million and the Broad Institute was formally launched in May 2004. In November 2005, the Broads announced an additional $100
|
https://en.wikipedia.org/wiki/Colonisation%20%28biology%29
|
Colonisation or colonization is the process in biology by which a species spreads to new areas. Colonisation often refers to successful immigration where a population becomes integrated into an ecological community, having resisted initial local extinction. In ecology, it is represented by the symbol λ (lowercase lambda) to denote the long-term intrinsic growth rate of a population.
One classic scientific model in biogeography posits that a species must continue to colonize new areas through its life cycle (called a taxon cycle) in order to achieve longevity. Accordingly, colonisation and extinction are key components of island biogeography, a theory that has many applications in ecology, such as metapopulations.
Scale
Colonisation occurs on several scales. In its most basic form it occurs as biofilm, the formation of communities of microorganisms on surfaces. At small scales it involves the colonisation of new sites, perhaps as a result of environmental change, and at larger scales a species expands its range to encompass new areas. This can be via a series of small encroachments, such as in woody plant encroachment, or by long-distance dispersal. The term range expansion is also used.
Use
The term is generally only used to refer to the spread of a species into new areas by natural means, as opposed to unnatural introduction or translocation by humans, which may lead to invasive species.
Colonisation events
Large-scale notable pre-historic colonisation events include:
Arthropods
the colonisation of the earth's land by the first animals, the arthropods. The first fossils of land animals are those of millipedes, dating from about 450 million years ago (Dunn, 2013).
Humans
the early human migration and colonisation of areas outside Africa according to the recent African origin paradigm, resulting in the extinction of Pleistocene megafauna, although the role of humans in this event is controversial.
Some large-scale notable colonisation events during the 20th century are:
|
https://en.wikipedia.org/wiki/VideoCrypt
|
VideoCrypt is a cryptographic, smartcard-based conditional access television encryption system that scrambles analogue pay-TV signals. It was introduced in 1989 by News Datacom and was used initially by Sky TV and subsequently by several other broadcasters on SES' Astra satellites at 19.2° east.
Users
Versions
Three variants of the VideoCrypt system were deployed in Europe: VideoCrypt I for the UK and Irish market and VideoCrypt II for continental Europe. The third variant, VideoCrypt-S, was used on the short-lived BBC Select service. The VideoCrypt-S system differed from the typical VideoCrypt implementation as it used line shuffle scrambling.
Sky NZ and Sky Fiji may use different versions of the VideoCrypt standard.
Sky NZ used NICAM stereo for many years until abandoning it when the Sky DTH technology started replacing Sky UHF.
Operating principle
The system scrambles the picture using a technique known as "line cut-and-rotate". Each line that makes up each picture (video frame) is cut at one of 256 possible "cut points", and the two halves of each line are swapped around for transmission. The series of cut points is determined by a pseudo-random sequence. Channels are decoded using a pseudorandom number generator (PRNG) sequence stored on a smart card (also known as a Viewing Card).
To decode a channel the decoder would read the smart card to check if the card is authorised for the specific channel. If not, a message would appear on screen. Otherwise the decoder seeds the card's PRNG with a seed transmitted with the video signal to generate the correct sequence of cut points.
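A toy sketch of the cut-and-rotate idea follows. It is illustrative only: the real system's PRNG, key schedule and smart-card protocol are not modelled, and the seed, line length and frame size below are arbitrary example values.

```python
# "Line cut-and-rotate" in miniature: each video line is cut at one of 256 positions
# chosen by a seeded PRNG and the two halves are swapped; the receiver regenerates
# the same cut-point sequence to undo the rotation.

import random

def cut_points(seed, n_lines):
    rng = random.Random(seed)            # stand-in for the smart card's PRNG
    return [rng.randrange(256) for _ in range(n_lines)]

def rotate(line, cut):
    """Swap the two halves of a line around the cut position."""
    split = int(len(line) * cut / 256)
    return line[split:] + line[:split]

def unrotate(line, cut):
    """Inverse of rotate for the same cut value."""
    split = int(len(line) * cut / 256)
    return line[len(line) - split:] + line[:len(line) - split]

frame = [list(range(720)) for _ in range(3)]           # toy 3-line "frame"
cuts = cut_points(seed=0xC0FFEE, n_lines=len(frame))   # hypothetical seed

scrambled = [rotate(l, c) for l, c in zip(frame, cuts)]
restored = [unrotate(l, c) for l, c in zip(scrambled, cuts)]
assert restored == frame
```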
The system also included a cryptographic element called the Fiat Shamir Zero Knowledge Test. This element was a routine in the smartcard that would prove to the decoder that the card was indeed a genuine card. The basic model was that the decoder would present the card with a packet of data (the question or challenge) which the card would process and effectively return the result (the answer) to
|
https://en.wikipedia.org/wiki/Loren%20Kohnfelder
|
Loren Kohnfelder invented what is today called public key infrastructure (PKI) in his May 1978 MIT S.B. (BSCSE) thesis, which described a practical means of using public key cryptography to secure network communications.
The Kohnfelder thesis introduced the terms 'certificate' and 'certificate revocation list', as well as numerous other concepts now established as important parts of PKI. The X.509 certificate specification, which provides the basis for SSL, S/MIME and most modern PKI implementations, is based on the Kohnfelder thesis.
He was also the co-creator, with Praerit Garg, of the STRIDE model of security threats, widely used in threat modeling.
In 2021 he published Designing Secure Software with No Starch Press. He maintains a Medium blog.
References
External links
Kohnfelder, Loren M., Towards a Practical Public-Key Cryptosystem, May 1978.
Kohnfelder, Loren
Living people
Year of birth missing (living people)
|
https://en.wikipedia.org/wiki/Phantasy%20Star%20Universe
|
Phantasy Star Universe (PSU) is an action role-playing video game developed by Sega's Sonic Team for the Microsoft Windows, PlayStation 2 and Xbox 360 platforms. It was released in Japan for the PC and PlayStation 2 on August 31, 2006; the Xbox 360 version was released there on December 14, 2006. Its North American release was in October 2006, in all formats. The European release date was November 24 the same year, while the Australian release date was November 30.
Phantasy Star Universe is similar to the Phantasy Star Online (PSO) games, but takes place in a different time period and location, and has many new features. Like most of the PSO titles, PSU was playable in both a persistent online network mode and a fully featured, single-player story mode.
Plot
Ethan Waber, the main character, and his younger sister, Lumia Waber, are at the celebration of the 100th anniversary of the Alliance Space Fleet on the GUARDIANS Space station. The celebration is interrupted when a mysterious meteor shower almost destroys the entire fleet. During an evacuation, Ethan and Lumia divert from the main evacuation route; collapsing rubble separates Lumia from Ethan. Ethan then meets up with a GUARDIAN named Leo, but they are attacked by a strange creature that paralyzes Leo. Ethan kills the creature. After killing multiple creatures and saving people, Ethan finds Lumia and they leave the station. Ethan reveals that he dislikes the GUARDIANS organization because his father died on a mission. Leo, impressed with Ethan's abilities, persuades Ethan to join the GUARDIANS.
Ethan and his classmate Hyuga Ryght are trained by a GUARDIAN named Karen Erra, who leads them against the S.E.E.D, the monsters that came from the meteors. After being hired to accompany a scientist to a RELICS site of an ancient, long-dead civilization, they find out that the SEED are attracted to a power source called A-Photons, which the ancients used and that the solar system has just rediscovered. Karen discovers she is the sister of
|
https://en.wikipedia.org/wiki/Velocity%20potential
|
A velocity potential is a scalar potential used in potential flow theory. It was introduced by Joseph-Louis Lagrange in 1788.
It is used in continuum mechanics, when a continuum occupies a simply-connected region and is irrotational. In such a case,
$$\nabla \times \mathbf{u} = 0,$$
where $\mathbf{u}$ denotes the flow velocity. As a result, $\mathbf{u}$ can be represented as the gradient of a scalar function $\Phi$:
$$\mathbf{u} = \nabla \Phi.$$
$\Phi$ is known as a velocity potential for $\mathbf{u}$.
A velocity potential is not unique. If $\Phi$ is a velocity potential, then $\Phi + f(t)$ is also a velocity potential for $\mathbf{u}$, where $f(t)$ is a scalar function of time and can be constant. In other words, velocity potentials are unique up to a constant, or a function solely of the temporal variable.
The Laplacian of a velocity potential is equal to the divergence of the corresponding flow: $\nabla^2 \Phi = \nabla \cdot \mathbf{u}$. Hence if a velocity potential satisfies the Laplace equation $\nabla^2 \Phi = 0$, the flow is incompressible.
Unlike a stream function, a velocity potential can exist in three-dimensional flow.
Usage in acoustics
In theoretical acoustics, it is often desirable to work with the acoustic wave equation of the velocity potential $\Phi$ instead of pressure $p$ and/or particle velocity $\mathbf{u}$.
Solving the wave equation for either the $p$ field or the $\mathbf{u}$ field does not necessarily provide a simple answer for the other field. On the other hand, when $\Phi$ is solved for, not only is $\mathbf{u}$ found as given above, but $p$ is also easily found, from the (linearised) Bernoulli equation for irrotational and unsteady flow, as
$$p = -\rho \frac{\partial \Phi}{\partial t}.$$
See also
Vorticity
Hamiltonian fluid mechanics
Potential flow
Potential flow around a circular cylinder
Notes
External links
Joukowski Transform Interactive WebApp
Continuum mechanics
Physical quantities
|
https://en.wikipedia.org/wiki/Xsupplicant
|
Xsupplicant is a supplicant that allows a workstation to authenticate with a RADIUS server using 802.1X and the Extensible Authentication Protocol (EAP). It can be used for computers with wired or wireless LAN connections to complete a strong authentication before joining the network and supports the dynamic assignment of WEP keys.
Overview
Xsupplicant up to version 1.2.8 was designed to run on Linux clients as a command line utility. Versions 1.3.X and greater are designed to run on Windows XP and are currently being ported to Linux/BSD systems; they include a robust graphical user interface and Network Access Control (NAC) functionality from the Trusted Computing Group's Trusted Network Connect NAC.
Xsupplicant was chosen by the OpenSea Alliance, dedicated to developing, promoting, and distributing an open source 802.1X supplicant.
Xsupplicant supports the following EAP types:
EAP-MD5
LEAP
EAP-MSCHAPv2
EAP-OTP
EAP-PEAP (v0 and v1)
EAP-SIM
EAP-TLS
EAP-TNC
EAP-TTLSv0 (PAP/CHAP/MS-CHAP/MS-CHAPv2/EAP)
EAP-AKA
EAP-GTC
EAP-FAST (partial)
Xsupplicant is primarily maintained by Chris Hessing.
See also
wpa_supplicant
References
External links
XSupplicant on SourceForge
XSupplicant on GitHub
Wireless networking
|
https://en.wikipedia.org/wiki/Index%20ellipsoid
|
In crystal optics, the index ellipsoid (also known as the optical indicatrix or sometimes as the dielectric ellipsoid) is a geometric construction which concisely represents the refractive indices and associated polarizations of light, as functions of the orientation of the wavefront, in a doubly-refractive crystal (provided that the crystal does not exhibit optical rotation). When this ellipsoid is cut through its center by a plane parallel to the wavefront, the resulting intersection (called a central section or diametral section) is an ellipse whose major and minor semiaxes have lengths equal to the two refractive indices for that orientation of the wavefront, and have the directions of the respective polarizations as expressed by the electric displacement vector . The principal semiaxes of the index ellipsoid are called the principal refractive indices.
It follows from the sectioning procedure that each principal semiaxis of the ellipsoid is generally not the refractive index for propagation in the direction of that semiaxis, but rather the refractive index for wavefronts tangential to that direction, with the vector parallel to that direction, propagating perpendicular to that direction. Thus the direction of propagation (normal to the wavefront) to which each principal refractive index applies is in the plane perpendicular to the associated principal semiaxis.
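For reference, in the principal-axis coordinate system the surface itself takes a simple form. Writing the principal refractive indices as $n_1, n_2, n_3$ (a notational choice assumed here) and taking Cartesian coordinates along the corresponding semiaxes, the index ellipsoid is

```latex
\frac{x^2}{n_1^2} + \frac{y^2}{n_2^2} + \frac{z^2}{n_3^2} = 1 .
```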
Terminology
The index ellipsoid is not to be confused with the index surface, whose radius vector (from the origin) in any direction is indeed the refractive index for propagation in that direction; for a birefringent medium, the index surface is the two-sheeted surface whose two radius vectors in any direction have lengths equal to the major and minor semiaxes of the diametral section of the index ellipsoid by a plane normal to that direction.
If we let $n_1$, $n_2$, $n_3$ denote the principal semiaxes of the index ellipsoid, and choose a Cartesian coordinate system in which these semiaxes are respectively in the $x$, $y$,
|
https://en.wikipedia.org/wiki/Guillermo%20Owen
|
Guillermo Owen (born 1938) is a Colombian mathematician, and professor of applied mathematics at the Naval Postgraduate School in Monterey, California, known for his work in game theory. He is also the son of the Mexican Poet and Diplomat Gilberto Owen.
Biography
Guillermo Owen was born May 4, 1938, in Bogotá, Colombia, and obtained a B.S. degree from Fordham University in 1958, and a Ph.D. degree from Princeton University under the guidance of Dr. Harold Kuhn in 1962.
Owen has taught at Fordham University (1961–1969), Rice University (1969–1977) and Los Andes University in Colombia (1978–1982, 2008), apart from having given lectures in many universities in Europe and Latin America. He is currently holding the position of Distinguished Professor of applied mathematics at the Naval Postgraduate School in Monterey, California.
Owen is member of the Colombian Academy of Sciences, The Royal Academy of Arts and Sciences of Barcelona, and the Third World Academy of Sciences. He is associate editor of the International Journal of Game Theory, and fellow of the International Game Theory Society.
Honors and awards
The Escuela Naval Almirante Padilla of Cartagena gave him an honorary degree of Naval Science Professional in June 2004.
Owen was named Honorary President of the XIV Latin Ibero American Congress on Operations Research - CLAIO 2008. Cartagena, Colombia, September 2008.
The University of Lower Normandy, in Caen, France, gave him an honorary doctorate in October 2017.
Publications
Owen has authored, translated and/or edited thirteen books, and approximately one hundred and forty papers published in journals such as Management Science, Operations Research, International Journal of Game Theory, American Political Science Review, and Mathematical Programming, among others. Owen's books include:
1968. Game theory. Academic Press
1970. Finite mathematics and calculus; mathematics for the social and management sciences. With M. Evans Munroe.
1983. Information p
|
https://en.wikipedia.org/wiki/Security%20Administrator%20Tool%20for%20Analyzing%20Networks
|
Security Administrator Tool for Analyzing Networks (SATAN) was a free software vulnerability scanner for analyzing networked computers. SATAN captured the attention of a broad technical audience, appearing in PC Magazine and drawing threats from the United States Department of Justice. It featured a web interface, complete with forms to enter targets, tables to display results, and context-sensitive tutorials that appeared when a vulnerability had been found.
Naming
For those offended by the name SATAN, the software contained a special command called repent, which rearranged the letters in the program's acronym from "SATAN" to "SANTA".
Description
The tool was developed by Dan Farmer and Wietse Venema. Neil Gaiman drew the artwork for the SATAN documentation.
SATAN was designed to help systems administrators automate the process of testing their systems for known vulnerabilities that can be exploited via the network. This was particularly useful for networked systems with multiple hosts. Like most security tools, it was useful for good or malicious purposes – it was also useful to would-be intruders looking for systems with security holes.
SATAN was written mostly in Perl and utilized a web browser such as Netscape, Mosaic or Lynx to provide the user interface. This easy-to-use interface drove the scanning process and presented the results in summary format. As well as reporting the presence of vulnerabilities, SATAN also gathered large amounts of general network information, such as which hosts were connected to subnets, what types of machines they were, and which services they offered.
Status
SATAN's popularity diminished after the 1990s. It was released in 1995 and development has ceased. In 2006, SecTools.Org conducted a security popularity poll and developed a list of 100 network security analysis tools in order of popularity based on the responses of 3,243 people. Results suggest that SATAN has been replaced by nmap, Nessus and to a lesser degree SARA (Securi
|
https://en.wikipedia.org/wiki/Range%20of%20motion
|
Range of motion (or ROM) is the linear or angular distance that a moving object may normally travel while properly attached to another.
In biomechanics and strength training, ROM refers to the angular distance and direction a joint can move between the flexed position and the extended position. The act of attempting to increase this distance through therapeutic exercises (range of motion therapy—stretching from flexion to extension for physiological gain) is also sometimes called range of motion.
In mechanical engineering, it (also called range of travel or ROT) is used particularly when talking about mechanical devices, such as a sound volume control knob.
In biomechanics
Measuring range of motion
Each specific joint has a normal range of motion that is expressed in degrees. The reference values for the normal ROM in individuals differ slightly depending on age and gender. For example, as an individual ages, they typically lose a small amount of ROM.
Analog and traditional devices to measure range of motion in the joints of the body include the goniometer and inclinometer which use a stationary arm, protractor, fulcrum, and movement arm to measure angle from axis of the joint. As measurement results will vary by the degree of resistance, two levels of range of motion results are recorded in most cases.
Recent technological advances in 3D motion capture technology allow for the measurement of joints concurrently, which can be used to measure a patient's active range of motion.
Limited range of motion
Limited range of motion refers to a joint that has a reduction in its ability to move. The reduced motion may be a problem with the specific joint or it may be caused by injury or diseases such as osteoarthritis, rheumatoid arthritis, or other types of arthritis. Pain, swelling, and stiffness associated with arthritis can limit the range of motion of a particular joint and impair function and the ability to perform usual daily activities.
Limited range of moti
|
https://en.wikipedia.org/wiki/Simple%20set
|
In computability theory, a subset of the natural numbers is called simple if it is computably enumerable (c.e.) and co-infinite (i.e. its complement is infinite), but every infinite subset of its complement is not c.e.. Simple sets are examples of c.e. sets that are not computable.
Relation to Post's problem
Simple sets were devised by Emil Leon Post in the search for a non-Turing-complete c.e. set. Whether such sets exist is known as Post's problem. Post had to prove two things in order to obtain his result: that the simple set A is not computable, and that K, the halting problem, does not Turing-reduce to A. He succeeded in the first part (which is immediate from the definition), but for the other part he managed to rule out only many-one reducibility, showing that K does not many-one reduce to A.
Post's idea was validated by Friedberg and Muchnik in the 1950s using a novel technique called the priority method. They give a construction for a set that is simple (and thus non-computable), but fails to compute the halting problem.
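The flavour of Post's construction can be seen in a toy form: for each index e, put into A the first element of W_e found that exceeds 2e. The sketch below is only an illustration under strong simplifying assumptions; real c.e. sets cannot be listed as finite Python sets, and the stand-in family below is invented for the example.

```python
# Toy illustration of Post's simple-set construction. Requiring the chosen witness
# to satisfy x > 2e means at most e of the numbers 0..2e can ever enter A (only
# W_0,...,W_{e-1} can contribute such small elements, one each), so the complement
# stays infinite, while every infinite W_e contributes some element to A.

# Finite stand-ins for the uniformly c.e. listing W_0, W_1, ... (purely illustrative).
W = [
    set(range(0, 50, 2)),      # W_0: even numbers below 50
    set(range(1, 50, 3)),      # W_1: 1, 4, 7, ...
    {2, 3},                    # W_2: a small finite set (may contribute nothing)
    set(range(10, 60)),        # W_3
]

A = set()
for e, We in enumerate(W):
    # Simplification: take the smallest qualifying element rather than the first
    # one enumerated, as the real construction would.
    witness = next((x for x in sorted(We) if x > 2 * e), None)
    if witness is not None:
        A.add(witness)

print("A =", sorted(A))   # every large W_e meets A, yet A is sparse below 2e
```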
Formal definitions and some properties
In what follows, $W_e$ denotes a standard uniformly c.e. listing of all the c.e. sets.
A set $A \subseteq \mathbb{N}$ is called immune if $A$ is infinite, but for every index $e$, we have: if $W_e$ is infinite, then $W_e \not\subseteq A$. Or equivalently: there is no infinite subset of $A$ that is c.e..
A set is called simple if it is c.e. and its complement is immune.
A set $A$ is called effectively immune if $A$ is infinite, but there exists a recursive function $f$ such that for every index $e$, we have that $W_e \subseteq A \implies |W_e| < f(e)$.
A set is called effectively simple if it is c.e. and its complement is effectively immune. Every effectively simple set is simple and Turing-complete.
A set $A$ is called hyperimmune if $A$ is infinite, but $p_A$ is not computably dominated, where $p_A$ is the list of members of $A$ in order.
A set is called hypersimple if it is simple and its complement is hyperimmune.
Notes
References
Computability theory
|
https://en.wikipedia.org/wiki/Prone%20position
|
Prone position () is a body position in which the person lies flat with the chest down and the back up. In anatomical terms of location, the dorsal side is up, and the ventral side is down. The supine position is the 180° contrast.
Etymology
The word prone, meaning "naturally inclined to something, apt, liable," has been recorded in English since 1382; the meaning "lying face-down" was first recorded in 1578, but is also referred to as "lying down" or "going prone."
Prone derives from the Latin pronus, meaning "bent forward, inclined to," from the adverbial form of the prefix pro- "forward." Both the original, literal, and the derived figurative sense were used in Latin, but the figurative is older in English.
Anatomy
In anatomy, the prone position is a position of the body lying face down. It is opposed to the supine position which is face up. Using the terms defined in the anatomical position, the ventral side is down, and the dorsal side is up.
Concerning the forearm, prone refers to that configuration where the palm of the hand is directed posteriorly, and the radius and ulna are crossed.
Researchers observed that the expiratory reserve volume measured at relaxation volume increased from supine to prone by the factor of 0.15.
Shooting
In competitive shooting, the prone position is the position of a shooter lying face down on the ground. It is considered the easiest and most accurate position as the ground provides extra stability. It is one of the positions in three positions events. For many years (1932-2016), the only purely prone Olympic event was the 50 meter rifle prone; however, this has since been dropped from the Olympic program. Both men and women still have the 50 meter rifle three positions as an Olympic shooting event.
Prone position is often used in military combat as, like in competitive shooting, the prone position provides the best accuracy and stability.
Many first-person shooter video games also allow the player character to go into the
|
https://en.wikipedia.org/wiki/Semi-Thue%20system
|
In theoretical computer science and mathematical logic a string rewriting system (SRS), historically called a semi-Thue system, is a rewriting system over strings from a (usually finite) alphabet. Given a binary relation $R$ between fixed strings over the alphabet, called rewrite rules, denoted by $s \rightarrow t$, an SRS extends the rewriting relation to all strings in which the left- and right-hand side of the rules appear as substrings, that is $usv \rightarrow utv$, where $s$, $t$, $u$, and $v$ are strings.
The notion of a semi-Thue system essentially coincides with the presentation of a monoid. Thus they constitute a natural framework for solving the word problem for monoids and groups.
An SRS can be defined directly as an abstract rewriting system. It can also be seen as a restricted kind of a term rewriting system. As a formalism, string rewriting systems are Turing complete. The semi-Thue name comes from the Norwegian mathematician Axel Thue, who introduced systematic treatment of string rewriting systems in a 1914 paper. Thue introduced this notion hoping to solve the word problem for finitely presented semigroups. Only in 1947 was the problem shown to be undecidable— this result was obtained independently by Emil Post and A. A. Markov Jr.
Definition
A string rewriting system or semi-Thue system is a tuple $(\Sigma, R)$ where
$\Sigma$ is an alphabet, usually assumed finite. The elements of the set $\Sigma^*$ (* is the Kleene star here) are finite (possibly empty) strings on $\Sigma$, sometimes called words in formal languages; we will simply call them strings here.
$R$ is a binary relation on strings from $\Sigma$, i.e., $R \subseteq \Sigma^* \times \Sigma^*$. Each element $(u, v) \in R$ is called a (rewriting) rule and is usually written $u \rightarrow v$.
If the relation $R$ is symmetric, then the system is called a Thue system.
The rewriting rules in $R$ can be naturally extended to other strings in $\Sigma^*$ by allowing substrings to be rewritten according to $R$. More formally, the one-step rewriting relation $\xrightarrow{R}$ induced by $R$ on $\Sigma^*$ is defined, for any strings $s, t \in \Sigma^*$, as follows:
$s \xrightarrow{R} t$ if and only if there exist $x, y, u, v \in \Sigma^*$ such that $s = xuy$, $t = xvy$, and $u \rightarrow v$.
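The one-step relation can be implemented directly from this definition. The following sketch is illustrative only; the alphabet, rules, and start string are invented for the example and are not taken from the text.

```python
# One-step rewriting in a string rewriting system: apply each rule u -> v at every
# position where u occurs as a substring of s.

def one_step(s: str, rules: list[tuple[str, str]]) -> set[str]:
    """All strings reachable from s by a single application of some rule."""
    out = set()
    for u, v in rules:
        start = s.find(u)
        while start != -1:
            out.add(s[:start] + v + s[start + len(u):])
            start = s.find(u, start + 1)
    return out

rules = [("ab", "ba"), ("ba", "ab")]   # a symmetric relation, hence a Thue system
print(one_step("aabb", rules))         # {'abab'}: "ab" occurs once, "ba" not at all
```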
Since $\xrightarrow{R}$ is a relation on $\Sigma^*$,
|
https://en.wikipedia.org/wiki/List%20of%20anatomical%20isthmi
|
In anatomy, isthmus refers to a constriction between organs. This is a list of anatomical isthmi:
Aortic isthmus, section of the aortic arch
Cavo-tricuspid isthmus of the right atrium of the heart, a body of fibrous tissue in the lower atrium between the inferior vena cava and the tricuspid valve
Isthmus, the ear side of the eustachian tube
Isthmus, narrowed part between the trunk and the splenium of the corpus callosum
Isthmus, site of shell membrane formation in the bird's oviduct
Isthmus lobe, lobe in the prostate
Isthmus of cingulate gyrus
Isthmus of fauces, opening at the back of the mouth into the throat
Isthmus organizer, secondary organizer region at the junction of the midbrain and metencephalon
Isthmus tubae uterinae, links the fallopian tube to the uterus
Kronig isthmus, band of resonance representing the apex of the lung
Thyroid isthmus, thin band of tissue connecting some of the lobes that make up the thyroid
Uterine isthmus, inferior-posterior part of uterus
Isthmus
Anatomical isthmus
|
https://en.wikipedia.org/wiki/Mario%20Pieri
|
Mario Pieri (22 June 1860 – 1 March 1913) was an Italian mathematician who is known for his work on foundations of geometry.
Biography
Pieri was born in Lucca, Italy, the son of Pellegrino Pieri and Ermina Luporini. Pellegrino was a lawyer. Pieri began his higher education at University of Bologna where he drew the attention of Salvatore Pincherle. Obtaining a scholarship, Pieri transferred to Scuola Normale Superiore in Pisa. There he took his degree in 1884 and worked first at a technical secondary school in Pisa.
When an opportunity arose at the military academy in Turin to teach projective geometry, Pieri moved there and, by 1888, he was also an assistant instructor in the same subject at the University of Turin. By 1891, he had become libero docente at the university, teaching elective courses. Pieri continued to teach in Turin until 1900 when, through competition, he was awarded the position of extraordinary professor at University of Catania on the island of Sicily.
Von Staudt's Geometrie der Lage (1847) was a much admired text on projective geometry. In 1889 Pieri translated it as Geometria di Posizione, a publication that included a study of the life and work of von Staudt written by Corrado Segre, the initiator of the project.
Pieri also came under the influence of Giuseppe Peano at Turin. He contributed to the Formulario mathematico, and Peano placed nine of Pieri's papers for publication with the Academy of Sciences of Turin between 1895 and 1912. They shared a passion for reducing geometric ideas to their logical form and expressing these ideas symbolically.
In 1898 Pieri wrote I principii della geometria di posizione composti in un sistema logico-deduttivo. It progressively introduced independent axioms:
based on nineteen sequentially independent axioms – each independent of the preceding ones – which are introduced one by one as they are needed in the development, thus allowing the reader to determine on which axioms a given theorem depends.
Pie
|
https://en.wikipedia.org/wiki/Vagrancy%20%28biology%29
|
Vagrancy is a phenomenon in biology whereby an individual animal (usually a bird) appears well outside its normal range; they are known as vagrants. The term accidental is sometimes also used. There are a number of poorly understood factors which might cause an animal to become a vagrant, including internal causes such as navigatory errors (endogenous vagrancy) and external causes such as severe weather (exogenous vagrancy). Vagrancy events may lead to colonisation and eventually to speciation.
Birds
In the Northern Hemisphere, adult birds (possibly inexperienced younger adults) of many species are known to continue past their normal breeding range during their spring migration and end up in areas further north (such birds are termed spring overshoots).
In autumn, some young birds, instead of heading to their usual wintering grounds, take "incorrect" courses and migrate through areas which are not on their normal migration path. For example, Siberian passerines which normally winter in Southeast Asia are commonly found in Northwest Europe, e.g. Arctic warblers in Britain. This is reverse migration, where the birds migrate in the opposite direction to that expected (say, flying north-west instead of south-east). The causes of this are unknown, but genetic mutation or other anomalies relating to the bird's magnetic sensibilities are suspected.
Other birds are sent off course by storms, such as some North American birds blown across the Atlantic Ocean to Europe. Birds can also be blown out to sea, become physically exhausted, land on a ship and end up being carried to the ship's destination.
While many vagrant birds do not survive, if sufficient numbers wander to a new area they can establish new populations. Many isolated oceanic islands are home to species that are descended from landbirds blown out to sea, Hawaiian honeycreepers and Darwin's finches being prominent examples.
Insects
Vagrancy in insects is recorded from many groups—it is particularly well-stu
|
https://en.wikipedia.org/wiki/Tunnel%20broker
|
In the context of computer networking, a tunnel broker is a service which provides a network tunnel. These tunnels can provide encapsulated connectivity over existing infrastructure to another infrastructure.
There are a variety of tunnel brokers, including IPv4 tunnel brokers, though most commonly the term is used to refer to an IPv6 tunnel broker as defined in .
IPv6 tunnel brokers typically provide IPv6 to sites or end users over IPv4. In general, IPv6 tunnel brokers offer so called 'protocol 41' or proto-41 tunnels. These are tunnels where IPv6 is tunneled directly inside IPv4 packets by having the protocol field set to '41' (IPv6) in the IPv4 packet. In the case of IPv4 tunnel brokers IPv4 tunnels are provided to users by encapsulating IPv4 inside IPv6 as defined in .
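The "protocol 41" idea can be made concrete by building the outer IPv4 header by hand. The sketch below is illustrative only (it does not send anything, the addresses are documentation examples, and the payload is a placeholder rather than a real IPv6 packet); it simply shows an IPv4 header whose protocol field is 41 with the inner packet carried as the payload.

```python
# Proto-41 encapsulation in miniature: an IPv4 header with protocol = 41 (IPv6),
# followed directly by the IPv6 packet as payload.

import socket
import struct

def ipv4_header(src: str, dst: str, payload_len: int, proto: int = 41) -> bytes:
    ver_ihl, tos, ident, flags_frag, ttl = 0x45, 0, 0, 0, 64
    total_len = 20 + payload_len
    hdr = struct.pack("!BBHHHBBH4s4s", ver_ihl, tos, total_len, ident,
                      flags_frag, ttl, proto, 0,
                      socket.inet_aton(src), socket.inet_aton(dst))
    # Header checksum: one's-complement sum of the header's 16-bit words.
    s = sum(struct.unpack("!10H", hdr))
    while s >> 16:
        s = (s & 0xFFFF) + (s >> 16)
    checksum = ~s & 0xFFFF
    return hdr[:10] + struct.pack("!H", checksum) + hdr[12:]

inner_ipv6 = bytes(40)                         # placeholder for a real IPv6 packet
packet = ipv4_header("192.0.2.1", "203.0.113.5", len(inner_ipv6)) + inner_ipv6
print(packet.hex())
```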
Automated configuration
Configuration of IPv6 tunnels is usually done using the Tunnel Setup Protocol (TSP), or using Tunnel Information Control protocol (TIC). A client capable of this is AICCU (Automatic IPv6 Connectivity Client Utility). In addition to IPv6 tunnels TSP can also be used to set up IPv4 tunnels.
NAT issues
Proto-41 tunnels (direct IPv6 in IPv4) may not operate well when situated behind NATs. One way around this is to configure the actual endpoint of the tunnel to be the DMZ on the NAT-utilizing equipment. Another method is to use either AYIYA or TSP, both of which send IPv6 inside UDP packets, which can cross most NAT setups and even firewalls.
A problem that still might occur is that of the timing-out of the state in the NAT machine. As a NAT remembers that a packet went outside to the Internet it allows another packet to come back in from the Internet that is related to the initial proto-41 packet. When this state expires, no other packets from the Internet will be accepted. This therefore breaks the connectivity of the tunnel until the user's host again sends out a packet to the tunnel broker.
Dynamic endpoints
When the endpoint isn't a static IP address, the use
|
https://en.wikipedia.org/wiki/NBName
|
NBName (note capitalization) is a computer program that can be used to carry out denial-of-service attacks that can disable NetBIOS services on Windows machines. It was written by Sir Dystic of CULT OF THE DEAD COW (cDc) and released July 29, 2000 at the DEF CON 8 convention in Las Vegas.
The program decodes and provides the user with all NetBIOS name packets it receives on UDP port 137. Its many command line options can effectively disable a NetBIOS network and prevent computers from rejoining it. According to Sir Dystic, "NBName can disable entire LANs and prevent machines from rejoining them...nodes on a NetBIOS network infected by the tool will think that their names already are being used by other machines. 'It should be impossible for everyone to figure out what is going on,' he added."
References
External links
NBName on Security Focus
Microsoft Security Bulletin
Malware
Windows security software
Denial-of-service attacks
Internet Protocol based network software
Cult of the Dead Cow software
|
https://en.wikipedia.org/wiki/Super%2035
|
Super 35 (originally known as Superscope 235) is a motion picture film format that uses exactly the same film stock as standard 35 mm film, but puts a larger image frame on that stock by using the space normally reserved for the optical analog sound track.
History
Super 35 was revived from a similar Superscope variant known as Superscope 235, which was originally developed by the Tushinsky Brothers (who founded Superscope Inc. in 1954) for RKO in 1954. The first film to be shot in Superscope was Vera Cruz, a western film produced by Hecht-Lancaster Productions and distributed through United Artists.
When cameraman Joe Dunton was preparing to shoot Dance Craze in 1980, he chose to revive the Superscope format by using a full silent-standard gate and slightly optically recentering the lens port (to adjust for the inclusion of the area normally occupied by the optical soundtrack). These two characteristics are central to the format.
It was adopted by Hollywood starting with Greystoke in 1984, under the format name Super Techniscope. It also received much early publicity for making the cockpit shots in Top Gun possible, since it was otherwise impossible to fit 35 mm cameras with large anamorphic lenses into the small free space in the cockpit. Later, as other camera rental houses and labs started to embrace the format, Super 35 became popular in the mid-1990s, and is now considered a ubiquitous production process, with usage on well over a thousand feature films. It is often the standard production format for television shows, music videos, and commercials. Since none of these require a release print, it is unnecessary to reserve space for an optical soundtrack.
When composing for 1.85:1, it was known as Super 1.85, since it was larger than standard 1.85.
When composing for 2.39:1, it was often typical to employ either a "common center", which keeps the 2.39 extraction area at the center of the film that results in extra headroom if o
|
https://en.wikipedia.org/wiki/Shared%20web%20hosting%20service
|
A shared web hosting service is a web hosting service where many websites reside on one web server connected to the Internet. The overall cost of server maintenance is spread over many customers. By using shared hosting, the website will share a physical server with one or more other websites.
Description
The service usually includes system administration as it is shared by many users. This is a benefit for users who do not want to deal with it, but a hindrance to power users who want more control. In general, shared hosting will be inappropriate for users who require extensive software development outside what the hosting provider supports. Almost all applications intended to be on a standard web server work fine with a shared web hosting service. But on the other hand, shared hosting is cheaper than other types of hosting such as dedicated server hosting. Shared hosting usually has usage limits and hosting providers should have extensive reliability features in place. Shared hosting services typically offer basic web statistics support, email and webmail services, auto script installations, updated PHP and MySQL, basic after-sale technical support that is included with a monthly subscription. It also typically uses a web-based control panel system. Most of the large hosting companies use their own custom-developed control panel or cPanel. Control panels and web interfaces can cause controversy however since web hosting companies sometimes sell the right to use their control panel system to others. Attempting to recreate the functionality of a specific control panel is common, which leads to many lawsuits over patent infringement.
Shared web hosting services
In shared hosting, the provider is generally responsible for managing servers, installing server software, security updates, technical support, and other aspects of the service. Most servers are based on the Linux operating system and LAMP (software bundle). Some providers offer Microsoft Windows-based or Fr
|
https://en.wikipedia.org/wiki/Table%20of%20explosive%20detonation%20velocities
|
This is a compilation of published detonation velocities for various high explosive compounds. Detonation velocity is the speed with which the detonation shock wave travels through the explosive. It is a key, directly measurable indicator of explosive performance, but it depends on density, which must always be specified, and may be too low if the test charge diameter is not large enough. Especially for little-studied explosives, there may be divergent published values due to charge-diameter issues. In liquid explosives, like nitroglycerin, there may be two detonation velocities, one much higher than the other. The detonation velocity values presented here are typically for the highest practical density, which maximizes achievable detonation velocity.
The velocity of detonation is an important indicator of the overall energy and power of detonation, and in particular of the brisance or shattering effect of an explosive, which is due to the detonation pressure. The pressure can be calculated from the velocity and density using Chapman–Jouguet theory.
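As a rough illustration of that relationship (an added sketch, not part of the compilation), a common rule-of-thumb form of the Chapman–Jouguet result is P ≈ ρ0·D²/(γ+1), with a polytropic exponent γ of about 3 for condensed explosives; the TNT figures below are representative assumed values.

```python
# Rough Chapman-Jouguet detonation pressure estimate: P_CJ ~ rho0 * D^2 / (gamma + 1).
# gamma ~ 3 is a common rule of thumb for condensed explosives; real values vary.

def cj_pressure(density_kg_m3: float, velocity_m_s: float, gamma: float = 3.0) -> float:
    """Return an approximate Chapman-Jouguet detonation pressure in pascals."""
    return density_kg_m3 * velocity_m_s ** 2 / (gamma + 1.0)

# Representative values assumed for TNT: rho0 = 1630 kg/m^3, D = 6900 m/s
p = cj_pressure(1630, 6900)
print(f"Estimated C-J pressure: {p / 1e9:.1f} GPa")  # roughly 19 GPa
```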
See also
TNT equivalent
RE factor
References
Chemistry-related lists
Explosive chemicals
|
https://en.wikipedia.org/wiki/Seiberg%20duality
|
In quantum field theory, Seiberg duality, conjectured by Nathan Seiberg in 1994, is an S-duality relating two different supersymmetric QCDs. The two theories are not identical, but they agree at low energies. More precisely under a renormalization group flow they flow to the same IR fixed point, and so are in the same universality class. It is an extension to nonabelian gauge theories with N=1 supersymmetry of Montonen–Olive duality in N=4 theories and electromagnetic duality in abelian theories.
The statement of Seiberg duality
Seiberg duality is an equivalence of the IR fixed points in an N=1 theory with SU(Nc) as the gauge group and Nf flavors of fundamental chiral multiplets and Nf flavors of antifundamental chiral multiplets in the chiral limit (no bare masses) and an N=1 chiral QCD with Nf-Nc colors and Nf flavors, where Nc and Nf are positive integers satisfying
$N_f > N_c + 1$.
A stronger version of the duality relates not only the chiral limit but also the full deformation space of the theory. In the special case in which
$\tfrac{3}{2}N_c < N_f < 3N_c$, the IR fixed point is a nontrivial interacting superconformal field theory. For a superconformal field theory, the anomalous scaling dimension of a chiral superfield is $\Delta = \tfrac{3}{2}R$, where $R$ is the R-charge. This is an exact result.
The dual theory contains a fundamental "meson" chiral superfield M which is color neutral but transforms as a bifundamental under the flavor symmetries.
The dual theory contains the superpotential $W = M q \tilde{q}$.
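As a concrete instance (an illustrative example added here, not drawn from the source), take an electric theory with $N_c = 3$ and $N_f = 5$, which satisfies $N_f > N_c + 1$; the Seiberg-dual magnetic description then has gauge group $SU(N_f - N_c) = SU(2)$ with the same five flavors plus the meson:

```latex
% Illustrative example: N_c = 3, N_f = 5.
SU(3),\ N_f = 5
\quad\longleftrightarrow\quad
SU(2),\ N_f = 5,\ \text{with meson } M \text{ and } W = M q \tilde{q}.
```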
Relations between the original and dual theories
Being an S-duality, Seiberg duality relates the strong coupling regime with the weak coupling regime, and interchanges chromoelectric fields (gluons) with chromomagnetic fields (gluons of the dual gauge group), and chromoelectric charges (quarks) with nonabelian 't Hooft–Polyakov monopoles. In particular, the Higgs phase is dual to the confinement phase as in the dual superconducting model.
The mesons and baryons are preserved by the duality. However, in the electric theory the meson
|
https://en.wikipedia.org/wiki/Heirloom%20tomato
|
An heirloom tomato (also called heritage tomato in the UK) is an open-pollinated, non-hybrid heirloom cultivar of tomato. They are classified as family heirlooms, commercial heirlooms, mystery heirlooms, or created heirlooms. They usually have a shorter shelf life and are less disease resistant than hybrids. They are grown for various reasons: for food, historical interest, access to wider varieties, and by people who wish to save seeds from year to year, as well as for their taste.
Taste
Many heirloom tomatoes are sweeter and lack a genetic mutation that gives tomatoes a uniform red color at the cost of the fruit's taste. Varieties bearing that mutation, which have been favored by industry since the 1940s (that is, tomatoes which are not heirlooms), feature fruits with lower levels of carotenoids and a decreased ability to make sugar within the fruit.
Cultivars
Heirloom tomato cultivars can be found in a wide variety of colors, shapes, flavors, and sizes. Some heirloom cultivars can be prone to cracking or lack disease resistance. As with most garden plants, cultivars can be acclimated over several gardening seasons to thrive in a geographical location through careful selection and seed saving.
Some of the most famous examples include Aunt Ruby's German Green, Banana Legs, Big Rainbow, Black Krim, Brandywine, Cherokee Purple, Chocolate Cherry, Costoluto Genovese, Garden Peach, Gardener's Delight, Green Zebra, Hawaiian Pineapple, Hillbilly, Lollypop, Marglobe, Matt's Wild Cherry, Mortgage Lifter, Mr. Stripey, Neville Tomatoes, Paul Robeson, Pruden's Purple, Red Currant, San Marzano, Silvery Fir Tree, Three Sisters, and Yellow Pear.
Seed collecting
Heirloom seeds "breed true," unlike the seeds of hybridized plants. Both sides of the DNA in an heirloom variety come from a common stable cultivar. Heirloom tomato varieties are open pollinating, so cross-pollination can occur. Generally, tomatoes most likely to cross are those with potato leaves, double flowers (found o
|
https://en.wikipedia.org/wiki/Shlaer%E2%80%93Mellor%20method
|
The Shlaer–Mellor method, also known as object-oriented systems analysis (OOSA) or object-oriented analysis (OOA) is an object-oriented software development methodology introduced by Sally Shlaer and Stephen Mellor in 1988. The method makes the documented analysis so precise that it is possible to implement the analysis model directly by translation to the target architecture, rather than by elaborating model changes through a series of more platform-specific models. In the new millennium the Shlaer–Mellor method has migrated to the UML notation, becoming Executable UML.
Overview
The Shlaer–Mellor method is one of a number of software development methodologies which arrived in the late 1980s. Most familiar were object-oriented analysis and design (OOAD) by Grady Booch, object modeling technique (OMT) by James Rumbaugh, object-oriented software engineering by Ivar Jacobson and object-oriented analysis (OOA) by Shlaer and Mellor. These methods had adopted a new object-oriented paradigm to overcome the established weaknesses in the existing structured analysis and structured design (SASD) methods of the 1960s and 1970s. Of these well-known problems, Shlaer and Mellor chose to address:
The complexity of designs generated through the use of structured analysis and structured design (SASD) methods.
The problem of maintaining analysis and design documentation over time.
Before publication of their second book in 1991, Shlaer and Mellor had stopped naming their method "Object-Oriented Systems Analysis" in favor of just "Object-Oriented Analysis". The method started focusing on the concept of Recursive Design (RD), which enabled the automated translation aspect of the method.
What makes Shlaer–Mellor unique among the object-oriented methods is:
the degree to which object-oriented semantic decomposition is taken,
the precision of the Shlaer–Mellor Notation used to express the analysis, and
the defined behavior of that analysis model at run-time.
The general solut
|
https://en.wikipedia.org/wiki/Thinking%20in%20Java
|
Thinking in Java is a book about the Java programming language, written by Bruce Eckel and first published in 1998. Prentice Hall published the 4th edition of the work in 2006. The book represents a print version of Eckel’s “Hands-on Java” seminar.
Bruce Eckel wrote “On Java 8” as a sequel to Thinking in Java; it is available on Google Play as an ebook.
Publishing history
Eckel has made various versions of the book publicly available online.
Reception
Tech Republic says:
"The particularly cool thing about Thinking in Java is that even though a large amount of information is covered at a rapid pace, it is somehow all easily absorbed and understood. This is a testament to both Eckel’s obvious mastery of the subject and his skilled writing style."
Linux Weekly News praised the book in its review.
CodeSpot says:
"Thinking in Java is a must-read book, especially if you want to do programming in Java programing language or learn Object-Oriented Programming (OOP)."
Awards
Thinking in Java has won multiple awards from professional journals:
1998 Java Developers Journal Editors Choice Award for Best Book
1999 Jolt Productivity Award
2000 JavaWorld Readers Choice Award for Best Book
2001 JavaWorld Editors Choice Award for Best Book
2003 Software Development Magazine Jolt Award for Best Book
2003 Java Developers Journal Readers Choice Award for Best Book
2007 Java Developer’s Journal Readers’ Choice Best Book
External links
Official site
References
Computer programming books
Java (programming language)
|
https://en.wikipedia.org/wiki/Gnotobiosis
|
Gnotobiosis (from Greek roots gnostos "known" and bios "life") refers to an engineered state of an organism in which all forms of life (i.e., microorganisms) in or on it, including its microbiota, have been identified. The term gnotobiotic organism, or gnotobiote, can refer to a model organism that is colonized with a specific community of known microorganisms (isobiotic or defined flora animal) or that contains no microorganisms (germ-free) often for experimental purposes. The study of gnotobiosis and the generation of various types of gnotobiotic model organisms as tools for studying interactions between host organisms and microorganisms is referred to as gnotobiology.
History
The concept and field of gnotobiology were born of a debate between Louis Pasteur and Marceli Nencki in the late 19th century, in which Pasteur argued that animal life needed bacteria to succeed while Nencki argued that animals would be healthier without bacteria; it was not until 1960, however, that the Association for Gnotobiotics was formed. Early attempts in gnotobiology were limited by inadequate equipment and nutritional knowledge; advancements in nutritional sciences, animal anatomy and physiology, and immunology have since allowed for the improvement of gnotobiotic technologies.
Methods
Guinea pigs were the first germ-free animal model, described in 1896 by George Nuttall and Hans Thierfelder, establishing techniques still used today in gnotobiology. Early methods for maintaining sterile environments involved sterile glass jars and gloveboxes, and a discussion of standardizing methods in the field developed at the 1939 symposium on Micrurgical and Germ-free Methods at the University of Notre Dame. Many early (1930s–1950s) accomplishments in gnotobiology came from the University of Notre Dame, the University of Lund, and Nagoya University. The Laboratories of Bacteriology at the University of Notre Dame (known as LOBUND) was founded by John J. Cavanaugh and is cited for ma
|
https://en.wikipedia.org/wiki/Route%20reestablishment%20notification
|
Route Reestablishment Notification (RRN) is a type of notification that is used in some communications protocols that use time-division multiplexing.
Network protocols
|
https://en.wikipedia.org/wiki/Conserved%20quantity
|
A conserved quantity is a property or value that remains constant over time in a system even when changes occur in the system. In mathematics, a conserved quantity of a dynamical system is formally defined as a function of the dependent variables, the value of which remains constant along each trajectory of the system.
Not all systems have conserved quantities, and conserved quantities are not unique, since one can always produce another such quantity by applying a suitable function, such as adding a constant, to a conserved quantity.
Since many laws of physics express some kind of conservation, conserved quantities commonly exist in mathematical models of physical systems. For example, any classical mechanics model will have mechanical energy as a conserved quantity as long as the forces involved are conservative.
Differential equations
For a first order system of differential equations $\frac{d\mathbf{r}}{dt} = \mathbf{f}(\mathbf{r}, t)$
where bold indicates vector quantities, a scalar-valued function $H(\mathbf{r})$ is a conserved quantity of the system if, for all time and initial conditions in some specific domain, $\frac{dH}{dt} = 0.$
Note that by using the multivariate chain rule, $\frac{dH}{dt} = \nabla H \cdot \frac{d\mathbf{r}}{dt} = \nabla H \cdot \mathbf{f}(\mathbf{r}, t),$
so that the definition may be written as $\nabla H \cdot \mathbf{f}(\mathbf{r}, t) = 0,$
which contains information specific to the system and can be helpful in finding conserved quantities, or establishing whether or not a conserved quantity exists.
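For example (an illustration added here, not from the source), consider the planar system below and the candidate conserved quantity $H$:

```latex
% Worked example: H(x, y) = x^2 + y^2 is conserved for the rotation system.
\dot{x} = y, \quad \dot{y} = -x, \qquad
H(x, y) = x^2 + y^2, \qquad
\nabla H \cdot \mathbf{f} = 2x\,(y) + 2y\,(-x) = 0,
```

so $H$ is constant along every trajectory of the system.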
Hamiltonian mechanics
For a system defined by the Hamiltonian $\mathcal{H}$, a function $f$ of the generalized coordinates $q$ and generalized momenta $p$ has time evolution $\frac{df}{dt} = \{f, \mathcal{H}\} + \frac{\partial f}{\partial t}$
and hence is conserved if and only if $\{f, \mathcal{H}\} + \frac{\partial f}{\partial t} = 0$. Here $\{\cdot, \cdot\}$ denotes the Poisson bracket.
Lagrangian mechanics
Suppose a system is defined by the Lagrangian $L$ with generalized coordinates $q$. If $L$ has no explicit time dependence (so $\frac{\partial L}{\partial t} = 0$), then the energy $E$ defined by
$E = \sum_i \dot{q}_i \frac{\partial L}{\partial \dot{q}_i} - L$
is conserved.
Furthermore, if $\frac{\partial L}{\partial q} = 0$, then $q$ is said to be a cyclic coordinate and the generalized momentum $p$ defined by
$p = \frac{\partial L}{\partial \dot{q}}$
is conserved. This may be derived by using the Euler–Lagrange equations.
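As a brief added illustration (not from the source), a free particle makes both statements concrete: the coordinate $x$ is cyclic, so its conjugate momentum is conserved, and the energy reduces to the kinetic energy:

```latex
% Worked example: L has no x-dependence, so x is cyclic and p = m\dot{x} is conserved.
L = \tfrac{1}{2} m \dot{x}^2, \qquad
\frac{\partial L}{\partial x} = 0, \qquad
p = \frac{\partial L}{\partial \dot{x}} = m \dot{x}, \qquad
E = \dot{x}\,\frac{\partial L}{\partial \dot{x}} - L = \tfrac{1}{2} m \dot{x}^2 .
```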
See also
Conservative system
Lyapunov function
Hamiltonian sy
|
https://en.wikipedia.org/wiki/Nightshade%20%281985%20video%20game%29
|
Nightshade is an action video game developed and published by Ultimate Play the Game. It was first released for the ZX Spectrum in 1985, and was then ported to the Amstrad CPC and BBC Micro later that year. It was also ported to the MSX exclusively in Japan in 1986. In the game, the player assumes the role of a knight who sets out to destroy four demons in a plague-infested village.
The game features scrolling isometric gameplay, an improvement over its flip-screen-driven predecessors, Knight Lore and Alien 8, thanks to an enhanced version of Ultimate Play the Game's Filmation game engine, branded Filmation II. The game received positive reviews upon release; critics praised its gameplay traits, graphics and colours, though one critic was divided over its perceived similarities to its predecessors.
Gameplay
The game is presented in an isometric format. The player assumes the role of a knight who enters the plague-infested village of Nightshade to vanquish four demons who reside within. Additionally, all residents of the village have been transformed into vampires and other supernatural creatures. Contact with these monsters infects the knight, with repeated contact turning the character from white to yellow and then to green, which will lead to the character's death. The knight may be hit up to three times by an enemy; the fourth hit will result in a life being deducted.
The objective of the game is to locate and destroy four specific demons. Each demon is vulnerable to a particular object which must be collected by the player: a hammer, a Bible, a crucifix and an hourglass. Once the four items have been collected, the player must track down a specific demon and cast the correct item at it in order to destroy it. Once all four demons have been destroyed, the game will end. In order to defend against other enemies such as vampires and monsters, the player can arm themselves with "antibodies", which can then be thrown at enemies. Antibodies can
|
https://en.wikipedia.org/wiki/IBM%20System%20z9
|
IBM System z9 is a line of IBM mainframe computers. The first models were available on September 16, 2005. The System z9 also marks the end of the previously used eServer zSeries naming convention. It was also the last mainframe computer that NASA ever used.
Background
System z9 is a mainframe using the z/Architecture, previously known as ESAME. z/Architecture is a 64-bit architecture which replaces the previous 31-bit-addressing/32-bit-data ESA/390 architecture while remaining completely compatible with it as well as the older 24-bit-addressing/32-bit-data System/360 architecture. The primary advantage of this arrangement is that memory-intensive applications like DB2 are no longer bound by 31-bit memory restrictions, while older applications can run without modifications.
Name change
With the announcement of the System z9 Business Class server, IBM has renamed the System z9 109 as the System z9 Enterprise Class server. IBM documentation abbreviates them as the z9 BC and z9 EC, respectively.
Notable differences
There are several functional enhancements in the System z9 compared to its zSeries predecessors. Some of the differences include:
Support Element & HMC
The Support Element is the most direct and lowest level way to access a mainframe. It circumvents even the Hardware Management Console and the operating system running on the mainframe. The HMC is a PC connected to the mainframe and emulates the Support Element. All preceding zSeries mainframes used a modified version of OS/2 with custom software to provide the interface. System z9's HMC no longer uses OS/2, but instead uses a modified version of Linux with an OS/2 lookalike interface to ease transition as well as a new interface. Unlike the previous HMC application on OS/2, the new HMC is web-based which means that even local access is done via a web browser. Remote HMC access is available, although only over an SSL encrypted HTTP connection. The web-based nature means that there is no longer a
|
https://en.wikipedia.org/wiki/Modular%20Approach%20to%20Software%20Construction%20Operation%20and%20Test
|
The Modular Approach to Software Construction Operation and Test (MASCOT) is a software engineering methodology developed under the auspices of the United Kingdom Ministry of Defence starting in the early 1970s at the Royal Radar Establishment and continuing its evolution over the next twenty years. The co-originators of MASCOT were Hugo Simpson and Ken Jackson (currently with Telelogic).
Where most methodologies tend to concentrate on bringing rigour and structure to a software project's functional aspects, MASCOT's primary purpose is to emphasise the architectural aspects of a project. Its creators purposely avoided saying anything about the functionality of the software being developed, and concentrated on the real-time control and interface definitions between concurrently running processes.
MASCOT was successfully used in a number of defence systems, most notably the Rapier ground-to-air missile system of the British Army. Although still in use on systems in the field, it never achieved widespread adoption and has subsequently been overshadowed by object-oriented design methodologies based on UML.
A British Standards Institution (BSI) standard was drafted for version 3 of the methodology, but was never ratified. Copies of the draft standard can be still obtained from the BSI.
MASCOT in the field
The UK Ministry of Defence has been the primary user of the MASCOT method through its application in significant military systems, and at one stage mandated its use for new operational systems. Examples include the Rapier missile system, and various Royal Navy Command & Control Systems.
The Future of the Method
MASCOT's principles continue to evolve in the academic community (principally at the DCSC) and in the aerospace industry (Matra BAe Dynamics), through research into temporal aspects of software design and the expression of system architectures, most notably in the DORIS (Data-Oriented Requirements Implementation Scheme) method and implementation protocols. Work has a
|
https://en.wikipedia.org/wiki/Data%20assimilation
|
Data assimilation is a mathematical discipline that seeks to optimally combine theory (usually in the form of a numerical model) with observations. There may be a number of different goals sought – for example, to determine the optimal state estimate of a system, to determine initial conditions for a numerical forecast model, to interpolate sparse observation data using (e.g. physical) knowledge of the system being observed, to set numerical parameters based on training a model from observed data. Depending on the goal, different solution methods may be used. Data assimilation is distinguished from other forms of machine learning, image analysis, and statistical methods in that it utilizes a dynamical model of the system being analyzed.
Data assimilation initially developed in the field of numerical weather prediction. Numerical weather prediction models are equations describing the dynamical behavior of the atmosphere, typically coded into a computer program. In order to use these models to make forecasts, initial conditions are needed for the model that closely resemble the current state of the atmosphere. Simply inserting point-wise measurements into the numerical models did not provide a satisfactory solution. Real world measurements contain errors both due to the quality of the instrument and how accurately the position of the measurement is known. These errors can cause instabilities in the models that eliminate any level of skill in a forecast. Thus, more sophisticated methods were needed in order to initialize a model using all available data while making sure to maintain stability in the numerical model. Such data typically includes the measurements as well as a previous forecast valid at the same time the measurements are made. If applied iteratively, this process begins to accumulate information from past observations into all subsequent forecasts.
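As a minimal illustration of combining a prior forecast with an observation (a toy scalar example added here, not any operational scheme), the analysis can be written as a variance-weighted average of the background and the measurement:

```python
# Toy scalar analysis step: blend a background (prior forecast) x_b with an
# observation y, weighting by their error variances. This is the scalar form of
# the optimal-interpolation / Kalman analysis equation.

def analysis(x_b: float, var_b: float, y: float, var_o: float) -> tuple[float, float]:
    """Return the analysis value and its error variance."""
    k = var_b / (var_b + var_o)          # gain: how much to trust the observation
    x_a = x_b + k * (y - x_b)            # corrected state estimate
    var_a = (1.0 - k) * var_b            # analysis error variance (reduced)
    return x_a, var_a

# Example: forecast says 15.0 C (variance 4.0), a thermometer reads 13.0 C (variance 1.0)
print(analysis(15.0, 4.0, 13.0, 1.0))    # -> (13.4, 0.8)
```

Applied repeatedly, each analysis becomes the background for the next forecast, which is how information from past observations accumulates.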
Because data assimilation developed out of the field of numerical weather prediction, it initially gained
|
https://en.wikipedia.org/wiki/Spin%20polarization
|
In particle physics, spin polarization is the degree to which the spin, i.e., the intrinsic angular momentum of elementary particles, is aligned with a given direction. This property may pertain to the spin, hence to the magnetic moment, of conduction electrons in ferromagnetic metals, such as iron, giving rise to spin-polarized currents. It may refer to (static) spin waves, preferential correlation of spin orientation with ordered lattices (semiconductors or insulators).
It may also pertain to beams of particles, produced for particular aims, such as polarized neutron scattering or muon spin spectroscopy. Spin polarization of electrons or of nuclei, often called simply magnetization, is also produced by the application of a magnetic field. Curie law is used to produce an induction signal in electron spin resonance (ESR or EPR) and in nuclear magnetic resonance (NMR).
Spin polarization is also important for spintronics, a branch of electronics. Magnetic semiconductors are being researched as possible spintronic materials.
The spin of free electrons is measured either by a LEED image from a clean tungsten crystal (SPLEED) or by an electron microscope composed purely of electrostatic lenses with a gold foil as a sample. Backscattered electrons are decelerated by annular optics and focused onto a ring-shaped electron multiplier at about 15°. The position on the ring is recorded. This whole device is called a Mott detector. Depending on their spin, the electrons hit the ring at different positions. About 1% of the electrons are scattered in the foil; of these, 1% are collected by the detector, and about 30% of those hit the detector at the wrong position. Both devices work due to spin–orbit coupling.
The circular polarization of electromagnetic fields is due to spin polarization of their constituent photons.
In the most generic context, spin polarization is any alignment of the components of a non-scalar (vectorial, tensorial, spinor) field w
|
https://en.wikipedia.org/wiki/Wall%20of%20Shame
|
"Wall of Shame" () is a phrase that is most commonly associated with the Berlin Wall. In this context, the phrase was coined by Willy Brandt, and it was used by the government of West Berlin, and later popularized in the English-speaking world and elsewhere from the beginning of the 1960s. Inspired by its usage in reference to the Berlin Wall, the term has later been used more widely.
For example, the term "Wall of Shame" can be applied to things, including physical barriers (walls, fences, etc.) serving dishonourable or disputed separation purposes (like the Berlin Wall and the American border wall), physical and virtual bulletin boards listing names or images for purposes of shaming, and even lists in print (i.e., walls of text naming people, companies, etc. for the purpose of shaming them, or as record of embarrassment).
Additionally, "Wall of Shame" may be a significant part in the building of a "Hall of Shame", although, more often, a "Wall of Shame" is a monument in its own right (i.e., a wall not having been erected as part of any "Hall of Shame" endeavour). More recently, the term "Wall of Shame" has been used in reference to the Mexico–United States barrier, the Egypt–Gaza barrier, the Israeli West Bank barrier and Moroccan Western Sahara Wall.
Applied to Japanese culture
The earliest use of the term, which is a translation of a Japanese phrase, may have been by Ruth Benedict, in her influential book The Chrysanthemum and the Sword (1948), and by other anthropologists discussing the honor–shame culture of Japan.
Applied to the Berlin Wall
The term was used by the government of West Berlin to refer to the Berlin Wall, which surrounded West Berlin and separated it from East Berlin and the GDR. In 1961, the government of East Germany named the newly erected wall the "Anti-Fascist Protection Rampart", a part of the inner German border; many Berliners, however, called it the "Schandmauer" ("Wall of Shame").
The term was coined by governing mayor Willy Brandt. Out
|
https://en.wikipedia.org/wiki/Australian%20Nuclear%20Science%20and%20Technology%20Organisation
|
The Australian Nuclear Science & Technology Organisation (ANSTO) is Australia's national nuclear organisation and the centre of Australian nuclear expertise. It is a statutory body of the Australian government formed in 1987 to replace the Australian Atomic Energy Commission.
Its head office and main facilities are in the southern outskirts of Sydney, at Lucas Heights in the Sutherland Shire.
Purpose
The Australian Nuclear Science and Technology Organisation Act 1987 (Cth) prescribes its general purpose.
Mission statement
To support the development and implementation of government policies and initiatives in nuclear and related areas, domestically and internationally
To operate nuclear science and technology based facilities, for the benefit of industry and the Australian and international research community
To undertake research that will advance the application of nuclear science and technology
To apply nuclear science, techniques, and expertise to address Australia's environmental challenges and increase the competitiveness of Australian industry
To manufacture and advance the use of radiopharmaceuticals which will improve the health of Australians
Structure
ANSTO is governed by a board chaired by the Hon Dr Annabelle Bennett; Penelope Dobson is the Deputy Chair. The CEO, Shaun Jenkinson, manages the organisation.
ANSTO operates five research facilities:
OPAL research reactor
Centre for Accelerator Science
Australian Centre for Neutron Scattering
Cyclotron facility
Australian Synchrotron
Major research instruments include:
The ANTARES particle accelerator
High-resolution neutron powder diffractometer, ECHIDNA
High-intensity neutron powder diffractometer, WOMBAT
Strain scanner, KOWARI
Neutron reflectometer, PLATYPUS
ANSTO also manufactures radiopharmaceuticals and performs commercial work such as silicon doping by nuclear transmutation.
Nuclear reactors
ANSTO currently has two nuclear reactors onsite: HIFAR and OPAL, the latter supplied by the Argentine company INVAP. HIFAR w
|
https://en.wikipedia.org/wiki/Common%20Access%20Card
|
The Common Access Card, also commonly referred to as the CAC, is the standard identification for Active Duty United States Defense personnel. The card itself is a smart card about the size of a credit card. Defense personnel that use the CAC include the Selected Reserve and National Guard, United States Department of Defense (DoD) civilian employees, United States Coast Guard (USCG) civilian employees and eligible DoD and USCG contractor personnel. It is also the principal card used to enable physical access to buildings and controlled spaces, and it provides access to defense computer networks and systems. It also serves as an identification card under the Geneva Conventions (especially the Third Geneva Convention). In combination with a personal identification number, a CAC satisfies the requirement for two-factor authentication: something the user knows combined with something the user has. The CAC also satisfies the requirements for digital signature and data encryption technologies: authentication, integrity and non-repudiation.
The CAC is a controlled item. As of 2008, DoD has issued over 17 million smart cards. This number includes reissues to accommodate changes in name, rank, or status and to replace lost or stolen cards. As of the same date, approximately 3.5 million unterminated or active CACs are in circulation. DoD has deployed an issuance infrastructure at over 1,000 sites in more than 25 countries around the world and is rolling out more than one million card readers and associated middleware.
Issuance
The CAC is issued to Active United States Armed Forces (Regular, Reserves and National Guard) in the Department of Defense and the U.S. Coast Guard; DoD civilians; USCG civilians; non-DoD/other government employees and State Employees of the National Guard; and eligible DoD and USCG contractors who need access to DoD or USCG facilities and/or DoD computer network systems:
Active Duty U.S. Armed Forces (to include Cadets and Midshipmen of the U.S. Se
|
https://en.wikipedia.org/wiki/Physical%20optics
|
In physics, physical optics, or wave optics, is the branch of optics that studies interference, diffraction, polarization, and other phenomena for which the ray approximation of geometric optics is not valid. This usage tends not to include effects such as quantum noise in optical communication, which is studied in the sub-branch of coherence theory.
Principle
Physical optics is also the name of an approximation commonly used in optics, electrical engineering and applied physics. In this context, it is an intermediate method between geometric optics, which ignores wave effects, and full wave electromagnetism, which is a precise theory. The word "physical" means that it is more physical than geometric or ray optics and not that it is an exact physical theory.
This approximation consists of using ray optics to estimate the field on a surface and then integrating that field over the surface to calculate the transmitted or scattered field. This resembles the Born approximation, in that the details of the problem are treated as a perturbation.
In optics, it is a standard way of estimating diffraction effects. In radio, this approximation is used to estimate some effects that resemble optical effects. It models several interference, diffraction and polarization effects but not the dependence of diffraction on polarization. Since this is a high-frequency approximation, it is often more accurate in optics than for radio.
In optics, it typically consists of integrating ray-estimated field over a lens, mirror or aperture to calculate the transmitted or scattered field.
In radar scattering it usually means taking the current that would be found on a tangent plane of similar material as the current at each point on the front, i.e., the geometrically illuminated part, of a scatterer. Current on the shadowed parts is taken as zero. The approximate scattered field is then obtained by an integral over these approximate currents. This is useful for bodies with large smooth con
|
https://en.wikipedia.org/wiki/Balkan%20Mathematical%20Olympiad
|
The Balkan Mathematical Olympiad (BMO) is an international contest of winners of high-school national competitions from European countries.
Participants (incomplete)
Albania
BMO 1991: 1.Julian Mulla 2.Erion Dasho 3.Elton Bojaxhi 4.Enkel Hysnelaj
BMO 1993: 1.Gjergji Guri 2.Jonada Rrapo 3.Ermir Qeli 4.Mirela Ciperiani 5.Gjergji Sugari 6.Pirro Bracka
BMO 1997 1.Alkid Ademi 2.Ermal Rexhepi 3.Aksel Bode 4.Gerard Gjonaj 5.Amarda Shehu
BMO 2002: 1.Deni Raco 2.Evarist Byberi 3.Arlind Kopliku 4.Kreshnik Xhangolli 5.Dritan Tako 6.Erind Angjeliu
BMO 2006: 1.Eni Duka 2.Erion Dula 3.Keler Marku 4.Klevis Ymeri 5.Anri Rembeci 6.Gjergji Zaimi
BMO 2008: 1.Tedi Aliaj 2.Sindi Shkodrani 3.Disel Spahia 4.Arbeg Gani 5.Arnold Spahiu
BMO 2009: 1.Andi Reci 2.Ridgers Mema 3.Arbana Grembi 4.Niko Kaso 5.Erixhen Sula 6.Ornela Xhelili
BMO 2010: 1.Andi Nika 2.Olsi Leka 3.Florida Ahmetaj 4.Ledio Bidaj 5.Endrit Shehaj 6.Fatjon Gerra
BMO 2011: 1.Florida Ahmetaj 2.Erjona Topalli 3.Keti Veliaj 4.Disel Spahija 5.Franc Hodo 6.Ridgers Mema
BMO 2012: 1.Boriana Gjura 2.Fatjon Gera 3.Erjona Topalli 4.Gledis Kallço 5.Florida Ahmetaj 6.Genti Gjika
BMO 2013: 1.Antonino Sota 2.Boriana Gjura 3.Ardis Cani 4.Gledis Kallço 5.Enis Barbullushi
BMO 2014: 1.Gent Gjika 2.Boriana Gjura 3.Gledis Kallço 4.Antonino Sota 5.Enis Barbullushi 6.Geri Shehu
BMO 2015: 1.Gledis Kallço 2.Ana Peçini 3.Alboreno Voci 4.Selion Haxhi 5.Naisila Puka 6.Enes Kristo
BMO 2016: 1.Gledis Kallço 2.Ana Peçini 3.Fjona Parllaku 4.Stefan Haxhillazi 5.Kevin Isufa 6.Barjol Lami
BMO 2016/Albania B: 1.Laura Sheshi 2.Gledis Zeneli 3.Liana Shpani 4.Enea Prifti 5.Jovan Shandro 6.Aleksandros Ruci
BMO 2017: 1.Enea Prifti 2.Barjol Lami 3.Stefan Haxhillazi 4.Rei Myderrizi 5.Safet Hoxha 6.Lorenc Bushi
Republic of North Macedonia
BMO 2001: 1.Ilija Jovceski 2.Todor Ristov 3.Kire Trivodaliev 4.Riste Gligorov 5.Zoran Dimov 6.Irina Panovska
BMO 2002: 1.Ilija Jovceski: Silver Medal
|
https://en.wikipedia.org/wiki/Transitive%20set
|
In set theory, a branch of mathematics, a set is called transitive if either of the following equivalent conditions hold:
whenever $x \in A$, and $y \in x$, then $y \in A$.
whenever $x \in A$, and $x$ is not an urelement, then $x$ is a subset of $A$.
Similarly, a class $M$ is transitive if every element of $M$ is a subset of $M$.
Examples
Using the definition of ordinal numbers suggested by John von Neumann, ordinal numbers are defined as hereditarily transitive sets: an ordinal number is a transitive set whose members are also transitive (and thus ordinals). The class of all ordinals is a transitive class.
Any of the stages $V_\alpha$ and $L_\alpha$ leading to the construction of the von Neumann universe $V$ and Gödel's constructible universe $L$ are transitive sets. The universes $V$ and $L$ themselves are transitive classes.
This is a complete list of all finite transitive sets with up to 20 brackets:
Properties
A set $X$ is transitive if and only if $\bigcup X \subseteq X$, where $\bigcup X$ is the union of all elements of $X$ that are sets, $\bigcup X = \{ y \mid \exists x \in X : y \in x \}$.
If $X$ is transitive, then $\bigcup X$ is transitive.
If $X$ and $Y$ are transitive, then $X \cup Y$ and $X \cup Y \cup \{X, Y\}$ are transitive. In general, if $Z$ is a class all of whose elements are transitive sets, then $\bigcup Z$ and $Z \cup \bigcup Z$ are transitive. (The first sentence in this paragraph is the case of $Z = \{X, Y\}$.)
A set $X$ that does not contain urelements is transitive if and only if it is a subset of its own power set, $X \subseteq \mathcal{P}(X)$. The power set of a transitive set without urelements is transitive.
Transitive closure
The transitive closure of a set $X$ is the smallest (with respect to inclusion) transitive set that includes $X$ (i.e. $X \subseteq \operatorname{TC}(X)$). Suppose one is given a set $X$; then the transitive closure of $X$ is $\operatorname{TC}(X) = \bigcup \left\{ X,\ \bigcup X,\ \bigcup\bigcup X,\ \bigcup\bigcup\bigcup X,\ \ldots \right\}.$
Proof. Denote $X_0 = X$ and $X_{n+1} = \bigcup X_n$. Then we claim that the set $T = \bigcup_{n=0}^{\infty} X_n$
is transitive, and whenever $T_1$ is a transitive set including $X$, then $T \subseteq T_1$.
Assume $y \in x \in T$. Then $x \in X_n$ for some $n$ and so $y \in \bigcup X_n = X_{n+1}$. Since $X_{n+1} \subseteq T$, $y \in T$. Thus $T$ is transitive.
Now let $T_1$ be as above. We prove by induction that $X_n \subseteq T_1$ for all $n$, thus proving that $T \subseteq T_1$: The base case holds since $X_0 = X \subseteq T_1$. Now assume $X_n \subseteq T_1$. Then $X_{n+1} = \bigcup X_n \subseteq \bigcup T_1$. But $T_1$ is transitive, so $\bigcup T_1 \subseteq T_1$, hence $X_{n+1} \subseteq T_1$. This completes the proof.
Note that this is the set of all of the objects related
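The iterated-union construction above can also be sketched computationally for hereditarily finite sets; the following is a minimal illustration (representing sets as nested Python frozensets is an assumption of this sketch, not part of the article):

```python
# Compute TC(X) = X ∪ ⋃X ∪ ⋃⋃X ∪ ... for a hereditarily finite set represented
# as nested frozensets. Non-set elements are treated as urelements and kept as-is.

def transitive_closure(x: frozenset) -> frozenset:
    result: set = set()
    layer = set(x)                     # the elements of X_0 = X
    while layer:
        result |= layer
        # Next layer: the members of every set in the current layer,
        # dropping anything already collected.
        layer = set().union(*(m for m in layer if isinstance(m, frozenset))) - result
    return frozenset(result)

EMPTY = frozenset()                    # 0 = {}
ONE = frozenset({EMPTY})               # 1 = {0}
TWO = frozenset({EMPTY, ONE})          # 2 = {0, 1}
print(transitive_closure(frozenset({TWO})) == {EMPTY, ONE, TWO})  # True
```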
|
https://en.wikipedia.org/wiki/Generalized%20Appell%20polynomials
|
In mathematics, a polynomial sequence $\{p_n(z)\}$ has a generalized Appell representation if the generating function for the polynomials takes on a certain form:
$K(z,w) = A(w)\Psi(zg(w)) = \sum_{n=0}^{\infty} p_n(z) w^n,$
where the generating function or kernel $K(z,w)$ is composed of the series
$A(w) = \sum_{n=0}^{\infty} a_n w^n$ with $a_0 \neq 0$
and
$\Psi(t) = \sum_{n=0}^{\infty} \Psi_n t^n$ and all $\Psi_n \neq 0$
and
$g(w) = \sum_{n=1}^{\infty} g_n w^n$ with $g_1 \neq 0$.
Given the above, it is not hard to show that $p_n(z)$ is a polynomial of degree $n$.
Boas–Buck polynomials are a slightly more general class of polynomials.
Special cases
The choice of $g(w) = w$ gives the class of Brenke polynomials.
The choice of $\Psi(t) = e^t$ results in the Sheffer sequence of polynomials, which include the general difference polynomials, such as the Newton polynomials.
The combined choice of $g(w) = w$ and $\Psi(t) = e^t$ gives the Appell sequence of polynomials.
Explicit representation
The generalized Appell polynomials have the explicit representation
The constant is
where this sum extends over all compositions of into parts; that is, the sum extends over all such that
For the Appell polynomials, this becomes the formula $p_n(z) = \sum_{k=0}^{n} \frac{a_{n-k}}{k!} z^k.$
Recursion relation
Equivalently, a necessary and sufficient condition that the kernel can be written as with is that
where and have the power series
and
Substituting
immediately gives the recursion relation
For the special case of the Brenke polynomials, one has $g(w) = w$ and thus all of the $g_n$ with $n \ge 2$ vanish, simplifying the recursion relation significantly.
See also
q-difference polynomials
References
Ralph P. Boas, Jr. and R. Creighton Buck, Polynomial Expansions of Analytic Functions (Second Printing Corrected), (1964) Academic Press Inc., Publishers New York, Springer-Verlag, Berlin. Library of Congress Card Number 63-23263.
Polynomials
|
https://en.wikipedia.org/wiki/Ristra
|
A ristra (), also known as a sarta, is an arrangement of drying chile pepper pods, garlic bulbs, or other vegetables for later consumption. In addition to its practical use, the ristra has come to be a trademark of decorative design in the state of New Mexico as well as southern Arizona. Typically, large chiles such as New Mexico chiles and Anaheim peppers are used, although any kind of chile may be used.
Garlic can also be arranged into a ristra for drying and curing after the bulbs have matured and the leaves have died away.
Ristras are commonly used for decoration and "are said to bring health and good luck."
See also
List of dried foods
References
Dried foods
Chili peppers
Food and drink decorations
New Mexican cuisine
|
https://en.wikipedia.org/wiki/Sulfur%20cycle
|
The sulfur cycle is a biogeochemical cycle in which the sulfur moves between rocks, waterways and living systems. It is important in geology as it affects many minerals and in life because sulfur is an essential element (CHNOPS), being a constituent of many proteins and cofactors, and sulfur compounds can be used as oxidants or reductants in microbial respiration. The global sulfur cycle involves the transformations of sulfur species through different oxidation states, which play an important role in both geological and biological processes.
Steps of the sulfur cycle are:
Mineralization of organic sulfur into inorganic forms, such as hydrogen sulfide (H2S), elemental sulfur, and sulfide minerals.
Oxidation of hydrogen sulfide, sulfide, and elemental sulfur (S) to sulfate (SO42−).
Reduction of sulfate to sulfide.
Incorporation of sulfide into organic compounds (including metal-containing derivatives).
Disproportionation of sulfur compounds (elemental sulfur, sulfite, thiosulfate) into sulfate and hydrogen sulfide.
These are often termed as follows:
Assimilative sulfate reduction (see also sulfur assimilation) in which sulfate (SO42−) is reduced by plants, fungi and various prokaryotes. The oxidation states of sulfur are +6 in sulfate and –2 in R–SH.
Desulfurization in which organic molecules containing sulfur can be desulfurized, producing hydrogen sulfide gas (H2S, oxidation state = –2). An analogous process for organic nitrogen compounds is deamination.
Oxidation of hydrogen sulfide produces elemental sulfur (S8), oxidation state = 0. This reaction occurs in the photosynthetic green and purple sulfur bacteria and some chemolithotrophs. Often the elemental sulfur is stored as polysulfides.
Oxidation of elemental sulfur by sulfur oxidizers produces sulfate.
Dissimilative sulfur reduction in which elemental sulfur can be reduced to hydrogen sulfide.
Dissimilative sulfate reduction in which sulfate reducers generate hydrogen sulfide from sulfate.
Sulfur oxidation
|
https://en.wikipedia.org/wiki/Cold%20chain
|
Cold chain is defined as the series of actions and equipment applied to maintain a product within a specified low-temperature range from harvest/production to consumption. An unbroken cold chain is an uninterrupted sequence of refrigerated production, storage and distribution activities, along with associated equipment and logistics, which maintain a desired low-temperature interval to keep the safety and quality of perishable or sensitive products, such as foods and medicines. In other words, the term denotes a low temperature-controlled supply chain network used to ensure and extend the shelf life of products, e.g. fresh agricultural produce, seafood, frozen food, photographic film, chemicals, and pharmaceutical products. Such products, during transport and end-use when in transient storage, are sometimes called cool cargo. Unlike other goods or merchandise, cold chain goods are perishable and always en-route towards end use or destination, even when held temporarily in cold stores and hence commonly referred to as "cargo" during its entire logistics cycle. Adequate cold storage, in particular, can be crucial to prevent quantitative and qualitative food losses.
History
Mobile refrigeration with ice from the ice trade began with reefer ships and refrigerator cars (iceboxes on wheels) in the mid-19th century. The term cold chain was first used in 1908. The first effective cold store in the UK opened in 1882 at St Katharine Docks. It could hold 59,000 carcasses, and by 1911 cold storage capacity in London had reached 2.84 million carcasses. By 1930 about a thousand refrigerated meat containers were in use which could be switched from road to railway.
Mobile mechanical refrigeration was invented by Frederick McKinley Jones, who co-founded Thermo King with entrepreneur Joseph A. "Joe" Numero. In 1938 Numero sold his Cinema Supplies Inc. movie sound equipment business to RCA to form the new entity, U.S. Thermo Control Company (later the Thermo King Corporation), in p
|
https://en.wikipedia.org/wiki/Metasystem%20transition
|
A metasystem transition is the emergence, through evolution, of a higher level of organization or control.
A metasystem is formed by the integration of a number of initially independent components, such as molecules (as theorized for instance by hypercycles), cells, or individuals, and the emergence of a system steering or controlling their interactions. As such, the collective of components becomes a new, goal-directed individual, capable of acting in a coordinated way. This metasystem is more complex, more intelligent, and more flexible in its actions than the initial component systems. Prime examples are the origin of life, the transition from unicellular to multicellular organisms, the emergence of eusociality or symbolic thought.
The concept of metasystem transition was introduced by the cybernetician Valentin Turchin in his 1970 book The Phenomenon of Science, and developed among others by Francis Heylighen in the Principia Cybernetica Project. Another related idea, that systems ("operators") evolve to become more complex by successive closures encapsulating components in a larger whole, is proposed in "the operator theory", developed by Gerard Jagers op Akkerhuis.
Turchin has applied the concept of metasystem transition in the domain of computing, via the notion of metacompilation or supercompilation. A supercompiler is a compiler program that compiles its own code, thus increasing its own efficiency, producing a remarkable speedup in its execution.
Evolutionary quanta
The following is the classical sequence of metasystem transitions in the history of animal evolution according to Turchin, from the origin of animate life to sapient culture:
Control of Position = Motion: the animal or agent develops the ability to control its position in space
Control of Motion = Irritability: the movement of the agent is no longer given, but a reaction to elementary sensations or stimuli
Control of Irritability = Reflex: different elementary sensations and their re
|
https://en.wikipedia.org/wiki/Plug%20compatible
|
Plug compatible refers to "hardware that is designed to perform exactly like another vendor's product." The term PCM was originally applied to manufacturers who made replacements for IBM peripherals. Later this term was used to refer to IBM-compatible computers.
PCM and peripherals
Before the rise of the PCM peripheral industry, computing systems were either configured with peripherals designed and built by the CPU vendor, or designed to use vendor-selected rebadged devices.
The first examples of plug-compatible IBM subsystems were tape drives and controls offered by Telex beginning in 1965. Memorex in 1968 was the first to enter the IBM plug-compatible disk market, followed shortly thereafter by a number of suppliers such as CDC, Itel, and Storage Technology Corporation. This was boosted by the world's largest user of computing equipment in both directions.
Ultimately plug-compatible products were offered for most peripherals and system main memory.
PCM and computer systems
A plug-compatible machine is one that has been designed to be backward compatible with a prior machine. In particular, a new computer system that is plug-compatible has not only the same connectors and protocol interfaces to peripherals, but also binary-code compatibility—it runs the same software as the old system. A plug compatible manufacturer or PCM is a company that makes such products.
One recurring theme in plug-compatible systems is the ability to be bug compatible as well. That is, if the forerunner system had software or interface problems, then the successor must have (or simulate) the same problems. Otherwise, the new system may generate unpredictable results, defeating the full compatibility objective. Thus, it is important for customers to understand the difference between a "bug" and a "feature", where the latter is defined as an intentional modification to the previous system (e.g. higher speed, lighter weight, smaller package, better operator controls, etc.).
PCM and IBM mainframes
The or
|
https://en.wikipedia.org/wiki/Defense%20Information%20System%20Network
|
The Defense Information System Network (DISN) has been the United States Department of Defense's enterprise telecommunications network for providing data, video, and voice services for 40 years.
The DISN end-to-end infrastructure is composed of three major segments:
The sustaining base (i.e., base, post, camp, or station, and Service enterprise networks). The Command, Control, Communications, Computers and Intelligence (C4I) infrastructure will interface with the long-haul network to support the deployed warfighter. The sustaining base segment is primarily the responsibility of the individual Services.
The long-haul transport infrastructure, which includes the communication systems and services between the fixed environments and the deployed Joint Task Force (JTF) and/or Coalition Task Force (CTF) warfighter. The long-haul telecommunications infrastructure segment is primarily the responsibility of DISA.
The deployed warfighter, mobile users, and associated Combatant Commander telecommunications infrastructures, which support the Joint Task Force (JTF) and/or Coalition Task Force (CTF). The deployed warfighter and associated Combatant Commander telecommunications infrastructure are primarily the responsibility of the individual Services.
The DISN provides the following multiple networking services:
Global Content Delivery System (GCDS)
Data Services
Sensitive but Unclassified (NIPRNet)
Secret Data Services (SIPRNet)
Multicast
Organizational Messaging
The Organizational Messaging Service provides a range of assured services to the customer community that includes the military services, DoD agencies, combatant commands (CCMDs), non-DoD U.S. government activities, and the Intelligence Community (IC). These services include the ability to exchange official information between military organizations and to support interoperability with allied nations, non-DoD activities, and the IC operating in both the strategic/fixed-base and the tactical/deployed enviro
|
https://en.wikipedia.org/wiki/P6%20%28microarchitecture%29
|
The P6 microarchitecture is the sixth-generation Intel x86 microarchitecture, implemented by the Pentium Pro microprocessor that was introduced in November 1995. It is frequently referred to as i686. It was planned to be succeeded by the NetBurst microarchitecture used by the Pentium 4 in 2000, but was revived for the Pentium M line of microprocessors. The successor to the Pentium M variant of the P6 microarchitecture is the Core microarchitecture which in turn is also derived from P6.
P6 was used within Intel's mainstream offerings from the Pentium Pro to Pentium III, and was widely known for low power consumption, excellent integer performance, and relatively high instructions per cycle (IPC).
Features
The P6 core was the sixth generation Intel microprocessor in the x86 line. The first implementation of the P6 core was the Pentium Pro CPU in 1995, the immediate successor to the original Pentium design (P5).
P6 processors dynamically translate IA-32 instructions into sequences of buffered RISC-like micro-operations, then analyze and reorder the micro-operations to detect parallelizable operations that may be issued to more than one execution unit at once. The Pentium Pro was the first x86 microprocessor designed by Intel to use this technique, though the NexGen Nx586, introduced in 1994, did so earlier.
Other features first implemented in the x86 space in the P6 core include:
Speculative execution and out-of-order completion (called "dynamic execution" by Intel), which required new retire units in the execution core. This lessened pipeline stalls, and in part enabled greater speed-scaling of the Pentium Pro and successive generations of CPUs.
Superpipelining, which increased the pipeline depth from the Pentium's 5 stages to the 14 stages of the Pentium Pro and early models of the Pentium III (Coppermine), and eventually morphed into the less-than-10-stage pipeline of the Pentium M for the embedded and mobile market, due to energy inefficiency and higher voltage issues that were encountered in the pr
|
https://en.wikipedia.org/wiki/Intel%20Core%20%28microarchitecture%29
|
The Intel Core microarchitecture (provisionally referred to as Next Generation Micro-architecture, and developed as Merom) is a multi-core processor microarchitecture launched by Intel in mid-2006. It is a major evolution over the Yonah, the previous iteration of the P6 microarchitecture series which started in 1995 with Pentium Pro. It also replaced the NetBurst microarchitecture, which suffered from high power consumption and heat intensity due to an inefficient pipeline designed for high clock rate. In early 2004 the new version of NetBurst (Prescott) needed very high power to reach the clocks it needed for competitive performance, making it unsuitable for the shift to dual/multi-core CPUs. On May 7, 2004 Intel confirmed the cancellation of the next NetBurst, Tejas and Jayhawk. Intel had been developing Merom, the 64-bit evolution of the Pentium M, since 2001, and decided to expand it to all market segments, replacing NetBurst in desktop computers and servers. It inherited from Pentium M the choice of a short and efficient pipeline, delivering superior performance despite not reaching the high clocks of NetBurst.
The first processors that used this architecture were code-named 'Merom', 'Conroe', and 'Woodcrest'; Merom is for mobile computing, Conroe is for desktop systems, and Woodcrest is for servers and workstations. While architecturally identical, the three processor lines differ in the socket used, bus speed, and power consumption. The first Core-based desktop and mobile processors were branded Core 2, later expanding to the lower-end Pentium Dual-Core, Pentium and Celeron brands; while server and workstation Core-based processors were branded Xeon.
Features
The Core microarchitecture returned to lower clock rates and improved the use of both available clock cycles and power when compared with the preceding NetBurst microarchitecture of the Pentium 4 and D-branded CPUs. The Core microarchitecture provides more efficient decoding stages, execution units,
|
https://en.wikipedia.org/wiki/Conference%20on%20Automated%20Deduction
|
The Conference on Automated Deduction (CADE) is the premier academic conference on automated deduction and related fields. The first CADE was organized in 1974 at the Argonne National Laboratory near Chicago. Most CADE meetings have been held in Europe and the United States. However, conferences have been held all over the world. Since 1996, CADE has been held yearly. In 2001, CADE was, for the first time, merged into the International Joint Conference on Automated Reasoning (IJCAR). This has been repeated biennially since 2004.
In 1996, CADE Inc. was formed as a non-profit sub-corporation of the Association for Automated Reasoning to organize the formerly individually organized conferences.
External links
, CADE
, AAR
References
Theoretical computer science conferences
Logic conferences
|
https://en.wikipedia.org/wiki/Ciphertext%20expansion
|
In cryptography, the term ciphertext expansion refers to the length increase of a message when it is encrypted. Many modern cryptosystems cause some degree of expansion during the encryption process, for instance when the resulting ciphertext must include a message-unique initialization vector (IV). Probabilistic encryption schemes cause ciphertext expansion, as the set of possible ciphertexts is necessarily greater than the set of input plaintexts. Certain schemes, such as Cocks identity-based encryption or the Goldwasser–Micali cryptosystem, result in ciphertexts hundreds or thousands of times longer than the plaintext.
Ciphertext expansion may be offset or increased by other processes which compress or expand the message, e.g., data compression or error correction coding.
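As a concrete illustration of such overhead (an added example using the third-party Python `cryptography` package, not drawn from the source), an AES-GCM ciphertext exceeds the plaintext by a fixed 16-byte authentication tag, and the message-unique nonce must also be stored or transmitted alongside it:

```python
# Illustrates ciphertext expansion for AES-GCM: ciphertext = plaintext + 16-byte tag,
# and a 12-byte nonce must also accompany the message.
# Requires the third-party package:  pip install cryptography
import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

key = AESGCM.generate_key(bit_length=256)
nonce = os.urandom(12)                       # message-unique IV/nonce
plaintext = b"attack at dawn"

ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
print(len(plaintext), len(ciphertext), len(nonce) + len(ciphertext))
# -> 14 30 42: 16 bytes of tag expansion, plus 12 bytes of nonce to transmit
```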
References
Cryptography
|
https://en.wikipedia.org/wiki/Fluidized%20bed
|
A fluidized bed is a physical phenomenon that occurs when a solid particulate substance (usually present in a holding vessel) is under the right conditions so that it behaves like a fluid. The usual way to achieve a fluidized bed is to pump pressurized fluid into the particles. The resulting medium then has many properties and characteristics of normal fluids, such as the ability to free-flow under gravity, or to be pumped using fluid technologies.
The resulting phenomenon is called fluidization. Fluidized beds are used for several purposes, such as fluidized bed reactors (types of chemical reactors), solids separation, fluid catalytic cracking, fluidized bed combustion, heat or mass transfer or interface modification, such as applying a coating onto solid items. This technique is also becoming more common in aquaculture for the production of shellfish in integrated multi-trophic aquaculture systems.
Properties
A fluidized bed consists of fluid-solid mixture that exhibits fluid-like properties. As such, the upper surface of the bed is relatively horizontal, which is analogous to hydrostatic behavior. The bed can be considered to be a heterogeneous mixture of fluid and solid that can be represented by a single bulk density.
Furthermore, an object with a higher density than the bed will sink, whereas an object with a lower density than the bed will float; thus the bed can be considered to exhibit the fluid behavior expected from Archimedes' principle. As the "density" (actually the solid volume fraction of the suspension) of the bed can be altered by changing the fluid fraction, objects with densities different from that of the bed can, by altering either the fluid or solid fraction, be caused to sink or float.
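As a rough numerical illustration of this point (the material densities below are assumed, representative values, not taken from the article), the bed's bulk density can be computed from the solid volume fraction and compared with an object's density:

```python
# Toy check of the Archimedes-like behaviour described above: an object floats on a
# fluidized bed if its density is below the bed's bulk density, and sinks otherwise.

def bed_bulk_density(solid_fraction: float, rho_solid: float, rho_fluid: float) -> float:
    """Bulk density of the suspension for a given solid volume fraction."""
    return solid_fraction * rho_solid + (1.0 - solid_fraction) * rho_fluid

def floats(rho_object: float, solid_fraction: float,
           rho_solid: float = 2500.0, rho_fluid: float = 1.2) -> bool:
    return rho_object < bed_bulk_density(solid_fraction, rho_solid, rho_fluid)

# Sand (~2500 kg/m^3) fluidized by air (~1.2 kg/m^3): lowering the solid fraction
# from 0.55 to 0.35 makes a 1000 kg/m^3 object sink instead of float.
print(floats(1000.0, 0.55), floats(1000.0, 0.35))   # True, False
```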
In fluidised beds, the contact of the solid particles with the fluidisation medium (a gas or a liquid) is greatly enhanced when compared to packed beds. This behavior in fluidised combustion beds enables good thermal transport inside the system and good
|
https://en.wikipedia.org/wiki/Hybrid%20drive
|
In computing, a hybrid drive (solid state hybrid drive – SSHD) is a logical or physical storage device that combines a faster storage medium such as a solid-state drive (SSD) with a higher-capacity hard disk drive (HDD). The intent is to add some of the speed of SSDs to the cost-effective storage capacity of traditional HDDs. The purpose of the SSD in a hybrid drive is to act as a cache for the data stored on the HDD, improving overall performance by keeping copies of the most frequently used data on the faster SSD.
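The caching behaviour just described can be sketched in a few lines; this is a simplified illustration only, using a least-recently-used policy as a stand-in for the far more elaborate placement heuristics of real SSHD firmware:

```python
from collections import OrderedDict

# Toy model of an SSHD read path: a small, fast cache (the "SSD") in front of a
# large, slow store (the "HDD"), keeping recently used blocks on the fast medium.

class HybridDrive:
    def __init__(self, cache_blocks: int):
        self.cache = OrderedDict()                    # block_id -> data, in LRU order
        self.cache_blocks = cache_blocks

    def read(self, block_id: int, hdd_read) -> bytes:
        if block_id in self.cache:                    # fast path: served from the SSD cache
            self.cache.move_to_end(block_id)
            return self.cache[block_id]
        data = hdd_read(block_id)                     # slow path: fetch from the HDD
        self.cache[block_id] = data                   # promote the block into the cache
        if len(self.cache) > self.cache_blocks:
            self.cache.popitem(last=False)            # evict the least recently used block
        return data
```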
There are two main configurations for implementing hybrid drives: dual-drive hybrid systems and solid-state hybrid drives. In dual-drive hybrid systems, physically separate SSD and HDD devices are installed in the same computer, with data placement optimization performed either manually by the end user or automatically by the operating system through the creation of a "hybrid" logical device. In solid-state hybrid drives, SSD and HDD functionalities are built into a single piece of hardware, where data placement optimization is performed either entirely by the device (self-optimized mode) or through placement "hints" supplied by the operating system (host-hinted mode).
Types
There are two main "hybrid" storage technologies that combine NAND flash memory or SSDs, with the HDD technology: dual-drive hybrid systems and solid-state hybrid drives.
Dual-drive hybrid systems
Dual-drive hybrid systems combine the usage of separate SSD and HDD devices installed in the same computer. Performance optimizations are managed in one of three ways:
By the computer user, who manually places more frequently accessed data onto the faster drive.
By the computer's operating system software, which combines the SSD and HDD into a single hybrid volume, providing an easier experience for the end user. Examples of hybrid volume implementations in operating systems are ZFS's "hybrid storage pools", bcache and dm-cache on Linux, Intel's Hystor and Apple's Fusion D
|
https://en.wikipedia.org/wiki/PEDOT%3APSS
|
Poly(3,4-ethylenedioxythiophene) polystyrene sulfonate (PEDOT:PSS) is a polymer mixture of two ionomers. One component in this mixture is made up of polystyrene sulfonate, which is a sulfonated polystyrene. Part of the sulfonyl groups are deprotonated and carry a negative charge. The other component, poly(3,4-ethylenedioxythiophene) (PEDOT), is a conjugated polymer based on polythiophene that carries positive charges. Together the charged macromolecules form a macromolecular salt.
Synthesis
PEDOT:PSS can be prepared by mixing an aqueous solution of PSS with the EDOT monomer and then adding a solution of sodium persulfate and ferric sulfate to the resulting mixture.
Applications
PEDOT:PSS has the highest efficiency among conductive organic thermoelectric materials (ZT ~ 0.42) and thus can be used in flexible and biodegradable thermoelectric generators. Yet its largest application is as a transparent, conductive polymer with high ductility. For example, AGFA coats 200 million photographic films per year with a thin, extensively stretched layer of virtually transparent and colorless PEDOT:PSS as an antistatic agent, preventing electrostatic discharges during production and normal film use independent of humidity conditions. It is also used as the electrolyte in polymer electrolytic capacitors.
If organic compounds are added, including high-boiling solvents like methylpyrrolidone, dimethyl sulfoxide, sorbitol, ionic liquids and surfactants, the conductivity increases by many orders of magnitude. This also makes it suitable as a transparent electrode, for example in touchscreens, organic light-emitting diodes, flexible organic solar cells and electronic paper, to replace the traditionally used indium tin oxide (ITO). Owing to its high conductivity (up to 4600 S/cm), it can be used as a cathode material in capacitors, replacing manganese dioxide or liquid electrolytes. It is also used in organic electrochemical transistors.
The conductivity of PEDOT:PSS can also be significantly improved by a post-trea
|
https://en.wikipedia.org/wiki/Float-zone%20silicon
|
Float-zone silicon is very pure silicon obtained by vertical zone melting. The process was developed at Bell Labs by Henry Theuerer in 1955 as a modification of a method developed by William Gardner Pfann for germanium. In the vertical configuration, molten silicon has sufficient surface tension to keep the charge from separating. The major advantage is crucibleless growth, which prevents contamination of the silicon by the vessel itself and therefore makes the process an inherently high-purity alternative to boule crystals grown by the Czochralski method.
The concentrations of light impurities, such as carbon (C) and oxygen (O), are extremely low. Another light impurity, nitrogen (N), helps to control microdefects and also improves the mechanical strength of the wafers; it is now intentionally added during the growth stages.
The diameters of float-zone wafers are generally not greater than 200 mm due to the surface tension limitations during growth. A polycrystalline rod of ultrapure electronic-grade silicon is passed through an RF heating coil, which creates a localized molten zone from which the crystal ingot grows. A seed crystal is used at one end to start the growth. The whole process is carried out in an evacuated chamber or in an inert gas purge.
The molten zone carries the impurities away with it and hence reduces impurity concentration (most impurities are more soluble in the melt than the crystal). Specialized doping techniques like core doping, pill doping, gas doping and neutron transmutation doping are used to incorporate a uniform concentration of desirable impurity.
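The impurity sweeping described above can be quantified with the classic single-pass zone-refining relation attributed to Pfann, which is standard textbook background rather than something stated in this article. A short sketch, assuming Python and purely illustrative parameter values:

```python
# Sketch of why the traveling molten zone sweeps impurities toward the end of
# the rod. For a single pass of a zone of length l through a rod of initially
# uniform impurity concentration C0, the classic (Pfann) result gives the
# concentration frozen into the solid at position x (valid up to the last zone
# length, which solidifies by normal freezing):
#     C(x) = C0 * (1 - (1 - k) * exp(-k * x / l)),
# where k < 1 is the impurity's effective distribution coefficient.
# This textbook relation is quoted here as background, not from the article.
import math

def single_pass_profile(C0, k, zone_length, x):
    return C0 * (1.0 - (1.0 - k) * math.exp(-k * x / zone_length))

C0, k, l = 1.0, 0.1, 0.02          # normalized feed concentration, k, 2 cm zone
for x_cm in [0.0, 2.0, 10.0, 40.0]:
    c = single_pass_profile(C0, k, l, x_cm / 100.0)
    print(f"x = {x_cm:5.1f} cm  ->  C/C0 = {c:.3f}")
```

The first solid to freeze contains only a fraction k of the feed concentration, and repeated passes push the accumulated impurities toward the tail end of the rod, which can then be cropped off.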
Float-zone silicon wafers may be irradiated with neutrons to turn them into n-doped semiconductors.
Application
Float-zone silicon is typically used for power devices and detector applications, where high resistivity is required. It is highly transparent to terahertz radiation, and is usually used to fabricate optical components, such as lenses and windows, for teraher
|
https://en.wikipedia.org/wiki/IBM%20High%20Availability%20Cluster%20Multiprocessing
|
IBM PowerHA SystemMirror (formerly IBM PowerHA and HACMP) is IBM's solution for high-availability clusters on the AIX Unix and Linux for IBM System p platforms; HACMP stands for High Availability Cluster Multiprocessing. IBM's HACMP product was first shipped in 1991 and is now in its 20th release, PowerHA SystemMirror for AIX 7.1.
PowerHA can run on up to 32 computers or nodes, each of which is either actively running an application (active) or waiting to take over when another node fails (passive). Data on file systems can be shared between systems in the cluster.
PowerHA relies heavily on IBM's Reliable Scalable Cluster Technology (RSCT). PowerHA is an RSCT-aware client. RSCT is distributed with AIX. RSCT includes a daemon called group services that coordinates the response to events of interest to the cluster (for example, an interface or a node fails, or an administrator makes a change to the cluster configuration). Up until PowerHA V6.1, RSCT also monitored cluster nodes, networks and network adapters for failures using the topology services daemon (topsvcs). In the current release (V7.1), RSCT provides coordinated responses between nodes, but monitoring and communication are provided by the Cluster Aware AIX (CAA) infrastructure.
The 7.1 release of PowerHA relies heavily on CAA, a clustering infrastructure built into the operating system and exploited by RSCT and PowerHA. CAA provides the monitoring and communication infrastructure for PowerHA and other clustering solutions on AIX, as well as cluster-wide event notification using the Autonomic Health Advisor File System (AHAFS) and cluster-aware AIX commands with clcmd. CAA replaces the function provided by Topology Services (topsvcs) in RSCT in previous releases of PowerHA/HACMP.
IBM PowerHA SystemMirror Timeline
IBM PowerHA SystemMirror Releases
PowerHA SystemMirror 7
PowerHA SystemMirror 7.2, released in .
PowerHA SystemMirror 7.2.1, released in .
New User Interface.
PowerHA SystemMirror 7.1 was
|
https://en.wikipedia.org/wiki/High-availability%20cluster
|
High-availability clusters (also known as HA clusters or fail-over clusters) are groups of computers that support server applications that can be reliably utilized with a minimum amount of downtime. They operate by using high-availability software to harness redundant computers in groups or clusters that provide continued service when system components fail. Without clustering, if a server running a particular application crashes, the application will be unavailable until the crashed server is fixed. HA clustering remedies this situation by detecting hardware or software faults and immediately restarting the application on another system without requiring administrative intervention, a process known as failover. As part of this process, clustering software may configure the node before starting the application on it. For example, appropriate file systems may need to be imported and mounted, network hardware may have to be configured, and some supporting applications may need to be running as well.
HA clusters are often used for critical databases, file sharing on a network, business applications, and customer services such as electronic commerce websites.
HA cluster implementations attempt to build redundancy into a cluster to eliminate single points of failure, including multiple network connections and data storage which is redundantly connected via storage area networks.
HA clusters usually use a private heartbeat network connection to monitor the health and status of each node in the cluster. One subtle but serious condition that all clustering software must be able to handle is split-brain, which occurs when all of the private links go down simultaneously but the cluster nodes are still running. If that happens, each node in the cluster may mistakenly decide that every other node has gone down and attempt to start services that other nodes are still running. Having duplicate instances of services may cause data corruption on the shared storage.
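One widespread way of handling the split-brain condition is to require a quorum: a node, or partition of nodes, may only start or keep services if it can still reach a strict majority of the configured cluster. The following minimal sketch (Python; the node names and failure scenario are invented for illustration, and real cluster managers add fencing and tie-breakers on top of this rule) shows the basic idea.

```python
# Minimal illustration of quorum-based split-brain avoidance: a node only takes
# over services when its partition still contains a strict majority of the
# configured cluster nodes.

def has_quorum(reachable_nodes: set, all_nodes: set) -> bool:
    """True if this partition contains a strict majority of the cluster."""
    return len(reachable_nodes) > len(all_nodes) // 2

cluster = {"node-a", "node-b", "node-c"}

# After the private heartbeat links fail, each side sees a different subset.
partition_1 = {"node-a", "node-b"}     # majority -> may restart services
partition_2 = {"node-c"}               # minority -> must stay passive

for part in (partition_1, partition_2):
    action = "start failover" if has_quorum(part, cluster) else "hold off"
    print(sorted(part), "->", action)
```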
HA
|
https://en.wikipedia.org/wiki/Newton%27s%20identities
|
In mathematics, Newton's identities, also known as the Girard–Newton formulae, give relations between two types of symmetric polynomials, namely between power sums and elementary symmetric polynomials. Evaluated at the roots of a monic polynomial P in one variable, they allow expressing the sums of the k-th powers of all roots of P (counted with their multiplicity) in terms of the coefficients of P, without actually finding those roots. These identities were found by Isaac Newton around 1666, apparently in ignorance of earlier work (1629) by Albert Girard. They have applications in many areas of mathematics, including Galois theory, invariant theory, group theory, combinatorics, as well as further applications outside mathematics, including general relativity.
Mathematical statement
Formulation in terms of symmetric polynomials
Let x1, ..., xn be variables, denote for k ≥ 1 by pk(x1, ..., xn) the k-th power sum:
pk(x1, ..., xn) = x1^k + x2^k + ... + xn^k,
and for k ≥ 0 denote by ek(x1, ..., xn) the elementary symmetric polynomial (that is, the sum of all distinct products of k distinct variables), so
e0 = 1, e1 = x1 + x2 + ... + xn, e2 = x1 x2 + x1 x3 + ... + x_{n-1} xn, ..., en = x1 x2 ··· xn, and ek = 0 for k > n.
Then Newton's identities can be stated as
k ek = e_{k-1} p1 - e_{k-2} p2 + e_{k-3} p3 - ... + (-1)^(k-1) pk,
valid for all n ≥ k ≥ 1.
Also, one has
pk - e1 p_{k-1} + e2 p_{k-2} - ... + (-1)^n en p_{k-n} = 0,
for all k > n ≥ 1.
Concretely, one gets for the first few values of k:
e1 = p1,
2 e2 = e1 p1 - p2,
3 e3 = e2 p1 - e1 p2 + p3.
The form and validity of these equations do not depend on the number n of variables (although the point where the left-hand side becomes 0 does, namely after the n-th identity), which makes it possible to state them as identities in the ring of symmetric functions. In that ring one has
e1 = p1,
2 e2 = e1 p1 - p2,
3 e3 = e2 p1 - e1 p2 + p3,
4 e4 = e3 p1 - e2 p2 + e1 p3 - p4,
and so on; here the left-hand sides never become zero.
These equations allow one to express the ei recursively in terms of the pk; to be able to do the inverse, one may rewrite them as
p1 = e1,
p2 = e1 p1 - 2 e2,
p3 = e1 p2 - e2 p1 + 3 e3,
p4 = e1 p3 - e2 p2 + e3 p1 - 4 e4,
and so on.
In general, we have
pk = e1 p_{k-1} - e2 p_{k-2} + ... + (-1)^k e_{k-1} p1 + (-1)^(k-1) k ek,
valid for all n ≥ k ≥ 1.
Also, one has
pk = e1 p_{k-1} - e2 p_{k-2} + ... + (-1)^(n+1) en p_{k-n},
for all k > n ≥ 1.
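The recurrences above translate directly into a short program. The following sketch (Python, using exact rational arithmetic; it is an illustration rather than part of the article) recovers the elementary symmetric polynomials from the power sums via k ek = e_{k-1} p1 - e_{k-2} p2 + ... + (-1)^(k-1) pk.

```python
# Compute e[0..k] from the power sums p[1..k] using Newton's identities.
from fractions import Fraction

def elementary_from_power_sums(p):
    """p[1..k] are the power sums (p[0] is unused); returns e[0..k]."""
    k_max = len(p) - 1
    e = [Fraction(1)] + [Fraction(0)] * k_max      # e[0] = 1
    for k in range(1, k_max + 1):
        acc = Fraction(0)
        for i in range(1, k + 1):                  # k*e_k = sum (-1)^(i-1) e_{k-i} p_i
            acc += (-1) ** (i - 1) * e[k - i] * p[i]
        e[k] = acc / k
    return e

# Example: roots 1, 2, 3 have power sums p1 = 6, p2 = 14, p3 = 36.
# The result is [1, 6, 11, 6], i.e. the coefficients (up to sign) of
# x^3 - 6x^2 + 11x - 6.
print(elementary_from_power_sums([0, 6, 14, 36]))
```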
Application to the roots of a polynomial
The polynomial with roots xi may be expanded as
(x - x1)(x - x2) ··· (x - xn) = x^n - e1 x^(n-1) + e2 x^(n-2) - ... + (-1)^n en,
where the coefficients, up to sign, are the elementary symmetric polynomials defined above.
Given the power sums of the roots
the coefficien
|
https://en.wikipedia.org/wiki/FreeBSD%20jail
|
The jail mechanism is an implementation of FreeBSD's OS-level virtualisation that allows system administrators to partition a FreeBSD-derived computer system into several independent mini-systems called jails, all sharing the same kernel, with very little overhead. It is implemented through a system call, jail(2), as well as a userland utility, jail(8), plus, depending on the system, a number of other utilities. The functionality was committed into FreeBSD in 1999 by Poul-Henning Kamp after some period of production use by a hosting provider, and was first released with FreeBSD 4.0, thus being supported on a number of FreeBSD descendants, including DragonFly BSD, to this day.
History
The need for FreeBSD jails came from the desire of a small shared-environment hosting provider (R&D Associates, Inc., owned by Derrick T. Woolworth) to establish a clean, clear-cut separation between its own services and those of its customers, mainly for security and ease of administration (jail(8)). Instead of adding a new layer of fine-grained configuration options, the solution adopted by Poul-Henning Kamp was to compartmentalize the system – both its files and its resources – in such a way that only the right people are given access to the right compartments.
Jails were first introduced in FreeBSD version 4.0, which was released on . Most of the original functionality is supported on DragonFly, and several of the new features have been ported as well.
Goals
FreeBSD jails mainly aim at three goals:
Virtualization: Each jail is a virtual environment running on the host machine with its own files, processes, user and superuser accounts. From within a jailed process, the environment is almost indistinguishable from a real system.
Security: Each jail is sealed from the others, thus providing an additional level of security.
Ease of delegation: The limited scope of a jail allows system administrators to delegate several tasks which require superuser access without handing out
|
https://en.wikipedia.org/wiki/Video%20Disk%20Control%20Protocol
|
Video Disk Control Protocol (VDCP) is a proprietary communications protocol primarily used in broadcast automation to control hard disk video servers for broadcast television. VDCP was originally developed by Louth Automation and is commonly called the Louth Protocol. It was developed at the time when Hewlett Packard (eventually sold to Pinnacle Systems) and Tektronix were both bringing to market the first video file servers to be used in the broadcast industry. They contacted Louth Automation, which designed the communications protocol, basing it on the Sony protocols of both the Sony LMS storage device and the Sony VTR. The principal work was carried out by Ken Louth at Louth Automation.
VDCP uses a tightly coupled master-slave methodology. The controlling device takes the initiative in communications between the controlling broadcast automation device and the controlled device (video disk). VDCP conforms to the Open Systems Interconnection (OSI) reference model.
VDCP is a serial communications protocol based on RS-422. It is derived from the Sony 9-Pin Protocol, an industry-standard protocol for control of professional broadcast VTRs that is used in online editing.
Full details of the protocol are available from Harris Broadcast, which acquired Louth in 2000.
External links
Harris Broadcast
VDCP Launch press release
Broadcast engineering
Television technology
Digital television
Television terminology
|
https://en.wikipedia.org/wiki/Underglaze
|
Underglaze is a method of decorating pottery in which painted decoration is applied to the surface before it is covered with a transparent ceramic glaze and fired in a kiln. Because the glaze subsequently covers it, such decoration is completely durable, and it also allows the production of pottery with a surface that has a uniform sheen. Underglaze decoration uses pigments derived from oxides which fuse with the glaze when the piece is fired in a kiln. It is also a cheaper method, as only a single firing is needed, whereas overglaze decoration requires a second firing at a lower temperature.
Many historical styles, for example Persian mina'i ware, Japanese Imari ware, Chinese doucai and wucai, combine the two types of decoration. In such cases the first firing for the body, underglaze decoration and glaze is followed by the second firing after the overglaze enamels have been applied. However, because the main or glost firing is at a higher temperature than used in overglaze decoration, the range of colours available in underglaze is more limited, and was especially so for porcelain in historical times, as the firing temperature required for the porcelain body is especially high. Early porcelain was largely restricted to underglaze blue, and a range of browns and reds. Other colours turned black in a high-temperature firing.
Examples of oxides that do not lose their colour during a glost firing are the cobalt blue made famous by Chinese Ming dynasty blue and white porcelain and the cobalt and turquoise blues, pale purple, sage green, and bole red characteristic of İznik pottery – only some European centres knew how to achieve a good red. The painting styles used are covered at (among other articles): china painting, blue and white pottery, tin-glazed pottery, maiolica, Egyptian faience, Delftware. In modern times a wider range of underglaze colours are available.
An archaeological excavation at the Tongguan kiln site proved that the technology of underglaze colou
|