https://en.wikipedia.org/wiki/United%20Nations%20Permanent%20Forum%20on%20Indigenous%20Issues
|
The United Nations Permanent Forum on Indigenous Issues (UNPFII or PFII) is the UN's central coordinating body for matters relating to the concerns and rights of the world's indigenous peoples. There are more than 370 million indigenous people (also known as native, original, aboriginal and first peoples) in some 70 countries worldwide.
The forum was created in 2000 as an outcome of the UN's International Year for the World's Indigenous People in 1993, within the first International Decade of the World's Indigenous People (1995–2004). It is an advisory body within the framework of the United Nations System that reports to the UN's Economic and Social Council (ECOSOC).
History
Resolution 45/164 of the United Nations General Assembly was adopted on 18 December 1990, proclaiming that 1993 would be the International Year for the World's Indigenous People, "with a view to strengthening international cooperation for the solution of problems faced by indigenous communities in areas such as human rights, the environment, development, education and health". The year was launched in Australia by Prime Minister Paul Keating's memorable Redfern speech on 10 December 1992, in which he addressed Indigenous Australians' disadvantage.
The creation of the permanent forum was discussed at the 1993 World Conference on Human Rights in Vienna, Austria. The Vienna Declaration and Programme of Action recommended that such a forum should be established within the first United Nations International Decade of the World's Indigenous Peoples.
A working group was formed and various other meetings took place that led to the establishment of the permanent forum by Economic and Social Council Resolution 2000/22 on 28 July 2000.
Functions and operation
It submits recommendations to the Council on issues related to indigenous peoples. It holds a two-week session each year, which takes place at the United Nations Headquarters in New York City, but it could also take place in Geneva or any other p
|
https://en.wikipedia.org/wiki/UltraSPARC%20IV
|
The UltraSPARC IV Jaguar and follow-up UltraSPARC IV+ Panther are microprocessors designed by Sun Microsystems and manufactured by Texas Instruments. They are the fourth generation of UltraSPARC microprocessors, and implement the 64-bit SPARC V9 instruction set architecture (ISA). The UltraSPARC IV was originally to be succeeded by the UltraSPARC V Millennium, which was canceled after the announcement of the Niagara (now UltraSPARC T1) microprocessor in early 2004. It was instead succeeded by the Fujitsu-designed SPARC64 VI.
The UltraSPARC IV was developed as part of Sun's Throughput Computing initiative, which included the UltraSPARC V Millennium, Gemini and UltraSPARC T1 Niagara microprocessors. Of the four original designs in the initiative, two reached production: the UltraSPARC IV and the UltraSPARC T1. Whereas the Millennium and Niagara implemented block multithreading (also known as coarse-grained multithreading), the UltraSPARC IV implemented chip multiprocessing (CMP), combining multiple single-threaded cores on one chip.
The UltraSPARC IV was the first multi-core SPARC processor, released in March 2004. Internally, it implements two modified UltraSPARC III cores, and its physical packaging is identical to the UltraSPARC III with the exception of one pin. The UltraSPARC III cores were improved in a variety of ways. Instruction fetch, store bandwidth, and data prefetching were optimized. The floating-point adder implements additional hardware to handle more not a number (NaN) and underflow cases to avoid exceptions. Both cores share an L2 cache with a capacity of up to 16 MB, but each has its own L2 cache tags.
The UltraSPARC IV contains 66 million transistors and measures 22.1 mm by 16.1 mm (356 mm²). It was fabricated by Texas Instruments in their 0.13 μm process.
The UltraSPARC IV+, released in mid-2005, is also a dual-core design, featuring enhanced processor cores and an on-chip L2 cache. It is fabricated on a 90 nanometer manufacturing process. The initial speed of the Ult
|
https://en.wikipedia.org/wiki/Adductor%20canal
|
The adductor canal (also known as the subsartorial canal, or Hunter’s canal) is an aponeurotic tunnel in the middle third of the thigh giving passage to parts of the femoral artery, vein, and nerve. It extends from the apex of the femoral triangle to the adductor hiatus.
Structure
The adductor canal extends from the apex of the femoral triangle to the adductor hiatus. It is an intermuscular cleft situated on the medial aspect of the middle third of the anterior compartment of the thigh, and has the following boundaries:
Medial wall - sartorius.
Posterior wall - adductor longus and adductor magnus.
Anterior wall - vastus medialis.
It is covered by a strong aponeurosis which extends from the vastus medialis, across the femoral vessels to the adductor longus and magnus.
Lying on the aponeurosis is the sartorius (tailor's) muscle.
Contents
The canal contains the subsartorial artery (distal segment of the femoral artery), subsartorial vein (distal segment of the femoral vein), and branches of the femoral nerve (specifically, the saphenous nerve and the nerve to the vastus medialis). The femoral artery with its vein and the saphenous nerve enter this canal through the superior foramen. Then, the saphenous nerve and the descending genicular artery and vein exit through the anterior foramen, piercing the vastoadductor intermuscular septum. Finally, the femoral artery and vein exit via the inferior foramen (usually called the hiatus) through the inferior space between the oblique and medial heads of adductor magnus.
Clinical significance
The saphenous nerve may be compressed in the adductor canal. The adductor canal may be accessed for a saphenous nerve block, often used to treat pain caused by this compression.
History
The eponym 'Hunter’s canal' is named for John Hunter.
Additional Images
References
External links
- "Anterior and Medial Thigh Region: Sartorius Muscle and the Adductor Canal"
- "Anterior and Medial Thigh Region: Structures of the Adductor Canal"
Ana
|
https://en.wikipedia.org/wiki/Barker%20code
|
In telecommunication technology, a Barker code, or Barker sequence, is a finite sequence of digital values with the ideal autocorrelation property. It is used as a synchronising pattern between sender and receiver.
Explanation
Binary digits have very little meaning unless the significance of the individual digits is known. The transmission of a pre-arranged synchronising pattern of digits can enable a signal to be regenerated by a receiver with a low probability of error. In simple terms, it is equivalent to tying a label to one digit, after which others may be related by counting. This is achieved by transmitting a special pattern of digits which is unambiguously recognised by the receiver. The longer the pattern, the more accurately the data can be synchronised and errors due to distortion avoided. These patterns, called Barker sequences, are better known as Barker codes, after their inventor Ronald Hugh Barker. The process, described in "Group Synchronisation of Binary Digital Systems", was first published in 1953; it had initially been developed for radar, telemetry and digital speech encryption in the 1940s and 1950s.
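To make the "ideal autocorrelation property" concrete, the following small Python sketch (illustrative, not part of the original article) computes the aperiodic autocorrelation of the well-known length-7 Barker sequence; the peak equals the sequence length while every off-peak value has magnitude at most 1, which is what lets a receiver recognise the pattern unambiguously:

# Minimal sketch: aperiodic autocorrelation of the length-7 Barker sequence.
# The defining property is that every off-peak value has magnitude <= 1.

def autocorrelation(seq):
    """Return the aperiodic autocorrelation of seq for lags 0..len(seq)-1."""
    n = len(seq)
    return [sum(seq[i] * seq[i + k] for i in range(n - k)) for k in range(n)]

barker7 = [+1, +1, +1, -1, -1, +1, -1]    # length-7 Barker sequence

acf = autocorrelation(barker7)
print(acf)                                # [7, 0, -1, 0, -1, 0, -1]
assert acf[0] == len(barker7)             # peak equals the sequence length
assert all(abs(v) <= 1 for v in acf[1:])  # all sidelobes are 0 or -1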
Historical Background
During and after WWII digital technology became a key subject for research, e.g. for radar, missile and gun fire control, and encryption. In the 1950s scientists around the world were trying various methods to reduce errors in transmissions using codes and to synchronise the received data; the problems were transmission noise, time delay and the accuracy of the received data. In 1948 the mathematician Claude Shannon published an article, "A Mathematical Theory of Communication", which laid out the basic elements of communication. In it he discusses the problems of noise.
Shannon realised that “communication signals must be treated in isolation from the meaning of the messages that they transmit” and laid down the theoretical foundations for digital circuits. “The problem of communication was primarily viewed as a deterministic signal-reconstruction problem: how to transform a
|
https://en.wikipedia.org/wiki/King%27s%20graph
|
In graph theory, a king's graph is a graph that represents all legal moves of the king chess piece on a chessboard where each vertex represents a square on a chessboard and each edge is a legal move. More specifically, an m × n king's graph is the king's graph of an m × n chessboard. It is the map graph formed from the squares of a chessboard by making a vertex for each square and an edge for each two squares that share an edge or a corner. It can also be constructed as the strong product of two path graphs.
For an m × n king's graph the total number of vertices is mn and the number of edges is 4mn − 3(m + n) + 2. For a square n × n king's graph this simplifies so that the total number of vertices is n² and the total number of edges is 2(n − 1)(2n − 1).
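As a quick check on these counts, the following Python sketch (illustrative, not from the article) builds the m × n king's graph directly from the move rule and verifies the edge-count formula for a few board sizes:

# Sketch: build the m-by-n king's graph and check the counts given above.
from itertools import product

def king_graph_edges(m, n):
    """Return the edge set of the m x n king's graph.

    Vertices are board squares (x, y); two squares are adjacent when they
    differ by at most 1 in each coordinate (a king's move).
    """
    edges = set()
    for x, y in product(range(m), range(n)):
        for dx, dy in product((-1, 0, 1), repeat=2):
            if (dx, dy) == (0, 0):
                continue
            nx, ny = x + dx, y + dy
            if 0 <= nx < m and 0 <= ny < n:
                edges.add(frozenset({(x, y), (nx, ny)}))
    return edges

for m, n in [(3, 5), (8, 8), (4, 7)]:
    e = len(king_graph_edges(m, n))
    assert e == 4 * m * n - 3 * (m + n) + 2   # edge-count formula
print("edge formula verified")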
The neighbourhood of a vertex in the king's graph corresponds to the Moore neighborhood for cellular automata.
A generalization of the king's graph, called a kinggraph, is formed from a squaregraph (a planar graph in which each bounded face is a quadrilateral and each interior vertex has at least four neighbors) by adding the two diagonals of every quadrilateral face of the squaregraph.
In the drawing of a king's graph obtained from an m × n chessboard, there are (m − 1)(n − 1) crossings (one for each pair of diagonals within a unit square of the board), but it is possible to obtain a drawing with fewer crossings by connecting the two nearest neighbors of each corner square by a curve outside the chessboard instead of by a diagonal line segment. In this way, (m − 1)(n − 1) − 4 crossings are always possible. For the king's graph of small chessboards, other drawings lead to even fewer crossings; in particular every 2 × n king's graph is a planar graph. However, when both m and n are at least four, and they are not both equal to four, (m − 1)(n − 1) − 4 is the optimal number of crossings.
See also
Knight's graph
Queen's graph
Rook's graph
Bishop's graph
Lattice graph
References
Mathematical chess problems
Parametric families of graphs
|
https://en.wikipedia.org/wiki/Knight%27s%20graph
|
In graph theory, a knight's graph, or a knight's tour graph, is a graph that represents all legal moves of the knight chess piece on a chessboard. Each vertex of this graph represents a square of the chessboard, and each edge connects two squares that are a knight's move apart from each other.
More specifically, an m × n knight's graph is the knight's graph of an m × n chessboard.
Its vertices can be represented as the points of the Euclidean plane whose Cartesian coordinates (x, y) are integers with 1 ≤ x ≤ m and 1 ≤ y ≤ n (the points at the centers of the chessboard squares), and with two vertices connected by an edge when their Euclidean distance is √5.
For an m × n knight's graph, the number of vertices is mn. If m > 1 and n > 1 then the number of edges is 4mn − 6(m + n) + 8 (otherwise there are no edges). For an n × n knight's graph, these simplify so that the number of vertices is n² and the number of edges is 4(n − 1)(n − 2).
A Hamiltonian cycle on the knight's graph is a (closed) knight's tour. A chessboard with an odd number of squares has no tour, because the knight's graph is a bipartite graph (each color of squares can be used as one of two independent sets, and knight moves always change square color) and only bipartite graphs with an even number of vertices can have Hamiltonian cycles. Most chessboards with an even number of squares have a knight's tour; Schwenk's theorem provides an exact listing of which ones do and which do not.
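The bipartition by square colour can be checked directly. The following Python sketch (illustrative, not from the article) builds the knight's graph from the move rule, verifies the edge-count formula above, and confirms that every edge joins squares of opposite colour:

# Sketch: build the m-by-n knight's graph, check the edge-count formula,
# and confirm that every knight move changes square colour (bipartiteness).
from itertools import product

KNIGHT_MOVES = [(1, 2), (2, 1), (-1, 2), (-2, 1), (1, -2), (2, -1), (-1, -2), (-2, -1)]

def knight_graph_edges(m, n):
    edges = set()
    for x, y in product(range(m), range(n)):
        for dx, dy in KNIGHT_MOVES:
            nx, ny = x + dx, y + dy
            if 0 <= nx < m and 0 <= ny < n:
                edges.add(frozenset({(x, y), (nx, ny)}))
    return edges

for m, n in [(3, 4), (5, 5), (8, 8)]:
    edges = knight_graph_edges(m, n)
    assert len(edges) == 4 * m * n - 6 * (m + n) + 8          # formula for m, n > 1
    # a knight's move always goes from a light square to a dark square or vice versa
    assert all((x + y) % 2 != (u + v) % 2 for ((x, y), (u, v)) in map(tuple, edges))
print("edge formula and colour-alternation verified")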
When a 4 × 4 chessboard is modified to have toroidal boundary conditions (meaning that a knight is not blocked by the edge of the board, but instead continues onto the opposite edge), its knight's graph is the same as the four-dimensional hypercube graph.
See also
King's graph
Queen's graph
Rook's graph
Bishop's graph
References
External links
Mathematical chess problems
Parametric families of graphs
|
https://en.wikipedia.org/wiki/UML%20Partners
|
UML Partners was a consortium of system integrators and vendors convened in 1996 to specify the Unified Modeling Language (UML). Initially the consortium was led by Grady Booch, Ivar Jacobson, and James Rumbaugh of Rational Software. The UML Partners' UML 1.0 specification draft was proposed to the Object Management Group (OMG) in January 1997.
During the same month the UML Partners formed a Semantics Task Force, chaired by Cris Kobryn, to finalize the semantics of the specification and integrate it with other standardization efforts. The result of this work, UML 1.1, was submitted to the OMG in August 1997 and adopted by the OMG in November 1997.
Member list
Members of the consortium include:
Digital Equipment Corporation
Hewlett-Packard
i-Logix
IBM
ICON Computing
IntelliCorp
MCI Systemhouse
Microsoft
ObjecTime
Oracle Corporation
Platinum Technology
Ptech
Rational Software
Reich Technologies
Softeam
Taskon
Texas Instruments
Unisys
See also
Unified Modeling Language
object-oriented language
References
External links
2001: A Standardization Odyssey PDF document
UML 1.0 - 1.1 and the UML partners
Unified Modeling Language
Information technology organizations
|
https://en.wikipedia.org/wiki/Agent%20architecture
|
Agent architecture in computer science is a blueprint for software agents and intelligent control systems that depicts the arrangement of their components. The architectures implemented by intelligent agents are referred to as cognitive architectures. The term agent is a conceptual idea rather than a precisely defined one; an agent typically consists of facts, a set of goals and sometimes a plan library.
Types
Reactive architectures
Subsumption
Deliberative reasoning architectures
Procedural reasoning system (PRS)
Layered/hybrid architectures
3T
AuRA
Brahms
GAIuS
GRL
ICARUS
InteRRaP
TinyCog
TouringMachines
Cognitive architectures
ASMO
Soar
ACT-R
Brahms
LIDA
PreAct
Cougaar
PRODIGY
FORR
See also
Action selection
Cognitive architecture
Real-time Control System
References
Software architecture
Robot architectures
|
https://en.wikipedia.org/wiki/Global%20Communications%20Conference
|
The Global Communications Conference (GLOBECOM) is an annual international academic conference organised by the Institute of Electrical and Electronics Engineers' Communications Society. The first GLOBECOM was organised by the Communications Society's predecessor in 1957, under the full name "National Symposium on Global Communications". The seventh GLOBECOM, in 1965, was called the "IEEE Communications Convention"; after that the conference was renamed the International Conference on Communications (ICC) and GLOBECOM was no longer organised.
By 1982, the need for a second annual international conference on communications was apparent, and so the IEEE National Telecommunications Conference was re-organised to be international in scope, and renamed to the "Global Communications Conference", resurrecting the GLOBECOM acronym. GLOBECOM has been held annually since.
Recent GLOBECOMs have been attended by about 1,500 people. The IEEE has more than 400,000 members in 150 countries.
Past and Upcoming Conferences
See also
1912 London International Radiotelegraphic Convention
Communications
References
IEEE conferences
Telecommunication conferences
Computer networking conferences
|
https://en.wikipedia.org/wiki/Fidelipac
|
The Fidelipac, commonly known as a "NAB cartridge" or simply "cart", is a magnetic tape sound recording format, used for radio broadcasting for playback of material over the air such as radio commercials, jingles, station identifications, and music, and for indoor background music. Fidelipac is the official name of this industry standard audio tape cartridge. It was developed in 1954 by inventor George Eash (although the invention of the Fidelipac cartridge has also been credited to Vern Nolte of the Automatic Tape Company), and commercially introduced in 1959 by Collins Radio Co. at the 1959 NAB Convention. The cartridge was often used at radio stations until the late 1990s, when such formats as MiniDisc and computerized broadcast automation predominated.
History
The Fidelipac cartridge was the first audio tape cartridge available commercially, based on the endless-loop tape cartridge design developed by Bernard Cousino in 1952, while Eash shared space in Cousino's electronics shop in the early 1950s. Instead of manufacturing the Fidelipac format himself after developing it, Eash decided to license it for manufacture to Telepro Industries, in Cherry Hill, New Jersey. Telepro then manufactured and marketed the format under the Fidelipac brand name.
Tape format
Fidelipac was originally a two-track analog recording tape format. One of the tracks was used for monaural program audio, and the other was used as a cue track to control the player: either a primary cue tone was recorded to automatically stop the cart, a secondary tone was recorded to automatically re-cue the cart to the beginning of the cart's program material (in some models, two secondary tones, one after the program material and one before it, were recorded so that the cart machine would automatically fast-forward through any leftover blank tape at the end of a cart's program), or a tertiary tone was recorded, which was used by some players to trigger another cart player or another form of external equipme
|
https://en.wikipedia.org/wiki/Probe%20effect
|
Probe effect is an unintended alteration in system behavior caused by measuring that system. In code profiling and performance measurements, the delays introduced by insertion or removal of code instrumentation may result in a non-functioning application, or unpredictable behavior.
Examples
In electronics, by attaching a multimeter, oscilloscope, or other testing device via a test probe, small amounts of capacitance, resistance, or inductance may be introduced. Though good scopes have very slight effects, in sensitive circuitry these can lead to unexpected failures, or conversely, unexpected fixes to failures.
In debugging of parallel computer programs, failures (such as deadlocks) are sometimes not present when the debugger's code (which was meant to help find the reason for the deadlocks by visualising points of interest in the program code) is attached to the program. This is because the additional code changes the timing of execution of the parallel processes, and because of that the deadlocks are avoided. This type of bug is known colloquially as a Heisenbug, by analogy with the observer effect in quantum mechanics.
See also
Observer effect (physics)
Observer's paradox
Sources
Software testing
Debugging
|
https://en.wikipedia.org/wiki/Complement%20fixation%20test
|
The complement fixation test is an immunological medical test that can be used to detect the presence of either specific antibody or specific antigen in a patient's serum, based on whether complement fixation occurs. It was widely used to diagnose infections, particularly with microbes that are not easily detected by culture methods, and in rheumatic diseases. However, in clinical diagnostics labs it has been largely superseded by other serological methods such as ELISA and by DNA-based methods of pathogen detection, particularly PCR.
Process
The complement system is a system of serum proteins that react with antigen-antibody complexes. If this reaction occurs on a cell surface, it will result in the formation of trans-membrane pores and therefore destruction of the cell. The basic steps of a complement fixation test are as follows:
Serum is separated from the patient.
Patients naturally have different levels of complement proteins in their serum. To negate any effects this might have on the test, the complement proteins in the patient's serum must be destroyed and replaced by a known amount of standardized complement proteins.
The serum is heated in such a way that all of the complement proteins—but none of the antibodies—within it are destroyed. (This is possible because complement proteins are much more susceptible to destruction by heat than antibodies.)
A known amount of standard complement proteins are added to the serum. (These proteins are frequently obtained from guinea pig serum.)
The antigen of interest is added to the serum.
Sheep red blood cells (sRBCs) which have been pre-bound to anti-sRBC antibodies are added to the serum. The test is considered negative if the solution turns pink at this point and positive otherwise.
If the patient's serum contains antibodies against the antigen of interest, they will bind to the antigen added in the previous step to form antigen-antibody complexes. The complement proteins will react with these complexes and be depleted. Thus
|
https://en.wikipedia.org/wiki/String%20bog
|
A string bog or string mire is a bog consisting of slightly elevated ridges and islands, with woody plants, alternating with flat, wet sedge mat areas. String bogs occur on slightly sloping surfaces, with the ridges at right angles to the direction of water flow. They are an example of patterned vegetation.
String bogs are also known as aapa moors or aapa mires (from Finnish aapasuo) or Strangmoor (from the German).
A string bog has a pattern of narrow (2–3 m wide), low (less than 1 m high) ridges oriented at right angles to the direction of drainage with wet depressions or pools occurring between the ridges. The water and peat are very low in nutrients because the water has been derived from other ombrotrophic wetlands, which receive all of their water and nutrients from precipitation, rather than from streams or springs. The peat thickness is greater than 1 m.
String bogs are features associated with periglacial climates, where the climate results in long periods of sub-zero temperatures. The active layer exists as frozen ground for long periods and melts in the spring thaw. Slow melting results in characteristic mass movement processes and features associated with specific periglacial environments.
See also
Blanket bog
Flark
Marsh
References
Canadian Soil Information Service - Local Surface Forms (checked 2014-10-18)
String bog
Ecology
|
https://en.wikipedia.org/wiki/Sunrise%20equation
|
The sunrise equation or sunset equation can be used to derive the time of sunrise or sunset for any solar declination and latitude in terms of local solar time when sunrise and sunset actually occur.
Formulation
It is formulated as:
cos ω0 = −tan φ × tan δ
where:
ω0 is the solar hour angle at either sunrise (when negative value is taken) or sunset (when positive value is taken);
φ is the latitude of the observer on the Earth;
δ is the sun declination.
Principles
The Earth rotates at an angular velocity of 15°/hour. Therefore, the expression ω0 / 15°, where ω0 is in degrees, gives the interval of time in hours from sunrise to local solar noon or from local solar noon to sunset.
The sign convention is typically that the observer latitude is 0 at the equator, positive for the Northern Hemisphere and negative for the Southern Hemisphere, and the solar declination is 0 at the vernal and autumnal equinoxes when the sun is exactly above the equator, positive during the Northern Hemisphere summer and negative during the Northern Hemisphere winter.
The expression above is always applicable for latitudes between the Arctic Circle and Antarctic Circle. North of the Arctic Circle or south of the Antarctic Circle, there is at least one day of the year with no sunrise or sunset. Formally, there is a sunrise or sunset when −90° + δ < φ < 90° − δ during the Northern Hemisphere summer, and when −90° − δ < φ < 90° + δ during the Northern Hemisphere winter. For locations outside these latitudes, it is either 24-hour daytime or 24-hour nighttime.
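As an illustration of the principle above, the following Python sketch (not part of the article; it assumes the simple form of the equation and ignores atmospheric refraction and the angular size of the solar disc) computes the sunset hour angle and the resulting day length, clamping the cosine so that polar day and polar night are handled:

# Minimal sketch: solve cos(omega0) = -tan(phi) * tan(delta) for the hour angle
# and convert it to a day length, clamping to [-1, 1] for polar day/night.
import math

def day_length_hours(latitude_deg, declination_deg):
    """Hours of daylight for a given latitude and solar declination."""
    phi = math.radians(latitude_deg)
    delta = math.radians(declination_deg)
    cos_omega0 = -math.tan(phi) * math.tan(delta)
    cos_omega0 = max(-1.0, min(1.0, cos_omega0))       # clamp for polar day/night
    omega0_deg = math.degrees(math.acos(cos_omega0))   # sunset hour angle, degrees
    return 2.0 * omega0_deg / 15.0                     # Earth rotates 15 degrees per hour

print(day_length_hours(0.0, 23.44))    # equator at the June solstice: ~12 h
print(day_length_hours(52.0, 23.44))   # mid-latitude summer: ~16.5 h
print(day_length_hours(80.0, 23.44))   # above the Arctic Circle in summer: 24 h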
Expressions for the solar hour angle
In the equation given at the beginning, the cosine function on the left side gives results in the range [−1, 1], but the value of the expression on the right side is in the range (−∞, ∞). An applicable expression for ω0 in the format of Fortran 90 is as follows:
omegao = acos(max(min(-tan(delta*rpd)*tan(phi*rpd), 1.0), -1.0))*dpr
where omegao is ω0 in degrees, delta is δ in degrees, phi is φ in degrees, rpd is equal to π/180 (the factor converting degrees to radians), and dpr is equal to 180/π (the factor converting radians to degrees).
The above expression gives results in degree in t
|
https://en.wikipedia.org/wiki/Explosives%20safety
|
Explosives safety originated as a formal program in the United States in the aftermath of World War I when several ammunition storage areas were destroyed in a series of mishaps. The most serious occurred at Picatinny Arsenal Ammunition Storage Depot, New Jersey, in July 1926, when an electrical storm led to fires that caused explosions and widespread destruction. The severe property damage and 19 fatalities led Congress to empower a board of Army and Naval officers to investigate the Picatinny Arsenal disaster and determine if similar conditions existed at other ammunition depots. The board reported in its findings that this mishap could recur, prompting Congress to establish a permanent board of colonels to develop explosives safety standards and ensure compliance beginning in 1928. This organization evolved into the Department of Defense Explosives Safety Board (DDESB) and is chartered in Title 10 of the US Code. The DDESB authors Defense Explosives Safety Regulation (DESR) 6055.09, which establishes the explosives safety standards for the Department of Defense. The DDESB also evaluates scientific data which may adjust those standards, reviews and approves all explosives site plans for new construction, and conducts worldwide visits to locations containing US title munitions. The cardinal principle of explosives safety is to expose the minimum number of people for the minimum time to the minimum amount of explosives.
US Air Force
The United States Air Force counterpart to the DDESB is the Air Force Safety Center (AFSEC/SEW). Similar safety functions are found at major command headquarters, intermediate command headquarters, and installation weapons safety offices, culminating with unit-level explosives safety programs. The current Air Force regulation governing explosives safety is Air Force Manual (AFMAN) 91-201. AFMAN 91-201 was developed using DESR 6055.09 as a parent regulation, and in most cases follows the limitations set forth in the DESR (excluding mission-sp
|
https://en.wikipedia.org/wiki/Phase%20portrait
|
In mathematics, a phase portrait is a geometric representation of the orbits of a dynamical system in the phase plane. Each set of initial conditions is represented by a different point or curve.
Phase portraits are an invaluable tool in studying dynamical systems. They consist of a plot of typical trajectories in the phase space. This reveals information such as whether an attractor, a repellor or a limit cycle is present for the chosen parameter value. The concept of topological equivalence is important in classifying the behaviour of systems by specifying when two different phase portraits represent the same qualitative dynamic behavior. An attractor is a stable point, also called a "sink"; a repellor is an unstable point, also known as a "source".
A phase portrait graph of a dynamical system depicts the system's trajectories (with arrows), stable steady states (with dots) and unstable steady states (with circles) in a phase space. The axes correspond to the state variables.
Examples
Simple pendulum, see picture (right).
Simple harmonic oscillator where the phase portrait is made up of ellipses centred at the origin, which is a fixed point.
Damped harmonic motion, see animation (right).
Van der Pol oscillator see picture (bottom right).
Visualizing the behavior of ordinary differential equations
A phase portrait represents the directional behavior of a system of ordinary differential equations (ODEs). The phase portrait can indicate the stability of the system.
The phase portrait behavior of a system of ODEs can be determined by the eigenvalues, or by the trace and determinant (trace = λ1 + λ2, determinant = λ1 × λ2), of the system.
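As a small illustration (not from the article), the following Python sketch classifies the fixed point of a two-dimensional linear system x' = Ax from the trace and determinant of A, reproducing the sink/source/saddle/centre distinctions described above:

# Sketch: classify the fixed point at the origin of x' = A x, A = [[a, b], [c, d]].
def classify_fixed_point(a, b, c, d):
    """Rough phase-portrait classification from trace and determinant."""
    trace = a + d                  # lambda_1 + lambda_2
    det = a * d - b * c            # lambda_1 * lambda_2
    disc = trace ** 2 - 4 * det    # discriminant of the characteristic polynomial
    if det < 0:
        return "saddle"
    if det == 0:
        return "degenerate (at least one zero eigenvalue)"
    if trace == 0:
        return "centre (closed orbits, e.g. simple harmonic oscillator)"
    kind = "node" if disc >= 0 else "spiral"
    stability = "stable (sink)" if trace < 0 else "unstable (source)"
    return stability + " " + kind

print(classify_fixed_point(0, 1, -1, 0))       # undamped oscillator -> centre
print(classify_fixed_point(0, 1, -1, -0.5))    # damped oscillator -> stable spiral
print(classify_fixed_point(1, 0, 0, 2))        # -> unstable node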
See also
Phase space
Phase plane
References
External links
Linear Phase Portraits, an MIT Mathlet.
Dynamical systems
Diagrams
|
https://en.wikipedia.org/wiki/List%20of%20long%20marriages
|
This is a list of long marriages. It includes marriages extending over at least 80 years.
Background
A study by Robert and Jeanette Lauer, reported in the Journal of Family Issues, conducted on 40 sets of spouses married for at least 50 years, concluded that the long-term married couples received high scores on the Locke-Wallace marital satisfaction test and were closely aligned on how their marriages were doing. In a 1979 study on about 55 couples in marriages with an average length of 55.5 years, couples said their marriages lasted so long because of mutual devotion and special regard for each other. Couples who have been married for a long time have a lower likelihood of divorcing because "common economic interests and friendship networks increase over time" and during stress can assist in sustaining the relationship.
Another study found that people in long marriages are wedded to the idea of "marital permanency" in which "They don't see divorce as an option". Sociologist Pepper Schwartz, the American Association of Retired Persons's relationships authority, said that it was helpful to have a spouse who is quick to recover when there are surprises in life.
A study of 1,152 couples who had been married for over 50 years found that they attributed their long marriages to faith in each other, love, ability to make concessions, admiration for each other, reliance on each other, children, and strong communication. Bowling Green State University (BGSU)'s National Center for Family and Marriage Research found that 7% of American marriages last at least 50 years.
Recording longest marriages
The longest marriage recorded (although not officially recognized) is a granite wedding anniversary (90 years) between Karam and Kartari Chand, who both lived in the United Kingdom, but were married in India. Karam and Kartari Chand married in 1925 and died in 2016 and 2019 respectively.
Guinness World Records published its first edition in 1955. In the 1984 to 1998 editions, the
|
https://en.wikipedia.org/wiki/Burst%20phase
|
Burst phase is the first ten cycles of colorburst in the back porch of the synchronising pulse in the PAL (Phase Alternating Line) broadcast television systems format. The frequency of this burst is 4.43361875 MHz; it is precise to 0.5 Hz, and is used as the reference frequency to synchronise the local oscillators of the colour decoder in a PAL television set.
This colorburst is sometimes called a "swinging burst", since it swings plus or minus 45 degrees line by line (hence the expression "phase alternating line"). This swing is used to set the centre frequency of the colour reference oscillator in the decoder. The swing of the burst phase distinguishes PAL from non-PAL lines, and produces the IDENT signal at 7.8 kHz, half the line frequency of 15,625 Hz.
As in the NTSC system, U and V are used to modulate the color subcarrier using two balanced modulators operating in phase quadrature: one modulator is driven by the subcarrier at sine phase; the other modulator is driven by the subcarrier at cosine phase. The outputs of the modulators are added together to form the modulated chrominance signal:
C = U sin ωt ± V cos ωt, where ω = 2πFSC
FSC = 4.43361875 MHz (±5 Hz) for (B, D, G, H, I, N) PAL
FSC = 3.58205625 MHz (±5 Hz) for (NC) PAL
FSC = 3.57561143 MHz (±10 Hz) for (M) PAL
In PAL, the phase of V is reversed every other line. V was chosen for the reversal process since it has a lower gain factor than U and therefore is less susceptible to a one-half FH switching rate imbalance. The result of alternating the V phase at the line rate is that any color subcarrier phase errors produce complementary errors, allowing line-to-line averaging at the receiver to cancel the errors and generate the correct hue with slightly reduced saturation. This technique requires the PAL receiver to be able to determine the correct V phase. This is done using a technique known as AB sync, PAL sync, PAL switch, or swinging burst, consisting of alternating the phase of the color burst ten cycles long, by ±45° at the line rate
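The following Python sketch (illustrative only; the sample U/V values are arbitrary, and the 135°/225° burst phases, i.e. 180° ± 45°, follow common PAL practice rather than the excerpt above) shows the quadrature modulation with the V component and the colour-burst phase alternating from line to line:

# Sketch: PAL quadrature chrominance C = U sin(wt) +/- V cos(wt) with the
# V sign (the PAL "switch") and the swinging burst alternating per line.
import math

F_SC = 4.43361875e6            # PAL B/G/H/I subcarrier frequency, Hz

def chroma_sample(u, v, t, line):
    """Chrominance sample at time t, with V negated on alternate lines."""
    w = 2 * math.pi * F_SC
    v_sign = 1 if line % 2 == 0 else -1
    return u * math.sin(w * t) + v_sign * v * math.cos(w * t)

def burst_phase_deg(line):
    """Swinging burst: 180 degrees plus or minus 45 degrees, line by line."""
    return 135.0 if line % 2 == 0 else 225.0

for line in range(4):
    print(line, burst_phase_deg(line), round(chroma_sample(0.3, 0.4, 1e-7, line), 3))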
|
https://en.wikipedia.org/wiki/Supervisory%20control
|
Supervisory control is a general term for control of many individual controllers or control loops, such as within a distributed control system. It refers to a high level of overall monitoring of individual process controllers, which is not necessary for the operation of each controller, but gives the operator an overall plant process view, and allows integration of operation between controllers.
A more specific use of the term is for a Supervisory Control and Data Acquisition system or SCADA, which refers to a specific class of system for use in process control, often on fairly small and remote applications such as a pipeline transport, water distribution, or wastewater utility system station.
Forms
Supervisory control often takes one of two forms. In one, the controlled machine or process continues autonomously. It is observed from time to time by a human who, when deeming it necessary, intervenes to modify the control algorithm in some way. In the other, the process accepts an instruction, carries it out autonomously, reports the results and awaits further commands. With manual control, the operator interacts directly with a controlled process or task using switches, levers, screws, valves etc., to control actuators. This concept was incorporated in the earliest machines which sought to extend the physical capabilities of man. In contrast, with automatic control, the machine adapts to changing circumstances and makes decisions in pursuit of some goal which can be as simple as switching a heating system on and off to maintain a room temperature within a specified range. Sheridan defines supervisory control as follows: "in the strictest sense, supervisory control means that one or more human operators are intermittently programming and continually receiving information from a computer that itself closes an autonomous control loop through artificial effectors to the controlled process or task environment."
Other points
Robotics applications have traditionally a
|
https://en.wikipedia.org/wiki/Control%20loop
|
A control loop is the fundamental building block of control systems in general and industrial control systems in particular. It consists of the process sensor, the controller function, and the final control element (FCE), which together control the process as necessary to automatically adjust the value of a measured process variable (PV) to equal the value of a desired set-point (SP).
There are two common classes of control loop: open loop and closed loop. In an open-loop control system, the control action from the controller is independent of the process variable. An example of this is a central heating boiler controlled only by a timer. The control action is the switching on or off of the boiler. The process variable is the building temperature. This controller operates the heating system for a constant time regardless of the temperature of the building.
In a closed-loop control system, the control action from the controller is dependent on the desired and actual process variable. In the case of the boiler analogy, this would utilize a thermostat to monitor the building temperature, and feed back a signal to ensure the controller output maintains the building temperature close to that set on the thermostat. A closed-loop controller has a feedback loop which ensures the controller exerts a control action to control a process variable at the same value as the setpoint. For this reason, closed-loop controllers are also called feedback controllers.
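As a minimal sketch of the closed-loop idea (hypothetical numbers, not from the article), the following Python simulation implements the thermostat-and-boiler example: the controller output depends on the measured temperature, so the building temperature is held in a narrow band around the set-point:

# Sketch: on/off (thermostat) closed-loop control of the boiler example.
def simulate_thermostat(setpoint=20.0, outside=5.0, hours=12, dt=0.1):
    temp = outside                   # building starts at the outside temperature
    boiler_on = False
    history = []
    t = 0.0
    while t < hours:
        # feedback: the control action is a function of the measured PV
        if temp < setpoint - 0.5:    # simple on/off control with 1 degree hysteresis
            boiler_on = True
        elif temp > setpoint + 0.5:
            boiler_on = False
        heating = 8.0 if boiler_on else 0.0    # degrees/hour added by the boiler
        loss = 0.2 * (temp - outside)          # degrees/hour lost to the outside
        temp += (heating - loss) * dt
        history.append((round(t, 1), round(temp, 2), boiler_on))
        t += dt
    return history

print(simulate_thermostat()[-3:])    # temperature cycles close to the 20 degree set-point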
Open-loop and closed-loop
Fundamentally, there are two types of control loop: open-loop control (feedforward), and closed-loop control (feedback).
In open-loop control, the control action from the controller is independent of the "process output" (or "controlled process variable"). A good example of this is a central heating boiler controlled only by a timer, so that heat is applied for a constant time, regardless of the temperature of the building. The control action is the switching on/off of the boiler, but the controlled vari
|
https://en.wikipedia.org/wiki/Engine%20shaft
|
For mine construction, an engine shaft is a mine shaft used for the purpose of pumping, irrespective of the prime mover.
See also
Outline of mining
References
Underground mining
|
https://en.wikipedia.org/wiki/Golden%20triangle%20%28mathematics%29
|
A golden triangle, also called a sublime triangle, is an isosceles triangle in which the duplicated side is in the golden ratio to the base side: a/b = φ = (1 + √5)/2 ≈ 1.618, where a is the length of each duplicated (equal) side and b is the length of the base.
Angles
The vertex angle is:
θ = 2 arcsin(b/(2a)) = 2 arcsin(1/(2φ)) = π/5 rad = 36°.
Hence the golden triangle is an acute (isosceles) triangle.
Since the angles of a triangle sum to π radians, each of the base angles (CBX and CXB) is:
β = (π − π/5)/2 = 2π/5 rad = 72°.
Note:
The golden triangle is uniquely identified as the only triangle to have its three angles in the ratio 1 : 2 : 2 (36°, 72°, 72°).
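A quick numerical check (not part of the article): by the law of sines, the ratio of a duplicated side to the base of the 36°-72°-72° triangle is sin 72° / sin 36° = 2 cos 36°, which is exactly the golden ratio:

# Sketch: verify that the leg-to-base ratio of the 36-72-72 triangle equals phi.
import math

phi = (1 + math.sqrt(5)) / 2
ratio = math.sin(math.radians(72)) / math.sin(math.radians(36))   # law of sines
print(ratio, phi)                                                  # both ~1.6180339887
assert math.isclose(ratio, phi, rel_tol=1e-12)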
In other geometric figures
Golden triangles can be found in the spikes of regular pentagrams.
Golden triangles can also be found in a regular decagon, an equiangular and equilateral ten-sided polygon, by connecting any two adjacent vertices to the center. This is because: 180(10−2)/10 = 144° is the interior angle, and bisecting it through the vertex to the center: 144/2 = 72°.
Also, golden triangles are found in the nets of several stellations of dodecahedrons and icosahedrons.
Logarithmic spiral
The golden triangle is used to form some points of a logarithmic spiral. By bisecting one of the base angles, a new point is created that in turn, makes another golden triangle. The bisection process can be continued indefinitely, creating an infinite number of golden triangles. A logarithmic spiral can be drawn through the vertices. This spiral is also known as an equiangular spiral, a term coined by René Descartes. "If a straight line is drawn from the pole to any point on the curve, it cuts the curve at precisely the same angle," hence equiangular. This spiral is different from the golden spiral: the golden spiral grows by a factor of the golden ratio in each quarter-turn, whereas the spiral through these golden triangles takes an angle of 108° to grow by the same factor.
Golden gnomon
Closely related to the golden triangle is the golden gnomon, which is the isosceles triangle in which the ratio of the equal side lengths to the base length is the reciprocal of the golden ratio, 1/φ.
"The golden triangle h
|
https://en.wikipedia.org/wiki/Enterprise%20Volume%20Management%20System
|
Enterprise Volume Management System (EVMS) was a flexible, integrated volume management software used to manage storage systems under Linux.
Its features include:
Handle EVMS, Linux LVM and LVM2 volumes
Handle many kinds of disk partitioning schemes
Handle many different file systems (Ext2, Ext3, FAT, JFS, NTFS, OCFS2, OpenGFS, ReiserFS, Swap, XFS etc.)
Multi-disk (MD) management
Software RAID: level 0, 1, 4 and 5 (no support for level 6 and 10)
Drive linking (device concatenation)
Multipath I/O support
Manage shared cluster storage
Expand and shrink volumes and file systems online or offline (depending on the file system's capabilities)
Snapshots (frozen images of volumes), optionally writable
Conversion between different volume types
Move partitions
Make, check and repair file systems
Bad block relocation
Three types of user interface: GUI, text mode interface and CLI
Backup and restore the EVMS metadata
EVMS is licensed under the GNU General Public License version 2 or later. EVMS is supported in some Linux distributions; as of 2008 these included SUSE, Debian and Gentoo.
LVM vs EVMS
For a while, both LVM and EVMS were competing for inclusion in the mainline kernel. EVMS had more features and better userland tools, but the internals of LVM were more attractive to kernel developers, so in the end LVM won the battle for inclusion. In response, the EVMS team decided to concentrate on porting the EVMS userland tools to work with the LVM kernelspace.
Sometime after the release of version 2.5.5 on February 26, 2006, IBM discontinued development of the project. There have been no further releases. In 2008 Novell announced that the company would be moving from EVMS to LVM in future editions of their SUSE products, while continuing to fully support customers using EVMS.
References
External links
Free software programmed in C
Free system software
Volume manager
Linux file system-related software
|
https://en.wikipedia.org/wiki/Building%20implosion
|
In the controlled demolition industry, building implosion is the strategic placing of explosive material and timing of its detonation so that a structure collapses on itself in a matter of seconds, minimizing the physical damage to its immediate surroundings. Despite its terminology, building implosion also includes the controlled demolition of other structures, such as bridges, smokestacks, towers, and tunnels.
Building implosion, which reduces to seconds a process which could take months or years to achieve by other methods, typically occurs in urban areas and often involves large landmark structures.
The actual use of the term "implosion" to refer to the destruction of a building is a misnomer, as was noted at the destruction of the 1515 Tower in West Palm Beach, Florida: "What happens is, you use explosive materials in critical structural connections to allow gravity to bring it down."
Terminology
The term building implosion can be misleading to a layperson: The technique is not a true implosion phenomenon. A true implosion usually involves a difference between internal (lower) and external (higher) pressure, or inward and outward forces, that is so large that the structure collapses inward into itself.
In contrast, building implosion techniques do not rely on the difference between internal and external pressure to collapse a structure. Instead, the goal is to induce a progressive collapse by weakening or removing critical supports; therefore, the building can no longer withstand gravity loads and will fail under its own weight.
Numerous small explosives, strategically placed within the structure, are used to catalyze the collapse. Nitroglycerin, dynamite, or other explosives are used to shatter reinforced concrete supports. Linear shaped charges are used to sever steel supports. These explosives are progressively detonated on supports throughout the structure. Then, explosives on the lower floors initiate the controlled collapse.
A simple structure
|
https://en.wikipedia.org/wiki/Docking%20sleeve
|
In mechanical engineering, a docking sleeve or mounting boss is a tube or enclosure used to couple two mechanical components together, or for chilling, or to retain two components together; this permits two equally sized appendages to be connected via insertion and fixing within the construction. Docking sleeves may be physically solid or flexible, their implementation varying widely according to the required application of the device. The most common application is the plastic appendage that receives a screw in order to attach two parts.
References
Mechanical engineering
|
https://en.wikipedia.org/wiki/Time%20Gal
|
Time Gal is an interactive movie video game developed and published by Taito and Toei Company, and originally released as a laserdisc game in Japan for the arcades in 1985. It is an action game which uses full motion video (FMV) to display the on-screen action. The player must correctly choose the on-screen character's actions to progress the story. The pre-recorded animation for the game was produced by Toei Company.
The game is set in a fictional future where time travel is possible. The protagonist, Reika, travels to different time periods in search of a criminal, Luda, from her time. After successfully tracking down Luda, Reika prevents his plans to alter the past. Time Gal was inspired by the success of earlier laserdisc video games that used pre-recorded animation, including Dragon's Lair (1983) and the previous Taito/Toei collaboration Ninja Hayate (1984), while Reika's character design bears similarities to the anime characters Lum (from Urusei Yatsura) and Yuri (from Dirty Pair).
The game was later ported to the Sega CD for a worldwide release, and also to the LaserActive in Japan. The Sega CD version received a generally favorable reception from critics.
Gameplay
Time Gal is an interactive movie game that uses pre-recorded animation rather than sprites to display the on-screen action. Gameplay is divided into levels, referred to as time periods. The game begins in 3001 AD with the theft of a time travel device. The thief, Luda, steals the device to take over the world by changing history. Reika, the protagonist also known as Time Gal, uses her own time travel device to pursue him; she travels to different time periods, such as 70,000,000 BC, 44 BC, 1588 AD, and 2010 AD, in search of Luda. Each time period is a scenario that presents a series of threats that must be avoided or confronted. Successfully navigating the sequences allows the player to progress to another period.
The player uses a joystick and button to input commands, though home versions use a game
|
https://en.wikipedia.org/wiki/Detector%20%28radio%29
|
In radio, a detector is a device or circuit that extracts information from a modulated radio frequency current or voltage. The term dates from the first three decades of radio (1888-1918). Unlike modern radio stations which transmit sound (an audio signal) on an uninterrupted carrier wave, early radio stations transmitted information by radiotelegraphy. The transmitter was switched on and off to produce long or short periods of radio waves, spelling out text messages in Morse code. Therefore, early radio receivers did not have to demodulate the radio signal, but just distinguish between the presence or absence of a radio signal, to reproduce the Morse code "dots" and "dashes". The device that performed this function in the receiver circuit was called a detector. A variety of different detector devices, such as the coherer, electrolytic detector, magnetic detector and the crystal detector, were used during the wireless telegraphy era until superseded by vacuum tube technology.
After the invention of amplitude modulation (AM) enabled the development of AM radiotelephony (the transmission of sound, or audio) during World War I, the term evolved to mean a demodulator (usually a vacuum tube) which extracted the audio signal from the radio frequency carrier wave. This is its current meaning, although modern detectors usually consist of semiconductor diodes, transistors, or integrated circuits.
In a superheterodyne receiver the term is also sometimes used to refer to the mixer, the tube or transistor which converts the incoming radio frequency signal to the intermediate frequency. The mixer is called the first detector, while the demodulator that extracts the audio signal from the intermediate frequency is called the second detector. In microwave and millimeter wave technology the terms detector and crystal detector refer to waveguide or coaxial transmission line components, used for power or SWR measurement, that typically incorporate point contact diodes or surface b
|
https://en.wikipedia.org/wiki/T7%20DNA%20helicase
|
T7 DNA helicase (gp4) is a hexameric motor protein encoded by T7 phages that uses energy from dTTP hydrolysis to translocate unidirectionally along single-stranded DNA, separating (as a helicase) the two strands as it progresses. It is also a primase, making short stretches of RNA that initiate DNA synthesis. It forms a complex with T7 DNA polymerase. Its homologs are found in mitochondria (as Twinkle) and chloroplasts.
Crystal structure
The crystal structure was solved to 3.0 Å resolution in 2000, as shown in the figure in the reference. In (A), notice that the separate subunits appear to be anchored through interactions between an alpha helix and an adjacent subunit. In (B), there are six sets of three loops. The red loop, known as loop II, contains three lysine residues and is thought to be involved in binding the ssDNA that is fed through the center of the enzyme.
Mechanism of sequential dTTP hydrolysis
Crampton et al. have proposed a mechanism for the ssDNA-dependent hydrolysis of dTTP by T7 DNA helicase as shown in the figure below. In their model, protein loops located on each hexameric subunit, each of which contain three lysine residues, sequentially interact with the negatively charged phosphate backbone of ssDNA. This interaction presumably causes a conformational change in the actively bound subunit, providing for the efficient release of dTDP from its dTTP binding site. In the process of dTDP release, the ssDNA is transferred to the neighboring subunit, which undergoes a similar process. Previous studies have already suggested that ssDNA is able to bind to two hexameric subunits simultaneously.
See also
Helicase
References
External links
Molecular biology
EC 3.6.4
DNA replication
Phage proteins
|
https://en.wikipedia.org/wiki/MPACT%202
|
Mpact-2 is a 125 MHz vector-processing graphics, audio and video media processor, a second generation in the Mpact family of Chromatic Research media processors, which can be used only as a co-processor to the main Central Processing Unit (CPU) of a microcomputer.
Hardware using the Mpact-2 uses OEM firmware to provide plug-and-play facility, and may be used with either a PCI or AGP bus.
UAD-1 DSP cards
The UAD-1 was a digital signal processor (DSP) card based on the Mpact-2 and sold by Universal Audio, which uses the DSP, rather than the host computer's CPU, to process audio plug-ins. This allows accurate, but processor-intensive, reverbs, EQs, compressors and limiters to be handled in real time and without burdening the CPU. 3D functionality is hard-wired. The UAD-1 was superseded by the UAD-2, based on the Analog Devices 21369 and 21469 DSPs, in 2009.
UAD-1 hardware was produced with three interfaces: PCI (UAD-1), PCI Express (UAD-1e), and ExpressCard (UAD-Xpander). The Mpact-2 chips were offered by Chromatic Research (formerly named Xenon Microsystems, acquired by ATI Technologies in November 1998), and were part of the Chromatic Mpact 2 Video Adapter.
References
Yao, Yang (18 November 1996). "Chromatic's Mpact 2 Boosts 3D". Microprocessor Report, pp. 1, 6–10.
Digital signal processors
|
https://en.wikipedia.org/wiki/Genomic%20island
|
A genomic island (GI) is part of a genome that has evidence of horizontal origins. The term is usually used in microbiology, especially with regard to bacteria. A GI can code for many functions, can be involved in symbiosis or pathogenesis, and may help an organism's adaptation. Many sub-classes of GIs exist based on the function that they confer. For example, a GI associated with pathogenesis is often called a pathogenicity island (PAI), while GIs that contain many antibiotic resistance genes are referred to as antibiotic resistance islands. The same GI can occur in distantly related species as a result of various types of lateral gene transfer (transformation, conjugation, transduction). This can be determined by base composition analysis, as well as phylogeny estimations.
Computational prediction
Various genomic island prediction programs have been developed. These tools can be broadly grouped into sequence based methods and comparative genomics/phylogeny based methods.
Sequence based methods depend on the naturally occurring variation that exists between the genome sequence composition of different species. Genomic regions that show abnormal sequence composition (such as nucleotide bias or codon bias) may have been horizontally transferred. Two major problems with these methods are that false predictions can occur due to natural variation in the genome (sometimes due to highly expressed genes), and that horizontally transferred DNA will ameliorate (change to match the host genome) over time, limiting predictions to only recently acquired GIs.
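As a toy illustration of a sequence-composition signal (assumed for illustration only; real prediction tools use more sophisticated statistics than this), the following Python sketch flags sliding windows whose GC content deviates strongly from the genome-wide average:

# Toy sketch: flag windows whose GC content deviates from the genome-wide mean,
# one of the simplest "abnormal composition" signals used to suggest GIs.
def gc_content(seq):
    return (seq.count("G") + seq.count("C")) / len(seq)

def flag_atypical_windows(genome, window=1000, step=500, threshold=0.10):
    """Return (start, gc) for windows whose GC differs from the mean by > threshold."""
    mean_gc = gc_content(genome)
    flagged = []
    for start in range(0, len(genome) - window + 1, step):
        gc = gc_content(genome[start:start + window])
        if abs(gc - mean_gc) > threshold:
            flagged.append((start, round(gc, 3)))
    return flagged

# toy genome: an AT-rich "island" embedded in a GC-balanced background
background = "ACGT" * 2500                  # 10 kb, 50% GC
island = "ATAT" * 500                       # 2 kb, 0% GC
genome = background[:5000] + island + background[5000:]
print(flag_atypical_windows(genome))        # windows overlapping the island are flagged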
Comparative genomics based methods try to identify regions that show signs that they have been horizontally transferred using information from several related species. For example, a genomic region that is present in one species, but is not present in several other related species suggests that the region may have been horizontally transferred. The alternative explanations are
|
https://en.wikipedia.org/wiki/Reactimeter
|
A reactimeter is a diagnostic device used in nuclear power plants (and other nuclear applications) for measuring the reactivity of the nuclear chain reaction (in inhours) of fissile materials as they approach criticality.
References
Measuring instruments
|
https://en.wikipedia.org/wiki/VESA%20Plug%20and%20Display
|
VESA Plug and Display (abbreviated as P&D) is a video connector that carries digital signals for monitors, such as flat panel displays and video projectors, ratified by Video Electronics Standards Association (VESA) in 1997. Introduced around the same time as the competing connectors for the Digital Visual Interface (DVI, 1999) and VESA's own Digital Flat Panel (DFP, 1999), it was marketed as a replacement for the VESA Enhanced Video Connector (EVC, 1994). Unlike DVI, it never achieved widespread implementation.
The P&D connector shares the 30-pin plus quad-coax layout of EVC, which carries digital video, analog video, and data over Universal Serial Bus (USB) and IEEE 1394 (FireWire). At a minimum, the P&D connector is required to carry digital video, in which case the connector is designated P&D-D; when both digital and analog video are included, the connector is designated P&D-A/D.
Design
The P&D receptacle and plug are required to bear a standardized symbol to designate the standards with which it is compatible. The upper left quadrant designates analog video support. The upper right quadrant designates digital video support. The lower quadrants designate IEEE 1394 and USB support.
All P&D connectors are required to carry single-link TMDS digital video signal (max 160 MHz), and support VESA Display Data Channel version 2 at a minimum. Maximum resolution is 1600×1280 with a 60 Hz refresh rate.
Analogue video signals, if supported, must be provided as three separate color channels (red / green / blue) along with one composite or two (horizontal & vertical) sync signals. The nominal impedance of each signal line is 75 Ω and each channel must be capable of carrying a bandwidth of at least 2.4 GHz. The type designation for the analogue video signals designates the voltage values of the signals only, including the Type 4 (VESA) analog DC protocol introduced with EVC:
The P&D connector supports optional charging power at 18–20 VDC and up to 1.5 A. In addition, a s
|
https://en.wikipedia.org/wiki/Dictionary%20of%20the%20Scots%20Language
|
The Dictionary of the Scots Language (DSL) is an online Scots–English dictionary, now run by Dictionaries of the Scots Language, formerly known as Scottish Language Dictionaries, a registered SCIO charity. Freely available via the Internet, the work comprises the two major dictionaries of the Scots language:
Dictionary of the Older Scottish Tongue (DOST), 12 volumes
Scottish National Dictionary (SND), 10 volumes
The DOST contains information about Older Scots words in use from the 12th to the end of the 17th centuries (Early and Middle Scots); SND contains information about Scots words in use from 1700 to the 1970s (Modern Scots). Together these 22 volumes provide a comprehensive history of Scots. The SND Bibliography and the DOST Register of Titles have also been digitised and can be searched in the same way as the main data files. A new supplement compiled by Scottish Language Dictionaries was added in 2005.
The digitisation project, which ran from February 2001 to January 2004, was based at the University of Dundee and primarily funded by a grant from the Arts and Humanities Research Board, with additional support provided by the Scottish National Dictionary Association and the Russell Trust. The project team was led by academic, Dr Victor Skretkowicz and lexicographer, Susan Rennie, a former Senior Editor with the Scottish National Dictionary Association. Its methodology was based on a previous, pilot project by Rennie to digitise the Scottish National Dictionary (the eSND project), using a customised XML markup based on Text Encoding Initiative guidelines. The Dictionary of the Scots Language data was later used to create sample categories for a new Historical Thesaurus of Scots project, led by Rennie at the University of Glasgow, which was launched in 2015.
Dr Victor Skretkowicz was born in Hamilton, Ontario, in 1942; he joined the University of Dundee's English Department in 1978 and, in 1989, became the university's representative on the Joint Coun
|
https://en.wikipedia.org/wiki/Tera%20Term
|
Tera Term (alternatively TeraTerm) is a free and open-source terminal emulator (communications) program. It emulates different types of computer terminals, from DEC VT100 to DEC VT382. It supports Telnet, SSH 1 & 2 and serial port connections. It also has a built-in macro scripting language (supporting Oniguruma regular expressions) and a few other useful plugins.
History
The first versions of Tera Term were created by T. Teranishi from Japan. At the time, it was the only freely available terminal emulator to effectively support the Japanese language. Original development of Tera Term stopped in the late 1990s at version 2.3, but other organizations have created variations.
In October 2002, Ayera Technologies released TeraTerm Pro 3.1.3, supporting SSH2 and adding multiple other features such as a built-in web server for API integration with external systems, recurring "keep-alive" commands, and ODBC database support via the TT Macro Scripting Language. Ayera Technologies did not make its source open, but did provide limited technical support.
In 2004, Yutaka Hirata, a software designer from Japan, restarted development of the open source version of Tera Term. He added his own implementation of SSH2 and many new features on top of what was part of version 2.3.
To avoid confusion with version numbers and to indicate that Tera Term developed by Yutaka was more recent than version 3.1.3 from Ayera Technologies, it was decided to give this branch of Tera Term Professional version numbers starting 4.xx.
In January 2005, Boris Maisuradze, together with Yutaka Hirata, started the TeraTerm Support forum where they answered questions from Tera Term users. Posting in this forum was the best way to suggest new features for Tera Term or propose new commands for the Tera Term Macro language. For more than 10 years the forum was hosted on LogMeTT.com website maintained by Boris Maisuradze. Boris also developed several freeware tools that became part of Ter
|
https://en.wikipedia.org/wiki/Hamulus
|
A hamus or hamulus is a structure functioning as, or in the form of, hooks or hooklets.
Etymology
The terms are directly from Latin, in which hamus means "hook". The plural is hami.
Hamulus is the diminutive – hooklet or little hook. The plural is hamuli.
Adjectives are hamate and hamulate, as in "a hamulate wing-coupling", in which the wings of certain insects in flight are joined by hooking hamuli on one wing into folds on a matching wing. Hamulate can also mean "having hamuli". The terms hamose, hamular, hamous and hamiform also have been used to mean "hooked", or "hook-shaped". Terms such as hamate that do not indicate a diminutive usually refer particularly to a hook at the tip, whereas diminutive terms such as hamulose tend to imply that something is beset with small hooks.
Anatomy
In vertebrate anatomy, a hamulus is a small, hook-shaped portion of a bone, or possibly of other hard tissue.
In human anatomy, examples include:
pterygoid hamulus
hamulus of hamate bone
lacrimal hamulus
Arthropoda
In arthropod morphology hamuli are hooklets, usually in the form of projections of the surface of the exoskeleton. Hami might be actual evaginations of the whole thickness of the exoskeleton. The best-known examples are probably the row of hamuli on the anterior edge of the metathoracic (rear) wings of Hymenoptera such as the honeybee. The hooks attach to a fold on the posterior edge of the mesothoracic (front) wings.
It is less widely realised that similar hamuli, though usually fewer, are used in wing coupling in the Sternorrhyncha, the suborder of aphids and scale insects. In the Sternorrhyncha such wing coupling occurs particularly in the males of some species. The rear wings of that suborder frequently are reduced or absent, and in many species the last vestige of the rear wing to persist is a futile little strap holding the hamuli, still hooking into the fold of the large front wings.
In those Springtails (Collembola) that have a functional furcula, the
|
https://en.wikipedia.org/wiki/Binary%20multiplier
|
A binary multiplier is an electronic circuit used in digital electronics, such as a computer, to multiply two binary numbers.
A variety of computer arithmetic techniques can be used to implement a digital multiplier. Most techniques involve computing the set of partial products, which are then summed together using binary adders. This process is similar to long multiplication, except that it uses a base-2 (binary) numeral system.
History
Between 1947 and 1949 Arthur Alec Robinson worked for English Electric Ltd, as a student apprentice, and then as a development engineer. Crucially during this period he studied for a PhD degree at the University of Manchester, where he worked on the design of the hardware multiplier for the early Mark 1 computer.
However, until the late 1970s, most minicomputers did not have a multiply instruction, and so programmers used a "multiply routine" which repeatedly shifts and accumulates partial results, often written using loop unwinding. Mainframe computers had multiply instructions, but they performed the same sorts of shifts and adds as a "multiply routine".
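A minimal shift-and-add routine of this kind can be sketched in a few lines of Python (an illustrative sketch only; the function and variable names are not taken from any particular machine or library):

def shift_add_multiply(a: int, b: int) -> int:
    """Multiply two non-negative integers by repeated shift-and-add,
    mirroring how a software 'multiply routine' or a serial hardware
    multiplier accumulates partial products."""
    product = 0
    while b:
        if b & 1:            # if the current multiplier bit is 1,
            product += a     # add the (shifted) multiplicand as a partial product
        a <<= 1              # shift the multiplicand left for the next bit position
        b >>= 1              # move to the next multiplier bit
    return product

assert shift_add_multiply(27, 19) == 27 * 19

Each loop iteration handles one multiplier bit; unrolling the loop for a fixed word length corresponds to the unwound routines mentioned above.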
Early microprocessors also had no multiply instruction. Though the multiply instruction became common with the 16-bit generation,
at least two 8-bit processors had one earlier: the Motorola 6809, introduced in 1978, and the Intel MCS-51 family, developed in 1980. Later, the modern Atmel AVR 8-bit microprocessors present in the ATmega, ATtiny and ATxmega microcontrollers also included a multiply instruction.
As more transistors per chip became available due to larger-scale integration, it became possible to put enough adders on a single chip to sum all the partial products at once, rather than reuse a single adder to handle each partial product one at a time.
Because some common digital signal processing algorithms spend most of their time multiplying, digital signal processor designers sacrifice considerable chip area in order to make the multiply as fast as possible; a single-cycle multiply–accumulate un
|
https://en.wikipedia.org/wiki/Digital%20comparator
|
A digital comparator or magnitude comparator is a hardware electronic device that takes two numbers as input in binary form and determines whether one number is greater than, less than or equal to the other number. Comparators are used in central processing units (CPUs) and microcontrollers (MCUs). Examples of digital comparator include the CMOS 4063 and 4585 and the TTL 7485 and 74682.
An XNOR gate is a basic comparator, because its output is "1" only if its two input bits are equal.
The analog equivalent of digital comparator is the voltage comparator. Many microcontrollers have analog comparators on some of their inputs that can be read or trigger an interrupt.
Implementation
Consider two 4-bit binary numbers A and B, written digit by digit as A = A3A2A1A0 and B = B3B2B1B0.
Here each subscript represents one of the digits in the numbers.
Equality
The binary numbers A and B will be equal if all the pairs of significant digits of both numbers are equal, i.e.,
A3 = B3, A2 = B2, A1 = B1 and A0 = B0.
Since the numbers are binary, the digits are either 0 or 1 and the boolean function for equality of any two digits Ai and Bi can be expressed as
xi = AiBi + Ai′Bi′
which is the XNOR function; we can also replace it by an XNOR gate in digital electronics.
xi is 1 only if Ai and Bi are equal.
For the equality of A and B, all variables xi (for i = 0, 1, 2, 3) must be 1.
So the equality condition of A and B can be implemented using the AND operation as
(A = B) = x3 x2 x1 x0
The binary variable (A=B) is 1 only if all pairs of digits of the two numbers are equal.
Inequality
In order to manually determine the greater of two binary numbers, we inspect the relative magnitudes of pairs of significant digits, starting from the most significant bit, gradually proceeding towards lower significant bits until an inequality is found. When an inequality is found, if the corresponding bit of A is 1 and that of B is 0 then we conclude that A>B.
This sequential comparison can be expressed logically as:
(A > B) = A3B3′ + x3A2B2′ + x3x2A1B1′ + x3x2x1A0B0′
(A < B) = A3′B3 + x3A2′B2 + x3x2A1′B1 + x3x2x1A0′B0
(A>B) and (A<B) are output binary variables, which are equal to 1 when A>B or A<B respectively.
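The gate equations above can be checked with a short Python model (an illustrative sketch of the logic, not a description of any particular comparator IC):

def compare_4bit(A, B):
    """Model a 4-bit magnitude comparator from the gate equations above.
    A and B are integers in the range 0..15; returns (A>B, A=B, A<B) as 0/1."""
    a = [(A >> i) & 1 for i in range(4)]
    b = [(B >> i) & 1 for i in range(4)]
    na = [1 - bit for bit in a]                    # complemented bits Ai'
    nb = [1 - bit for bit in b]                    # complemented bits Bi'
    x = [1 - (a[i] ^ b[i]) for i in range(4)]      # xi = XNOR(Ai, Bi)
    eq = x[3] & x[2] & x[1] & x[0]
    gt = (a[3] & nb[3]) | (x[3] & a[2] & nb[2]) | \
         (x[3] & x[2] & a[1] & nb[1]) | (x[3] & x[2] & x[1] & a[0] & nb[0])
    lt = (na[3] & b[3]) | (x[3] & na[2] & b[2]) | \
         (x[3] & x[2] & na[1] & b[1]) | (x[3] & x[2] & x[1] & na[0] & b[0])
    return gt, eq, lt

# Exhaustive check against Python's own comparison operators
for A in range(16):
    for B in range(16):
        assert compare_4bit(A, B) == (int(A > B), int(A == B), int(A < B))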
See also
List of LM-series integrated c
|
https://en.wikipedia.org/wiki/Comet%20%28programming%29
|
Comet is a web application model in which a long-held HTTP request allows a web server to push data to a browser, without the browser explicitly requesting it. Comet is an umbrella term, encompassing multiple techniques for achieving this interaction. All these methods rely on features included by default in browsers, such as JavaScript, rather than on non-default plugins. The Comet approach differs from the original model of the web, in which a browser requests a complete web page at a time.
The use of Comet techniques in web development predates the use of the word Comet as a neologism for the collective techniques. Comet is known by several other names, including
Ajax Push,
Reverse Ajax, Two-way-web, HTTP Streaming, and
HTTP server push
among others. The term Comet is not an acronym, but was coined by Alex Russell in his 2006 blog post.
In recent years, the standardisation and widespread support of WebSocket and Server-sent events have rendered the Comet model obsolete.
History
Early Java applets
The ability to embed Java applets into browsers (starting with Netscape Navigator 2.0 in March 1996) made two-way sustained communications possible, using a raw TCP socket to communicate between the browser and the server. This socket can remain open as long as the browser is at the document hosting the applet. Event notifications can be sent in any format (text or binary) and decoded by the applet.
The first browser-to-browser communication framework
The very first application using browser-to-browser communications was Tango Interactive, implemented in 1996–98 at the Northeast Parallel Architectures Center (NPAC) at Syracuse University using DARPA funding. TANGO architecture has been patented by Syracuse University. TANGO framework has been extensively used as a distance education tool. The framework has been commercialized by CollabWorx and used in a dozen or so Command&Control and Training applications in the United States Department of Defense.
First Comet appli
|
https://en.wikipedia.org/wiki/Enriched%20text
|
Enriched text is a formatted text format for e-mail, defined by the IETF in RFC 1896 and associated with the text/enriched MIME type which is defined in RFC 1563. It is "intended to facilitate the wider interoperation of simple enriched text across a wide variety of hardware and software platforms". As of 2012, enriched text remained almost unknown in e-mail traffic, while HTML e-mail is widely used. Enriched text, or at least the subset of HTML that can be transformed into enriched text, is seen as preferable to full HTML for use with e-mail (mainly because of security considerations).
A predecessor of this MIME type was called text/richtext in RFC 1341 and RFC 1521. Neither should be confused with Rich Text Format (MIME type text/rtf or application/rtf), which is an unrelated specification devised by Microsoft.
A single newline in enriched text is treated as a space. Formatting commands are in the same style as SGML and HTML. They must be balanced and nested.
Enriched text is a supported format of Emacs, Mutt, Mulberry and Netscape Communicator.
Examples
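A short illustrative fragment in the text/enriched style (the sentence itself is invented; the <bold>, <italic> and <bigger> commands and the << escape are the conventions defined in RFC 1896):

<bold>Now</bold> is the time for <italic>all</italic> good men
to come to the aid of their <bigger>country</bigger>.

Because a single newline is treated as a space, the two lines above render as one continuous sentence; a literal "<" character is written as "<<".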
References
External links
The text/enriched MIME Content-type
Email
Internet Standards
Computer file formats
Markup languages
|
https://en.wikipedia.org/wiki/Martha%20%28passenger%20pigeon%29
|
Martha ( – September 1, 1914) was the last known living passenger pigeon (Ectopistes migratorius); she was named "Martha" in honor of the first First Lady Martha Washington.
Early life
The history of the Cincinnati Zoo's passenger pigeons has been described by Arlie William Schorger in his monograph on the species as "hopelessly confused," and he also said that it is "difficult to find a more garbled history" than that of Martha. The generally accepted version is that, by the turn of the 20th century, the last known group of passenger pigeons was kept by Professor Charles Otis Whitman at the University of Chicago. Whitman originally acquired his passenger pigeons from David Whittaker of Wisconsin, who sent him six birds, two of which later bred and hatched Martha in about 1885. Martha was named in honor of Martha Washington. Whitman kept these pigeons to study their behavior, along with rock doves and Eurasian collared-doves. Whitman and the Cincinnati Zoo, recognizing the decline of the wild populations, attempted to consistently breed the surviving birds, including attempts at making a rock dove foster passenger pigeon eggs. These attempts were unsuccessful, and Whitman sent Martha to the Cincinnati Zoo in 1902.
However, other sources argue that Martha was instead the descendant of three pairs of passenger pigeons purchased by the Cincinnati Zoo in 1877. Another source claimed that when the Cincinnati Zoo opened in 1875, it already had 22 birds in its collection. These sources claim that Martha was hatched at the Cincinnati Zoo in 1885, and that the passenger pigeons were originally kept not because of the rarity of the species, but to enable guests to have a closer look at a native species.
Cincinnati Zoo
By November 1907, Martha and her two male companions at the Cincinnati Zoo were the only known surviving passenger pigeons after four captive males in Milwaukee died during the winter. One of the Cincinnati males died in April 1909, followed by the remaining
|
https://en.wikipedia.org/wiki/CER-12
|
CER (Serbian: Цифарски Електронски Рачунар / Cifarski Elektronski Računar – Digital Electronic Computer) model 12 was a third-generation digital computer developed by Mihajlo Pupin Institute (Serbia) in 1971 and intended for "business and statistical data processing" (see ref. Lit. #1 and #4). However, the manufacturer also stated, at the time, that given its architecture and performance it could also be used successfully in solving a "wide array of scientific and technical issues" (ref. Lit. #2 and #3). Computer CER-12 consisted of multiple modules connected via wire wrap and connectors.
Central Unit
Primary memory
Type: magnetic core memory
Capacity: up to 8 modules, each consisting of 8 kilowords (1 word = 4 8-bit bytes).
Speed: cycle time: 1 μs, access time 0.4 μs.
Arithmetic unit contains:
32-bit accumulator register
two separate groups of eight 2-byte index registers
single-byte adder supporting both binary and BCD addition (the same unit is used for subtraction; multiplication and division had to be implemented in software)
Control Unit
Control unit contains a program counter and instruction registers. It fetches instructions and facilitates program flow. It supports single-operand instruction set and works with all 16 index registers of the arithmetic unit.
Interrupt System
Interrupt system of CER-12 consists of a number of dedicated registers and software. It supports up to 32 interrupt channels.
Control Panel
Control panel of CER-12 allowed the operator to control and alter program flow and/or to eliminate errors detected by error-detection circuitry. It features a number of indicators and switches.
Operating system and other software
The following software was shipped with the CER-12:
Operating System
"Symbolic programming language" and assembler (called "autocoder")
Input/output subroutines
A number of test programs
COBOL and FORTRAN IV compilers
Linear programming and PERT planning software
A library of applications and subroutines
Peripherals
5-8 track, 500 characters per second punched tape reader PE 10
|
https://en.wikipedia.org/wiki/CER-20
|
CER (Serbian: Цифарски Електронски Рачунар / Cifarski Elektronski Računar - Digital Electronic Computer) model 20 was an early digital computer developed by Mihajlo Pupin Institute (Serbia). It was designed as a functioning prototype of an "electronic bookkeeping machine". The first prototype was planned for 1964.
References
See also
CER Computers
Mihajlo Pupin Institute
History of computer hardware in the SFRY
CER-020
CER computers
|
https://en.wikipedia.org/wiki/Covox
|
SRT, Inc., doing business as Covox, Inc., was a small, privately owned American technology company active from 1975 to 1994. The company released a number of sound-generating devices for microcomputers and personal computers from the 1980s to the 1990s. They are perhaps best known for the Speech Thing, a digital-to-analog converter that plugs into a parallel port of the IBM Personal Computer. Covox was originally based in Southern California but moved their headquarters to Eugene, Oregon, in the early 1980s.
History
SRT, Inc., was founded by Larry Stewart in Southern California in 1975. Stewart had previously worked in the aerospace industry into the 1960s, where he got the idea for Av-Alarm, a sound-generating device intended to scare off birds from outside locations such as vegetable crops and vineyards. SRT relocated to Eugene, Oregon, in 1982, Stewart finding Oregon to be a cheaper state in which to conduct his business. Around this time, he hired his sons Mike Stewart and Brad Stewart to manage the company. Together they established Covox, Inc., a subsidiary of SRT, in 1982; this subsidiary was dedicated to audio products for microcomputers and personal computers and soon after subsumed the SRT name. Brad Stewart, named the company's vice president, was responsible for the development of all of Covox's products. Covox's first product was released in 1984; called the Voice Master, it was a low-cost speech-synthesis board for the Commodore 64, intended for business and education. A successor to this device, the Voice Master II, was released in 1990. By mid-1987, sales of Covox products represented 85 percent of SRT's total sales.
In late 1987, Covox released the Speech Thing, a simple digital-to-analog converter that plugs into a parallel port of the IBM Personal Computer (and compatibles). It was the first sound device for the IBM PC capable of playing digital audio samples. The Speech Thing initially sold poorly but later found widespread adoption among video ga
|
https://en.wikipedia.org/wiki/INAH%203
|
INAH-3 is the short form for the third interstitial nucleus of the anterior hypothalamus, and is the sexually dimorphic nucleus of humans. The INAH-3 is significantly larger in males than in females regardless of age and larger in heterosexual males than in homosexual males and heterosexual females.
Research
The term INAH (interstitial nuclei of the anterior hypothalamus), first proposed in 1989 by a group of the University of California at Los Angeles, refers to 4 previously undescribed cell groups of the preoptic-anterior hypothalamic area (PO-AHA) of the human brain, which is a structure that influences gonadotropin secretion, maternal behavior, and sexual behavior in several mammalian species. There are four nuclei in the PO-AHA (INAH1-4). One of these nuclei, INAH-3, was found to be 2.8 times larger in the male brain than in the female brain regardless of age.
A study authored by Simon LeVay and published in the journal Science suggests that the region is an important biological substrate with regard to sexual orientation. This article reported the INAH-3 to be smaller on average in homosexual men than in heterosexual men, and in fact has approximately the same size in homosexual men as in heterosexual women. Further research has found that the INAH3 is smaller in volume in homosexual men than in heterosexual men because homosexual men have a higher neuronal packing density (the
number of neurons per cubic millimeter) in the INAH3 than heterosexual men; there is no difference in the number or cross-sectional area of neurons in the INAH3 of homosexual versus heterosexual men. It has also been found that there is no effect of HIV infection on the size of INAH3, that is, HIV infection cannot account for the observed difference in INAH3 volume between homosexual and heterosexual men.
LeVay noted three possibilities that could account for his findings: 1. The structural differences in INAH3 between homosexual and heterosexual males were present prenatally or in e
|
https://en.wikipedia.org/wiki/Six%20degrees%20of%20freedom
|
Six degrees of freedom (6DOF) refers to the six mechanical degrees of freedom of movement of a rigid body in three-dimensional space. Specifically, the body is free to change position as forward/backward (surge), up/down (heave), left/right (sway) translation in three perpendicular axes, combined with changes in orientation through rotation about three perpendicular axes, often termed yaw (normal axis), pitch (transverse axis), and roll (longitudinal axis).
Three degrees of freedom (3DOF), a term often used in the context of virtual reality, typically refers to tracking of rotational motion only: pitch, yaw, and roll.
Robotics
Serial and parallel manipulator systems are generally designed to position an end-effector with six degrees of freedom, consisting of three in translation and three in orientation. This provides a direct relationship between actuator positions and the configuration of the manipulator defined by its forward and inverse kinematics.
Robot arms are described by their degrees of freedom. This is a practical metric, in contrast to the abstract definition of degrees of freedom which measures the aggregate positioning capability of a system.
In 2007, Dean Kamen, inventor of the Segway, unveiled a prototype robotic arm with 14 degrees of freedom for DARPA. Humanoid robots typically have 30 or more degrees of freedom, with six degrees of freedom per arm, five or six in each leg, and several more in torso and neck.
Engineering
The term is important in mechanical systems, especially biomechanical systems, for analyzing and measuring properties of these types of systems that need to account for all six degrees of freedom. Measurement of the six degrees of freedom is accomplished today through both AC and DC magnetic or electromagnetic fields in sensors that transmit positional and angular data to a processing unit. The data is made relevant through software that integrates the data based on the needs and programming of the users.
The six degrees of
|
https://en.wikipedia.org/wiki/Sylvester%27s%20sequence
|
In number theory, Sylvester's sequence is an integer sequence in which each term is the product of the previous terms, plus one. The first few terms of the sequence are
2, 3, 7, 43, 1807, 3263443, 10650056950807, 113423713055421844361000443 .
Sylvester's sequence is named after James Joseph Sylvester, who first investigated it in 1880. Its values grow doubly exponentially, and the sum of its reciprocals forms a series of unit fractions that converges to 1 more rapidly than any other series of unit fractions. The recurrence by which it is defined allows the numbers in the sequence to be factored more easily than other numbers of the same magnitude, but, due to the rapid growth of the sequence, complete prime factorizations are known only for a few of its terms. Values derived from this sequence have also been used to construct finite Egyptian fraction representations of 1, Sasakian Einstein manifolds, and hard instances for online algorithms.
Formal definitions
Formally, Sylvester's sequence can be defined by the formula
sn = 1 + s0 s1 ... sn−1 (the product of all previous terms, plus one).
The product of the empty set is 1, so s0 = 2.
Alternatively, one may define the sequence by the recurrence
si = si−1 · (si−1 − 1) + 1,
with s0 = 2.
It is straightforward to show by induction that this is equivalent to the other definition.
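For illustration, the recurrence can be evaluated directly; the short Python sketch below (illustrative only) reproduces the terms listed at the start of the article:

def sylvester(n_terms: int):
    """Generate the first n_terms of Sylvester's sequence using
    s(i) = s(i-1) * (s(i-1) - 1) + 1, starting from s(0) = 2."""
    terms = []
    s = 2
    for _ in range(n_terms):
        terms.append(s)
        s = s * (s - 1) + 1
    return terms

print(sylvester(7))   # [2, 3, 7, 43, 1807, 3263443, 10650056950807]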
Closed form formula and asymptotics
The Sylvester numbers grow doubly exponentially as a function of n. Specifically, it can be shown that
sn = ⌊E^(2^(n+1)) + 1/2⌋
for a number E that is approximately 1.26408473530530... . This formula has the effect of the following algorithm:
s0 is the nearest integer to E^2; s1 is the nearest integer to E^4; s2 is the nearest integer to E^8; in general, for sn, take E^2, square it n more times, and take the nearest integer.
This would only be a practical algorithm if we had a better way of calculating E to the requisite number of places than calculating sn and taking its repeated square root.
The double-exponential growth of the Sylvester sequence is unsurprising if one compares it to the sequence of Fer
|
https://en.wikipedia.org/wiki/ECATT
|
eCATT (extended Computer Aided Test Tool) is a tool for software test automation developed by SAP. eCATT offers a graphical user interface with ABAP script editor and its own command syntax. The capability for recording and for parameterizing the test components is also present.
External links
Another blog for SAP eCATT tool
eCATT Community on Orkut
eCATT Tutorial
Software testing tools
SAP SE
|
https://en.wikipedia.org/wiki/Resident%20monitor
|
In computing, a resident monitor is a type of system software program that was used in many early computers from the 1950s to 1970s. It can be considered a precursor to the operating system. The name is derived from a program which is always present in the computer's memory, thus being "resident". Because memory was very limited on those systems, the resident monitor was often little more than a stub that would gain control at the end of a job and load a non-resident portion to perform required job cleanup and setup tasks.
On a general-use computer using punched card input, the resident monitor governed the machine before and after each job control card was executed, loaded and interpreted each control card, and acted as a job sequencer for batch processing operations. The resident monitor could clear memory from the last used program (with the exception of itself), load programs, search for program data and maintain standard input-output routines in memory.
Similar system software layers were typically in use in the early days of the later minicomputers and microcomputers before they gained the power to support full operating systems.
Current use
Resident monitor functionality is present in many embedded systems, boot loaders, and various embedded command lines. The original functions present in all resident monitors are augmented with present-day functions dealing with boot time hardware, disks, ethernet, wireless controllers, etc. Typically, these functions are accessed using a serial terminal or a physical keyboard and display, if attached. Such a resident monitor is frequently called a debugger, boot loader, command-line interface (CLI), etc. The original meaning of serial-accessed or terminal-accessed resident monitor is not frequently used, although the functionality remained the same, and was augmented.
Typical functions of a resident monitor include examining and editing RAM and/or ROM (including flash EEPROM) and sometimes special function register
|
https://en.wikipedia.org/wiki/TSIG
|
TSIG (transaction signature) is a computer-networking protocol defined
in RFC 2845. Primarily it enables the Domain Name System (DNS) to authenticate updates to a DNS database. It is most commonly used to update Dynamic DNS or a secondary/slave DNS server. TSIG uses shared secret keys and one-way hashing to provide a cryptographically secure means of authenticating each endpoint of a connection as being allowed to make or respond to a DNS update.
Although queries to DNS may usually be made without authentication, updates to DNS must be authenticated, since they make lasting changes to the structure of the Internet naming system. As the update request may arrive via an insecure channel (the Internet), one must take measures to ensure the authenticity and integrity of the request. The use of a key shared by the client making the update and the DNS server helps to ensure the authenticity and integrity of the update request. A one-way hashing function serves to prevent malicious observers from modifying the update and forwarding on to the destination, thus ensuring integrity of the message from source to destination.
A timestamp is included in the TSIG protocol to prevent recorded responses from being reused, which would allow an attacker to breach the security of TSIG. This places a requirement on dynamic DNS servers and TSIG clients to contain an accurate clock. Since DNS servers are connected to a network, the Network Time Protocol can provide an accurate time source.
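The basic idea (a shared secret, a one-way hash, and a timestamp) can be sketched with Python's standard hmac module. This is a simplified illustration of the principle only, not TSIG's actual record format, algorithm negotiation, or key management; the key material and message below are invented:

import hmac, hashlib, time

shared_secret = b"example-shared-key-material"   # known to both client and server

def sign_update(update_message: bytes):
    """Client side: return (message, timestamp, MAC) for a DNS-update-like request."""
    timestamp = int(time.time())                 # limits how long a captured request can be replayed
    mac = hmac.new(shared_secret,
                   update_message + str(timestamp).encode(),
                   hashlib.sha256).digest()
    return update_message, timestamp, mac

def verify_update(message: bytes, timestamp: int, mac: bytes, max_skew: int = 300) -> bool:
    """Server side: recompute the MAC and check that the timestamp is recent."""
    expected = hmac.new(shared_secret,
                        message + str(timestamp).encode(),
                        hashlib.sha256).digest()
    fresh = abs(int(time.time()) - timestamp) <= max_skew
    return fresh and hmac.compare_digest(mac, expected)

msg, ts, tag = sign_update(b"add host.example.com A 192.0.2.1")
assert verify_update(msg, ts, tag)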
DNS updates, like queries, are normally transported via UDP since it requires lower overhead than TCP. However, DNS servers support both UDP and TCP requests.
Implementation
An update, as specified in RFC 2136, is a set of instructions to a DNS server. These include a header, the zone to be updated, the prerequisites that must be satisfied, and the record(s) to be updated. TSIG adds a final record, which includes a timestamp and the hash of the request. It also includes the name of the secr
|
https://en.wikipedia.org/wiki/Ptolemy%20Project
|
The Ptolemy Project is an ongoing project aimed at modeling, simulating, and designing concurrent, real-time, embedded systems. The focus of the Ptolemy Project is on assembling concurrent components. The principal product of the project is the Ptolemy II model based design and simulation tool. The Ptolemy Project is conducted in the Industrial Cyber-Physical Systems Center (iCyPhy) in the Department of Electrical Engineering and Computer Sciences of the University of California at Berkeley, and is directed by Prof. Edward A. Lee.
The key underlying principle in the project is the use of well-defined models of computation that govern the interaction between components. A major problem area being addressed is the use of heterogeneous mixtures of models of computation.
The project is named after Claudius Ptolemaeus, the 2nd century Greek astronomer, mathematician, and geographer.
The Kepler Project, a community-driven collaboration among researchers at three other University of California campuses has created the Kepler scientific workflow system which is based on Ptolemy II.
References
External links
The Kepler Project
Free science software
Software projects
Systems engineering
Visual programming languages
Software using the BSD license
|
https://en.wikipedia.org/wiki/Kinesis%20%28biology%29
|
Kinesis, like a taxis or tropism, is a movement or activity of a cell or an organism in response to a stimulus (such as gas exposure, light intensity or ambient temperature).
Unlike taxis, the response to the stimulus provided is non-directional. The animal does not move toward or away from the stimulus but moves at either a slow or fast rate depending on its "comfort zone." In this case, a fast movement (non-random) means that the animal is searching for its comfort zone while a slow movement indicates that it has found it.
Types
There are two main types of kineses, both resulting in aggregations. However, the stimulus does not act to attract or repel individuals.
Orthokinesis: in which the speed of movement of the individual is dependent upon the stimulus intensity. For example, the locomotion of the collembola, Orchesella cincta, in relation to water: with increased water saturation in the soil there is an increase in its speed of movement.
Klinokinesis: in which the frequency or rate of turning is proportional to stimulus intensity. For example the behaviour of the flatworm (Dendrocoelum lacteum) which turns more frequently in response to increasing light thus ensuring that it spends more time in dark areas.
Basic model of kinesis
The kinesis strategy controlled by the locally and instantly evaluated well-being (fitness) can be described in simple words: Animals stay longer in good conditions and leave bad conditions more quickly. If the well-being is measured by the local reproduction coefficient then the minimal reaction-diffusion model of kinesis can be written as follows:
For each population in the biological community,
where:
is the population density of ith species,
represents the abiotic characteristics of the living conditions (can be multidimensional),
is the reproduction coefficient, which depends on all and on s,
is the equilibrium diffusion coefficient (defined for equilibrium ). The coefficient charac
|
https://en.wikipedia.org/wiki/List%20of%20mangrove%20ecoregions
|
This is a list of mangrove ecoregions ordered according to whether they lie in the Afrotropical, Australasian, Indomalayan, or Neotropical realms of the world. Mangrove estuaries such as those found in the Sundarbans of southwestern Bangladesh are rich productive ecosystems which serve as spawning grounds and nurseries for shrimp, crabs, and many fish species, a richness which is lost if the area is cleared and converted to ponds for shrimp farming or rice paddies.
Afrotropical
Australasian
Indomalayan
Nearctic
Neotropical
See also
Mangrove
Ecoregion
Notes
|
https://en.wikipedia.org/wiki/Time%20Stamp%20Counter
|
The Time Stamp Counter (TSC) is a 64-bit register present on all x86 processors since the Pentium. It counts the number of CPU cycles since its reset. The instruction RDTSC returns the TSC in EDX:EAX. In x86-64 mode, RDTSC also clears the upper 32 bits of RAX and RDX. Its opcode is 0F 31. Pentium competitors such as the Cyrix 6x86 did not always have a TSC and may consider RDTSC an illegal instruction. Cyrix included a Time Stamp Counter in their MII.
Use
The Time Stamp Counter was once an excellent high-resolution, low-overhead way for a program to get CPU timing information. With the advent of multi-core/hyper-threaded CPUs, systems with multiple CPUs, and hibernating operating systems, the TSC cannot be relied upon to provide accurate results — unless great care is taken to correct the possible flaws: rate of tick and whether all cores (processors) have identical values in their time-keeping registers. There is no promise that the timestamp counters of multiple CPUs on a single motherboard will be synchronized. Therefore, a program can get reliable results only by limiting itself to run on one specific CPU. Even then, the CPU speed may change because of power-saving measures taken by the OS or BIOS, or the system may be hibernated and later resumed, resetting the TSC. In those latter cases, to stay relevant, the program must re-calibrate the counter periodically.
Relying on the TSC also reduces portability, as other processors may not have a similar feature. Recent Intel processors include a constant rate TSC (identified by the kern.timecounter.invariant_tsc sysctl on FreeBSD or by the "constant_tsc" flag in Linux's /proc/cpuinfo). With these processors, the TSC ticks at the processor's nominal frequency, regardless of the actual CPU clock frequency due to turbo or power saving states. Hence TSC ticks are counting the passage of time, not the number of CPU clock cycles elapsed.
On Windows platforms, Microsoft strongly discourages using the TSC for high-resolu
|
https://en.wikipedia.org/wiki/Underactuation
|
Underactuation is a technical term used in robotics and control theory to describe mechanical systems that cannot be commanded to follow arbitrary trajectories in configuration space. This condition can occur for a number of reasons, the simplest of which is when the system has a lower number of actuators than degrees of freedom. In this case, the system is said to be trivially underactuated.
The class of underactuated mechanical systems is very rich and includes such diverse members as automobiles, airplanes, and even animals.
Definition
To understand the mathematical conditions which lead to underactuation, one must examine the dynamics that govern the systems in question. Newton's laws of motion dictate that the dynamics of mechanical systems are inherently second order. In general, these dynamics can be described by a second order differential equation:
q̈ = f(q, q̇, u, t)
Where:
q is the position state vector, u is the vector of control inputs, and t is time.
Furthermore, in many cases the dynamics for these systems can be rewritten to be affine in the control inputs:
q̈ = f1(q, q̇, t) + f2(q, q̇, t) u
When expressed in this form, the system is said to be underactuated if:
rank(f2(q, q̇, t)) < dim(q)
When this condition is met, there are acceleration directions that can not be produced no matter what the control vector is.
Note that rank(f2) does not explicitly represent the number of actuators present in the system. Indeed, there may be more actuators than degrees of freedom and the system may still be underactuated. Also worth noting is the dependence of f2 on the state (q, q̇). That is, there may exist states in which an otherwise fully actuated system becomes underactuated.
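As a toy illustration of the rank test (the numbers and the single-actuator cart-pole-style model are invented for the example), the condition can be checked numerically:

import numpy as np

# Hypothetical planar cart-pole: q = (cart position, pendulum angle), one motor on the cart.
# In the control-affine form  q'' = f1(q, q') + f2(q, q') u,  f2 maps the single input
# to accelerations of both coordinates.
f2 = np.array([[1.0],
               [0.5]])        # one column because there is one actuator

dim_q = f2.shape[0]           # number of configuration variables (2)
rank_f2 = np.linalg.matrix_rank(f2)

print(rank_f2 < dim_q)        # True: rank 1 < dim 2, so the system is underactuated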
Examples
The classic inverted pendulum is an example of a trivially underactuated system: it has two degrees of freedom (one for its support's motion in the horizontal plane, and one for the angular motion of the pendulum), but only one of them (the cart position) is actuated, and the other is only indirectly controlled. Although naturally extremely unstable, this underactua
|
https://en.wikipedia.org/wiki/KOZL-TV
|
KOZL-TV (channel 27) is a television station in Springfield, Missouri, United States, affiliated with MyNetworkTV. It is owned by Nexstar Media Group alongside Osage Beach–licensed Fox affiliate KRBK (channel 49); Nexstar also provides certain services to CBS affiliate KOLR (channel 10) under a local marketing agreement (LMA) with Mission Broadcasting. The stations share studios on East Division Street in Springfield, while KOZL-TV's transmitter is located on Switchgrass Road, north of Fordland.
History
Early history
The station first signed on the air in 1968 as KMTC; founded by Meyer Communications, it originally operated as the market's first full-time ABC affiliate. It originally operated from studios located on East Cherry Street in Springfield. Prior to its sign-on, ABC programming had been limited to off-hours clearances on KYTV (channel 3) and KTTS-TV (channel 10, now KOLR) from their respective sign-ons in October and March 1953. Although the Springfield market had had a large enough population to support three full-time network affiliates since the 1950s, prospective station owners were skeptical about launching a UHF station in a market that stretched across a large and mostly mountainous swath of Missouri and Arkansas. UHF stations have never gotten very good reception across large areas or rugged terrain. In 1980, the station adopted the on-air brand "C-27". In 1985, the station was purchased by Woods Communications; after the sale was finalized, channel 27 changed its call letters to KDEB-TV (named after Deborah Woods, the daughter of the president of Woods Communications).
As a Fox affiliate
In January 1985, KMTC renewed its ABC affiliation. The following month, TV syndicator Telepictures, which had recently purchased cross-town independent KSPR (channel 33), attempted to persuade ABC to make an affiliation agreement via a presentation to the network. ABC then convinced KMTC to develop their own presentation for the network that would defend the sta
|
https://en.wikipedia.org/wiki/Virtual%20appliance
|
A virtual appliance is a pre-configured virtual machine image, ready to run on a hypervisor; virtual appliances are a subset of the broader class of software appliances. Installation of a software appliance on a virtual machine and packaging that into an image creates a virtual appliance. Like software appliances, virtual appliances are intended to eliminate the installation, configuration and maintenance costs associated with running complex stacks of software.
A virtual appliance is not a complete virtual machine platform, but rather a software image containing a software stack designed to run on a virtual machine platform, which may be a Type 1 or Type 2 hypervisor. Like a physical computer, a hypervisor is merely a platform for running an operating system environment and does not provide application software itself.
Many virtual appliances provide a Web page user interface to permit their configuration. A virtual appliance is usually built to host a single application; it therefore represents a new way to deploy applications on a network.
File formats
Virtual appliances are provided to the user or customer as files, via either electronic downloads or physical distribution. The file format most commonly used is the Open Virtualization Format (OVF). It may also be distributed as Open Virtual Appliance (OVA), the .ova file format is interchangeable with .ovf. The Distributed Management Task Force (DMTF) publishes the OVF specification documentation. Most virtualization platforms, including those from VMware, Microsoft, Oracle, and Citrix, can install virtual appliances from an OVF file.
Grid computing
Virtualization solves a key problem in the grid computing arena – namely, the reality that any sufficiently large grid will inevitably consist of a wide variety of heterogeneous hardware and operating system configurations. Adding virtual appliances into the picture allows for extremely rapid provisioning of grid nodes and importantly, cleanly decouples the grid
|
https://en.wikipedia.org/wiki/BSD%20Authentication
|
BSD Authentication, otherwise known as BSD Auth, is an authentication framework and software API employed by OpenBSD and accompanying software such as OpenSSH. It originated with BSD/OS, and although the specification and implementation were donated to the FreeBSD project by BSDi, OpenBSD chose to adopt the framework in release 2.9. Pluggable Authentication Modules (PAM) serves a similar purpose on other operating systems such as Linux, FreeBSD and NetBSD.
BSD Auth performs authentication by executing scripts or programs as separate processes from the one requiring the authentication. This prevents the child authentication process from interfering with the parent except through a narrowly defined inter-process communication API, a technique inspired by the principle of least privilege and known as privilege separation. This behaviour has significant security benefits, notably improved fail-safeness of software, and robustness against malicious and accidental software bugs.
See also
Name Service Switch
References
External links
Berkeley Software Distribution
Computer access control frameworks
Unix authentication-related software
|
https://en.wikipedia.org/wiki/California%20State%20Summer%20School%20for%20Mathematics%20and%20Science
|
The California State Summer School for Mathematics and Science (COSMOS) is a summer program for high school students in California for the purpose of preparing them for careers in mathematics and the sciences. It is usually known by the abbreviation COSMOS, although the letters of the abbreviation do not correspond exactly to the words of the full name. The program is hosted on four different campuses of the University of California, at Davis, Irvine, San Diego, and Santa Cruz.
History
COSMOS was established by the California State Legislature in the summer of 2000 to stimulate the interests of and provide opportunities for talented California high school students. The California State Summer School for Mathematics & Science is modeled after the California State Summer School for the Arts. In the first summer, 292 students enrolled in the program. Each COSMOS campus holds only about 150 students, so selection is competitive. The program gives participants experience in exploring the sciences and is aimed at students with strong interests and abilities in STEM (science, technology, engineering, mathematics) fields.
References
State evaluation report of the COSMOS program
External links
Official site
Schools in California
Science education in the United States
Schools of mathematics
Summer schools
Science and technology in California
2000 establishments in California
Mathematics summer camps
|
https://en.wikipedia.org/wiki/Scoreboarding
|
Scoreboarding is a centralized method, first used in the CDC 6600 computer, for dynamically scheduling instructions so that they can execute out of order when there are no conflicts and the hardware is available.
In a scoreboard, the data dependencies of every instruction are logged, tracked and strictly observed at all times. Instructions are released only when the scoreboard determines that there are no conflicts with previously issued ("in flight") instructions. If an instruction is stalled because it is unsafe to issue (or there are insufficient resources), the scoreboard monitors the flow of executing instructions until all dependencies have been resolved before the stalled instruction is issued. In essence: reads proceed on the absence of write hazards, and writes proceed in the absence of read hazards.
Scoreboarding is essentially a hardware implementation of the same underlying algorithm seen in dataflow languages, creating a Directed Acyclic Graph, where the same logic is applied in the programming language runtime.
Stages
Instructions are decoded in order and go through the following four stages.
Issue: The system checks which registers will be read and written by this instruction, so that WAR, RAW and WAW conflicts can be detected. RAW and WAR hazards are recorded using a Dependency Matrix (constructed from SR NOR latches in the original 6600 design), as this information will be needed in the following stages. Simultaneously, an entry is recorded in a second matrix, which records the instruction order as a Directed Acyclic Graph. In order to avoid output dependencies (WAW – Write after Write) the instruction is stalled until instructions intending to write to the same register are completed. The instruction is also stalled when required functional units are currently busy. No instruction is ever issued unless it is fully trackable from start to finish.
Read operands: After an instruction has been issued and correctly allocated to the required hardware modul
|
https://en.wikipedia.org/wiki/Reservation%20station
|
A unified reservation station, also known as unified scheduler, is a decentralized feature of the microarchitecture of a CPU that allows for register renaming, and is used by the Tomasulo algorithm for dynamic instruction scheduling.
Reservation stations permit the CPU to fetch and re-use a data value as soon as it has been computed, rather than waiting for it to be stored in a register and re-read. When instructions are issued, they can designate the reservation station from which they want their input to read. When multiple instructions need to write to the same register, all can proceed and only the (logically) last one need actually be written.
It checks if the operands are available (RAW hazard) and if the execution unit is free (structural hazard) before starting execution.
Instructions are stored with available parameters, and executed when ready. Results are identified by the unit that will execute the corresponding instruction.
Register renaming implicitly resolves WAR and WAW hazards. Since this is a fully associative structure, it has a very high cost in comparators (it needs to compare all results returned from processing units with all stored addresses).
In Tomasulo's algorithm, instructions are issued in sequence to Reservation Stations which buffer the instruction as well as the operands of the instruction. If the operand is not available, the Reservation Station listens on a Common Data Bus for the operand to become available. When the operand becomes available, the Reservation Station buffers it, and the execution of the instruction can begin.
Functional Units (such as an adder or a multiplier), each have their own corresponding Reservation Stations. The output of the Functional Unit connects to the Common Data Bus, where Reservation Stations are listening for the operands they need.
Bibliography
Computer Architecture: A Quantitative Approach, John L. Hennessy, David A. Patterson, 2012 () "3.4 Overcoming Data Hazards with Dynamic Scheduling", p 172-180
R
|
https://en.wikipedia.org/wiki/Mosaic%20evolution
|
Mosaic evolution (or modular evolution) is the concept, mainly from palaeontology, that evolutionary change takes place in some body parts or systems without simultaneous changes in other parts. Another definition is the "evolution of characters at various rates both within and between species". Its place in evolutionary theory comes under long-term trends or macroevolution.
Background
In the neodarwinist theory of evolution, as postulated by Stephen Jay Gould, there is room for differing development, when a life form matures earlier or later, in shape and size. This is due to allomorphism. Organs develop at differing rhythms, as a creature grows and matures. Thus a "heterochronic clock" has three variants: 1) time, as a straight line; 2) general size, as a curved line; 3) shape, as another curved line.
When a creature is advanced in size, it may develop at a smaller rate. Alternatively, it may maintain its original size or, if delayed, it may result in a larger sized creature. That alone is insufficient to understand the heterochronic mechanism.
Size must be combined with shape, so a creature may retain paedomorphic features if advanced in shape, or present a recapitulatory appearance when retarded in shape. These names are not very indicative, as past theories of development were very confusing.
A creature in its ontogeny may combine heterochronic features in six vectors, although Gould considers that there is some binding with growth and sexual maturation. A creature may, for example, present some neotenic features and retarded development, resulting in new features derived from an original creature only by regulatory genes. Most novel human features (compared to closely related apes) were of this nature, not implying major change in structural genes, as was classically considered.
Taxonomic range
It is not claimed that this pattern is universal, but there is now a wide range of examples from many different taxa, including:
Hominid evolution: the early evolution of
|
https://en.wikipedia.org/wiki/Heatwork
|
Heatwork is the combined effect of temperature and time. It is important to several industries:
Ceramics
Glass and metal annealing
Metal heat treating
Pyrometric devices can be used to gauge heat work as they deform or contract due to heatwork to produce temperature equivalents. Within tolerances, firing can be undertaken at lower temperatures for a longer period to achieve comparable results. When the amount of heatwork of two firings is the same, the pieces may look identical, but there may be differences not visible, such as mechanical strength and microstructure. Heatwork is taught in material science courses, but is not a precise measurement or a valid scientific concept.
External links
Temperature equivalents table & description of Bullers Rings.
Temperature equivalents table & description of Nimra Cerglass pyrometric cones.
Temperature equivalents table & description of Orton pyrometric cones.
Temperature equivalents table of Seger pyrometric cones.
Temperature Equivalents, °F & °C for Bullers Ring.
Glass physics
Pottery
Metallurgy
Ceramic engineering
|
https://en.wikipedia.org/wiki/Selenium%20%28software%29
|
Selenium is an open source umbrella project for a range of tools and libraries aimed at supporting browser automation. It provides a playback tool for authoring functional tests across most modern web browsers, without the need to learn a test scripting language (Selenium IDE). It also provides a test domain-specific language (Selenese) to write tests in a number of popular programming languages, including JavaScript (Node.js), C#, Groovy, Java, Perl, PHP, Python, Ruby and Scala. Selenium runs on Windows, Linux, and macOS. It is open-source software released under the Apache License 2.0.
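For illustration, a minimal browser-automation script using Selenium's Python WebDriver bindings might look like the following; the URL and element name are placeholders, and the calls assume the Selenium 4 API:

from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.common.keys import Keys

driver = webdriver.Chrome()                  # start a browser session (requires a local Chrome and driver)
try:
    driver.get("https://www.example.com")    # placeholder URL
    box = driver.find_element(By.NAME, "q")  # placeholder element name
    box.send_keys("selenium", Keys.RETURN)   # type a query and submit it
    print(driver.title)                      # inspect the resulting page title
finally:
    driver.quit()                            # always close the browser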
History
Selenium was originally developed by Jason Huggins in 2004 as an internal tool at ThoughtWorks. Huggins was later joined by other programmers and testers at ThoughtWorks, before Paul Hammant joined the team and steered the development of the second mode of operation that would later become "Selenium Remote Control" (RC). The tool was open sourced that year.
In 2005 Dan Fabulich and Nelson Sproul (with help from Pat Lightbody) made an offer to accept a series of patches that would transform Selenium-RC into what it became best known for. In the same meeting, the steering of Selenium as a project would continue as a committee, with Huggins and Hammant being the ThoughtWorks representatives.
In 2007, Huggins joined Google. Together with others like Jennifer Bevan, he continued with the development and stabilization of Selenium RC. At the same time, Simon Stewart at ThoughtWorks developed a superior browser automation tool called WebDriver. In 2009, after a meeting between the developers at the Google Test Automation Conference, it was decided to merge the two projects, and call the new project Selenium WebDriver, or Selenium 2.0.
In 2008, Philippe Hanrigou (then at ThoughtWorks) made "Selenium Grid", which provides a hub allowing the running of multiple Selenium tests concurrently on any number of local or remote systems, thus minimizing test execution time. Grid offered,
|
https://en.wikipedia.org/wiki/Metrics%20%28networking%29
|
Router metrics are configuration values used by a router to make routing decisions. A metric is typically one of many fields in a routing table. Router metrics help the router choose the best route among multiple feasible routes to a destination. The route will go in the direction of the gateway with the lowest metric.
A router metric is typically based on information such as path length, bandwidth, load, hop count, path cost, delay, maximum transmission unit (MTU), reliability and communications cost.
Examples
A metric can include:
measuring link utilization (using SNMP)
number of hops (hop count)
speed of the path
packet loss (router congestion/conditions)
Network delay
path reliability
path bandwidth
throughput [SNMP - query routers]
load
Maximum transmission unit (MTU)
administrator configured value
In EIGRP, the metric is represented by an integer from 0 to 4,294,967,295 (the range of a 32-bit unsigned integer). In Microsoft Windows XP routing it ranges from 1 to 9999.
A metric can be considered as:
additive - the total cost of a path is the sum of the costs of individual links along the path,
concave - the total cost of a path is the minimum of the costs of individual links along the path,
multiplicative - the total cost of a path is the product of the costs of individual links along the path.
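These three composition rules can be illustrated with a short Python sketch; the per-link numbers are made up for the example:

from functools import reduce
import operator

link_costs = [10, 5, 20]                 # hypothetical per-link costs along one path
link_reliability = [0.99, 0.95, 0.90]    # hypothetical per-link success probabilities

additive = sum(link_costs)                               # e.g. total delay or hop count
concave = min(link_costs)                                # e.g. bottleneck (minimum) bandwidth
multiplicative = reduce(operator.mul, link_reliability)  # e.g. end-to-end reliability

print(additive, concave, multiplicative)                 # 35, 5, about 0.846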
Service level metrics
Router metrics are metrics used by a router to make routing decisions; a metric is typically one of many fields in a routing table.
Router metrics can contain any number of values that help the router determine the best route among multiple routes to a destination. A router metric is typically based on information like path length, bandwidth, load, hop count, path cost, delay, MTU, reliability and communications cost.
See also
Administrative distance, indicates the source of routing table entry and is used in preference to metrics for routing decisions
References
External links
Survey of routing metrics
Computer network analysis
Network perfo
|
https://en.wikipedia.org/wiki/HyperSCSI
|
HyperSCSI is an outdated computer network protocol for accessing storage by sending and receiving SCSI commands. It was developed by researchers at the Data Storage Institute in Singapore in 2000 to 2003.
HyperSCSI is unlike iSCSI in that it bypassed the Internet protocol suite (TCP/IP) and worked directly over Ethernet to form its storage area network (SAN). It skipped the routing, retransmission, segmentation, reassembly, and all the other problems that the TCP/IP suite addresses. Compared to iSCSI, this was meant to give a performance benefit at the cost of IP's flexibility. An independent performance test showed that performance was unstable under network congestion.
Since HyperSCSI was in direct competition with the older and well established Fibre Channel, and the standardized iSCSI, it was not adopted by commercial vendors. Some researchers at Huazhong University of Science and Technology noted the failure to provide any transport layer protocol, so implemented a reliability layer in 2007.
Another version called HS/IP was developed over the Internet Protocol (IP).
See also
Fibre Channel over Ethernet
Fibre Channel over IP
Internet Fibre Channel Protocol
References
External links
including an introduction and features of HyperSCSI, and a comparison with iSCSI
SCSI
Network protocols
Ethernet
|
https://en.wikipedia.org/wiki/Pfsync
|
pfsync is a computer protocol used to synchronise firewall states between machines running Packet Filter (PF) for high availability. It is used along with CARP to make sure a backup firewall has the same information as the main firewall. When the main machine in the firewall cluster dies, the backup machine is able to accept current connections without loss.
See also
OpenBSD
PF (firewall)
CARP
Linux-HA
Linux Virtual Server
References
External links
PF: Firewall Redundancy with CARP and pfsync (OpenBSD PF FAQ)
pfsync(4) man-page in OpenBSD, FreeBSD and NetBSD
sys/net/if_pfsync.h in OpenBSD
sys/net/if_pfsync.c in OpenBSD
Internet protocols
High-availability cluster computing
BSD software
OpenBSD
FreeBSD
NetBSD
Firewall software
|
https://en.wikipedia.org/wiki/Interaural%20time%20difference
|
The interaural time difference (or ITD), when concerning humans or animals, is the difference in arrival time of a sound between the two ears. It is important in the localization of sounds, as it provides a cue to the direction or angle of the sound source from the head. If a signal arrives at the head from one side, the signal has further to travel to reach the far ear than the near ear. This pathlength difference results in a time difference between the sound's arrivals at the ears, which is detected and aids the process of identifying the direction of the sound source.
When a signal is produced in the horizontal plane, its angle in relation to the head is referred to as its azimuth, with 0 degrees (0°) azimuth being directly in front of the listener, 90° to the right, and 180° being directly behind.
Different methods for measuring ITDs
For an abrupt stimulus such as a click, onset ITDs are measured. An onset ITD is the time difference between the onset of the signal reaching two ears.
A transient ITD can be measured when using a random noise stimulus and is calculated as the time difference between a set peak of the noise stimulus reaching the ears.
If the stimulus used is not abrupt but periodic, then ongoing ITDs are measured. Here the waveforms reaching both ears are shifted in time until they perfectly match up, and the size of this shift is recorded as the ITD. This shift, expressed relative to the period of the stimulus, is known as the interaural phase difference (IPD) and can be used for measuring the ITDs of periodic inputs such as pure tones and amplitude-modulated stimuli. The IPD of an amplitude-modulated stimulus can be assessed by looking at either the waveform envelope or the waveform fine structure.
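As an illustration of this "shift until the waveforms match" idea, the following Python sketch estimates an ongoing ITD by cross-correlating the signals at the two ears; the sampling rate, tone frequency and imposed delay are hypothetical values chosen only for the example.
import numpy as np

fs = 48000                                  # sampling rate in Hz (hypothetical)
f = 500                                     # tone frequency in Hz
true_itd = 0.0004                           # imposed delay of 400 microseconds
n = np.arange(int(0.05 * fs))               # 50 ms of signal
left = np.sin(2 * np.pi * f * n / fs)
right = np.sin(2 * np.pi * f * (n / fs - true_itd))    # right ear lags the left

# Shift one waveform against the other and keep the lag with the best match.
lags = np.arange(-(len(n) - 1), len(n))
xcorr = np.correlate(right, left, mode="full")
best_lag = lags[np.argmax(xcorr)]           # positive lag: right-ear signal lags
print("estimated ITD:", best_lag / fs, "s") # about 0.0004 s, to the nearest sample

# For a pure tone the match repeats every period, so the estimate is only
# unambiguous up to a whole cycle (this is the phase ambiguity behind the IPD).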
Duplex theory
The Duplex theory proposed by Lord Rayleigh (1907) provides an explanation for the ability of humans to localise sounds by time differences between the sounds reaching each ear (ITDs) and differences in sound level entering the ears (interaural level differences, ILDs). But there still lies
|
https://en.wikipedia.org/wiki/Bob%20Colwell
|
Robert P. "Bob" Colwell (born 1954) is an electrical engineer who worked at Intel and later served as Director of the Microsystems Technology Office (MTO) at DARPA. He was the chief IA-32 architect on the Pentium Pro, Pentium II, Pentium III, and Pentium 4 microprocessors. Bob retired from Intel in 2000. He was an Intel Fellow from 1995 to 2000.
Early life and education
Colwell grew up in a small blue collar town in Pennsylvania and was born into a family of six children. His father was a milkman for 35 years. He attended the University of Pittsburgh and gained an undergraduate degree in Electrical Engineering. He later attended Carnegie Mellon University to get a PhD in Electrical Engineering.
Career
Colwell worked at a company called Multiflow in the late 1980s as a design engineer.
In 1990 he joined Intel as a senior architect and was involved in the development of the P6 "core". The P6 core was used in the Pentium Pro, Pentium II, and Pentium III microprocessors, and designs derived from it are used in the Pentium M, Core Duo and Core Solo, and Core 2 microprocessors sold by Intel.
Memberships and awards
Colwell earned the ACM Eckert-Mauchly Award in 2005, and wrote the "At Random" column for Computer, a journal published by the IEEE Computer Society.
Publications
Colwell is the author of several papers in addition to the book The Pentium Chronicles: The People, Passion, and Politics Behind Intel's Landmark Chips, . Colwell has spoken at universities on the challenges in chip design and management principles needed to tackle them.
Personal life
Colwell met his wife in college and he married in 1979. He has three children.
External links
List of publications
Internet stream of Stanford Talk, February 18, 2004 (ASF)
Bob Colwell's talk at GCC
Bio page at DARPA MTO
Computer designers
1954 births
Living people
Swanson School of Engineering alumni
Fellows of the American Academy of Arts and Sciences
Members of the United States National Academy of Engin
|
https://en.wikipedia.org/wiki/Block%20upconverter
|
A block upconverter (BUC) is used in the transmission (uplink) of satellite signals. It converts a band of frequencies from a lower frequency to a higher frequency. Modern BUCs convert from the L band to Ku band, C band and Ka band. Older BUCs convert from a 70 MHz intermediate frequency (IF) to Ku band or C band.
Most BUCs use phase-locked loop local oscillators and require an external 10 MHz frequency reference to maintain the correct transmit frequency.
BUCs used in remote locations are often 2 or 4 W in the Ku band and 5 W in the C band. The 10 MHz reference frequency is usually sent on the same feedline as the main carrier. Many smaller BUCs also get their direct current (DC) over the feedline, using an internal DC block.
BUCs are generally used in conjunction with low-noise block converters (LNB). The BUC, being an up-converting device, makes up the "transmit" side of the system, while the LNB is the down-converting device and makes up the "receive" side. An example of a system utilizing both a BUC and an LNB is a VSAT system, used for bidirectional Internet access via satellite.
The block upconverter is a block-shaped device assembled with the LNB and an orthogonal mode transducer (OMT) at the feed-horn that faces the parabolic reflector dish. This is in contrast to other types of frequency upconverter, which may be rack-mounted indoors or otherwise not co-located with the dish.
Radio technology
Satellite broadcasting
Telecommunications equipment
|
https://en.wikipedia.org/wiki/Gauss%E2%80%93Manin%20connection
|
In mathematics, the Gauss–Manin connection is a connection on a certain vector bundle over a base space S of a family of algebraic varieties. The fibers of the vector bundle are the de Rham cohomology groups of the fibers of the family. It was introduced by Yuri Manin for curves S and by Alexander Grothendieck in higher dimensions.
Flat sections of the bundle are described by differential equations; the best-known of these is the Picard–Fuchs equation, which arises when the family of varieties is taken to be the family of elliptic curves. In intuitive terms, when the family is locally trivial, cohomology classes can be moved from one fiber in the family to nearby fibers, providing the 'flat section' concept in purely topological terms. The existence of the connection is to be inferred from the flat sections.
Intuition
Consider a smooth morphism of schemes over characteristic 0. If we consider these spaces as complex analytic spaces, then the Ehresmann fibration theorem tells us that each fiber is a smooth manifold and that any two fibers are diffeomorphic. This tells us that the de Rham cohomology groups of the fibers are all isomorphic. We can use this observation to ask what happens when we try to differentiate cohomology classes using vector fields from the base space.
Consider a cohomology class such that where is the inclusion map. Then, if we consider the classes
eventually there will be a relation between them, called the Picard–Fuchs equation. The Gauss–Manin connection is a tool which encodes this information into a connection on the flat vector bundle on constructed from the .
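As a concrete illustration of the kind of relation meant here (quoted only as an illustration; normalisations vary between sources), the periods π(λ) of the Legendre family of elliptic curves y^2 = x(x - 1)(x - λ) satisfy the classical Picard–Fuchs equation
λ(1 - λ) d^2π/dλ^2 + (1 - 2λ) dπ/dλ - (1/4) π = 0.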
Example
A commonly cited example is the Dwork construction of the Picard–Fuchs equation. Let
be the elliptic curve .
Here, is a free parameter describing the curve; it is an element of the complex projective line (the family of hypersurfaces in dimensions of degree n, defined analogously, has been intensively studied in recent years, in connection with the modularity theorem and its extensions). Thus, the base space o
|
https://en.wikipedia.org/wiki/LM317
|
The LM317 is a popular adjustable positive linear voltage regulator. It was designed by Bob Dobkin in 1976 while he worked at National Semiconductor.
The LM337 is the negative complement to the LM317, which regulates voltages below a reference. It was designed by Bob Pease, who also worked for National Semiconductor.
Specifications
Without a heat sink, with an ambient temperature of 50 °C (such as on a hot summer day inside a box), a maximum power dissipation of (TJ-TA)/RθJA = ((125-50)/80) = 0.94 W can be permitted. (A piece of shiny aluminium sheet measuring 6 x 6 cm and 1.5 mm thick results in a thermal resistance that permits about 4.7 W of heat dissipation.)
In constant voltage mode with an input voltage source VIN at 34 V and a desired output voltage of 5 V, the maximum output current will be PMAX / (VIN-VO) = 0.94 / (34-5) = 32 mA.
For constant current mode with an input voltage source VIN at 12 V and a forward voltage drop of VF = 3.6 V, the maximum output current will be PMAX / (VIN-VF) = 0.94 / (12-3.6) = 112 mA.
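The three calculations above can be checked with a short Python sketch; the junction temperature limit, thermal resistance and voltages are the figures quoted above, and the variable names are only illustrative.
# Worst-case power and current limits for an LM317 without a heat sink (values from the text).
T_J_MAX = 125.0      # maximum junction temperature, deg C
T_AMBIENT = 50.0     # ambient temperature, deg C
R_THETA_JA = 80.0    # junction-to-ambient thermal resistance, deg C per W

p_max = (T_J_MAX - T_AMBIENT) / R_THETA_JA
print(f"P_max = {p_max:.2f} W")                              # about 0.94 W

# Constant-voltage mode: 34 V in, 5 V out.
i_max_cv = p_max / (34.0 - 5.0)
print(f"I_max (constant voltage) = {i_max_cv * 1000:.0f} mA")  # about 32 mA

# Constant-current mode: 12 V in, 3.6 V forward drop across the load.
i_max_cc = p_max / (12.0 - 3.6)
print(f"I_max (constant current) = {i_max_cc * 1000:.0f} mA")  # about 112 mA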
Operation
As linear regulators, the LM317 and LM337 are used in DC to DC converter applications.
Linear regulators inherently waste power; the power dissipated is the current passed multiplied by the voltage difference between input and output. An LM317 commonly requires a heat sink to prevent the operating temperature from rising too high. For large voltage differences, the power lost as heat can ultimately be greater than that provided to the circuit. This is the tradeoff for using linear regulators, which are a simple way to provide a stable voltage with few additional components. The alternative is to use a switching voltage regulator, which is usually more efficient, but has a larger footprint and requires a larger number of associated components.
In packages with a heat-dissipating mounting tab, such as TO-220, the tab is connected internally to the output pin which may make it necessary to electrically isolat
|
https://en.wikipedia.org/wiki/List%20of%20interstitial%20cells
|
Interstitial cell refers to any cell that lies in the spaces between the functional cells of a tissue.
Examples include:
Interstitial cell of Cajal (ICC)
Leydig cells, cells present in the male testes responsible for the production of androgen (male sex hormone)
A portion of the stroma of ovary
Certain cells in the pineal gland
Renal interstitial cells
neuroglial cells
See also
List of human cell types derived from the germ layers
References
Sybil B Parker (ed). "Interstitial cell". McGraw Hill Dictionary of Scientific and Technical Terms. Fifth Edition. International Edition. 1994. Page 1041.
Cell biology
Biology-related lists
|
https://en.wikipedia.org/wiki/Lifting%20scheme
|
The lifting scheme is a technique for both designing wavelets and performing the discrete wavelet transform (DWT). In an implementation, it is often worthwhile to merge these steps and design the wavelet filters while performing the wavelet transform. This is then called the second-generation wavelet transform. The technique was introduced by Wim Sweldens.
The lifting scheme factorizes any discrete wavelet transform with finite filters into a series of elementary convolution operators, so-called lifting steps, which reduces the number of arithmetic operations by nearly a factor of two. Treatment of signal boundaries is also simplified.
The discrete wavelet transform applies several filters separately to the same signal. In contrast to that, for the lifting scheme, the signal is divided like a zipper. Then a series of convolution–accumulate operations across the divided signals is applied.
Basics
The simplest version of a forward wavelet transform expressed in the lifting scheme is shown in the figure above. P means the predict step, which will be considered in isolation. The predict step calculates the wavelet function in the wavelet transform. This is a high-pass filter. The update step calculates the scaling function, which results in a smoother version of the data.
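A minimal Python sketch of these ideas, using the Haar wavelet as the simplest case: the signal is split into even and odd samples (the "zipper"), the predict step forms differences (high-pass) and the update step forms averages (low-pass). The function names are illustrative only.
# One level of a forward Haar transform written as lifting steps.
def haar_lifting_forward(signal):
    even = signal[0::2]                                     # split ("zipper")
    odd = signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]             # predict: high-pass
    approx = [e + d / 2 for e, d in zip(even, detail)]      # update: low-pass (smoother)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    even = [a - d / 2 for a, d in zip(approx, detail)]      # undo update
    odd = [e + d for e, d in zip(even, detail)]             # undo predict
    return [x for pair in zip(even, odd) for x in pair]     # merge

approx, detail = haar_lifting_forward([4, 6, 10, 12, 8, 6, 5, 5])
print(approx, detail)                       # [5, 11, 7, 5] [2, 2, -2, 0]
print(haar_lifting_inverse(approx, detail)) # recovers the original signal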
As mentioned above, the lifting scheme is an alternative technique for performing the DWT using biorthogonal wavelets. In order to perform the DWT using the lifting scheme, the corresponding lifting and scaling steps must be derived from the biorthogonal wavelets. The analysis filters () of the particular wavelet are first written in a polyphase matrix
where .
The polyphase matrix is a 2 × 2 matrix containing the analysis low-pass and high-pass filters, each split up into their even and odd polynomial coefficients and normalized. From here the matrix is factored into a series of 2 × 2 upper- and lower-triangular matrices, each with diagonal entries equal to 1. The upper-triangular matrices contain the co
|
https://en.wikipedia.org/wiki/Fiber%20laser
|
A fiber laser (or fibre laser in Commonwealth English) is a laser in which the active gain medium is an optical fiber doped with rare-earth elements such as erbium, ytterbium, neodymium, dysprosium, praseodymium, thulium and holmium. They are related to doped fiber amplifiers, which provide light amplification without lasing.
Fiber nonlinearities, such as stimulated Raman scattering or four-wave mixing can also provide gain and thus serve as gain media for a fiber laser.
Characteristics
An advantage of fiber lasers over other types of lasers is that the laser light is both generated and delivered by an inherently flexible medium, which allows easier delivery to the focusing location and target. This can be important for laser cutting, welding, and folding of metals and polymers. Another advantage is high output power compared to other types of laser. Fiber lasers can have active regions several kilometers long, and so can provide very high optical gain. They can support kilowatt levels of continuous output power because of the fiber's high surface area to volume ratio, which allows efficient cooling. The fiber's waveguide properties reduce or eliminate thermal distortion of the optical path, typically producing a diffraction-limited, high-quality optical beam. Fiber lasers are compact compared to solid-state or gas lasers of comparable power, because the fiber can be bent and coiled, except in the case of thicker rod-type designs, to save space. They have lower cost of ownership. Fiber lasers are reliable and exhibit high temperature and vibrational stability and extended lifetime. High peak power and nanosecond pulses improve marking and engraving. The additional power and better beam quality provide cleaner cut edges and faster cutting speeds.
Design and manufacture
Unlike most other types of lasers, the laser cavity in fiber lasers is constructed monolithically by fusion splicing different types of fiber; fiber Bragg gratings replace conventional diel
|
https://en.wikipedia.org/wiki/Society%20of%20Chemical%20Industry
|
The Society of Chemical Industry (SCI) is a learned society set up in 1881 "to further the application of chemistry and related sciences for the public benefit".
Offices
The society's headquarters is in Belgrave Square, London. There are semi-independent branches in the United States, Canada and Australia.
Aims
The society aims to accelerate the rate of scientific innovations being commercialised by industry to benefit society. It does this through promoting collaborations between scientists and industrialists, running technical and innovation conferences, building communities across academia and industry and publishing scientific content through its journals and digital platforms.
It also promotes science education.
History
On 21 November 1879, Lancashire chemist John Hargreaves canvassed a meeting of chemists and managers in Widnes, St Helens and Runcorn to consider the formation of a chemical society. Modelled on the successful Tyne Chemical Society already operating in Newcastle, the newly proposed South Lancashire Chemical Society held its first meeting on 29 January 1880 in Liverpool, with the eminent industrial chemist and soda manufacturer Ludwig Mond presiding.
It was quickly decided that the society should not be limited to just the local region and the title 'the Society of Chemical Industry’ was finally settled upon at a meeting in London on 4 April 1881, as being 'more inclusive'. Held at the offices of the Chemical Society, now the headquarters of the Royal Society of Chemistry, in Burlington House, this meeting was presided over by Henry Roscoe, appointed first president of SCI, and attended by Eustace Carey, Ludwig Mond, FA Abel, Lowthian Bell, William H Perkin, Walter Weldon, Edward Rider Cook, Thomas Tyrer and George E Davis; all prominent scientists, industrialists and MPs of the time.
The society grew rapidly, launching international and regional sections. In 1881 Ivan Levinstein was a founder of the Manchester Section of the Society of Che
|
https://en.wikipedia.org/wiki/History%20of%20sound%20recording
|
The history of sound recording, which has progressed in waves driven by the invention and commercial introduction of new technologies, can be roughly divided into four main periods:
The Acoustic era (1877–1925)
The Electrical era (1925–1945)
The Magnetic era (1945–1975)
The Digital era (1975–present)
Experiments in capturing sound on a recording medium for preservation and reproduction began in earnest during the Industrial Revolution of the 1800s. Many pioneering attempts to record and reproduce sound were made during the latter half of the 19th century – notably Édouard-Léon Scott de Martinville's phonautograph of 1857 – and these efforts culminated in the invention of the phonograph by Thomas Edison in 1877. Digital recording emerged in the late 20th century and has since flourished with the popularity of digital music and online streaming services.
Overview
The Acoustic Era (1877–1925)
The earliest practical recording technologies were entirely mechanical devices. These recorders typically used a large conical horn to collect and focus the physical air pressure of the sound waves produced by the human voice or musical instruments. A sensitive membrane or diaphragm, located at the apex of the cone, was connected to an articulated scriber or stylus, and as the changing air pressure moved the diaphragm back and forth, the stylus scratched or incised an analogue of the sound waves onto a moving recording medium, such as a roll of coated paper, or a cylinder or disc coated with a soft material such as wax or a soft metal.
These early recordings were necessarily of low fidelity and volume and captured only a narrow segment of the audible sound spectrum — typically only from around 250 Hz up to about 2,500 Hz — so musicians and engineers were forced to adapt to these sonic limitations. Musical ensembles of the period often favoured louder instruments such as trumpet, cornet, and trombone; lower-register brass instruments such as the tuba and the euphonium
|
https://en.wikipedia.org/wiki/Intel%20LANSpool
|
LANSPool was network printer administration software developed by Intel. The package was designed specifically for the Novell NetWare network operating system. The software allowed users to share printers and faxes and for administrators to modify LAN printing operations. The software takes its name from the acronym for local area network (LAN) and the spooling technique by which computers send information to slow peripherals such as printers.
In March 1992, Intel announced that users of version 3.01 of the software might be at risk from the Michelangelo virus as the manufacturer had found the virus on master copies of the 5¼-inch floppy disks.
External links
retroSoftware - LANSPool
LANSpool and Michelangelo virus
LANSpool
|
https://en.wikipedia.org/wiki/History%20of%20computer%20hardware%20in%20Yugoslavia
|
The Socialist Federal Republic of Yugoslavia (SFRY) was a socialist country that existed in the second half of the 20th century. Being socialist meant that strict technology import rules and regulations shaped the development of computer history in the country, unlike in the Western world. However, since it was a non-aligned country, it had no ties to the Soviet Bloc either. One of the major ideas contributing to the development of any technology in SFRY was the apparent need to be independent of foreign suppliers for spare parts, fueling domestic computer development.
Development
Early computers
In former Yugoslavia, at the end of 1962 there were 30 installed electronic computers, in 1966, there were 56, and in 1968 there were 95.
Having received training in the European computer centres (Paris 1954 and 1955, Darmstadt 1959, Wien 1960, Cambridge 1961 and London 1964), engineers from the BK.Institute-Vinča and the Mihailo Pupin Institute- Belgrade, led by Prof. dr Tihomir Aleksić, started a project of designing the first "domestic" digital computer at the end of the 1950s. This was to become a line of CER (), starting with the model CER-10 in 1960, a primarily vacuum tube and electronic relays-based computer.
By 1964, the CER-20 computer was designed and completed as an "electronic bookkeeping machine", as the manufacturer recognized an increasing need in the accounting market. This special-purpose trend continued with the release of the CER-22 in 1967, which was intended for on-line "banking" applications.
There were more CER models, such as the CER-11, CER-12, and CER-200, but there is currently little information available on them.
In the late 1970s, "Ei-Niš Računarski Centar" from Niš, Serbia, started assembling H6000 mainframe computers under Honeywell license, mainly for banking businesses. The computer initially had great success, which later led to limited local parts production. In addition, the company produced models such as the H6 and H66 and was alive as late as early 2
|
https://en.wikipedia.org/wiki/Intel%20Hub%20Architecture
|
Intel Hub Architecture (IHA), also known as Accelerated Hub Architecture (AHA), was Intel's architecture for the 8xx family of chipsets, starting in 1999 with the Intel 810. It uses a memory controller hub (MCH) that is connected to an I/O controller hub (ICH) via a 266 MB/s bus. The MCH chip supports memory and AGP (replaced by PCI Express in 9xx series chipsets), while the ICH chip provides connectivity for PCI (revision 2.2 before the ICH5 series and revision 2.3 since the ICH5 series), USB (version 1.1 before the ICH4 series and version 2.0 since the ICH4 series), sound (originally AC'97, with Azalia added in the ICH6 series), IDE hard disks (supplemented by Serial ATA since the ICH5 series, which fully replaced IDE from the ICH8 series for desktops and the ICH9 series for notebooks) and LAN (uncommonly activated on desktop motherboards and notebooks; usually an independent LAN controller was placed instead of the PHY chip).
Intel claimed that, because of the high-speed channel between the sections, the IHA was faster than the earlier northbridge/southbridge design, which hooked all low-speed ports to the PCI bus. The IHA also optimized data transfer based on data type.
Next generation
Intel Hub Interface 2.0 was employed in Intel's line of E7xxx server chipsets. This new revision allowed for dedicated data paths for transferring greater than 1.0 GB/s of data to and from the MCH, which support I/O segments with greater reliability and faster access to high-speed networks.
Current status
IHA is now considered obsolete and is no longer used, having been superseded by the Direct Media Interface architecture. The Platform Controller Hub (PCH), introduced with the Intel 5 Series chipsets in 2009, provides most of the features previously seen in ICH chips while moving the memory, graphics and PCI Express controllers to the CPU. This chipset architecture is still used in desktops, while in some notebooks it is being replaced by SoC processor designs.
References
External links
First Intel chipset to use IHA
Hub Architecture
|
https://en.wikipedia.org/wiki/Loudness%20monitoring
|
Loudness monitoring of programme levels is needed in radio and television broadcasting, as well as in audio post production. Traditional methods of measuring signal levels, such as the peak programme meter and VU meter, do not give the subjectively valid measure of loudness that many would argue is needed to optimise the listening experience when changing channels or swapping disks.
The need for proper loudness monitoring is apparent in the loudness war that is now found everywhere in the audio field, and the extreme compression that is now applied to programme levels.
Loudness meters
Meters have been introduced that aim to measure perceived loudness by taking account of the equal-loudness contours and other factors, such as audio spectrum, duration, compression and intensity. One such device was developed by CBS Laboratories in the 1980s. Complaints to broadcasters about the intrusive level of interstitial programmes (advertisements, commercials) have resulted in projects to develop such meters. Based on loudness metering, many manufacturers have developed real-time audio processors that adjust the audio signal to match a specified target loudness level, preserving volume consistency for home listeners.
EBU Mode meters
In August 2010, the European Broadcasting Union published a new metering specification EBU Tech 3341, which builds on ITU-R BS.1770. To make sure meters from different manufacturers provide the same reading in LUFS units, EBU Tech 3341 specifies the EBU Mode, which includes a Momentary (400ms), Short term (3s) and Integrated (from start to stop) meter and a set of audio signals to test the meters.
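The core of the BS.1770 measurement is a mean-square (energy) average of K-weighted audio expressed on a logarithmic scale. The Python sketch below shows only that final step for a single block of one channel; a real EBU Mode meter also applies the K-weighting pre-filter, channel weighting, the 400 ms, 3 s and integrated windows, and gating, all of which are omitted here.
import math

def block_loudness_lufs(samples):
    # samples: one block of already K-weighted audio, with full scale = 1.0
    mean_square = sum(s * s for s in samples) / len(samples)
    return -0.691 + 10 * math.log10(mean_square)

# A full-scale sine wave has a mean square of 0.5; without K-weighting the
# formula above gives about -3.7 for this block.
block = [math.sin(2 * math.pi * 997 * n / 48000) for n in range(48000)]
print(round(block_loudness_lufs(block), 1))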
See also
Audio normalization
References
External links
EBU Tech 3341 audio test signals
ITU-R BS.1770: Algorithms to measure audio programme loudness and true-peak audio level
EBU publishes loudness test material
Audio engineering
Broadcast engineering
Sound production technology
Sound recording
|
https://en.wikipedia.org/wiki/Default%20constructor
|
In computer programming languages, the term default constructor can refer to a constructor that is automatically generated by the compiler in the absence of any programmer-defined constructors (e.g. in Java), and is usually a nullary constructor. In other languages (e.g. in C++) it is a constructor that can be called without having to provide any arguments, irrespective of whether the constructor is auto-generated or user-defined. Note that a constructor with formal parameters can still be called without arguments if default arguments were provided in the constructor's definition.
C++
In C++, the standard describes the default constructor for a class as a constructor that can be called with no arguments (this includes a constructor whose parameters all have default arguments). For example:
class MyClass
{
public:
MyClass(); // constructor declared
private:
int x;
};
MyClass::MyClass() : x(100) // constructor defined
{
}
int main()
{
MyClass m; // at runtime, object m is created, and the default constructor is called
}
When allocating memory dynamically, the constructor may be called by adding parentheses after the class name. In a sense, this is an explicit call to the constructor:
int main()
{
MyClass * pointer = new MyClass(); // at runtime, an object is created, and the
// default constructor is called
}
If the constructor does have one or more parameters, but they all have default values, then it is still a default constructor. Remember that each class can have at most one default constructor, either one without parameters, or one all of whose parameters have default values, such as in this case:
class MyClass
{
public:
MyClass (int i = 0, std::string s = ""); // constructor declared
private:
int x;
int y;
std::string z;
};
MyClass::MyClass(int i, std::string s) // constructor defined
{
x = 100;
y = i;
z = s;
}
In C++, default constructors are significant because
|
https://en.wikipedia.org/wiki/Refinable%20function
|
In mathematics, in the area of wavelet analysis, a refinable function is a function which fulfils some kind of self-similarity. A function is called refinable with respect to the mask if
This condition is called the refinement equation, dilation equation or two-scale equation.
Using the convolution (denoted by a star, *) of a function with a discrete mask and the dilation operator one can write more concisely:
It means that one obtains the function again if one convolves the function with the discrete mask and then scales it back.
There is a similarity to iterated function systems and de Rham curves.
The operator is linear.
A refinable function is an eigenfunction of that operator.
Its absolute value is not uniquely defined.
That is, if is a refinable function,
then for every the function is refinable, too.
These functions play a fundamental role in wavelet theory as scaling functions.
Properties
Values at integral points
A refinable function is defined only implicitly.
It may also be that there are several functions which are refinable with respect to the same mask.
If the refinable function is assumed to have finite support and the function values at integer arguments are wanted, then the two-scale equation becomes a system of simultaneous linear equations.
Let be the minimum index and be the maximum index
of non-zero elements of , then one obtains
Using the discretization operator, call it here, and the transfer matrix of , named , this can be written concisely as
This is again a fixed-point equation, but it can now be considered as an eigenvector-eigenvalue problem. That is, a finitely supported refinable function can exist only (though not necessarily) if the transfer matrix has the eigenvalue 1.
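A small Python sketch of this eigenvector viewpoint, using the convention phi(x) = sum_k a_k phi(2x - k) for the two-scale equation (conventions differ by normalisation factors). The mask below is that of the hat function (linear B-spline), chosen only as an illustration.
import numpy as np

# Mask of the hat function, convention phi(x) = sum_k a_k * phi(2x - k).
a = {0: 0.5, 1: 1.0, 2: 0.5}
kmin, kmax = min(a), max(a)
idx = range(kmin, kmax + 1)                   # candidate integer points in the support

# Transfer-matrix entries M[i][j] = a_{2i - j}; the values of phi at the integers
# form an eigenvector of M for the eigenvalue 1 (when such an eigenvector exists).
M = np.array([[a.get(2 * i - j, 0.0) for j in idx] for i in idx])

eigvals, eigvecs = np.linalg.eig(M)
k = np.argmin(np.abs(eigvals - 1.0))          # pick the eigenvalue closest to 1
phi = np.real(eigvecs[:, k])
phi = phi / phi.sum()                         # normalise so the values sum to 1
print(dict(zip(idx, np.round(phi, 6))))       # approximately {0: 0.0, 1: 1.0, 2: 0.0}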
Values at dyadic points
From the values at integral points you can derive the values at dyadic points,
i.e. points of the form , with and .
The star denotes the convolution of a discrete filter with a function.
With this step you can compute the values at points of the form .
By replacing iterated
|
https://en.wikipedia.org/wiki/Alignment%20level
|
The alignment level in an audio signal chain or on an audio recording is a defined anchor point that represents a reasonable or typical level.
Analogue
In analogue systems, alignment level is commonly 0 dBu (0.775 volts RMS) in broadcast chains, and in professional audio it is commonly 0 VU, which is +4 dBu or 1.228 volts RMS. In normal situations, the 0 VU reference allows for a headroom of 18 dB or more above the reference level without significant distortion. This is largely due to the use of slow-responding VU meters in almost all analogue professional audio equipment, which by design and by specification respond to an average level, not peak levels.
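The relation between a level in dBu and RMS voltage is V = 0.775 V × 10^(dBu/20); the short Python sketch below simply confirms the two figures quoted above.
def dbu_to_volts(level_dbu):
    # 0 dBu is defined as 0.775 V RMS
    return 0.775 * 10 ** (level_dbu / 20)

print(round(dbu_to_volts(0), 3))   # 0.775 V RMS (broadcast alignment level)
print(round(dbu_to_volts(4), 3))   # 1.228 V RMS (0 VU in professional audio)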
Digital
In digital systems alignment level commonly is at −18 dBFS (18 dB below digital full scale), in accordance with EBU recommendations. Digital equipment must use peak reading metering systems to avoid severe digital distortion caused by the signal going beyond digital full scale. 24-bit original or master recordings commonly have an alignment level at −24 dBFS to allow extra headroom, which can then be reduced to match the available headroom of the final medium by audio level compression. FM broadcasts usually have only 9 dB of headroom as recommended by the EBU, but digital broadcasts, which could operate with 18 dB of headroom, given their low noise floor even in difficult reception areas, currently operate in a state of confusion, with some transmitting at maximum level while others operate at a much lower level even though they carry material that has been compressed for compatibility with the lower dynamic range of FM transmissions.
EBU
In EBU documents, alignment level is defined as -18 dBFS, the level of the alignment signal: a 1 kHz sine tone for analogue applications and a 997 Hz tone in digital applications.
Motivation
Using alignment level rather than maximum permitted level as the reference point allows more sensible headroom management throughout the audio signal chain compression happens only whe
|
https://en.wikipedia.org/wiki/Precast%20concrete
|
Precast concrete is a construction product produced by casting concrete in a reusable mold or "form" which is then cured in a controlled environment, transported to the construction site and maneuvered into place; examples include precast beams, and wall panels for tilt up construction. In contrast, cast-in-place concrete is poured into site-specific forms and cured on site.
Recently, lightweight expanded polystyrene foam has been used as the core of precast wall panels, saving weight and increasing thermal insulation.
Precast stone is distinguished from precast concrete by the finer aggregate used in the mixture, so the result approaches the natural product.
Overview
Precast concrete is employed in both interior and exterior applications, from highway, bridge, and hi-rise projects to tilt-up building construction. By producing precast concrete in a controlled environment (typically referred to as a precast plant), the precast concrete is afforded the opportunity to properly cure and be closely monitored by plant employees. Using a precast concrete system offers many potential advantages over onsite casting. Precast concrete production can be performed on ground level, which maximizes safety in its casting. There is greater control over material quality and workmanship in a precast plant compared to a construction site. The forms used in a precast plant can be reused hundreds to thousands of times before they have to be replaced, often making it cheaper than onsite casting in terms of cost per unit of formwork.
Precast concrete forming systems for architectural applications differ in size, function, and cost. Precast architectural panels are also used to clad all or part of a building facade or erect free-standing walls for landscaping, soundproofing, and security. In appropriate instances precast products - such as beams for bridges, highways, and parking structure decks - can be prestressed structural elements. Stormwater drainage, water and sewage pipes, a
|
https://en.wikipedia.org/wiki/Lysimeter
|
A lysimeter (from Greek λύσις (loosening) and the suffix -meter) is a measuring device which can be used to measure the amount of actual evapotranspiration which is released by plants (usually crops or trees). By recording the amount of precipitation that an area receives and the amount lost through the soil, the amount of water lost to evapotranspiration can be calculated.
Lysimeters are of two types: weighing and non-weighing.
General Usage
A lysimeter is most accurate when vegetation is grown in a large soil tank which allows the rainfall input and water lost through the soil to be easily calculated. The amount of water lost by evapotranspiration can be worked out by calculating the difference between the weight before and after the precipitation input.
For trees, lysimeters can be expensive and are a poor representation of conditions outside of a laboratory or orchard, as it would be impossible to use a lysimeter to calculate the water balance for a whole forest. But for farm crops, a lysimeter can represent field conditions well since the device is installed and used outside the laboratory. A weighing lysimeter, for example, reveals the amount of water crops use by constantly weighing a huge block of soil in a field to detect losses of soil moisture (as well as any gains from precipitation). An example of their use is in the development of new xerophytic apple tree cultivars in order to adapt to changing climate patterns of reduced rainfall in traditional apple growing regions.
The University of Arizona's Biosphere 2 built the world's largest weighing lysimeters using a mixture of thirty 220,000 and 333,000 lb-capacity column load cells from Honeywell, Inc. as part of its Landscape Evolution Observatory project.
Use in whole plant physiological phenotyping systems
To date, physiology-based, high-throughput phenotyping systems (also known as plant functional phenotyping systems), which, used in combination with soil–plant–atmosphere continuum (SPAC) meas
|
https://en.wikipedia.org/wiki/Right%20to%20light
|
Right to light is a form of easement in English law that gives a long-standing owner of a building with windows a right to maintain an adequate level of illumination. The right was traditionally known as the doctrine of "ancient lights". It is also possible for a right to light to exist if granted expressly by deed, or granted implicitly, for example under the rule in Wheeldon v. Burrows (1879).
In England, the rights to ancient lights are most usually acquired under the Prescription Act 1832.
In American common law the doctrine died out during the 19th century, and is generally no longer recognized in the United States. Japanese law provides for a comparable concept known as nisshōken (literally "right to sunshine").
Rights
In effect, the owner of a building with windows that have received natural daylight for 20 years or more is entitled to forbid any construction or other obstruction on adjacent land that would block the light so as to deprive him or her of adequate illumination through those windows. The owner may build more or larger windows but cannot enlarge their new windows before the new period of 20 years has expired.
Once a right to light exists, the owner of the right is entitled to "sufficient light according to the ordinary notions of mankind": Colls v. Home & Colonial Stores Ltd (1904). Courts rely on expert witnesses to define this term. Since the 1920s, experts have used a method proposed by Percy Waldram to assist them with this. Waldram suggested that ordinary people require one foot-candle of illuminance (approximately ten lux) for reading and other work involving visual discrimination. This equates to a sky factor (similar to the daylight factor) of 0.2%. Today, Waldram's methods are increasingly subject to criticism, and the future of expert evidence in rights to light cases is currently the subject of much debate within the surveying profession.
After the Second World War, owners of buildings could gain new rights by registering proper
|
https://en.wikipedia.org/wiki/Shannon%20number
|
The Shannon number, named after the American mathematician Claude Shannon, is a conservative lower bound of the game-tree complexity of chess of 10^120, based on an average of about 10^3 possibilities for a pair of moves consisting of a move for White followed by a move for Black, and a typical game lasting about 40 such pairs of moves.
Shannon's calculation
Shannon showed a calculation for the lower bound of the game-tree complexity of chess, resulting in about 10^120 possible games, to demonstrate the impracticality of solving chess by brute force, in his 1950 paper "Programming a Computer for Playing Chess". (This influential paper introduced the field of computer chess.)
Shannon also estimated the number of possible positions, "of the general order of 64!/(32!(8!)^2(2!)^6), or roughly 10^43". This includes some illegal positions (e.g., pawns on the first rank, both kings in check) and excludes legal positions following captures and promotions.
After each player has moved a piece 5 times each (10 ply) there are 69,352,859,712,417 possible games that could have been played.
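Both of Shannon's figures are simple arithmetic and can be reproduced with a few lines of Python (the position formula is the one quoted above):
from math import factorial

# Game-tree lower bound: about 10^3 choices per move pair, over about 40 pairs.
game_tree = (10 ** 3) ** 40
print(len(str(game_tree)) - 1)          # 120, i.e. 10^120

# Shannon's rough count of positions: 64! / (32! * (8!)^2 * (2!)^6)
positions = factorial(64) // (factorial(32) * factorial(8) ** 2 * factorial(2) ** 6)
print(len(str(positions)) - 1)          # 42, i.e. on the order of 10^43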
Tighter bounds
Upper
Taking Shannon's numbers into account, Victor Allis calculated an upper bound of 5×10^52 for the number of positions, and estimated the true number to be about 10^50. Recent results improve that estimate, by proving an upper bound of 8.7×10^45, and showing an upper bound of 4×10^37 in the absence of promotions.
Lower
Allis also estimated the game-tree complexity to be at least 10^123, "based on an average branching factor of 35 and an average game length of 80". As a comparison, the number of atoms in the observable universe, to which it is often compared, is roughly estimated to be 10^80.
Accurate estimates
John Tromp and Peter Österlund estimated the number of legal chess positions with a 95% confidence level at , based on an efficiently computable bijection between integers and chess positions.
Number of sensible chess games
As a comparison to the Shannon number, if chess is analyz
|
https://en.wikipedia.org/wiki/Cosmopolitan%20distribution
|
In biogeography, cosmopolitan distribution is the term for the range of a taxon that extends across all of (or most of) the world, in appropriate habitats; most cosmopolitan species are known to be highly adaptable to a range of climatic and environmental conditions, though this is not always so. Killer whales (orcas) are among the most well-known cosmopolitan species on the planet, as they maintain several different resident and transient (migratory) populations in every major oceanic body on Earth, from the Arctic Circle to Antarctica and every coastal and open-water region in-between. Such a taxon (usually a species) is said to have a cosmopolitan distribution, or exhibit cosmopolitanism, as a species; another example, the rock dove (commonly referred to as a 'pigeon'), in addition to having been bred domestically for centuries, now occurs in most urban areas across the world.
The extreme opposite of a cosmopolitan species is an endemic (native) species, or one that is found only in a single geographical location. Endemism usually results in organisms with specific adaptations to one particular climate or region, and the species would likely face challenges if placed in a different environment. There are far more examples of endemic species than cosmopolitan species; one example being the snow leopard, a rare feline species found only in Central Asian mountain ranges, an environment the cats have adapted to over millennia.
Qualification
The caveat "in appropriate habitat" is used to qualify the term "cosmopolitan distribution", excluding in most instances polar regions, extreme altitudes, oceans, deserts, or small, isolated islands. For example, the housefly is highly cosmopolitan, yet is neither oceanic nor polar in its distribution.
Related terms and concepts
The term pandemism also is in use, but not all authors are consistent in the sense in which they use the term; some speak of pandemism mainly in referring to diseases and pandemics, and some as a term i
|
https://en.wikipedia.org/wiki/National%20Software%20Testing%20Laboratories
|
National Software Testing Laboratories (NSTL) was established by serial entrepreneur Joseph Segel in 1983 to test computer software. The company provides certification (such as WHQL and Microsoft Windows Mobile certification), quality assurance, and benchmarking services. NSTL was acquired by Intertek in 2007.
References
External links
Company website
Companies established in 1983
Software testing
|
https://en.wikipedia.org/wiki/Applied%20mechanics
|
Applied mechanics is the branch of science concerned with the motion of any substance that can be experienced or perceived by humans without the help of instruments. In short, when mechanics concepts surpass being theoretical and are applied and executed, general mechanics becomes applied mechanics. It is this stark difference that makes applied mechanics an essential understanding for practical everyday life. It has numerous applications in a wide variety of fields and disciplines, including but not limited to structural engineering, astronomy, oceanography, meteorology, hydraulics, mechanical engineering, aerospace engineering, nanotechnology, structural design, earthquake engineering, fluid dynamics, planetary sciences, and other life sciences. Connecting research between numerous disciplines, applied mechanics plays an important role in both science and engineering.
Pure mechanics describes the response of bodies (solids and fluids) or systems of bodies to external behavior of a body, in either a beginning state of rest or of motion, subjected to the action of forces. Applied mechanics bridges the gap between physical theory and its application to technology.
Applied mechanics comprises two main categories: classical mechanics, the study of the mechanics of macroscopic solids, and fluid mechanics, the study of the mechanics of macroscopic fluids. Each branch of applied mechanics contains subcategories of its own. Classical mechanics, divided into statics and dynamics, is further subdivided, with statics split into the study of rigid bodies and rigid structures, and dynamics split into kinematics and kinetics. Like classical mechanics, fluid mechanics is also divided into two sections: statics and dynamics.
Within the practical sciences, applied mechanics is useful in formulating new ideas and theories, discovering and interpreting phenomena, and developing experimental and computational tools.
|
https://en.wikipedia.org/wiki/Motion%20planning
|
Motion planning, also called path planning (and also known as the navigation problem or the piano mover's problem), is the computational problem of finding a sequence of valid configurations that moves an object from a source to a destination. The term is used in computational geometry, computer animation, robotics and computer games.
For example, consider navigating a mobile robot inside a building to a distant waypoint. It should execute this task while avoiding walls and not falling down stairs. A motion planning algorithm would take a description of these tasks as input, and produce the speed and turning commands sent to the robot's wheels. Motion planning algorithms might address robots with a larger number of joints (e.g., industrial manipulators), more complex tasks (e.g. manipulation of objects), different constraints (e.g., a car that can only drive forward), and uncertainty (e.g. imperfect models of the environment or robot).
Motion planning has several robotics applications, such as autonomy, automation, and robot design in CAD software, as well as applications in other fields, such as animating digital characters, video games, architectural design, robotic surgery, and the study of biological molecules.
Concepts
A basic motion planning problem is to compute a continuous path that connects a start configuration S and a goal configuration G, while avoiding collision with known obstacles. The robot and obstacle geometry is described in a 2D or 3D workspace, while the motion is represented as a path in (possibly higher-dimensional) configuration space.
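A minimal Python sketch of this problem for the simplest case of a point robot on a grid, using breadth-first search to find a collision-free path from S to G; the grid, obstacle layout and helper names are made up for the illustration.
from collections import deque

# 0 = free cell, 1 = obstacle; the grid is the configuration space of a point robot.
grid = [
    [0, 0, 0, 0],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
]

def plan(start, goal):
    queue = deque([start])
    parent = {start: None}
    while queue:
        cell = queue.popleft()
        if cell == goal:                          # reconstruct the path
            path = []
            while cell is not None:
                path.append(cell)
                cell = parent[cell]
            return path[::-1]
        r, c = cell
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < len(grid) and 0 <= nc < len(grid[0])
                    and grid[nr][nc] == 0 and (nr, nc) not in parent):
                parent[(nr, nc)] = cell
                queue.append((nr, nc))
    return None                                   # no collision-free path exists

print(plan((0, 0), (3, 3)))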
Configuration space
A configuration describes the pose of the robot, and the configuration space C is the set of all possible configurations. For example:
If the robot is a single point (zero-sized) translating in a 2-dimensional plane (the workspace), C is a plane, and a configuration can be represented using two parameters (x, y).
If the robot is a 2D shape that can translate and rotate, the workspace is still 2-dimen
|
https://en.wikipedia.org/wiki/Dolbear%27s%20law
|
Dolbear's law states the relationship between the air temperature and the rate at which crickets chirp. It was formulated by Amos Dolbear and published in 1897 in an article called "The Cricket as a Thermometer". Dolbear's observations on the relation between chirp rate and temperature were preceded by an 1881 report by Margarette W. Brooks, although this paper went unnoticed until after Dolbear's publication.
Dolbear did not specify the species of cricket which he observed, although subsequent researchers assumed it to be the snowy tree cricket, Oecanthus niveus. However, the snowy tree cricket was misidentified as O. niveus in early reports and the correct scientific name for this species is Oecanthus fultoni.
The chirping of the more common field crickets is not as reliably correlated to temperature—their chirping rate varies depending on other factors such as age and mating success. In many cases, though, the Dolbear's formula is a close enough approximation for field crickets, too.
Dolbear expressed the relationship as the following formula, which provides a way to estimate the temperature T_F in degrees Fahrenheit from the number of chirps per minute N_60:
T_F = 50 + (N_60 - 40) / 4
This formula is accurate to within a degree or so when applied to the chirping of the field cricket.
Counting can be sped up by simplifying the formula and counting the number of chirps produced in 15 seconds (N_15):
T_F = N_15 + 40
Reformulated to give the temperature in degrees Celsius (°C), it is:
T_C = 10 + (N_60 - 40) / 7
A shortcut method for degrees Celsius is to count the number of chirps in 8 seconds (N_8) and add 5 (this is fairly accurate between 5 and 30 °C):
T_C = N_8 + 5
The above formulae are expressed in terms of integers to make them easier to remember—they are not intended to be exact.
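The conversions are easy to script; a small Python sketch using the formulas above (function names and sample counts are illustrative):
def fahrenheit_from_chirps_per_minute(n60):
    return 50 + (n60 - 40) / 4

def fahrenheit_from_15s_count(n15):
    return n15 + 40

def celsius_from_8s_count(n8):
    return n8 + 5

print(fahrenheit_from_chirps_per_minute(120))   # 70.0 degrees F
print(fahrenheit_from_15s_count(30))            # 70 degrees F (same rate)
print(celsius_from_8s_count(16))                # 21 degrees C (same rate)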
In math classes
Mathematics textbooks will sometimes cite this as a simple example of where mathematical models break down, because at temperatures outside of the range that crickets live in, the number of chirps is zero, as the crickets are dead. You can apply algebra to the e
|
https://en.wikipedia.org/wiki/Colloid%20mill
|
A colloid mill is a machine that is used to reduce the particle size of a solid in suspension in a liquid, or to reduce the droplet size in emulsions. Colloid mills work on the rotor-stator principle: a rotor turns at high speeds (2000 - 18000 RPM). A high level of stress is applied on the fluid which results in disrupting and breaking down the structure. Colloid mills are frequently used to increase the stability of suspensions and emulsions, but can also be used to reduce the particle size of solids in suspensions. Higher shear rates lead to smaller droplets, down to approximately 1 μm which are more resistant to emulsion separation.
Application suitability
Colloid mills are used in the following industries:
Pharmaceutical
Cosmetic
Paint
Soap
Textile
Paper
Food
Grease
Rotor - stator construction
A colloid mill consists of a high-speed rotor and a stator with conical milling surfaces:
1 stage toothed
3 stage toothed
Execution
fixed gap
adjustable gap
References
See also
Homogenization (chemistry)
Chemical equipment
|
https://en.wikipedia.org/wiki/Out-of-band%20data
|
In computer networking, out-of-band data is the data transferred through a stream that is independent from the main in-band data stream. An out-of-band data mechanism provides a conceptually independent channel, which allows any data sent via that mechanism to be kept separate from in-band data. The out-of-band data mechanism should be provided as an inherent characteristic of the data channel and transmission protocol, rather than requiring a separate channel and endpoints to be established. The term "out-of-band data" probably derives from out-of-band signaling, as used in the telecommunications industry.
Example case
Consider a networking application that tunnels data from a remote data source to a remote destination. The data being tunneled may consist of any bit patterns. The sending end of the tunnel may at times have conditions that it needs to notify the receiving end about. However, it cannot simply insert a message to the receiving end because that end will not be able to distinguish the message from data sent by the data source. By using an out-of-band mechanism, the sending end can send the message to the receiving end out of band. The receiving end will be notified in some fashion of the arrival of out-of-band data, and it can read the out of band data and know that this is a message intended for it from the sending end, independent of the data from the data source.
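TCP urgent data, exposed through the MSG_OOB flag of the Berkeley sockets API, is the classic example of such a mechanism. The Python sketch below sends ordinary data and one out-of-band byte over a loopback connection; the payloads are arbitrary, error handling is omitted, and the exact behaviour of TCP urgent data varies somewhat between platforms.
import select
import socket

# Minimal demonstration over a loopback TCP connection (the port is picked by the OS).
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)

client = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
client.connect(server.getsockname())
conn, _ = server.accept()

client.sendall(b"ordinary in-band payload")
client.send(b"!", socket.MSG_OOB)            # one urgent byte, sent out of band

# select() reports pending out-of-band data as an "exceptional condition".
select.select([], [], [conn], 5.0)
print(conn.recv(1, socket.MSG_OOB))          # b'!'  (the out-of-band byte)
print(conn.recv(4096))                       # b'ordinary in-band payload'

for s in (client, conn, server):
    s.close()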
Implementations
It is possible to implement out-of-band data transmission using a physically separate channel, but most commonly out-of-band data is a feature provided by a transmission protocol using the same channel as normal data. A typical protocol might divide the data to be transmitted into blocks, with each block having a header word that identifies the type of data being sent, and a count of the data bytes or words to be sent in the block. The header will identify the data as being in-band or out-of-band, along with other identification and routing information. At the rece
|
https://en.wikipedia.org/wiki/Ruby%20laser
|
A ruby laser is a solid-state laser that uses a synthetic ruby crystal as its gain medium. The first working laser was a ruby laser made by Theodore H. "Ted" Maiman at Hughes Research Laboratories on May 16, 1960.
Ruby lasers produce pulses of coherent visible light at a wavelength of 694.3 nm, which is a deep red color. Typical ruby laser pulse lengths are on the order of a millisecond.
Design
A ruby laser most often consists of a ruby rod that must be pumped with very high energy, usually from a flashtube, to achieve a population inversion. The rod is often placed between two mirrors, forming an optical cavity in which the light produced by the ruby's fluorescence oscillates, causing stimulated emission. Ruby is one of the few solid state lasers that produce light in the visible range of the spectrum, lasing at 694.3 nanometers, in a deep red color, with a very narrow linewidth of 0.53 nm.
The ruby laser is a three-level solid state laser. The active laser medium (laser gain/amplification medium) is a synthetic ruby rod that is energized through optical pumping, typically by a xenon flashtube. Ruby has very broad and powerful absorption bands in the visual spectrum, at 400 and 550 nm, and a very long fluorescence lifetime of 3 milliseconds. This allows for very high energy pumping, since the pulse duration can be much longer than with other materials. While ruby has a very wide absorption profile, its conversion efficiency is much lower than that of other media.
In early examples, the rod's ends had to be polished with great precision, such that the ends of the rod were flat to within a quarter of a wavelength of the output light, and parallel to each other within a few seconds of arc. The finely polished ends of the rod were silvered; one end completely, the other only partially. The rod, with its reflective ends, then acts as a Fabry–Pérot etalon (or a Gires-Tournois etalon). Modern lasers often use rods with antireflection coatings, or with the ends cut and polish
|
https://en.wikipedia.org/wiki/Heating%20plant
|
A heating plant, also called a physical plant, or steam plant, generates thermal energy in the form of steam for use in district heating applications. Unlike combined heat and power installations which produce thermal energy as a by-product of electricity generation, heating plants are dedicated to generating heat for use in various processes.
Heating plants are commonly used at hospital or university campuses, military bases, office tower complexes, and public housing complexes. The plant will generate steam which is distributed to each building where it is used to make domestic hot water for human consumption, heating hot water in the case of hydronic heating systems, air conditioning through the use of absorption refrigeration units, air heating in HVAC units, humidification, industrial laundry systems, or sterilization at hospitals. The steam may be sold to each customer and billed through the use of a steam flow meter.
They feature boilers, either water tube or fire tube, which generate steam for various uses and demands. The plant also hosts all of the boiler auxiliaries such as water treatment equipment, air handling, fuel handling, controls, instrument air, and various other plant systems which support the production of steam.
The heating plant can use different fuels:
Natural gas
Heating oil
Biomass
Coal
Refuse
See also
Combined heat and power
Cogeneration
District heating
Power station
Boilers
Residential heating
|
https://en.wikipedia.org/wiki/The%20Rowett%20Institute
|
The Rowett Institute is a research centre for studies into food and nutrition, located in Aberdeen, Scotland.
History
The institute was founded in 1913 when the University of Aberdeen and the North of Scotland College of Agriculture agreed that an "Institute for Research into Animal Nutrition" should be established in Scotland. The first director was John Boyd Orr, later to become Lord Boyd Orr, who moved from Glasgow to "the wilds of Aberdeenshire" in 1914. Orr drew up some plans for a nutrition research institute. Orr also donated £5000 for the building of a granite laboratory building at Craibstone, not far from the Bucksburn site of the Rowett.
At the outbreak of the Great War, Orr left the institute, but returned in 1919 with a staff of four to begin work in the new laboratory. Orr continued to push for a new research institute and finally the Government agreed to pay half the costs but stipulated that the other half was to be found from other sources. The extra money was donated by Dr John Quiller Rowett, a businessman and director of a wine and spirits merchants in London.
Rowett's donation allowed the purchase of 41 acres of land for the institute to be built on. Rowett also contributed £10,000 towards the cost of the buildings. The money was donated with one very important stipulation from Rowett—"if any work done at the institute on animal nutrition were found to have a bearing on human nutrition, the institute would be allowed to follow up this work." The institute was formally opened in 1922 by Queen Mary.
In 1927, the Rowett was given £5000 to carry out an investigation to test whether health could be improved by the consumption of milk. After some further tests on other groups, a bill was passed in the House of Commons enabling local authorities in Scotland to provide cheap or free milk to all school children. It was soon applied in England too. This helped reduce the surplus of milk at the time and also helped rescue the milk industry which was i
|
https://en.wikipedia.org/wiki/X-PLOR
|
X-PLOR is a computer software package for computational structural biology originally developed by Axel T. Brunger at Yale University. It was first published in 1987 as an offshoot of CHARMM, a similar program that ran on supercomputers made by Cray Inc. It is used in X-ray crystallography and in nuclear magnetic resonance (NMR) spectroscopy analysis of proteins.
X-PLOR is a highly sophisticated program that provides an interface between theoretical foundations and experimental data in structural biology, with specific emphasis on X-ray crystallography and solution nuclear magnetic resonance spectroscopy of biological macromolecules. It is intended mainly for researchers and students in the fields of computational chemistry, structural biology, and computational molecular biology.
See also
Comparison of software for molecular mechanics modeling
Molecular mechanics
References
External links
The program's reference manual hosted at Oxford University
Molecular dynamics software
Computer libraries
|
https://en.wikipedia.org/wiki/Capacitance%E2%80%93voltage%20profiling
|
Capacitance–voltage profiling (or C–V profiling, sometimes CV profiling) is a technique for characterizing semiconductor materials and devices. The applied voltage is varied, and the capacitance is measured and plotted as a function of voltage. The technique uses a metal–semiconductor junction (Schottky barrier), a p–n junction, or a MOSFET to create a depletion region, a region which is empty of conducting electrons and holes, but may contain ionized donors and electrically active defects or traps. The depletion region with its ionized charges inside behaves like a capacitor. By varying the voltage applied to the junction it is possible to vary the depletion width. The dependence of the depletion width upon the applied voltage provides information on the semiconductor's internal characteristics, such as its doping profile and electrically active defect densities.
Measurements may be done at DC, or using both DC and a small-signal AC signal (the conductance method), or using a large-signal transient voltage.
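As a sketch of how a doping profile might be extracted from such data, the following applies the standard Mott–Schottky analysis for a one-sided, reverse-biased junction: the apparent doping N(W) is proportional to the inverse slope of 1/C² versus V, and the depletion width is W = ε·A/C. The device area, material permittivity, and the synthetic C–V curve are assumptions for illustration, not data from any particular measurement.

```python
import numpy as np

q = 1.602e-19             # elementary charge (C)
eps_s = 11.7 * 8.854e-12  # permittivity of silicon (F/m), assumed material
area = 1.0e-7             # junction area (m^2), assumed (0.1 mm^2)

def doping_profile(voltage, capacitance):
    """Apparent doping density versus depletion width from a C-V sweep.

    Uses N(W) = 2 / (q * eps_s * A^2 * |d(1/C^2)/dV|) and W = eps_s * A / C;
    the sign of the slope depends on the bias convention, hence the magnitude.
    """
    inv_c2 = 1.0 / capacitance**2
    slope = np.gradient(inv_c2, voltage)
    n_apparent = 2.0 / (q * eps_s * area**2 * np.abs(slope))
    width = eps_s * area / capacitance
    return width, n_apparent

# Synthetic check: a uniformly doped one-sided junction (1e22 m^-3 = 1e16 cm^-3)
n_true, v_bi = 1.0e22, 0.7
v = np.linspace(-5.0, 0.0, 101)                              # reverse-bias sweep
c = area * np.sqrt(q * eps_s * n_true / (2.0 * (v_bi - v)))  # ideal C(V)
w, n = doping_profile(v, c)
print(n[1:-1].mean())   # recovers ~1e22, the assumed uniform doping
```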
Application
Many researchers use capacitance–voltage (C–V) testing to determine semiconductor parameters, particularly in MOSCAP and MOSFET structures. However, C–V measurements are also widely used to characterize other types of semiconductor devices and technologies, including bipolar junction transistors, JFETs, III–V compound devices, photovoltaic cells, MEMS devices, organic thin-film transistor (TFT) displays, photodiodes, and carbon nanotubes (CNTs).
The fundamental nature of these measurements makes them applicable to a wide range of research tasks and disciplines. For example, researchers use them in university and semiconductor manufacturers' labs to evaluate new processes, materials, devices, and circuits. These measurements are extremely valuable to product and yield enhancement engineers who are responsible for improving processes and device performance. Reliability engineers also use these measurements to qualify the suppliers of the materials th
|
https://en.wikipedia.org/wiki/Hamming%20space
|
In statistics and coding theory, a Hamming space (named after American mathematician Richard Hamming) is usually the set of all binary strings of length N. It is used in the theory of coding signals and transmission.
More generally, a Hamming space can be defined over any alphabet (set) Q as the set of words of a fixed length N with letters from Q. If Q is a finite field, then a Hamming space over Q is an N-dimensional vector space over Q. In the typical binary case, the field is thus GF(2) (also denoted by Z2).
In coding theory, if Q has q elements, then any subset C (usually assumed to have cardinality at least two) of the N-dimensional Hamming space over Q is called a q-ary code of length N; the elements of C are called codewords. In the case where C is a linear subspace of its Hamming space, it is called a linear code. A typical example of a linear code is the Hamming code. Codes defined via a Hamming space necessarily have the same length for every codeword, so they are called block codes when it is necessary to distinguish them from variable-length codes that are defined by unique factorization on a monoid.
The Hamming distance endows a Hamming space with a metric, which is essential in defining basic notions of coding theory such as error detecting and error correcting codes.
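To make the metric concrete, here is a minimal sketch that computes Hamming distances and the minimum distance of a small binary block code; the codewords are an arbitrary illustrative example, not a standard named code. A code of minimum distance d can detect up to d − 1 errors and correct up to ⌊(d − 1)/2⌋ errors.

```python
from itertools import combinations

def hamming_distance(x, y):
    """Number of coordinates in which two equal-length words differ."""
    if len(x) != len(y):
        raise ValueError("words in a Hamming space have a fixed common length")
    return sum(a != b for a, b in zip(x, y))

def minimum_distance(code):
    """Minimum Hamming distance over all pairs of distinct codewords."""
    return min(hamming_distance(c1, c2) for c1, c2 in combinations(code, 2))

# A toy binary code of length 5 (illustrative codewords only)
code = ["00000", "01011", "10101", "11110"]
d = minimum_distance(code)                       # -> 3
print(f"d = {d}: detects {d - 1} errors, corrects {(d - 1) // 2}")
```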
Hamming spaces over non-field alphabets have also been considered, especially over finite rings (most notably over Z4), giving rise to modules instead of vector spaces and to ring-linear codes (identified with submodules) instead of linear codes. The typical metric used in this case is the Lee distance. There exists a Gray isometry between Z2^(2m) (i.e. GF(2^(2m))) with the Hamming distance and Z4^m (also denoted as GR(4,m)) with the Lee distance.
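A minimal sketch of the simplest case of that isometry uses the standard Gray map 0→00, 1→01, 2→11, 3→10 extended componentwise: the Lee distance between Z4 words equals the Hamming distance between their binary images. The exhaustive check over short words below is for illustration only.

```python
from itertools import product

GRAY = {0: (0, 0), 1: (0, 1), 2: (1, 1), 3: (1, 0)}   # Gray map Z4 -> Z2^2

def lee_distance(x, y):
    """Lee distance on words over Z4: sum over coordinates of min(d, 4 - d)."""
    return sum(min((a - b) % 4, (b - a) % 4) for a, b in zip(x, y))

def hamming_distance(x, y):
    """Hamming distance on words of equal length."""
    return sum(a != b for a, b in zip(x, y))

def gray_image(word):
    """Apply the Gray map componentwise, sending Z4^n into Z2^(2n)."""
    return tuple(bit for symbol in word for bit in GRAY[symbol])

# Verify the isometry exhaustively for all pairs of Z4 words of length 2
for u, v in product(product(range(4), repeat=2), repeat=2):
    assert lee_distance(u, v) == hamming_distance(gray_image(u), gray_image(v))
print("Gray map: Lee distance on Z4^2 equals Hamming distance on Z2^4")
```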
References
Coding theory
Linear algebra
|