https://en.wikipedia.org/wiki/Pool%20Registrar
|
In computing, a Pool Registrar (PR) is a component of the reliable server pooling (RSerPool) framework which manages a handlespace. PRs are also denoted as ENRP servers or Name Servers (NS).
The responsibilities of a PR are the following (see the sketch after this list):
Register Pool Elements into a handlespace,
Deregister Pool Elements from a handlespace,
Monitor Pool Elements by keep-alive messages,
Provide handle resolution (i.e. server selection) to Pool Users,
Audit the consistency of a handlespace between multiple PRs,
Synchronize a handlespace with another PR.
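As a rough illustration of this bookkeeping, the Python sketch below models a toy handlespace; the class and method names are invented for this example and are not part of the ASAP/ENRP protocol specifications.

```python
import time

class Handlespace:
    """Toy handlespace: pool handle -> {Pool Element id: last keep-alive timestamp}."""

    def __init__(self, keepalive_timeout=30.0):
        self.pools = {}
        self.keepalive_timeout = keepalive_timeout

    def register(self, pool_handle, pe_id):
        """Register a Pool Element into the handlespace."""
        self.pools.setdefault(pool_handle, {})[pe_id] = time.time()

    def deregister(self, pool_handle, pe_id):
        """Deregister a Pool Element from the handlespace."""
        self.pools.get(pool_handle, {}).pop(pe_id, None)

    def keep_alive(self, pool_handle, pe_id):
        # A real PR sends keep-alive messages and expects acknowledgements;
        # here we simply refresh the PE's timestamp.
        if pe_id in self.pools.get(pool_handle, {}):
            self.pools[pool_handle][pe_id] = time.time()

    def resolve(self, pool_handle):
        """Handle resolution: return the currently alive PEs of a pool
        (a real PR would also apply a pool member selection policy)."""
        now = time.time()
        return [pe for pe, seen in self.pools.get(pool_handle, {}).items()
                if now - seen <= self.keepalive_timeout]

hs = Handlespace()
hs.register("echo-pool", "PE-1")
hs.register("echo-pool", "PE-2")
print(hs.resolve("echo-pool"))   # ['PE-1', 'PE-2']
hs.deregister("echo-pool", "PE-1")
print(hs.resolve("echo-pool"))   # ['PE-2']
```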
Standards Documents
Aggregate Server Access Protocol (ASAP)
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Pool%20User
|
A Pool User (PU) is a client in the Reliable Server Pooling (RSerPool) framework.
In order to use the service provided by a pool, a PU has to perform the following steps (see the sketch after this list):
Ask a Pool Registrar for server selection (the Pool Registrar will return a list of servers, called Pool Elements),
Select one Pool Element, establish a connection and use the actual service,
Repeat the server selection and connection establishment procedure in case of server failures,
Perform an application-specific session failover to a new server to resume an interrupted session,
Report failed servers to the Pool Registrar.
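The client-side loop can be sketched as follows (Python; the registrar and pool-element objects are placeholders standing in for the real ASAP API, which this sketch does not implement).

```python
import random

def use_pool_service(registrar, pool_handle, request, max_attempts=3):
    """Toy Pool User loop: resolve, select, connect, fail over, report failures."""
    for _ in range(max_attempts):
        pool_elements = registrar.resolve(pool_handle)   # 1. handle resolution
        if not pool_elements:
            break
        pe = random.choice(pool_elements)                # 2. select one Pool Element
        try:
            session = pe.connect()                       #    establish a connection
            return session.send(request)                 #    use the actual service
        except ConnectionError:
            registrar.report_failure(pool_handle, pe)    # 5. report the failed server
            # 3./4. loop again: re-select a PE and perform an
            # application-specific failover to resume the session
    raise RuntimeError("no usable Pool Element found")
```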
Standards Documents
Aggregate Server Access Protocol (ASAP)
Endpoint Handlespace Redundancy Protocol (ENRP)
Aggregate Server Access Protocol (ASAP) and Endpoint Handlespace Redundancy Protocol (ENRP) Parameters
Reliable Server Pooling Policies
External links
Thomas Dreibholz's Reliable Server Pooling (RSerPool) Page
IETF RSerPool Working Group
Internet protocols
Internet Standards
|
https://en.wikipedia.org/wiki/Compaq%20Presario
|
Presario is a discontinued line of consumer desktop computers and notebooks originally produced by Compaq. The Presario family of computers was introduced in September 1993.
In the mid-1990s, Compaq began manufacturing PC monitors under the Presario brand. A series of all-in-one units, containing both the PC and the monitor in the same case, were also released.
After Compaq merged with HP in 2002, the Presario line of desktops and laptops was sold concurrently with HP's other products, such as the HP Pavilion. The Presario laptops subsequently replaced the then-discontinued HP OmniBook line of notebooks around that same year.
The Presario brand name continued to be used for low-end home desktops and laptops from 2002 up until the Compaq brand name was discontinued by HP in 2013.
Desktop PC series
Compaq Presario 2100
Compaq Presario 2200
Compaq Presario 2240
Compaq Presario 2254
Compaq Presario 2256
Compaq Presario 2285V
Compaq Presario 2286
Compaq Presario 2288
Compaq Presario 4108
Compaq Presario 4110
Compaq Presario 4160
Compaq Presario 4505
Compaq Presario 4508
Compaq Presario 4528
Compaq Presario 4532
Compaq Presario 4540
Compaq Presario 4600
Compaq Presario 4620
Compaq Presario 4712
Compaq Presario 4800
Compaq Presario 5000 series
Compaq Presario 5000
Compaq Presario 5006US
Compaq Presario 5008US
Compaq Presario 5000A
Compaq Presario 5000T
Compaq Presario 5000Z
Compaq Presario 5010
Compaq Presario 5030
Compaq Presario 5050
Compaq Presario 5080
Compaq Presario 5100 series
Compaq Presario 5150
Compaq Presario 5170
Compaq Presario 5184
Compaq Presario 5185
Compaq Presario 5190
Compaq Presario 5200 series
Compaq Presario 5202
Compaq Presario 5222
Compaq Presario 5240
Compaq Presario 5280
Compaq Presario 5285
Compaq Presario 5360
Compaq Presario 5400
Compaq Presario 5460
Compaq Presario 5477
Compaq Presario 5500
Compaq Presario 5520
Compaq Presario 5599
Compaq Presario 5600 series
Compaq Presario 5660
Compa
|
https://en.wikipedia.org/wiki/Flaming%20Gorge%20Dam
|
Flaming Gorge Dam is a concrete thin-arch dam on the Green River, a major tributary of the Colorado River, in northern Utah in the United States. Flaming Gorge Dam forms the Flaming Gorge Reservoir, which extends into southern Wyoming, submerging four distinct gorges of the Green River. The dam is a major component of the Colorado River Storage Project, which stores and distributes upper Colorado River Basin water.
The dam takes its name from a nearby section of the Green River canyon, named by John Wesley Powell in 1869. It was built by the U.S. Bureau of Reclamation between 1958 and 1964. The dam is high and long, and its reservoir has a capacity of more than , or about twice the annual flow of the upper Green. Operated to provide long-term storage for downstream water-rights commitments, the dam is also a major source of hydroelectricity and is the main flood-control facility for the Green River system.
The dam and reservoir have fragmented the upper Green River, blocking fish migration and significantly impacting many native species. Water released from the dam is generally cold and clear, as compared to the river's naturally warm and silty flow, further changing the local riverine ecology. However, the cold water from Flaming Gorge has transformed about of the Green into a "Blue Ribbon Trout Fishery". The Flaming Gorge Reservoir, largely situated in Flaming Gorge National Recreation Area, is also considered one of Utah and Wyoming's greatest fisheries.
History and location
Contrary to its namesake, Flaming Gorge, the dam actually lies in steep, rapid-strewn Red Canyon in northeastern Utah, close to where the Green River cuts through the Uinta Mountains. The canyon, for which the dam is named, is buried under the reservoir almost upstream. Red Canyon is the narrowest and deepest of the four on the Green in the area (Horseshoe, Kingfisher, Red and Flaming Gorge), which made it the best site for the building of a dam. Flaming Gorge, on the other hand, wa
|
https://en.wikipedia.org/wiki/Lauricella%20hypergeometric%20series
|
In 1893 Giuseppe Lauricella defined and studied four hypergeometric series FA, FB, FC, FD of three variables. They are:
$$F_A^{(3)}(a,b_1,b_2,b_3,c_1,c_2,c_3;x_1,x_2,x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3}\,(b_1)_{i_1}(b_2)_{i_2}(b_3)_{i_3}}{(c_1)_{i_1}(c_2)_{i_2}(c_3)_{i_3}\,i_1!\,i_2!\,i_3!}\,x_1^{i_1}x_2^{i_2}x_3^{i_3}$$
for |x_1| + |x_2| + |x_3| < 1,
$$F_B^{(3)}(a_1,a_2,a_3,b_1,b_2,b_3,c;x_1,x_2,x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a_1)_{i_1}(a_2)_{i_2}(a_3)_{i_3}\,(b_1)_{i_1}(b_2)_{i_2}(b_3)_{i_3}}{(c)_{i_1+i_2+i_3}\,i_1!\,i_2!\,i_3!}\,x_1^{i_1}x_2^{i_2}x_3^{i_3}$$
for |x_1| < 1, |x_2| < 1, |x_3| < 1,
$$F_C^{(3)}(a,b,c_1,c_2,c_3;x_1,x_2,x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3}\,(b)_{i_1+i_2+i_3}}{(c_1)_{i_1}(c_2)_{i_2}(c_3)_{i_3}\,i_1!\,i_2!\,i_3!}\,x_1^{i_1}x_2^{i_2}x_3^{i_3}$$
for |x_1|^{1/2} + |x_2|^{1/2} + |x_3|^{1/2} < 1, and
$$F_D^{(3)}(a,b_1,b_2,b_3,c;x_1,x_2,x_3) = \sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3}\,(b_1)_{i_1}(b_2)_{i_2}(b_3)_{i_3}}{(c)_{i_1+i_2+i_3}\,i_1!\,i_2!\,i_3!}\,x_1^{i_1}x_2^{i_2}x_3^{i_3}$$
for |x_1| < 1, |x_2| < 1, |x_3| < 1. Here the Pochhammer symbol (q)_i indicates the i-th rising factorial of q, i.e.
$$(q)_i = q\,(q+1)\cdots(q+i-1) = \frac{\Gamma(q+i)}{\Gamma(q)},$$
where the second equality is true for all complex q except the non-positive integers.
These functions can be extended to other values of the variables x1, x2, x3 by means of analytic continuation.
Lauricella also indicated the existence of ten other hypergeometric functions of three variables. These were named FE, FF, ..., FT and studied by Shanti Saran in 1954. There are therefore a total of 14 Lauricella–Saran hypergeometric functions.
Generalization to n variables
These functions can be straightforwardly extended to n variables. One writes for example
$$F_A^{(n)}(a, b_1,\ldots,b_n, c_1,\ldots,c_n; x_1,\ldots,x_n) = \sum_{i_1,\ldots,i_n=0}^{\infty} \frac{(a)_{i_1+\cdots+i_n}\,(b_1)_{i_1}\cdots(b_n)_{i_n}}{(c_1)_{i_1}\cdots(c_n)_{i_n}\,i_1!\cdots i_n!}\,x_1^{i_1}\cdots x_n^{i_n},$$
where |x_1| + ... + |x_n| < 1. These generalized series too are sometimes referred to as Lauricella functions.
When n = 2, the Lauricella functions correspond to the Appell hypergeometric series of two variables:
$$F_A^{(2)} \equiv F_2, \qquad F_B^{(2)} \equiv F_3, \qquad F_C^{(2)} \equiv F_4, \qquad F_D^{(2)} \equiv F_1.$$
When n = 1, all four functions reduce to the Gauss hypergeometric function:
$$F_A^{(1)}(a,b,c;x) \equiv F_B^{(1)}(a,b,c;x) \equiv F_C^{(1)}(a,b,c;x) \equiv F_D^{(1)}(a,b,c;x) \equiv {}_2F_1(a,b;c;x).$$
Integral representation of FD
In analogy with Appell's function F1, Lauricella's FD can be written as a one-dimensional Euler-type integral for any number n of variables:
$$F_D^{(n)}(a, b_1,\ldots,b_n, c; x_1,\ldots,x_n) = \frac{\Gamma(c)}{\Gamma(a)\,\Gamma(c-a)} \int_0^1 t^{a-1}(1-t)^{c-a-1}\,(1-x_1 t)^{-b_1}\cdots(1-x_n t)^{-b_n}\,dt, \qquad \operatorname{Re} c > \operatorname{Re} a > 0.$$
This representation can be easily verified by means of Taylor expansion of the integrand, followed by termwise integration. The representation implies that the incomplete elliptic integral Π is a special case of Lauricella's function FD with three variables:
$$\Pi(n,\phi,k) = \int_0^{\phi} \frac{d\theta}{(1 - n\sin^2\theta)\sqrt{1 - k^2\sin^2\theta}} = \sin(\phi)\; F_D^{(3)}\!\left(\tfrac{1}{2},\, 1, \tfrac{1}{2}, \tfrac{1}{2},\, \tfrac{3}{2};\; n\sin^2\phi,\, \sin^2\phi,\, k^2\sin^2\phi\right).$$
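As a numerical sanity check of the series and integral representations given above, the following Python sketch compares a truncated triple sum for F_D^(3) with the Euler-type integral evaluated by a composite Simpson rule; the parameter values are arbitrary illustrative choices, picked so the integrand has no endpoint singularity.

```python
import math

def pochhammer(q, i):
    """Rising factorial (q)_i = q (q+1) ... (q+i-1)."""
    result = 1.0
    for n in range(i):
        result *= q + n
    return result

def lauricella_fd_series(a, b, c, x, terms=30):
    """Truncated triple series for F_D^(3); b and x are 3-tuples."""
    total = 0.0
    for i1 in range(terms):
        for i2 in range(terms):
            for i3 in range(terms):
                s = i1 + i2 + i3
                num = (pochhammer(a, s) * pochhammer(b[0], i1)
                       * pochhammer(b[1], i2) * pochhammer(b[2], i3))
                den = (pochhammer(c, s) * math.factorial(i1)
                       * math.factorial(i2) * math.factorial(i3))
                total += num / den * x[0]**i1 * x[1]**i2 * x[2]**i3
    return total

def lauricella_fd_integral(a, b, c, x, n=2000):
    """Euler-type integral for F_D^(3), composite Simpson rule (needs c > a > 0)."""
    coeff = math.gamma(c) / (math.gamma(a) * math.gamma(c - a))
    def f(t):
        val = t**(a - 1) * (1 - t)**(c - a - 1)
        for bi, xi in zip(b, x):
            val *= (1 - xi * t)**(-bi)
        return val
    h = 1.0 / n
    total = f(0.0) + f(1.0)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * f(k * h)
    return coeff * total * h / 3

a, b, c = 1.0, (0.5, 0.5, 0.5), 3.0   # a >= 1 and c - a >= 1: smooth integrand
x = (0.1, 0.2, 0.3)
print(lauricella_fd_series(a, b, c, x))    # both prints agree to many digits
print(lauricella_fd_integral(a, b, c, x))
```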
Finite-sum solutions of FD
Case 1 : , a positive integer
One can relate FD to the Carlson R function via
with the iterative sum
and
where it can be exploited that the Carlson R function with has an exact representation (see for more information).
The vectors are defined as
where the length of and is , while the vectors and have length .
Case 2: , a positive i
|
https://en.wikipedia.org/wiki/Algebraic%20specification
|
Algebraic specification is a software engineering technique for formally specifying system behavior. It was a very active subject of computer science research around 1980.
Overview
Algebraic specification seeks to systematically develop more efficient programs by:
formally defining types of data, and mathematical operations on those data types
abstracting implementation details, such as the size of representations (in memory) and the efficiency of obtaining the outcome of computations
formalizing the computations and operations on data types
allowing for automation by formally restricting operations to this limited set of behaviors and data types.
An algebraic specification achieves these goals by defining one or more data types, and specifying a collection of functions that operate on those data types. These functions can be divided into two classes:
Constructor functions: Functions that create or initialize the data elements, or construct complex elements from simpler ones. The set of available constructor functions is implied by the specification's signature. Additionally, a specification can contain equations defining equivalences between the objects constructed by these functions. Whether the underlying representation is identical for different but equivalent constructions is implementation-dependent.
Additional functions: Functions that operate on the data types, and are defined in terms of the constructor functions.
Examples
Consider a formal algebraic specification for the boolean data type.
One possible algebraic specification may provide two constructor functions for the data-element: a true constructor and a false constructor. Thus, a boolean data element could be declared, constructed, and initialized to a value. In this scenario, all other connective elements, such as XOR and AND, would be additional functions. Thus, a data element could be instantiated with either "true" or "false" value, and additional functions could be used to perform an
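A minimal sketch of such a specification, rendered in Python rather than a formal specification language: the two constructors are the only way to build values, and the additional functions are written purely in terms of them, mirroring the defining equations.

```python
# Constructor functions: the only ways to build a value of the boolean type.
TRUE = ("true",)
FALSE = ("false",)

# Additional functions, defined only in terms of the constructors,
# mirroring equations a specification would state, e.g.
#   and(true, x) = x,   and(false, x) = false
def not_(x):
    return FALSE if x == TRUE else TRUE

def and_(x, y):
    return y if x == TRUE else FALSE

def or_(x, y):
    return TRUE if x == TRUE else y

def xor_(x, y):
    # xor(x, y) = and(or(x, y), not(and(x, y)))
    return and_(or_(x, y), not_(and_(x, y)))

assert and_(TRUE, FALSE) == FALSE
assert xor_(TRUE, FALSE) == TRUE
assert xor_(TRUE, TRUE) == FALSE
```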
|
https://en.wikipedia.org/wiki/Substrate%20coupling
|
In an integrated circuit, a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling.
The push for reduced cost, more compact circuit boards, and added customer features has provided incentives for the inclusion of analog functions on primarily digital MOS integrated circuits (ICs), forming mixed-signal ICs. In these systems, the speed of digital circuits is constantly increasing, chips are becoming more densely packed, interconnect layers are added, and analog resolution is increased. In addition, the recent increase in wireless applications and their growing market is introducing a new set of aggressive design goals for realizing mixed-signal systems. Here, the designer integrates radio frequency (RF) analog and baseband digital circuitry on a single chip. The goal is to make single-chip radio frequency integrated circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip.
One of the advantages of this integration is low power dissipation for portability, due to a reduction in the number of package pins and associated bond wire capacitance. Another reason that an integrated solution offers lower power consumption is that routing high-frequency signals off-chip often requires a 50 Ω impedance match, which can result in higher power dissipation. Other advantages include improved high-frequency performance due to reduced package interconnect parasitics, higher system reliability, smaller package count, and higher integration of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is now a reality.
The design of such systems, however, is a complicated task. There are two main challenges in realizing mixed-signal ICs. The first challenging task, specific to RFICs, is to fabricate good on-chip passive elements such as high-Q inductors. The second challenging task, applicable to any mixed-signal IC and the subject of this chap
|
https://en.wikipedia.org/wiki/Half-power%20point
|
The half-power point is the point at which the output power has dropped to half of its peak value; that is, at a level of approximately -3 dB.
In filters, optical filters, and electronic amplifiers, the half-power point is also known as half-power bandwidth and is a commonly used definition for the cutoff frequency.
In the characterization of antennas the half-power point is also known as half-power beamwidth and relates to measurement position as an angle and describes directionality.
Amplifiers and filters
This occurs when the output voltage has dropped to 1/√2 (~0.707) of the maximum output voltage and the power has dropped by half. A bandpass amplifier will have two half-power points, while a low-pass amplifier or a high-pass amplifier will have only one.
The bandwidth of a filter or amplifier is usually defined as the difference between the lower and upper half-power points. This is, therefore, also known as the 3 dB bandwidth. There is no lower half-power point for a low-pass amplifier, so the bandwidth is measured relative to DC, i.e., 0 Hz. There is no upper half-power point for an ideal high-pass amplifier; its bandwidth is theoretically infinite. In practice, the stopband and transition band are used to characterize a high-pass.
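A quick numerical illustration (Python; a first-order RC low-pass with illustrative component values): at the cutoff frequency the voltage gain is 1/√2, the power gain is 1/2, and the level is about −3 dB.

```python
import math

R = 1_000.0        # ohms (illustrative value)
C = 159.2e-9       # farads (illustrative value)

f_c = 1 / (2 * math.pi * R * C)           # cutoff (half-power) frequency, ~1 kHz

def gain(f):
    """Voltage gain magnitude of a first-order RC low-pass filter."""
    return 1 / math.sqrt(1 + (f / f_c) ** 2)

print(f_c)                                # ~1000 Hz
print(gain(f_c))                          # ~0.707 = 1/sqrt(2)
print(20 * math.log10(gain(f_c)))         # ~ -3.01 dB
print(gain(f_c) ** 2)                     # ~0.5: output power is halved
```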
Antenna beams
In antennas, the expression half-power point does not relate to frequency: instead, it describes the extent in space of an antenna beam. The half-power point is the angle off boresight at which the antenna gain first falls to half power (approximately -3 dB) from the peak. The angle between the points is known as the half-power beam width (or simply beam width).
Beamwidth is usually but not always expressed in degrees and for the horizontal plane.
It refers to the main lobe, when referenced to the peak effective radiated power of the main lobe.
Note that other definitions of beam width exist, such as the distance between nulls and distance between first side lobes.
Calculation
The beamwidth can be computed fo
|
https://en.wikipedia.org/wiki/Anti-Life%20Equation
|
The Anti-Life Equation is a fictional concept appearing in American comic books published by DC Comics. In Jack Kirby's Fourth World setting, the Anti-Life Equation is a formula for total control over the minds of sentient beings that is sought by Darkseid, who, for this reason, sends his forces to Earth, as he believes part of the equation exists in the subconsciousness of humanity. Various comics have defined the equation in different ways, but a common interpretation is that the equation may be seen as a mathematical proof of the futility of living, or of life as incarceration of spirit, per predominant religious and modern cultural suppositions.
History
Jack Kirby's original comics established the Anti-Life Equation as giving the being who learns it power to dominate the will of all sentient and sapient races. It is called the Anti-Life Equation because "if someone possesses absolute control over you — you're not really alive". Most stories featuring the Equation use this concept. The Forever People's Mother Box found the Anti-Life Equation in Sonny Sumo, but Darkseid, unaware of this, stranded him in ancient Japan. A man known as Billion-Dollar Bates had control over the Equation's power even without the Mother Box's aid, but was accidentally killed by one of his own guards.
When Metron and Swamp Thing attempt to breach the Source, which drives Swamp Thing temporarily mad, Darkseid discovers that part of the formula is love. Upon being told by the Dominators of their planned invasion of Earth, Darkseid promises not to interfere on the condition that the planet is not destroyed so his quest for the equation is not thwarted.
It is later revealed in Martian Manhunter (vol. 2) #33 that Darkseid first became aware of the equation approximately 300 years ago when he made contact with the people of Mars. Upon learning of the Martian philosophy that free will and spiritual purpose could be defined by a Life Equation, Darkseid postulated that there must exist a negat
|
https://en.wikipedia.org/wiki/Sony%20HDR-HC1
|
The Sony HDR-HC1, introduced in mid-2005 (MSRP US$1999), is the first consumer HDV camcorder to support 1080i.
The CMOS sensor has a resolution of 1920×1440 for digital still pictures and captures video at 1440×1080 interlaced, which is the resolution defined for HDV 1080i. The camera may also use the extra pixels for digital image stabilization.
The camcorder can also convert the captured HDV data to DV data for editing the video using non-linear editing systems which do not support HDV or for creating edits which are viewable on non-HDTV television sets.
The HVR-A1 is the prosumer version of the HDR-HC1. It has more manual controls and XLR ports.
Unique features
Expanded focus
Expanded focus lets the user magnify the image temporarily to obtain better manual focus. Expanded focus works in pause mode only; it is not possible to magnify the frame during recording.
A similar feature, named Focus Assist, appeared on the Canon HV20, which was released two years after the HDR-HC1. Focus Assist on Canon camcorders also works only when recording is paused.
Spot meter and spot focus
Spot meter and Spot focus are possible thanks to a touch-sensitive LCD screen, employed on most modern Sony consumer camcorders.
The user can touch the screen to specify a specific region of the image; the camcorder automatically adjusts focus or exposure according to distance to the object and to illumination of the selected spot.
Depending on the scene, changing focus with Spot Focus can cause focus "breathing" or "hunting", when the subject goes in and out of focus several times before the image stabilizes.
Shot transition
Shot transition allows for a smooth automatic scene transition. In particular, it makes rack focus easy.
Two sets of focus and zoom can be preset and stored in "Store-A" and "Store-B" memory slots. The settings can then be gradually applied from one to another within 4 seconds. The transition time is not adjustable.
Presently, the HDR-HC1 is the only consumer c
|
https://en.wikipedia.org/wiki/Laser%20capture%20microdissection
|
Laser capture microdissection (LCM), also called microdissection, laser microdissection (LMD), or laser-assisted microdissection (LMD or LAM), is a method for isolating specific cells of interest from microscopic regions of tissue/cells/organisms (dissection on a microscopic scale with the help of a laser).
Principle
Laser-capture microdissection (LCM) is a method to procure subpopulations of tissue cells under direct microscopic visualization. LCM technology can harvest the cells of interest directly or can isolate specific cells by cutting away unwanted cells to give histologically pure enriched cell populations. A variety of downstream applications exist: DNA genotyping and loss of heterozygosity (LOH) analysis, RNA transcript profiling, cDNA library generation, proteomics discovery and signal-pathway profiling. The total time required to carry out this protocol is typically 1–1.5 h.
Extraction
A laser is coupled into a microscope and focused onto the tissue on the slide. By moving the laser with optics or the stage, the focus follows a trajectory which is predefined by the user. This trajectory, also called an element, is then cut out and separated from the adjacent tissue. After the cutting process, an extraction step follows if one is desired. More recent technologies utilize non-contact microdissection.
There are several ways to extract tissue from a microscope slide with a histopathology sample on it. Press a sticky surface onto the sample and tear out. This extracts the desired region, but can also remove particles or unwanted tissue on the surface, because the surface is not selective. Melt a plastic membrane onto the sample and tear out. The heat is introduced, for example, by a red or infrared (IR) laser onto a membrane stained with an absorbing dye. As this adheres the desired sample onto the membrane, as with any membrane that is put close to the histopathology sample surface, there might be some debris extracted. Another
|
https://en.wikipedia.org/wiki/LaSalle%27s%20invariance%20principle
|
LaSalle's invariance principle (also known as the invariance principle, Barbashin-Krasovskii-LaSalle principle, or Krasovskii-LaSalle principle) is a criterion for the asymptotic stability of an autonomous (possibly nonlinear) dynamical system.
Global version
Suppose a system is represented as
$$\dot{\mathbf{x}} = f(\mathbf{x}),$$
where $\mathbf{x}$ is the vector of variables, with
$$f(\mathbf{0}) = \mathbf{0}.$$
If a $C^1$ (see Smoothness) function $V(\mathbf{x})$ can be found such that
$$\dot{V}(\mathbf{x}) \le 0 \quad \text{for all } \mathbf{x}$$
(negative semidefinite), then the set of accumulation points of any trajectory is contained in $\mathcal{I}$, where $\mathcal{I}$ is the union of complete trajectories contained entirely in the set $\{\mathbf{x} : \dot{V}(\mathbf{x}) = 0\}$.
If we additionally have that the function $V$ is positive definite, i.e.
$$V(\mathbf{x}) > 0 \text{ for all } \mathbf{x} \ne \mathbf{0}, \qquad V(\mathbf{0}) = 0,$$
and if $\mathcal{I}$ contains no trajectory of the system except the trivial trajectory $\mathbf{x}(t) = \mathbf{0}$ for $t \ge 0$, then the origin is asymptotically stable.
Furthermore, if $V$ is radially unbounded, i.e.
$$V(\mathbf{x}) \to \infty \quad \text{as } \|\mathbf{x}\| \to \infty,$$
then the origin is globally asymptotically stable.
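As an illustration of how the principle is applied, the Python sketch below simulates a damped oscillator (an example chosen for this sketch, not taken from the article): the candidate function V = (x² + v²)/2 has V̇ = −v² ≤ 0, which is only negative semidefinite, yet the set where V̇ = 0 contains no complete trajectory other than the origin, so trajectories converge to the origin.

```python
# Damped oscillator: x' = v, v' = -x - v.
# Candidate function V(x, v) = (x^2 + v^2)/2, so Vdot = -v^2 <= 0 (only semidefinite).
def f(x, v):
    return v, -x - v

def simulate(x, v, dt=1e-3, steps=20_000):
    """Explicit Euler integration; accurate enough for a qualitative sketch."""
    traj = []
    for _ in range(steps):
        dx, dv = f(x, v)
        x, v = x + dt * dx, v + dt * dv
        traj.append((x, v))
    return traj

V = lambda x, v: 0.5 * (x * x + v * v)

traj = simulate(2.0, -1.0)
print(V(*traj[0]), V(*traj[-1]))   # V has dropped by orders of magnitude
print(traj[-1])                    # final state is very close to the origin (0, 0)
```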
Local version
If
$$V(\mathbf{x}) > 0 \text{ when } \mathbf{x} \ne \mathbf{0}, \qquad \dot{V}(\mathbf{x}) \le 0$$
hold only for $\mathbf{x}$ in some neighborhood $D$ of the origin, and the set
$$\{\mathbf{x} \in D : \dot{V}(\mathbf{x}) = 0\}$$
does not contain any trajectories of the system besides the trajectory $\mathbf{x}(t) = \mathbf{0}$, $t \ge 0$, then the local version of the invariance principle states that the origin is locally asymptotically stable.
Relation to Lyapunov theory
If $\dot{V}(\mathbf{x})$ is negative definite, then the global asymptotic stability of the origin is a consequence of Lyapunov's second theorem. The invariance principle gives a criterion for asymptotic stability in the case when $\dot{V}(\mathbf{x})$ is only negative semidefinite.
Examples
Simple example
Example taken from.
Consider the vector field in the plane. The function satisfies , and is radially unbounded, showing that the origin is globally asymptotically stable.
Pendulum with friction
This section will apply the invariance principle to establish the local asymptotic stability of a simple system, the pendulum with friction. This system can be modeled with the differential equation
$$m\,l\,\ddot{\theta} = -m\,g\,\sin\theta - k\,l\,\dot{\theta},$$
where $\theta$ is the angle the pendulum makes with the vertical normal, $m$ is the mass of the pendulum, is the
|
https://en.wikipedia.org/wiki/IBM%20TPNS
|
Teleprocessing Network Simulator (TPNS) is an IBM licensed program, first released in 1976 as a test automation tool to simulate the end-user activity of network terminal(s) to a mainframe computer system, for functional testing, regression testing, system testing, capacity management, benchmarking and stress testing.
In 2002, IBM re-packaged TPNS and released
Workload Simulator for z/OS and S/390 (WSim) as a successor product.
History
Teleprocessing Network Simulator (TPNS) Version 1 Release 1 (V1R1) was introduced as Program Product 5740-XT4 in February 1976, followed by four additional releases up to V1R5 (1981).
In August 1981, IBM announced TPNS Version 2 Release 1 (V2R1) as Program Product 5662-262, followed by three additional releases up to V2R4 (1987).
In January 1989, IBM announced TPNS Version 3 Release 1 (V3R1) as Program Product 5688-121, followed by four additional releases up to (1996).
In December 1997, IBM announced a Service Level 9711 Functional and Service Enhancements release.
In September 1998, IBM announced the TPNS Test Manager (for ) as a usability enhancement to automate the test process further in order to improve productivity through a logical flow, and to streamline TPNS-based testing of IBM 3270 applications or CPI-C transaction programs.
In December 2001, IBM announced a Service Level 0110 Functional and Service Enhancements release.
In August 2002, IBM announced Workload Simulator for z/OS and S/390 (WSim) V1.1 as Program Number 5655-I39, a re-packaged successor product to TPNS, alongside the WSim Test Manager V1.1, a re-packaged successor to the TPNS Test Manager.
In November 2012, IBM announced a maintenance update of Workload Simulator for z/OS and S/390 (WSim) V1.1, to simplify the installation of updates to the product.
In December 2015, IBM announced enhancements to Workload Simulator for z/OS and S/390 (WSim) V1.1, providing new utilities for TCP/IP data capture and script generation.
Features
Simulation support
Telep
|
https://en.wikipedia.org/wiki/Pakistan%20Atomic%20Energy%20Commission
|
Pakistan Atomic Energy Commission (PAEC) (Urdu: ) is a federally funded independent governmental agency, concerned with research and development of nuclear power, promotion of nuclear science, energy conservation and the peaceful usage of nuclear technology.
Since its establishment in 1956, the PAEC has overseen the extensive development of nuclear infrastructure to support the economic uplift of Pakistan by founding institutions that focus on the development of food irradiation and of nuclear medicine radiation therapy for cancer treatment. The PAEC organizes conferences and directs research at the country's leading universities.
Since the 1960s, the PAEC has also been a scientific research partner and sponsor of the European Organization for Nuclear Research (CERN), where Pakistani scientists have contributed to developing particle accelerators and research on high-energy physics. PAEC scientists regularly visit CERN while taking part in projects led by CERN.
Until 2001, the PAEC was the civilian federal oversight agency that manifested the control of atomic radiation, development of nuclear weapons, and their testing. These functions were eventually taken over by the Nuclear Regulatory Authority (NRA), and the National Command Authority under the Prime Minister of Pakistan.
Overview
Early history
Following the partition of the British Indian Empire by the United Kingdom in 1947, Pakistan emerged as a Muslim-dominated state. The turbulent nature of its emergence critically influenced the scientific development of Pakistan.
The establishment of the Council of Scientific and Industrial Research (PCSIR) in 1951 began Pakistan's research on physical sciences. In 1953, U.S. President Dwight Eisenhower announced the Atoms for Peace program, of which Pakistan became its earliest partner. Research at PAEC initially followed a strict non-weapon policy issued by then-Foreign Minister Sir Zafarullah Khan. In 1955, the Government of Pakistan established a committee of
|
https://en.wikipedia.org/wiki/FTPmail
|
FTPmail is the term used for the practice of using an FTPmail server to gain access to various files over the Internet. An FTPmail server is a proxy server which (asynchronously) connects to remote FTP servers in response to email requests, returning the downloaded files as an email attachment. This service might be useful to users who cannot themselves initiate an FTP session—for example, because they are constrained by restrictions on their Internet access.
History
During the early years of the Internet, Internet access was limited to a few locations. High speed links were not available for most users, and online connectivity was rare and expensive. Download of large files (then considered to be over a few megabytes) was nearly impossible due to bandwidth limitations, as well as frequent errors and lost connections. The original FTP specification did not allow for a session to be resumed, and the transmission had to restart from the beginning.
FTPmail gateways allowed people to retrieve such files. The file was broken into smaller pieces and encoded using a popular format such as uuencode. The receiver of the email messages would later reassemble the original file and decode it. As the file was broken into smaller pieces, the chances of losing the transmission were much smaller. In case of loss of connectivity, the transmission could be restarted from that part. The process was slower but much more reliable. It also allowed people who accessed the Internet only through email using dial-up lines to download files that were located remotely. Unlike FTP, files could be transferred through FTPmail even if the user did not have an online Internet connection (for example, using BBSes or other specialized e-mail software).
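The gateway-side split-and-encode step can be sketched as follows (Python; base64 is used here as a stand-in for the uuencode format that historical gateways typically applied, and the chunk size is an arbitrary illustrative value).

```python
import base64

def split_for_mail(path, chunk_size=64_000):
    """Encode a file as text and split it into mail-sized parts."""
    with open(path, "rb") as fh:
        encoded = base64.encodebytes(fh.read()).decode("ascii")
    chunks = [encoded[i:i + chunk_size] for i in range(0, len(encoded), chunk_size)]
    return [f"Part {n}/{len(chunks)}\n{body}" for n, body in enumerate(chunks, start=1)]

def reassemble(parts):
    """Receiver side: order the parts, strip the headers, concatenate, decode."""
    ordered = sorted(parts, key=lambda p: int(p.split()[1].split("/")[0]))
    body = "".join(p.split("\n", 1)[1] for p in ordered)
    return base64.decodebytes(body.encode("ascii"))
```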
Servers located at universities, such as and , were popular. Some of these servers hosted software archives containing early versions of Linux and other GNU software. Access to these repositories via FTPmail was instrumental in allowing people fr
|
https://en.wikipedia.org/wiki/Level%20sensor
|
Level sensors detect the level of liquids and other fluids and fluidized solids, including slurries, granular materials, and powders that exhibit an upper free surface. Substances that flow become essentially horizontal in their containers (or other physical boundaries) because of gravity whereas most bulk solids pile at an angle of repose to a peak. The substance to be measured can be inside a container or can be in its natural form (e.g., a river or a lake). The level measurement can be either continuous or point values. Continuous level sensors measure level within a specified range and determine the exact amount of substance in a certain place, while point-level sensors only indicate whether the substance is above or below the sensing point. Generally the latter detect levels that are excessively high or low.
There are many physical and application variables that affect the selection of the optimal level monitoring method for industrial and commercial processes. The selection criteria include the physical: phase (liquid, solid or slurry), temperature, pressure or vacuum, chemistry, dielectric constant of medium, density (specific gravity) of medium, agitation (action), acoustical or electrical noise, vibration, mechanical shock, tank or bin size and shape. Also important are the application constraints: price, accuracy, appearance, response rate, ease of calibration or programming, physical size and mounting of the instrument, monitoring or control of continuous or discrete (point) levels.
In short, level sensors are among the more important sensor types and play a significant role in a variety of consumer and industrial applications. As with other types of sensors, level sensors are available, or can be designed, using a variety of sensing principles. Selecting a sensor type appropriate to the application requirements is therefore important.
Point and continuous level detection for solids
A variety of sensors are available for point level detection of solids
|
https://en.wikipedia.org/wiki/Perfect%20power
|
In mathematics, a perfect power is a natural number that is a product of equal natural factors, or, in other words, an integer that can be expressed as a square or a higher integer power of another integer greater than one. More formally, n is a perfect power if there exist natural numbers m > 1, and k > 1 such that m^k = n. In this case, n may be called a perfect kth power. If k = 2 or k = 3, then n is called a perfect square or perfect cube, respectively. Sometimes 0 and 1 are also considered perfect powers (0^k = 0 for any k > 0, 1^k = 1 for any k).
Examples and sums
A sequence of perfect powers can be generated by iterating through the possible values for m and k. The first few ascending perfect powers in numerical order (showing duplicate powers) are:
The sum of the reciprocals of the perfect powers (including duplicates such as 3^4 and 9^2, both of which equal 81) is 1:
$$\sum_{m=2}^{\infty}\sum_{k=2}^{\infty}\frac{1}{m^k} = 1,$$
which can be proved as follows:
$$\sum_{m=2}^{\infty}\sum_{k=2}^{\infty}\frac{1}{m^k} = \sum_{m=2}^{\infty}\frac{1}{m^2}\cdot\frac{1}{1-\frac{1}{m}} = \sum_{m=2}^{\infty}\frac{1}{m(m-1)} = \sum_{m=2}^{\infty}\left(\frac{1}{m-1}-\frac{1}{m}\right) = 1.$$
The first perfect powers without duplicates are:
(sometimes 0 and 1), 4, 8, 9, 16, 25, 27, 32, 36, 49, 64, 81, 100, 121, 125, 128, 144, 169, 196, 216, 225, 243, 256, 289, 324, 343, 361, 400, 441, 484, 512, 529, 576, 625, 676, 729, 784, 841, 900, 961, 1000, 1024, ...
The sum of the reciprocals of the perfect powers p without duplicates is:
$$\sum_{p}\frac{1}{p} = \sum_{k=2}^{\infty}\mu(k)\left(1-\zeta(k)\right),$$
where μ(k) is the Möbius function and ζ(k) is the Riemann zeta function.
According to Euler, Goldbach showed (in a now-lost letter) that the sum of 1/(p − 1) over the set of perfect powers p, excluding 1 and excluding duplicates, is 1:
$$\sum_{p}\frac{1}{p-1} = \frac{1}{3}+\frac{1}{7}+\frac{1}{8}+\frac{1}{15}+\frac{1}{24}+\frac{1}{26}+\frac{1}{31}+\cdots = 1.$$
This is sometimes known as the Goldbach–Euler theorem.
Detecting perfect powers
Detecting whether or not a given natural number n is a perfect power may be accomplished in many different ways, with varying levels of complexity. One of the simplest such methods is to consider all possible values for k across each of the divisors of n, up to $k \le \log_2 n$. So if the divisors of $n$ are $n_1, n_2, \dots, n_j$ then one of the values $n_1^2, n_2^2, \dots, n_j^2, n_1^3, n_2^3, \dots$ must be equal to n if n is indeed a perfect power.
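A direct rendering of this divisor-based check (Python; a plain sketch, not an optimized algorithm):

```python
import math

def is_perfect_power(n):
    """Return True if n = m**k for some integers m > 1, k > 1."""
    if n < 4:
        return False
    divisors = [d for d in range(2, n) if n % d == 0]
    max_k = int(math.log2(n))
    for d in divisors:
        for k in range(2, max_k + 1):
            if d ** k == n:
                return True
            if d ** k > n:       # further powers of d only get larger
                break
    return False

print([x for x in range(2, 40) if is_perfect_power(x)])
# [4, 8, 9, 16, 25, 27, 32, 36]
```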
This method can immediately be simplified by instead
|
https://en.wikipedia.org/wiki/Fair%20queuing
|
Fair queuing is a family of scheduling algorithms used in some process and network schedulers. The algorithm is designed to achieve fairness when a limited resource is shared, for example to prevent flows with large packets or processes that generate small jobs from consuming more throughput or CPU time than other flows or processes.
Fair queuing is implemented in some advanced network switches and routers.
History
The term fair queuing was coined by John Nagle in 1985 while proposing round-robin scheduling in the gateway between a local area network and the internet to reduce network disruption from badly-behaving hosts.
A byte-weighted version was proposed by Alan Demers, Srinivasan Keshav and Scott Shenker in 1989, and was based on the earlier Nagle fair queuing algorithm. The byte-weighted fair queuing algorithm aims to mimic bit-by-bit multiplexing by computing a theoretical departure date for each packet.
The concept has been further developed into weighted fair queuing, and the more general concept of traffic shaping, where queuing priorities are dynamically controlled to achieve desired flow quality of service goals or accelerate some flows.
Principle
Fair queuing uses one queue per packet flow and services them in rotation, such that each flow can "obtain an equal fraction of the resources".
The advantage over conventional first in first out (FIFO) or priority queuing is that a high-data-rate flow, consisting of large packets or many data packets, cannot take more than its fair share of the link capacity.
Fair queuing is used in routers, switches, and statistical multiplexers that forward packets from a buffer. The buffer works as a queuing system, where the data packets are stored temporarily until they are transmitted.
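The basic idea of one queue per flow served in rotation can be sketched as follows (Python; a toy packet-by-packet round-robin, not a byte-weighted or production scheduler).

```python
from collections import deque

class FairQueue:
    """One FIFO queue per flow, served round-robin (packet by packet)."""

    def __init__(self):
        self.queues = {}          # flow id -> deque of packets

    def enqueue(self, flow_id, packet):
        self.queues.setdefault(flow_id, deque()).append(packet)

    def dequeue(self):
        """Serve the next non-empty flow in rotation; None if all queues are empty."""
        for flow_id in list(self.queues):
            queue = self.queues[flow_id]
            if queue:
                packet = queue.popleft()
                # move this flow to the back of the service order
                self.queues[flow_id] = self.queues.pop(flow_id)
                return flow_id, packet
        return None

fq = FairQueue()
fq.enqueue("flow-A", "A1"); fq.enqueue("flow-A", "A2")
fq.enqueue("flow-B", "B1")
print([fq.dequeue() for _ in range(3)])
# [('flow-A', 'A1'), ('flow-B', 'B1'), ('flow-A', 'A2')]
```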
With a link data-rate of R, at any given time the N active data flows (the ones with non-empty queues) are serviced each with an average data rate of R/N. In a short time interval the data rate may fluctuate around this value si
|
https://en.wikipedia.org/wiki/Land%20use%2C%20land-use%20change%2C%20and%20forestry
|
Land use, land-use change, and forestry (LULUCF), also referred to as Forestry and other land use (FOLU) or Agriculture, Forestry and Other Land Use (AFOLU), is defined as a "greenhouse gas inventory sector that covers emissions and removals of greenhouse gases resulting from direct human-induced land use such as settlements and commercial uses, land-use change, and forestry activities."
LULUCF has impacts on the global carbon cycle and as such, these activities can add or remove carbon dioxide (or, more generally, carbon) from the atmosphere, influencing climate. LULUCF has been the subject of two major reports by the Intergovernmental Panel on Climate Change (IPCC), but is difficult to measure. Additionally, land use is of critical importance for biodiversity.
A related term in the context of climate change mitigation is AFOLU which stands for "agriculture, forestry and other land use".
Development
The United Nations Framework Convention on Climate Change (UNFCCC) Article 4(1)(a) requires all Parties to "develop, periodically update, publish and make available to the Conference of the Parties" their "national inventories of anthropogenic emissions by sources" and "removals by sinks of all greenhouse gases not controlled by the Montreal Protocol."
Under the UNFCCC reporting guidelines, human-induced greenhouse emissions must be reported in six sectors: energy (including stationary energy and transport); industrial processes; solvent and other product use; agriculture; waste; and land use, land use change and forestry (LULUCF).
The rules governing accounting and reporting of greenhouse gas emissions from LULUCF under the Kyoto Protocol are contained in several decisions of the Conference of Parties under the UNFCCC.
LULUCF has been the subject of two major reports by the Intergovernmental Panel on Climate Change (IPCC).
The Kyoto Protocol article 3.3 thus requires mandatory LULUCF accounting for afforestation (no forest for last 50 years), reforestation (n
|
https://en.wikipedia.org/wiki/White%20noise%20machine
|
A white noise machine is a device that produces a noise that calms the listener, which in many cases sounds like a rushing waterfall or wind blowing through trees, and other serene or nature-like sounds. Often such devices do not produce actual white noise, which has a harsh sound, but pink noise, whose power rolls off at higher frequencies, or other colors of noise.
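The distinction can be illustrated in a few lines (Python with NumPy; purely illustrative): white noise has a flat power spectrum, while pink noise is obtained by scaling each frequency bin by 1/√f so that power falls off as 1/f.

```python
import numpy as np

rng = np.random.default_rng(0)
n, fs = 2 ** 16, 44_100

white = rng.standard_normal(n)                 # flat power spectrum

# Shape white noise into pink noise in the frequency domain: scale each
# frequency bin's amplitude by 1/sqrt(f), so power falls off as 1/f.
spectrum = np.fft.rfft(white)
freqs = np.fft.rfftfreq(n, d=1 / fs)
scale = np.ones_like(freqs)
scale[1:] = 1 / np.sqrt(freqs[1:])             # leave the DC bin untouched
pink = np.fft.irfft(spectrum * scale, n)
pink /= np.abs(pink).max()                     # normalize for playback
```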
Use
White noise devices are available from numerous manufacturers in many forms, for a variety of different uses, including audio testing, sound masking, sleep-aid, and power-napping. Sleep-aid and nap machine products may also produce other soothing sounds, such as music, rain, wind, highway traffic and ocean waves mixed with—or modulated by—white noise. Electric fans are a common alternative, although some Asian communities historically avoided using fans due to the superstition that a fan could suffocate them while sleeping. White noise generators are often used by people with tinnitus to mask their symptoms. The sounds generated by digital machines are not always truly random. Rather, they are short prerecorded audio-tracks which continuously repeat at the end of the track.
Manufacturers of sound-masking devices recommend that the volume of white noise machines be initially set at a comfortable level, even if it does not provide the desired level of privacy. As the ear becomes accustomed to the new sound and learns to tune it out, the volume can be gradually increased to increase privacy. Manufacturers of sleeping aids and power-napping devices recommend that the volume level be set slightly louder than normal music listening level, but always in a comfortable listening range.
Sound and noise have their own measurement and color coding techniques, which allows specialized users to identify noise and sound according to their respective needs and utilization. These specialized needs are dependent on certain professions and needs, e.g. a psychiatrist who needs certain sounds for therapies and trea
|
https://en.wikipedia.org/wiki/IBM%20LPFK
|
The Lighted Program Function Keyboard (LPFK) is a computer input device manufactured by IBM that presents an array of buttons associated with lights.
Each button is associated with a function in the supporting software, and the light is switched on or off according to the availability of that function in the current context of the application, giving the user graphical feedback on the set of available functions. Usually the button-to-function mapping is customizable.
External links
http://brutman.com/IBM_LPFK/IBM_LPFK.html
Computer keyboards
LPFK
|
https://en.wikipedia.org/wiki/Kaplansky%27s%20conjectures
|
The mathematician Irving Kaplansky is notable for proposing numerous conjectures in several branches of mathematics, including a list of ten conjectures on Hopf algebras. They are usually known as Kaplansky's conjectures.
Group rings
Let $K$ be a field, and $G$ a torsion-free group. Kaplansky's zero divisor conjecture states:
The group ring $K[G]$ does not contain nontrivial zero divisors, that is, it is a domain.
Two related conjectures are known as, respectively, Kaplansky's idempotent conjecture:
$K[G]$ does not contain any non-trivial idempotents, i.e., if $a^2 = a$, then $a = 1$ or $a = 0$.
and Kaplansky's unit conjecture (which was originally made by Graham Higman and popularized by Kaplansky):
$K[G]$ does not contain any non-trivial units, i.e., if $ab = 1$ in $K[G]$, then $a = kg$ for some $k$ in $K$ and $g$ in $G$.
The zero-divisor conjecture implies the idempotent conjecture and is implied by the unit conjecture. As of 2021, the zero divisor and idempotent conjectures are open. The unit conjecture, however, was disproved for fields of positive characteristic by Giles Gardam in February 2021: he published a preprint on the arXiv that constructs a counterexample. The field is of characteristic 2. (see also: Fibonacci group)
There are proofs of both the idempotent and zero-divisor conjectures for large classes of groups. For example, the zero-divisor conjecture is known for all torsion-free elementary amenable groups (a class including all virtually solvable groups), since their group algebras are known to be Ore domains. It follows that the conjecture holds more generally for all residually torsion-free elementary amenable groups. Note that when $K$ is a field of characteristic zero, then the zero-divisor conjecture is implied by the Atiyah conjecture, which has also been established for large classes of groups.
The idempotent conjecture has a generalisation, the Kadison idempotent conjecture, also known as the Kadison–Kaplansky conjecture, for elements in the reduced group C*-algebra. In this setting, it is known that if the Fa
|
https://en.wikipedia.org/wiki/Reflection%20formula
|
In mathematics, a reflection formula or reflection relation for a function f is a relationship between f(a − x) and f(x). It is a special case of a functional equation, and it is very common in the literature to use the term "functional equation" when "reflection formula" is meant.
Reflection formulas are useful for numerical computation of special functions. In effect, an approximation that has greater accuracy or only converges on one side of a reflection point (typically in the positive half of the complex plane) can be employed for all arguments.
Known formulae
The even and odd functions satisfy by definition simple reflection relations around a = 0. For all even functions,
$$f(-x) = f(x),$$
and for all odd functions,
$$f(-x) = -f(x).$$
A famous relationship is Euler's reflection formula
$$\Gamma(z)\,\Gamma(1-z) = \frac{\pi}{\sin(\pi z)}, \qquad z \notin \mathbb{Z},$$
for the gamma function $\Gamma(z)$, due to Leonhard Euler.
There is also a reflection formula for the general n-th order polygamma function ψ(n)(z),
$$\psi^{(n)}(1-z) + (-1)^{n+1}\,\psi^{(n)}(z) = (-1)^{n}\,\pi\,\frac{d^{n}}{dz^{n}}\cot(\pi z),$$
which springs trivially from the fact that the polygamma functions are defined as the derivatives of $\ln\Gamma(z)$ and thus inherit the reflection formula.
The Riemann zeta function ζ(z) satisfies
$$\zeta(1-z) = 2\,(2\pi)^{-z}\cos\!\left(\frac{\pi z}{2}\right)\Gamma(z)\,\zeta(z),$$
and the Riemann Xi function ξ(z) satisfies
$$\xi(z) = \xi(1-z).$$
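A quick numerical check of Euler's reflection formula (Python; the sample points are arbitrary non-integer values):

```python
import math

for z in (0.25, 0.5, 1.3, 2.7):
    lhs = math.gamma(z) * math.gamma(1 - z)   # Gamma(z) * Gamma(1 - z)
    rhs = math.pi / math.sin(math.pi * z)     # pi / sin(pi z)
    print(z, lhs, rhs)                        # the two columns agree
```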
References
Calculus
|
https://en.wikipedia.org/wiki/Relation%20construction
|
In logic and mathematics, relation construction and relational constructibility have to do with the ways that one relation is determined by an indexed family or a sequence of other relations, called the relation dataset. The relation in the focus of consideration is called the faciendum. The relation dataset typically consists of a specified relation over sets of relations, called the constructor, the factor, or the method of construction, plus a specified set of other relations, called the faciens, the ingredients, or the makings.
Relation composition and relation reduction are special cases of relation constructions.
See also
Projection
Relation
Relation composition
Mathematical relations
|
https://en.wikipedia.org/wiki/Edwin%20J.%20Houston
|
Edwin James Houston (July 9, 1847 – March 1, 1914) was an American electrical engineer, academic, businessman, inventor and writer.
Biography
Houston was born July 9, 1847, to John Mason and Mary (Lamour) Houston in Alexandria, Virginia. He graduated from Central High School of Philadelphia (a degree-granting institution rather than an ordinary high school) in 1864. He received both his Bachelor of Arts and master's degree from the same Central High School, where he then became professor of civil engineering for a short period before holding its chair of Natural Philosophy and Physical Geography. Princeton University awarded him an honorary doctoral degree. He also served as emeritus professor of physics at the Franklin Institute and professor of physics at the Medico-Chirurgical College.
While teaching physics at Central High School in Philadelphia, he helped design an arc light generator with his former student colleague Elihu Thomson. Together, they created the Thomson-Houston Electric Company in 1882 which soon after moved to Lynn, Massachusetts. He served as chief electrician of Philadelphia's International Electrical Exhibition in 1884.
In 1892, Thomson-Houston merged with the Edison General Electric Company to form General Electric, with management from Thomson-Houston largely running the new company. In 1894, Houston formed a consulting firm in electrical engineering with Arthur Kennelly. He and Kennelly had also jointly published a series called "Primers of Electricity" in 1884.
Houston was twice president of the American Institute of Electrical Engineers (1893–1895). He was a member of the United States Electrical Commission, the American Institute of Mining Engineers, the American Philosophical Society and many others. He also authored books for a series called "The Wonder Books of Science" to include The Wonder Book of Volcanoes and Earthquakes, The Wonder Book of the Atmosphere, The Wonder Book of Light, and the Wonder Book of Magnetism. He died
|
https://en.wikipedia.org/wiki/Electroluminescent%20display
|
Electroluminescent Displays (ELDs) are a type of flat panel display created by sandwiching a layer of electroluminescent material such as Gallium arsenide between two layers of conductors. When current flows, the layer of material emits radiation in the form of visible light. Electroluminescence (EL) is an optical and electrical phenomenon where a material emits light in response to an electric current passed through it, or to a strong electric field. The term "electroluminescent display" describes displays that use neither LED nor OLED devices, that instead use traditional electroluminescent materials. Beneq is the only manufacturer of TFEL (Thin Film Electroluminescent Display) and TAESL displays, which are branded as LUMINEQ Displays. The structure of a TFEL is similar to that of a passive matrix LCD or OLED display, and TAESL displays are essentially transparent TFEL displays with transparent electrodes. TAESL displays can have a transparency of 80%. Both TFEL and TAESL displays use chip-on-glass technology, which mounts the display driver IC directly on one of the edges of the display. TAESL displays can be embedded onto glass sheets. Unlike LCDs, TFELs are much more rugged and can operate at temperatures from −60 to 105°C and unlike OLEDs, TFELs can operate for 100,000 hours without considerable burn-in, retaining about 85% of their initial brightness. The electroluminescent material is deposited using atomic layer deposition, which is a process that deposits one atom-thick layer at a time.
Mechanism
EL works by exciting atoms by passing an electric current through them, causing them to emit photons. By varying the material being excited, the colour of the light emitted can be changed. The actual ELD is constructed using flat, opaque electrode strips running parallel to each other, covered by a layer of electroluminescent material, followed by another layer of electrodes, running perpendicular to the bottom layer. This top layer must be transparent in order
|
https://en.wikipedia.org/wiki/Persistence%20length
|
The persistence length is a basic mechanical property quantifying the bending stiffness of a polymer.
The molecule behaves like a flexible elastic rod/beam (beam theory). Informally, for pieces of the polymer that are shorter than the persistence length, the molecule behaves like a rigid rod, while for pieces of the polymer that are much longer than the persistence length, the properties can only be described statistically, like a three-dimensional random walk.
Formally, the persistence length, P, is defined as the length over which correlations in the direction of the tangent are lost. In a more chemical based manner it can also be defined as the average sum of the projections of all bonds j ≥ i on bond i in an infinitely long chain.
Let us define the angle θ between a vector that is tangent to the polymer at position 0 (zero) and a tangent vector at a distance L away from position 0, along the contour of the chain. It can be shown that the expectation value of the cosine of the angle falls off exponentially with distance,
$$\langle\cos\theta\rangle = e^{-L/P},$$
where P is the persistence length and the angled brackets denote the average over all starting positions.
The persistence length is considered to be one half of the Kuhn length, the length of hypothetical segments that the chain can be considered as freely joined. The persistence length equals the average projection of the end-to-end vector on the tangent to the chain contour at a chain end in the limit of infinite chain length.
The persistence length can be also expressed using the bending stiffness $B_s$, the Young's modulus E and knowing the cross-section of the polymer chain:
$$P = \frac{B_s}{k_B T}, \qquad B_s = E\,I,$$
where $k_B$ is the Boltzmann constant and T is the temperature.
In the case of a rigid and uniform rod, the second moment of area I can be expressed as:
$$I = \frac{\pi a^4}{4},$$
where a is the radius.
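Putting the formulas together (Python; the Young's modulus and radius below are assumed, illustrative round numbers for a stiff biopolymer, not measured data):

```python
import math

k_B = 1.380649e-23      # Boltzmann constant, J/K
T = 298.0               # temperature, K

E = 3.0e8               # Young's modulus in Pa (assumed illustrative value)
a = 1.0e-9              # rod radius in m (assumed illustrative value)

I = math.pi * a**4 / 4          # second moment of area of a uniform rod
B_s = E * I                     # bending stiffness
P = B_s / (k_B * T)             # persistence length

print(P * 1e9, "nm")            # ~57 nm for these assumed values
```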
For charged polymers the persistence length depends on the surrounding salt concentration due to electrostatic screening. The persistence length of a charged polymer is described by the OSF (Odijk, Skolnick and Fixman) model.
Ex
|
https://en.wikipedia.org/wiki/Colony-forming%20unit
|
In microbiology, colony-forming unit (CFU, cfu or Cfu) is a unit which estimates the number of microbial cells (bacteria, fungi, viruses etc.) in a sample that are viable, able to multiply via binary fission under the controlled conditions. Counting with colony-forming units requires culturing the microbes and counts only viable cells, in contrast with microscopic examination which counts all cells, living or dead. The visual appearance of a colony in a cell culture requires significant growth, and when counting colonies, it is uncertain if the colony arose from one cell or a group of cells. Expressing results as colony-forming units reflects this uncertainty.
Theory
The purpose of plate counting is to estimate the number of cells present based on their ability to give rise to colonies under specific conditions of nutrient medium, temperature and time. Theoretically, one viable cell can give rise to a colony through replication. However, solitary cells are the exception in nature, and most likely the progenitor of the colony was a mass of cells deposited together. In addition, many bacteria grow in chains (e.g. Streptococcus) or clumps (e.g., Staphylococcus). Estimation of microbial numbers by CFU will, in most cases, undercount the number of living cells present in a sample for these reasons. This is because the counting of CFU assumes that every colony is separate and founded by a single viable microbial cell.
The plate count is linear for E. coli over the range of 30 to 300 CFU on a standard sized Petri dish. Therefore, to ensure that a sample will yield CFU in this range requires dilution of the sample and plating of several dilutions. Typically, ten-fold dilutions are used, and the dilution series is plated in replicates of 2 or 3 over the chosen range of dilutions. Often 100µl are plated but also larger amounts up to 1ml are used. Higher plating volumes increase drying times but often don't result in higher accuracy, since additional dilution steps may be
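The back-calculation from a countable plate is simple arithmetic: CFU per millilitre = colonies / (dilution factor × volume plated). A short sketch (Python; the colony count, dilution and plated volume are made-up illustrative numbers):

```python
def cfu_per_ml(colonies, dilution_factor, volume_plated_ml):
    """CFU/mL of the original sample = colonies / (dilution x volume plated)."""
    return colonies / (dilution_factor * volume_plated_ml)

# e.g. 150 colonies counted on the 10^-6 dilution plate, 0.1 mL plated:
print(cfu_per_ml(colonies=150, dilution_factor=1e-6, volume_plated_ml=0.1))
# -> 1.5e9 CFU/mL in the undiluted sample
```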
|
https://en.wikipedia.org/wiki/ATSC%20tuner
|
An ATSC (Advanced Television Systems Committee) tuner, often called an ATSC receiver or HDTV tuner, is a type of television tuner that allows reception of digital television (DTV) channels that use ATSC standards, as transmitted by television stations in North America, parts of Central America, and South Korea. Such tuners are usually integrated into a television set, VCR, digital video recorder (DVR), or set-top box which provides audio/video output connectors of various types.
Another type of television tuner is a digital television adapter (DTA) with an analog passthrough.
Technical overview
The terms "tuner" and "receiver" are used loosely, and the device is perhaps more appropriately called an ATSC receiver, with the tuner being part of the receiver (see Metonymy). The receiver generates the audio and video (AV) signals needed for television, and performs the following tasks: demodulation; error correction; MPEG transport stream demultiplexing; decompression; AV synchronization; and media reformatting to match the optimal input format for one's TV. Examples of media reformatting include: interlace to progressive scan or vice versa; picture resolutions; aspect ratio conversions (16:9 to or from 4:3); frame rate conversion; and image scaling. Zooming is an example of resolution change. It is commonly used to convert a low-resolution picture to a high-resolution display. This lets the user eliminate letterboxing or pillarboxing by stretching or cropping the picture. Some ATSC receivers, mostly those in HDTV TV sets, will stretch automatically, either by detecting black bars or by reading the Active Format Descriptor (AFD).
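For instance, the aspect-ratio part of media reformatting reduces to simple arithmetic; the sketch below (Python) computes the scaled picture size and the resulting letterbox/pillarbox bars for a generic display, and is not the ATSC-specified algorithm.

```python
def fit_with_bars(src_w, src_h, dst_w, dst_h):
    """Scale a source picture to fit a display, preserving aspect ratio,
    and report the resulting letterbox/pillarbox bar sizes."""
    scale = min(dst_w / src_w, dst_h / src_h)
    out_w, out_h = round(src_w * scale), round(src_h * scale)
    return {
        "scaled": (out_w, out_h),
        "pillarbox_per_side": (dst_w - out_w) // 2,   # vertical bars
        "letterbox_per_side": (dst_h - out_h) // 2,   # horizontal bars
    }

# 4:3 SD picture shown on a 16:9 HD panel -> pillarboxing:
print(fit_with_bars(720, 540, 1920, 1080))
# {'scaled': (1440, 1080), 'pillarbox_per_side': 240, 'letterbox_per_side': 0}
```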
Operation
An ATSC tuner works by generating audio and video signals that are picked up from over-the-air broadcast television. ATSC tuners provide the following functions: selective tuning; demodulation; transport stream demultiplexing; decompression; error correction; analog-to-digital conversion; AV synchronization; and media reformatting
|
https://en.wikipedia.org/wiki/Generalized%20polygon
|
In mathematics, a generalized polygon is an incidence structure introduced by Jacques Tits in 1959. Generalized n-gons encompass as special cases projective planes (generalized triangles, n = 3) and generalized quadrangles (n = 4). Many generalized polygons arise from groups of Lie type, but there are also exotic ones that cannot be obtained in this way. Generalized polygons satisfying a technical condition known as the Moufang property have been completely classified by Tits and Weiss. Every generalized n-gon with n even is also a near polygon.
Definition
A generalized 2-gon (or a digon) is an incidence structure with at least 2 points and 2 lines where each point is incident to each line.
For $n \ge 3$, a generalized n-gon is an incidence structure $(P, L, I)$, where $P$ is the set of points, $L$ is the set of lines and $I \subseteq P \times L$ is the incidence relation, such that:
It is a partial linear space.
It has no ordinary m-gons as a subgeometry for $2 \le m < n$.
It has an ordinary n-gon as a subgeometry.
For any two elements $x, y \in P \cup L$ there exists a subgeometry $(P', L', I')$ isomorphic to an ordinary n-gon such that $x, y \in P' \cup L'$.
An equivalent but sometimes simpler way to express these conditions is: consider the bipartite incidence graph with the vertex set $P \cup L$ and the edges connecting the incident pairs of points and lines.
The girth of the incidence graph is twice the diameter n of the incidence graph.
From this it should be clear that the incidence graphs of generalized polygons are Moore graphs.
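For a concrete check, the incidence graph of an ordinary n-gon is a cycle of length 2n (n points alternating with n lines); the Python sketch below verifies the girth/diameter relation on that example.

```python
from collections import deque

def cycle_graph(m):
    """Adjacency lists of a cycle on m vertices 0..m-1."""
    return {v: [(v - 1) % m, (v + 1) % m] for v in range(m)}

def bfs_distances(adj, start):
    dist = {start: 0}
    queue = deque([start])
    while queue:
        u = queue.popleft()
        for w in adj[u]:
            if w not in dist:
                dist[w] = dist[u] + 1
                queue.append(w)
    return dist

def diameter(adj):
    return max(max(bfs_distances(adj, v).values()) for v in adj)

def girth_of_cycle(adj):
    # For a single cycle the girth is simply the number of vertices.
    return len(adj)

n = 5                                  # an ordinary pentagon
incidence_graph = cycle_graph(2 * n)   # 5 points and 5 lines, alternating
print(diameter(incidence_graph))       # n  = 5
print(girth_of_cycle(incidence_graph)) # 2n = 10: the girth is twice the diameter
```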
A generalized polygon is of order (s,t) if:
all vertices of the incidence graph corresponding to the elements of $L$ have the same degree s + 1 for some natural number s; in other words, every line contains exactly s + 1 points,
all vertices of the incidence graph corresponding to the elements of $P$ have the same degree t + 1 for some natural number t; in other words, every point lies on exactly t + 1 lines.
We say a generalized polygon is thick if every point (line) is incident with at least three lines (points). All thick generalized polygons have an
|
https://en.wikipedia.org/wiki/Homotopy%20extension%20property
|
In mathematics, in the area of algebraic topology, the homotopy extension property indicates which homotopies defined on a subspace can be extended to a homotopy defined on a larger space. The homotopy extension property of cofibrations is dual to the homotopy lifting property that is used to define fibrations.
Definition
Let $X$ be a topological space, and let $A \subseteq X$. We say that the pair $(X, A)$ has the homotopy extension property if, given a homotopy $f_t : A \to Y$ and a map $F_0 : X \to Y$ such that $F_0|_A = f_0$, then there exists an extension of $F_0$ to a homotopy $F_t : X \to Y$ such that $F_t|_A = f_t$.
That is, the pair $(X, A)$ has the homotopy extension property if any map $G : (X \times \{0\}) \cup (A \times I) \to Y$ can be extended to a map $G' : X \times I \to Y$ (i.e. $G$ and $G'$ agree on their common domain).
If the pair has this property only for a certain codomain $Y$, we say that $(X, A)$ has the homotopy extension property with respect to $Y$.
Visualisation
The homotopy extension property is depicted in the following diagram
If the above diagram (without the dashed map) commutes (this is equivalent to the conditions above), then pair (X,A) has the homotopy extension property if there exists a map which makes the diagram commute. By currying, note that homotopies expressed as maps are in natural bijection with expressions as maps .
Note that this diagram is dual to (opposite to) that of the homotopy lifting property; this duality is loosely referred to as Eckmann–Hilton duality.
Properties
If X is a cell complex and A is a subcomplex of X, then the pair (X, A) has the homotopy extension property.
A pair (X, A) has the homotopy extension property if and only if (X × {0}) ∪ (A × I) is a retract of X × I.
Other
If (X, A) has the homotopy extension property, then the inclusion map i : A → X is a cofibration.
In fact, if one considers any cofibration i : Y → Z, then Y is homeomorphic to its image under i. This implies that any cofibration can be treated as an inclusion map, and therefore can be treated as having the homotopy extension property.
See also
Homotopy lifting property
References
Homotopy theory
Algebraic topology
|
https://en.wikipedia.org/wiki/Shebang%20%28Unix%29
|
In computing, a shebang is the character sequence consisting of the characters number sign and exclamation mark (#!) at the beginning of a script. It is also called sharp-exclamation, sha-bang, hashbang, pound-bang, or hash-pling.
When a text file with a shebang is used as if it is an executable in a Unix-like operating system, the program loader mechanism parses the rest of the file's initial line as an interpreter directive. The loader executes the specified interpreter program, passing to it as an argument the path that was initially used when attempting to run the script, so that the program may use the file as input data. For example, if a script is named with the path path/to/script, and it starts with the line #!/bin/sh, then the program loader is instructed to run the program /bin/sh, passing path/to/script as the first argument.
The shebang line is usually ignored by the interpreter, because the "#" character is a comment marker in many scripting languages; some language interpreters that do not use the hash mark to begin comments still may ignore the shebang line in recognition of its purpose.
Syntax
The form of a shebang interpreter directive is as follows:
#! interpreter [optional-arg]
in which interpreter is a path to an executable program. The space between #! and interpreter is optional. There can be any number of spaces or tabs either before or after interpreter. The optional-arg includes any extra spaces up to the end of the line.
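For illustration, here is a minimal executable script (a sketch; the file name hello.py and the choice of /usr/bin/env python3 are illustrative, not mandated by any standard):

#!/usr/bin/env python3
# hello.py -- the shebang above tells the program loader to run
# /usr/bin/env, which in turn locates python3 on the PATH and passes
# this file's path to it as an argument.
import sys

def main() -> int:
    # sys.argv[0] is the path that was used to invoke the script,
    # exactly as the loader handed it to the interpreter.
    print(f"interpreted by {sys.executable}, invoked as {sys.argv[0]}")
    return 0

if __name__ == "__main__":
    raise SystemExit(main())

After chmod +x hello.py, running ./hello.py causes the loader to execute /usr/bin/env with python3 and ./hello.py as its arguments.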
In Linux, the file specified by interpreter can be executed if it has the execute rights and is one of the following:
a native executable, such as an ELF binary
any kind of file for which an interpreter was registered via the binfmt_misc mechanism (such as for executing Microsoft .exe binaries using wine)
another script starting with a shebang
On Linux and Minix, an interpreter can also be a script. A chain of shebangs and wrappers yields a directly executable file that gets the encountered scripts as parameters in
|
https://en.wikipedia.org/wiki/Resistor%20ladder
|
A resistor ladder is an electrical circuit made from repeating units of resistors, in specific configurations.
An R–2R ladder configuration is a simple and inexpensive way to perform digital-to-analog conversion (DAC), using repetitive arrangements of precise resistor networks in a ladder-like configuration. A string resistor ladder configuration implements the non-repetitive reference network.
History
A 1953 paper "Coding by Feedback Methods" describes "decoding networks" that convert numbers (in any base) represented by voltage sources or current sources connected to resistor networks in a "shunt resistor decoding network" (which in base 2 corresponds to the binary-weighted configuration) or in a "ladder resistor decoding network" (which in base 2 corresponds to the R–2R configuration) into a single voltage output. The paper notes, as an advantage of R–2R, that the impedances seen by the sources are more nearly equal.
Another historic description is in US Patent 3108266, filed in 1955, "Signal Conversion Apparatus".
String resistor ladder network
A string of many resistors connected between two reference voltages is called a "resistor string". The resistors act as voltage dividers between the referenced voltages. A Kelvin divider or string DAC is a string of equal valued resistors.
Analog-to-digital conversion
Each tap of the string generates a different voltage, which can be compared with another voltage: this is the basic principle of a flash ADC (analog-to-digital converter). Often a voltage is converted to a current, enabling the possibility to use an R–2R ladder network.
Disadvantage: for an n-bit ADC, the number of resistors grows exponentially, as 2^n resistors are required, while the R–2R resistor ladder only increases linearly with the number of bits, as it needs only 2n resistors.
Advantage: higher impedance values can be reached using the same number of components.
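The growth rates above can be illustrated with a short sketch (illustrative Python assuming ideal, equal-valued resistors; the function names are arbitrary):

# Sketch: compare the resistor count of a string (flash-ADC) reference
# divider with that of an R-2R ladder, and list the tap voltages of a
# small resistor string.

def string_ladder_resistors(bits: int) -> int:
    # A string/flash reference divider needs 2**n equal resistors.
    return 2 ** bits

def r2r_ladder_resistors(bits: int) -> int:
    # About two resistors per bit: n legs of 2R, n-1 links of R, one 2R terminator.
    return 2 * bits

def string_tap_voltages(v_ref: float, bits: int) -> list[float]:
    # Equal resistors divide v_ref into 2**n equal steps.
    steps = 2 ** bits
    return [v_ref * k / steps for k in range(1, steps)]

if __name__ == "__main__":
    for n in (4, 8, 12):
        print(n, "bits:", string_ladder_resistors(n), "resistors (string) vs",
              r2r_ladder_resistors(n), "(R-2R)")
    print(string_tap_voltages(5.0, 3))  # taps of a 3-bit, 5 V reference string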
Digital-to-analog conversion
A string resistor can function as a DAC by having the bits of the binary
|
https://en.wikipedia.org/wiki/Advanced%20Resource%20Connector
|
Advanced Resource Connector (ARC) is a grid computing middleware introduced by NorduGrid. It provides a common interface for submission of computational tasks to different distributed computing systems and thus can enable grid infrastructures of varying size and complexity. The set of services and utilities providing the interface is known as ARC Computing Element (ARC-CE). ARC-CE functionality includes data staging and caching, developed in order to support data-intensive distributed computing. ARC is an open source software distributed under the Apache License 2.0.
History
ARC appeared (and is still often referred to) as the NorduGrid middleware, originally proposed as an architecture on top of the Globus Toolkit optimized for the needs of High-Energy Physics computing for the Large Hadron Collider experiments. First deployment of ARC at the NorduGrid testbed took place in summer 2002, and by 2003 it was used to support complex computations.
The first stable release of ARC (version 0.4) came out in April 2004 under the GNU General Public License. The name "Advanced Resource Connector" was introduced for this release to distinguish the middleware from the infrastructure. In the same year, the Swedish national Grid project Swegrid became the first large cross-discipline infrastructure to be based on ARC.
In 2005, NorduGrid was formally established as a collaboration to support and coordinate ARC development. In 2006 two closely related projects were launched: the Nordic Data Grid Facility, deploying a pan-Nordic e-Science infrastructure based on ARC, and KnowARC, focused on transforming ARC into a next generation Grid middleware.
ARC v0.6 was released in May 2007, becoming the second stable release. Its key feature was introduction of the client library enabling easy development of higher-level applications. It was also the first ARC release making use of open standards, as it included support for JSDL. Later that year, the first technology preview of the next
|
https://en.wikipedia.org/wiki/T%20puzzle
|
The T puzzle is a tiling puzzle consisting of four polygonal shapes which can be put together to form a capital T. The four pieces are usually one isosceles right triangle, two right trapezoids and an irregularly shaped pentagon.
Despite its apparent simplicity, it is a surprisingly hard puzzle of which the crux is the positioning of the irregularly shaped piece. The earliest T puzzles date from around 1900 and were distributed as promotional giveaways. From the 1920s, wooden specimens were produced and made available commercially. Most T puzzles come with a leaflet with additional figures to be constructed. Which shapes can be formed depends on the relative proportions of the different pieces.
Origins and early history
The Latin Cross
The Latin cross puzzle consists of reassembling a five-piece dissection of the cross with three isosceles right triangles, one right trapezoid and an irregularly shaped six-sided piece (see figure). When the pieces of the cross puzzle have the right dimensions, they can also be put together as a rectangle. Of Chinese origin, the oldest examples date from the first half of the nineteenth century. One of the earliest published descriptions of the puzzle appeared in 1826 in the 'Sequel to the Endless Amusement'. Many other references to the cross puzzle can be found in amusement, puzzle and magicians' books throughout the 19th century. The T puzzle is based on the cross puzzle, but without the head, and has therefore only four pieces. Another difference is that in the dissection of the T, one of the triangles is usually elongated as a right trapezoid. These changes make the puzzle more difficult and clever than the cross puzzle.
Advertising premiums
The T-puzzle became very popular in the beginning of the 20th century as a giveaway item, with hundreds of different companies using it to promote their business or product. The pieces were made from paper or cardboard and served as trade cards, with advertisement printed on them. They usually c
|
https://en.wikipedia.org/wiki/Reciprocal%20difference
|
In mathematics, the reciprocal difference of a finite sequence of numbers (x_1, x_2, …, x_n) on a function f(x) is defined inductively by the following formulas:
ρ_1(x_1, x_2) = (x_1 − x_2) / (f(x_1) − f(x_2))
ρ_2(x_1, x_2, x_3) = (x_1 − x_3) / (ρ_1(x_1, x_2) − ρ_1(x_2, x_3)) + f(x_2)
ρ_n(x_1, x_2, …, x_{n+1}) = (x_1 − x_{n+1}) / (ρ_{n−1}(x_1, …, x_n) − ρ_{n−1}(x_2, …, x_{n+1})) + ρ_{n−2}(x_2, …, x_n)
See also
Divided differences
References
Finite differences
|
https://en.wikipedia.org/wiki/Single-loss%20expectancy
|
Single-loss expectancy (SLE) is the monetary value expected from the occurrence of a risk on an asset. It is related to risk management and risk assessment.
Single-loss expectancy is mathematically expressed as:
SLE = asset value (AV) × exposure factor (EF)
Where the exposure factor represents the impact of the risk on the asset, or the percentage of the asset lost. As an example, if the asset value is reduced by two thirds, the exposure factor value is 0.66. If the asset is completely lost, the exposure factor is 1.
The result is a monetary value expressed in the same unit as the asset value (euros, dollars, yen, etc.):
exposure factor is the subjective, potential percentage of loss to a specific asset if a specific threat is realized. The exposure factor is a subjective value that the person assessing risk must define.
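A minimal sketch of the calculation (illustrative Python; the asset value and exposure factor below are made-up figures):

# Single-loss expectancy: SLE = asset value (AV) x exposure factor (EF),
# returned in the same currency as the asset value.

def single_loss_expectancy(asset_value: float, exposure_factor: float) -> float:
    if not 0.0 <= exposure_factor <= 1.0:
        raise ValueError("exposure factor is the fraction of the asset lost (0..1)")
    return asset_value * exposure_factor

if __name__ == "__main__":
    # An asset worth 150,000 (any currency) losing two thirds of its value:
    print(single_loss_expectancy(150_000, 0.66))  # -> 99000.0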
See also
Information assurance
Risk assessment
Annualized loss expectancy
References
External links
Information Security Risk Analysis Paper from Digital Threat
Data security
Financial risk
|
https://en.wikipedia.org/wiki/Thiele%27s%20interpolation%20formula
|
In mathematics, Thiele's interpolation formula is a formula that defines a rational function f(x) from a finite set of inputs x_i and their function values f(x_i). The problem of generating a function whose graph passes through a given set of function values is called interpolation. This interpolation formula is named after the Danish mathematician Thorvald N. Thiele. It is expressed as a continued fraction, where ρ represents the reciprocal difference:
f(x) = f(x_1) + (x − x_1) / (ρ(x_1, x_2) + (x − x_2) / (ρ_2(x_1, x_2, x_3) − f(x_1) + (x − x_3) / (ρ_3(x_1, x_2, x_3, x_4) − ρ_1(x_1, x_2) + ⋯)))
Be careful that the n-th level in Thiele's interpolation formula is
ρ_n(x_1, …, x_{n+1}) − ρ_{n−2}(x_1, …, x_{n−1}),
while the n-th reciprocal difference is defined to be
ρ_n(x_1, …, x_{n+1}).
The two terms are different and cannot be cancelled!
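The original article illustrates the formula with ALGOL 68 code; the following is an independent Python sketch of the same idea, building the reciprocal-difference table and evaluating the continued fraction (helper names are arbitrary):

import math

def thiele_coefficients(xs, fs):
    """Continued-fraction coefficients a_k built from the reciprocal differences."""
    n = len(xs)
    rho = [list(fs)]                       # rho_0(x_i) = f(x_i)
    if n > 1:
        rho.append([(xs[i] - xs[i + 1]) / (fs[i] - fs[i + 1])
                    for i in range(n - 1)])
    for k in range(2, n):
        prev, prev2 = rho[k - 1], rho[k - 2]
        rho.append([(xs[i] - xs[i + k]) / (prev[i] - prev[i + 1]) + prev2[i + 1]
                    for i in range(n - k)])
    # a_0 = rho_0, a_1 = rho_1, a_k = rho_k - rho_{k-2} for k >= 2
    coeffs = [rho[0][0]]
    if n > 1:
        coeffs.append(rho[1][0])
    coeffs += [rho[k][0] - rho[k - 2][0] for k in range(2, n)]
    return coeffs

def thiele_eval(x, xs, coeffs):
    """Evaluate the continued fraction from the innermost level outward."""
    value = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        value = coeffs[k] + (x - xs[k]) / value
    return value

if __name__ == "__main__":
    xs = [0.0, 1.0, 2.0, 3.0]
    fs = [math.exp(x) for x in xs]          # four samples of exp(x)
    cs = thiele_coefficients(xs, fs)
    print(thiele_eval(1.5, xs, cs))         # about 4.51; exp(1.5) is about 4.48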
References
Finite differences
Articles with example ALGOL 68 code
Interpolation
|
https://en.wikipedia.org/wiki/Materialized%20view
|
In computing, a materialized view is a database object that contains the results of a query. For example, it may be a local copy of data located remotely, or may be a subset of the rows and/or columns of a table or join result, or may be a summary using an aggregate function.
The process of setting up a materialized view is sometimes called materialization. This is a form of caching the results of a query, similar to memoization of the value of a function in functional languages, and it is sometimes described as a form of precomputation. As with other forms of precomputation, database users typically use materialized views for performance reasons, i.e. as a form of optimization.
Materialized views that store data based on remote tables were also known as snapshots (deprecated Oracle terminology).
In any database management system following the relational model, a view is a virtual table representing the result of a database query. Whenever a query or an update addresses an ordinary view's virtual table, the DBMS converts these into queries or updates against the underlying base tables. A materialized view takes a different approach: the query result is cached as a concrete ("materialized") table (rather than a view as such) that may be updated from the original base tables from time to time. This enables much more efficient access, at the cost of extra storage and of some data being potentially out-of-date. Materialized views find use especially in data warehousing scenarios, where frequent queries of the actual base tables can be expensive.
In a materialized view, indexes can be built on any column. In contrast, in a normal view, it's typically only possible to exploit indexes on columns that come directly from (or have a mapping to) indexed columns in the base tables; often this functionality is not offered at all.
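The caching idea can be sketched in a few lines of Python (a toy in-memory analogue, not how any particular DBMS implements materialized views):

# The "view" re-runs its query on every access, while the "materialized view"
# caches the result and only recomputes when explicitly refreshed.

orders = [("alice", 30), ("bob", 20), ("alice", 25)]   # pretend base table

def totals_view() -> dict[str, int]:
    totals: dict[str, int] = {}
    for customer, amount in orders:
        totals[customer] = totals.get(customer, 0) + amount
    return totals                     # recomputed on every access

class MaterializedTotals:
    def __init__(self) -> None:
        self.refresh()
    def refresh(self) -> None:        # recompute from the base table on demand
        self.rows = totals_view()
    def read(self) -> dict[str, int]:
        return self.rows              # fast, but possibly stale

mv = MaterializedTotals()
orders.append(("bob", 5))
print(totals_view()["bob"], mv.read()["bob"])   # 25 (fresh) vs 20 (stale)
mv.refresh()
print(mv.read()["bob"])                          # 25 after refresh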
Implementations
Oracle
Materialized views were implemented first by the Oracle Database: the Query rewrite feature was added from version 8i.
Ex
|
https://en.wikipedia.org/wiki/Certified%20Quality%20Engineer
|
Certified Quality Engineer, often abbreviated CQE, is a certification given by the American Society for Quality (ASQ). These engineers are professionally educated in quality engineering and quality control.
They are trained in researching and preventing unnecessary costs through lack of quality, lost production costs, lost market share due to poor quality, etc. They possess the knowledge needed to set up quality control circles, assess potential quality risks, and evaluate human factors and natural process variation.
Scope
CQE training includes the following topics:
Management Systems
Project Management
Quality Information Systems
Leadership Principles and Techniques
Training
Cost of Quality
Quality Philosophies & Approaches
History of Quality
Total Quality Management
Customer Relations
Quality Deployment
Supplier Qualification & Certification Systems
Quality Systems
Documentation Systems
Configuration Management
Planning, Controlling and Assuring Product and Process Quality
Design Inputs and Design Review
Validation and Qualification Methods
Process Capability
Interpretation of Technical Drawings and Specifications
Material Control
Acceptance Sampling
Calibration Systems
Measurement Systems
Measurement System Analysis
Gage Repeatability and Reproducibility (Gage R & R)
Destructive and Nondestructive Testing and Measuring
Traceability to Standards
Reliability and Risk Management
Design of Systems for Reliability
Failure Mode and Effects Analysis (FMEA)
Fault Tree Analysis (FTA)
Management and Planning Tools
Corrective Action
Preventive Action
Overcoming Barriers to Quality Improvement
Concepts of Probability and Statistics
Properties and Applications of Probability Distributions
Tests for Means, Variances, and Proportions
Statistical Decision Making
Drawing Valid Statistical Conclusions
Statistical Process Control
Control Charts
Design of Experiments
Techniques
Some techniques that Quality Engineers use in quality engineering/assurance include:
Statistical Proc
|
https://en.wikipedia.org/wiki/Garbage%20%28computer%20science%29
|
In computer science, garbage includes data, objects, or other regions of the memory of a computer system (or other system resources), which will not be used in any future computation by the system, or by a program running on it. Because every computer system has a finite amount of memory, and most software produces garbage, it is frequently necessary to deallocate memory that is occupied by garbage and return it to the heap, or memory pool, for reuse.
Classification
Garbage is generally classified into two types: syntactic garbage, any object or data which is within a program's memory space but unreachable from the program's root set; and semantic garbage, any object or data which is never accessed by a running program for any combination of program inputs. Objects and data which are not garbage are said to be live.
Casually stated, syntactic garbage is data that cannot be reached, and semantic garbage is data that will not be reached. More precisely, syntactic garbage is data that is unreachable due to the reference graph (there is no path to it), which can be determined by many algorithms, as discussed in tracing garbage collection, and only requires analyzing the data, not the code. Semantic garbage is data that will not be accessed, either because it is unreachable (hence also syntactic garbage), or is reachable but will not be accessed; this latter requires analysis of the code, and is in general an undecidable problem.
Syntactic garbage is a (usually strict) subset of semantic garbage, as it is entirely possible for an object to hold a reference to another object without ever using that object.
Example
In the following simple stack implementation in Java, each element popped from the stack becomes semantic garbage once there are no outside references to it:
public class Stack {
private Object[] elements;
private int size;
public Stack(int capacity) {
elements = new Object[capacity];
}
public void push(Object e) {
elemen
|
https://en.wikipedia.org/wiki/Diode%20logic
|
Diode logic (or diode-resistor logic) constructs AND and OR logic gates with diodes and resistors.
An active device (vacuum tubes in early computers, then transistors in diode–transistor logic) is additionally required to provide logical inversion (NOT) for functional completeness and amplification for voltage level restoration, which diode logic alone can't provide.
Since voltage levels weaken with each diode logic stage, multiple stages can't easily be cascaded, limiting diode logic's usefulness. However, diode logic has the advantage of utilizing only cheap passive components.
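A toy numerical model makes the level-degradation point concrete (illustrative Python assuming ideal silicon diodes with a fixed 0.7 V forward drop; not a circuit simulation):

DIODE_DROP = 0.7   # volts lost across a conducting silicon diode

def diode_or(*inputs: float) -> float:
    # The diode on the highest input conducts toward the output resistor,
    # so the output follows the highest input minus one diode drop.
    return max(inputs) - DIODE_DROP

def diode_and(supply: float, *inputs: float) -> float:
    # The output is pulled up toward the supply but clamped one diode
    # drop above the lowest input.
    return min(min(inputs) + DIODE_DROP, supply)

if __name__ == "__main__":
    stage1 = diode_or(5.0, 0.0)            # logic high in, about 4.3 V out
    stage2 = diode_or(stage1, 0.0)         # cascading erodes the level further
    print(stage1, stage2)                  # 4.3 then 3.6 volts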
Background
Logic gates
Logic gates evaluate Boolean algebra, typically using electronic switches controlled by logical inputs connected in parallel or series. Diode logic can only implement OR and AND, because inverters (NOT gates) require an active device.
Logic voltage levels
Binary logic uses two distinct logic levels of voltage signals that may be labeled high and low. In this discussion, voltages close to +5 volts are high, and voltages close to 0 volts (ground) are low. The exact magnitude of the voltage is not critical, provided that inputs are driven by strong enough sources so that output voltages lie within detectably different ranges.
For active-high or positive logic, high represents logic 1 (true) and low represents logic 0 (false). However, the assignment of logical 1 and logical 0 to high or low is arbitrary and is reversed in active-low or negative logic, where low is logical 1 while high is logical 0. The following diode logic gates work in both active-high or active-low logic, however the logical function they implement is different depending on what voltage level is considered active. Switching between active-high and active-low is commonly used to achieve a more efficient logic design.
Diode biasing
Forward-biased diodes have low impedance approximating a short circuit with a small voltage drop, while reverse-biased diodes have a very high imped
|
https://en.wikipedia.org/wiki/Method%20ringing
|
Method ringing (also known as scientific ringing) is a form of change ringing in which the ringers commit to memory the rules for generating each change of sequence, and pairs of bells are affected. This creates a form of bell music which is continually changing, but which cannot be discerned as a conventional melody. It is a way of sounding continually changing mathematical permutations.
It is distinct from call changes, where the ringers are instructed on how to generate each new change by calls from a conductor, and strictly, only two adjacent bells swap their position at each change.
In method ringing, the ringers are guided from permutation to permutation by following the rules of a method. Ringers typically learn a particular method by studying its "blue line", a diagram which shows its structure.
The underlying mathematical basis of method ringing is intimately linked to group theory. The basic building block of method ringing is plain hunt.
The first method, Grandsire, was designed around 1650, probably by Robert Roan who became master of the College Youths change ringing society in 1652. Details of the method on five bells appeared in print in 1668 in Tintinnalogia (Fabian Stedman with Richard Duckworth) and Campanalogia (1677 – written solely by Stedman), which are the first two publications on the subject.
The practice originated in England and remains most popular there today; in addition to bells in church towers, it is also often performed on handbells.
Fundamentals
There are thousands of different methods, a few of which are the below.
Plain hunt
Plain hunt is the simplest form of generating changing permutations continuously, and is a fundamental building-block of change ringing methods. It can be extended to any number of bells.
It consists of a plain undeviating course of a bell between the first and last places in the striking order, with two strikes in the first and last position to enable a turn-around.
Thus each bell moves one positi
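A short sketch can generate the plain hunt rows by swapping alternating pairs of adjacent bells (an independent Python illustration, not taken from any ringing reference):

def plain_hunt(n: int) -> list[list[int]]:
    # Start from rounds (1, 2, ..., n) and alternately swap the pairs
    # starting at the front and the pairs starting one place in, until
    # rounds comes back.
    row = list(range(1, n + 1))
    rows = [row[:]]
    swap_from_front = True
    while True:
        start = 0 if swap_from_front else 1
        for i in range(start, n - 1, 2):
            row[i], row[i + 1] = row[i + 1], row[i]
        rows.append(row[:])
        swap_from_front = not swap_from_front
        if row == rows[0]:
            return rows

if __name__ == "__main__":
    for change in plain_hunt(4):
        print(" ".join(map(str, change)))   # 1234, 2143, 2413, 4231, ... back to 1234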
|
https://en.wikipedia.org/wiki/Densely%20packed%20decimal
|
Densely packed decimal (DPD) is an efficient method for binary encoding decimal digits.
The traditional system of binary encoding for decimal digits, known as binary-coded decimal (BCD), uses four bits to encode each digit, resulting in significant wastage of binary data bandwidth (since four bits can store 16 states and are being used to store only 10), even when using packed BCD. Densely packed decimal is a more efficient code that packs three digits into ten bits using a scheme that allows compression from, or expansion to, BCD with only two or three hardware gate delays.
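A back-of-the-envelope comparison of the two encodings' storage requirements (illustrative Python; the helper names are arbitrary):

def bcd_bits(digits: int) -> int:
    return 4 * digits                      # four bits per decimal digit

def dpd_bits(digits: int) -> int:
    groups, rest = divmod(digits, 3)       # ten bits per group of three digits
    tail = {0: 0, 1: 4, 2: 7}[rest]        # a leftover digit or pair of digits
    return 10 * groups + tail

if __name__ == "__main__":
    for d in (3, 7, 16, 38):
        print(d, "digits:", bcd_bits(d), "bits in BCD,", dpd_bits(d), "bits in DPD")
    # 38 digits -> 152 bits in BCD versus 127 bits in DPD, matching the example below.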
The densely packed decimal encoding is a refinement of Chen–Ho encoding; it gives the same compression and speed advantages, but the particular arrangement of bits used confers additional advantages:
Compression of one or two digits (into the optimal four or seven bits respectively) is achieved as a subset of the three-digit encoding. This means that arbitrary numbers of decimal digits (not only multiples of three digits) can be encoded efficiently. For example, 38 = 12 × 3 + 2 decimal digits can be encoded in 12 × 10 + 7 = 127 bits – that is, 12 sets of three decimal digits can be encoded using 12 sets of ten binary bits and the remaining two decimal digits can be encoded using a further seven binary bits.
The subset encoding mentioned above is simply the rightmost bits of the standard three-digit encoding; the encoded value can be widened simply by adding leading 0 bits.
All seven-bit BCD numbers (0 through 79) are encoded identically by DPD. This makes conversions of common small numbers trivial. (This must break down at 80, because that requires eight bits for BCD, but the above property requires that the DPD encoding must fit into seven bits.)
The low-order bit of each digit is copied unmodified. Thus, the non-trivial portion of the encoding can be considered a conversion from three base-5 digits to seven binary bits. Further, digit-wise logical values (in which each digit is either
|
https://en.wikipedia.org/wiki/Terrestrial%20locomotion
|
Terrestrial locomotion has evolved as animals adapted from aquatic to terrestrial environments. Locomotion on land raises different problems than that in water, with reduced friction being replaced by the increased effects of gravity.
As viewed from evolutionary taxonomy, there are three basic forms of animal locomotion in the terrestrial environment:
legged – moving by using appendages
limbless locomotion – moving without legs, primarily using the body itself as a propulsive structure.
rolling – rotating the body over the substrate
Some terrains and terrestrial surfaces permit or demand alternative locomotive styles. A sliding component to locomotion becomes possible on slippery surfaces (such as ice and snow), where locomotion is aided by potential energy, or on loose surfaces (such as sand or scree), where friction is low but purchase (traction) is difficult. Humans, especially, have adapted to sliding over terrestrial snowpack and terrestrial ice by means of ice skates, snow skis, and toboggans.
Aquatic animals adapted to polar climates, such as ice seals and penguins also take advantage of the slipperiness of ice and snow as part of their locomotion repertoire. Beavers are known to take advantage of a mud slick known as a "beaver slide" over a short distance when passing from land into a lake or pond. Human locomotion in mud is improved through the use of cleats. Some snakes use an unusual method of movement known as sidewinding on sand or loose soil. Animals caught in terrestrial mudflows are subject to involuntary locomotion; this may be beneficial to the distribution of species with limited locomotive range under their own power. There is less opportunity for passive locomotion on land than by sea or air, though parasitism (hitchhiking) is available toward this end, as in all other habitats.
Many species of monkeys and apes use a form of arboreal locomotion known as brachiation, with forelimbs as the prime mover. Some elements of the gymnastic sport of une
|
https://en.wikipedia.org/wiki/Professional%20audio
|
Professional audio, abbreviated as pro audio, refers to both an activity and a category of high-quality, studio-grade audio equipment. Typically it encompasses sound recording, sound reinforcement system setup and audio mixing, and studio music production by trained sound engineers, audio engineers, record producers, and audio technicians who work in live event support and recording using mixing consoles, recording equipment and sound reinforcement systems. Professional audio is differentiated from consumer- or home-oriented audio, which are typically geared toward listening in a non-commercial environment.
Professional audio can include, but is not limited to broadcast radio, audio mastering in a recording studio, television studio, and sound reinforcement such as a live concert, DJ performances, audio sampling, public address system set up, sound reinforcement in movie theatres, and design and setup of piped music in hotels and restaurants. Professional audio equipment is sold at professional audio stores and music stores.
Definition
The term professional audio has no precise definition, but it typically includes:
Operations carried out by trained audio engineers
The capturing of sound with one or more microphones
Balancing, mixing and adjusting sound signals from multitrack recording devices using a mixing console
The control of audio levels using standardized types of metering
Sound signals passing through lengthy signal chains involving processes at different times and places, involving a variety of skills
Compliance with organizational, national and international practices and standards established by such bodies as the International Telecommunication Union, Audio Engineering Society and European Broadcasting Union
Setting up or designing sound reinforcement systems or recording studios
Stores
A professional audio store is a retail establishment that sells, and in many cases rents, expensive, high-end sound recording equipment (microphones, audio m
|
https://en.wikipedia.org/wiki/Institute%20of%20Professional%20Sound
|
The Institute of Professional Sound, previously the Institute of Broadcast Sound, is an organisation for audio professionals. The organisation provides opportunities for training and conferencing to assist in maintaining high standards in all areas of professional audio operations. The organisation is based in the UK.
The organisation was founded in 1977 by sound balancers in BBC Television and Radio and Independent TV, when its membership comprised audio practitioners working in all areas of broadcast audio including radio, location, and post-production sound. On 1 January 2012 the Institute of Professional Sound was adopted as the new name of the organisation, in order to attract a wider membership which is not exclusively from broadcasting.
History
The Institute of Professional Sound was established in 1977 as the Institute of Broadcast Sound, by individuals working in radio and television, who recognised a need for a coordinated means for the exchange of innovative ideas between practitioners in the field of broadcast audio. The organisation serves as a catalyst to promote collaborative initiatives between manufacturers of digital audio recording and editing equipment. Listed among the successes of the organisation is the File Exchange Initiative, from which the iXML specification was established, setting an open standard for the inclusion of location sound metadata in Broadcast Wave audio files.
Projects
Mentoring
The Institute of Professional Sound offers mentoring and career enhancement opportunities for entry-level employees, college graduates, and seasoned professionals. The mentoring program is designed to coordinate members who desire to assist their colleagues progress and succeed in their career with individuals seeking the advice and support of more experienced practitioners in the industry.
Training
The organisation provides training forums and conferences for its members which introduce members to emerging technologies, along with seminars on
|
https://en.wikipedia.org/wiki/Thermus%20thermophilus
|
Thermus thermophilus is a Gram-negative bacterium used in a range of biotechnological applications, including as a model organism for genetic manipulation, structural genomics, and systems biology. The bacterium is extremely thermophilic, with an optimal growth temperature of about . Thermus thermophilus was originally isolated from a thermal vent within a hot spring in Izu, Japan by Tairo Oshima and Kazutomo Imahori. The organism has also been found to be important in the degradation of organic materials in the thermogenic phase of composting.
T. thermophilus is classified into several strains, of which HB8 and HB27 are the most commonly used in laboratory environments. Genome analyses of these strains were independently completed in 2004. Thermus also displays the highest frequencies of natural transformation known to date.
Cell structure
Thermus thermophilus is a Gram-negative bacterium with an outer membrane that is composed of phospholipids and lipopolysaccharides. This bacterium also has a thin peptidoglycan (also known as murein) layer; this layer contains 29 muropeptides, which account for more than 85% of the total murein. Ala, Glu, Gly, Orn, N-acetylglucosamine and N-acetylmuramic acid are found in the murein layer of this bacterium. Another unique feature of this murein layer is that the N-terminal Gly is substituted with phenylacetic acid. This is the first instance of phenylacetic acid found in the murein of bacterial cells. The composition and peptide cross-bridges found in this murein layer are typical of Gram-positive bacteria, but the amount, the degree of cross-linkage and the length of the glycan chain give this bacterium its Gram-negative properties.
Survival mechanisms
Thermus thermophilus was originally found within a thermal vent in Japan. These bacteria can be found in a variety of geothermal environments. These thermophiles require a more stringent DNA repair system, as DNA becomes unstable at high temperatures.
|
https://en.wikipedia.org/wiki/Pui%20Ching%20Invitational%20Mathematics%20Competition
|
Pui Ching Invitational Mathematics Competition (Traditional Chinese: 培正數學邀請賽) has been held yearly by Pui Ching Middle School since 2002. It was formerly named the Pui Ching Middle School Invitational Mathematics Competition for its first three years. At present, more than 130 secondary schools send teams to participate in the competition.
See also
List of mathematics competitions
Education in Hong Kong
External links
Official website (in Traditional Chinese)
Site with past papers (in Traditional Chinese and English)
Competitions in Hong Kong
Mathematics competitions
Recurring events established in 2002
2002 establishments in Hong Kong
|
https://en.wikipedia.org/wiki/Project%20Kahu
|
Project Kahu was a major upgrade for the A-4K Skyhawk attack aircraft operated by the Royal New Zealand Air Force (RNZAF) in the mid-1980s. (The project was named after the Māori-language name for the New Zealand swamp harrier.)
History
In 1986, the RNZAF initiated this project to improve the capabilities of its Douglas A-4 Skyhawk fleet. The upgrade included the installation of a Westinghouse AN/APG-66 radar optimized for maritime tracking, HOTAS controls and a glass cockpit (2 large CRT screens), MIL-STD 1553B databus, Litton Industries LN-93 inertial navigation system, Ferranti 4510 wide-angle HUD, the Vinten airborne video recording system, the General Instrument ALR-66 radar warning receiver, and a Tracor ALR-39 chaff/flare dispenser.
The contract covered the upgrade of all 22 of the RNZAF's Skyhawk fleet, which at the time comprised the surviving 12 (of 14) K-model aircraft of the RNZAF's original order plus the 10 G-models acquired from the Royal Australian Navy in 1984. However, only 21 were completed as one (NZ6210) was lost in 1989 before it was upgraded.
Parts of the wings were reskinned and some structural elements rebuilt, and the aircraft wiring replaced. Because of advances in miniaturization, it was possible to incorporate these additional electronics items entirely within the fuselage without requiring the use of the dorsal hump. The Kahu-modified Skyhawk could be recognized by a blade-like ILS aerial antenna on the leading edge of the vertical stabilizer. The aircraft also received armament upgrades including the capability to fire AIM-9L Sidewinders, AGM-65 Mavericks and GBU-16 Paveway II laser-guided bombs.
TA-4K NZ6254 was the first aircraft to be completed and undertook an extensive test programme conducted by Flight Lieutenant Steve Moore, who had recently become only the second RNZAF pilot to complete and graduate from the Empire Test Pilot School in the United Kingdom. The programme was completed in June 1991 when the final aircraft, N
|
https://en.wikipedia.org/wiki/Audio%20restoration
|
Audio restoration is the process of removing imperfections (such as hiss, impulse noise, crackle, wow and flutter, background noise, and mains hum) from sound recordings. Audio restoration can be performed directly on the recording medium (for example, washing a gramophone record with a cleansing solution), or on a digital representation of the recording using a computer (such as an AIFF or WAV file). Record restoration is a particular form of audio restoration that seeks to repair the sound of damaged gramophone records.
Modern audio restoration techniques are usually performed by digitizing an audio source from analog media, such as lacquer recordings, optical sources and magnetic tape. Once in the digital realm, recordings can be restored and cleaned up using dedicated, standalone digital processing units such as declickers, decracklers, dehissers and dialogue noise suppressors, or using digital audio workstations (DAWs). DAWs can perform various automated techniques to remove anomalies using algorithms to accomplish broadband denoising, declicking and decrackling, as well as removing buzzes and hums. Often audio engineers and sound editors use DAWs to manually remove "pops and ticks" from recordings, and the latest spectrographic 'retouching' techniques allow for the suppression or removal of discrete unwanted sounds. DAWs are capable of removing the smallest of anomalies, often without leaving artifacts and other evidence of their removal. Although fully automated solutions exist, audio restoration is sometimes a time-consuming process that requires skilled audio engineers with specific experience in music and film recording techniques.
Overview
The majority of audio restoration done today is done for music sound recordings and soundtracks for motion picture and television programs. The demand for restored audio has been fueled by new media consumer technologies such as CD and DVD. Modern audio reproduction systems require that sound sources be in the best
|
https://en.wikipedia.org/wiki/Media%20processor
|
A media processor, mostly used as an image/video processor, is a microprocessor-based system-on-a-chip which is designed to deal with digital streaming data in real-time (e.g. display refresh) rates. These devices can also be considered a class of digital signal processors (DSPs).
Unlike graphics processing units (GPUs), which are used for computer displays, media processors are targeted at digital televisions and set-top boxes.
The streaming digital media classes include:
uncompressed video
compressed digital video - e.g. MPEG-1, MPEG-2, MPEG-4
digital audio- e.g. PCM, AAC
Such SOCs are composed of:
a microprocessor optimized to deal with these media datatypes
a memory interface
streaming media interfaces
specialized functional units to help deal with the various digital media codecs
The microprocessor might have these optimizations:
vector processing or SIMD functional units to efficiently deal with these media datatypes
DSP-like features
Previous to media processors, these streaming media datatypes were processed using fixed-function, hardwired ASICs, which could not be updated in the field. This was a big disadvantage when any of the media standards were changed. Since media processors are software programmed devices, the processing done on them could be updated with new software releases. This allowed new generations of systems to be created without hardware redesign. For set-top boxes this even allows for the possibility of in-the-field upgrade by downloading of new software through cable or satellite networks.
Companies that pioneered the idea of media processors (and created the marketing term of media processor) included:
MicroUnity MediaProcessor - Cancelled in 1996 before introduction
IBM Mfast - Described at the Microprocessor Forum in 1995, planned to ship in mid-1997 but was cancelled before introduction
Equator Semiconductor BSP line - their processors are used in Hitachi televisions, company acquired by Pixelworks
Chromatic
|
https://en.wikipedia.org/wiki/Unigine
|
UNIGINE is a proprietary cross-platform game engine developed by UNIGINE Company used in simulators, virtual reality systems, serious games and visualization. It supports OpenGL 4, Vulkan and DirectX 12.
UNIGINE Engine is a core technology for a lineup of benchmarks (CPU, GPU, power supply, cooling system), which are used by overclockers and technical media such as Tom's Hardware, Linus Tech Tips, PC Gamer, and JayzTwoCents. UNIGINE benchmarks are also included as part of the Phoronix Test Suite for benchmarking purposes on Linux and other systems.
UNIGINE 1
The first public release was the 0.3 version on May 4, 2005.
Platforms
UNIGINE 1 supported Microsoft Windows, Linux, OS X, PlayStation 3, Android, and iOS. Experimental support for WebGL existed but was not included into the official SDK. UNIGINE 1 supported DirectX 9, DirectX 10, DirectX 11, OpenGL, OpenGL ES and PlayStation 3, while initial versions (v0.3x) only supported OpenGL.
UNIGINE 1 provided C++, C#, and UnigineScript APIs for developers. It also supported the shading languages GLSL and HLSL.
Game features
UNIGINE 1 had support for large virtual scenarios and specific hardware required by professional simulators and enterprise VR systems, often called serious games.
Support for large virtual worlds was implemented via double precision of coordinates (64-bit per axis), zone-based background data streaming, and optional operations in geographic coordinate system (latitude, longitude, and elevation instead of X, Y, Z).
Display output was implemented via multi-channel rendering (network-synchronized image generation of a single large image with several computers), which is typical for professional simulators. The same system enabled support of multiple output devices with asymmetric projections (e.g. CAVE). Curved screens with multiple projectors were also supported. UNIGINE 1 had stereoscopic output support for anaglyph rendering, separate images output, Nvidia 3D Vision, and virtual reality headsets
|
https://en.wikipedia.org/wiki/Track%20and%20trace
|
In the distribution and logistics of many types of products, track and trace or tracking and tracing concerns a process of determining the current and past locations (and other information) of a unique item or property. Mass serialization is the process that manufacturers go through to assign and mark each of their products with a unique identifier such as an Electronic Product Code (EPC) for track and trace purposes. The marking or "tagging" of products is usually completed within the manufacturing process through the use of various combinations of human readable or machine readable technologies such as DataMatrix barcodes or RFID.
The track and trace concept can be supported by means of reckoning and reporting of the position of vehicles and containers with the property of concern, stored, for example, in a real-time database. This approach leaves open the task of composing a coherent depiction of the object's history from the successive status reports.
Another approach is to report the arrival or departure of the object, recording the identification of the object, the location where it was observed, the time, and the status. This approach leaves open the task of verifying the reports for consistency and completeness. An example of this method might be the package tracking provided by shippers, such as the United States Postal Service, Deutsche Post, Royal Mail, United Parcel Service, AirRoad, or FedEx.
Technology
The international standards organization EPCglobal under GS1 has ratified the EPC network standards (esp. the EPC information services EPCIS standard) which codify the syntax and semantics for supply chain events and the secure method for selectively sharing supply chain events with trading partners. These standards for Tracking and Tracing have been used in successful deployments in many industries and there are now a wide range of products that are certified as being compatible with these standards.
In response to a growing number of recall incidents (food, pharmaceutical, toys, etc.)
|
https://en.wikipedia.org/wiki/Genotyping
|
Genotyping is the process of determining differences in the genetic make-up (genotype) of an individual by examining the individual's DNA sequence using biological assays and comparing it to another individual's sequence or a reference sequence. It reveals the alleles an individual has inherited from their parents. Traditionally genotyping is the use of DNA sequences to define biological populations by use of molecular tools. It does not usually involve defining the genes of an individual.
Techniques
Current methods of genotyping include restriction fragment length polymorphism identification (RFLPI) of genomic DNA, random amplified polymorphic detection (RAPD) of genomic DNA, amplified fragment length polymorphism detection (AFLPD), polymerase chain reaction (PCR), DNA sequencing, allele specific oligonucleotide (ASO) probes, and hybridization to DNA microarrays or beads. Genotyping is important in research of genes and gene variants associated with disease. Due to current technological limitations, almost all genotyping is partial. That is, only a small fraction of an individual's genotype is determined, such as with (epi)GBS (Genotyping by sequencing) or RADseq. New mass-sequencing technologies promise to provide whole-genome genotyping (or whole genome sequencing) in the future.
Applications
Genotyping applies to a broad range of individuals, including microorganisms. For example, viruses and bacteria can be genotyped. Genotyping in this context may help in controlling the spreading of pathogens, by tracing the origin of outbreaks. This area is often referred to as molecular epidemiology or forensic microbiology.
Human genotyping
Humans can also be genotyped. For example, when testing fatherhood or motherhood, scientists typically only need to examine 10 or 20 genomic regions (like single-nucleotide polymorphisms (SNPs)), which represent a tiny fraction of the human genome.
When genotyping transgenic organisms, a single genomic region may be all t
|
https://en.wikipedia.org/wiki/Maupertuis%27s%20principle
|
In classical mechanics, Maupertuis's principle (named after Pierre Louis Maupertuis) states that the path followed by a physical system is the one of least length (with a suitable interpretation of path and length). It is a special case of the more generally stated principle of least action. Using the calculus of variations, it results in an integral equation formulation of the equations of motion for the system.
Mathematical formulation
Maupertuis's principle states that the true path of a system described by generalized coordinates q between two specified states q_1 and q_2 is a stationary point (i.e., an extremum (minimum or maximum) or a saddle point) of the abbreviated action functional
S_0[q(t)] = ∫ p · dq,
where p = (p_1, p_2, …, p_N) are the conjugate momenta of the generalized coordinates, defined by the equation
p_i = ∂L/∂q̇_i,
where L(q, q̇, t) is the Lagrangian function for the system. In other words, any first-order perturbation of the path results in (at most) second-order changes in S_0. Note that the abbreviated action S_0 is a functional (i.e. a function from a vector space into its underlying scalar field), which in this case takes as its input a function (i.e. the paths q(t) between the two specified states).
Jacobi's formulation
For many systems, the kinetic energy T is quadratic in the generalized velocities q̇,
T = (1/2) Σ_jk M_jk(q) q̇_j q̇_k,
although the mass tensor M may be a complicated function of the generalized coordinates q. For such systems, a simple relation relates the kinetic energy, the generalized momenta and the generalized velocities,
2T = Σ_j p_j q̇_j,
provided that the potential energy does not involve the generalized velocities. By defining a normalized distance or metric ds in the space of generalized coordinates,
ds² = Σ_jk M_jk(q) dq_j dq_k,
one may immediately recognize the mass tensor as a metric tensor. The kinetic energy may be written in a massless form
T = (1/2) (ds/dt)²,
or, equivalently,
√(2T) = ds/dt.
Therefore, the abbreviated action can be written
S_0 = ∫ p · dq = ∫ √(2T) ds = ∫ √(2(E_tot − V(q))) ds,
since the kinetic energy T = E_tot − V(q) equals the (constant) total energy E_tot minus the potential energy V(q). In particular, if the potential energy is a constant, then Jacobi's principle re
|
https://en.wikipedia.org/wiki/Steering%20ratio
|
Steering ratio refers to the ratio between the turn of the steering wheel (in degrees) or handlebars and the turn of the wheels (in degrees).
The steering ratio is the ratio of the number of degrees of turn of the steering wheel to the number of degrees the wheel(s) turn as a result. In motorcycles, delta tricycles and bicycles, the steering ratio is always 1:1, because the handlebars are fixed to the front wheel. A steering ratio of x:y means that a turn of the steering wheel x degree(s) causes the wheel(s) to turn y degree(s). In most passenger cars, the ratio is between 12:1 and 20:1. For example, if one and a half turns of the steering wheel, 540 degrees, causes the inner and outer wheels to turn 35 and 30 degrees respectively, due to Ackermann steering geometry, the ratio is then 540:((35+30)/2) = 16.6:1.
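The arithmetic of the example can be written out as a small sketch (illustrative Python; function and variable names are arbitrary):

def steering_ratio(steering_wheel_deg: float,
                   inner_wheel_deg: float,
                   outer_wheel_deg: float) -> float:
    """Ratio of steering-wheel rotation to the average road-wheel angle."""
    average_wheel_deg = (inner_wheel_deg + outer_wheel_deg) / 2
    return steering_wheel_deg / average_wheel_deg

if __name__ == "__main__":
    # One and a half turns (540 degrees) turning the wheels 35 and 30 degrees:
    print(round(steering_ratio(540, 35, 30), 1))   # -> 16.6, i.e. a ratio of 16.6:1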
A higher steering ratio means that the steering wheel is turned more to get the wheels turning, but it will be easier to turn the steering wheel. A lower steering ratio means that the steering wheel is turned less to get the wheels turning, but it will be harder to turn the steering wheel. Larger and heavier vehicles will often have a higher steering ratio, which will make the steering wheel easier to turn. If a truck had a low steering ratio, it would be very hard to turn the steering wheel. In normal and lighter cars, the wheels are easier to turn, so the steering ratio doesn't have to be as high. In race cars the ratio is typically very low, because the vehicle must respond to steering input much faster than in normal cars. The steering wheel is therefore harder to turn.
Variable-ratio steering
Variable-ratio steering is a system that uses different ratios on the rack in a rack and pinion steering system. At the center of the rack, the space between the teeth are smaller and the space becomes larger as the pinion moves down the rack. In the middle of the rack there is a higher ratio and the ratio becomes lower as the steering wheel is turned towards loc
|
https://en.wikipedia.org/wiki/Referer%20spoofing
|
In HTTP networking, typically on the World Wide Web, referer spoofing (based on a canonised misspelling of "referrer") sends incorrect referer information in an HTTP request in order to prevent a website from obtaining accurate data on the identity of the web page previously visited by the user.
Overview
Referer spoofing is typically done for data privacy reasons, in testing, or in order to request information (without genuine authority) which some web servers may only supply in response to requests with specific HTTP referers.
To improve their privacy, individual browser users may replace accurate referer data with inaccurate data, though many simply suppress their browser's sending of any referer data. Sending no referrer information is not technically spoofing, though sometimes also described as such.
In software, systems and networks testing, and sometimes penetration testing, referer spoofing is often just part of a larger procedure of transmitting both accurate and inaccurate as well as expected and unexpected input to the HTTPD system being tested and observing the results.
While many websites are configured to gather referer information and serve different content depending on the referer information obtained, exclusively relying on HTTP referer information for authentication and authorization purposes is not a genuine computer security measure. HTTP referer information is freely alterable and interceptable, and is not a password, though some poorly configured systems treat it as such.
Application
Some websites, especially many image hosting sites, use referer information to secure their materials: only browsers arriving from their web pages are served images. Additionally a site may want users to click through pages with advertisements before directly being able to access a downloadable file – using the referring page or referring site information can help a site redirect unauthorized users to the landing page the site would like to use.
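A minimal sketch of sending a request with a chosen Referer header, as is done in testing (Python standard library only; the URLs are placeholders):

import urllib.request

def fetch_with_referer(url: str, referer: str) -> bytes:
    # Attach an arbitrary Referer header; the server has no way to verify it.
    request = urllib.request.Request(url, headers={"Referer": referer})
    with urllib.request.urlopen(request) as response:
        return response.read()

if __name__ == "__main__":
    # Pretend the request came from the hosting site's own gallery page.
    body = fetch_with_referer("https://example.com/image.jpg",
                              "https://example.com/gallery.html")
    print(len(body), "bytes")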
If attacker
|
https://en.wikipedia.org/wiki/X.3
|
X.3 is an ITU-T standard indicating what functions are to be performed by a Packet Assembler/Disassembler (PAD) when connecting character-mode data terminal equipment (DTE), such as a computer terminal, to a packet switched network such as an X.25 network, and specifying the parameters that control this operation.
The following is list of X.3 parameters associated with a PAD:
1 PAD recall using a character
2 Echo
3 Selection of data forwarding character
4 Selection of idle timer delay
5 Ancillary device control
6 Control of PAD service signals
7 Operation on receipt of break signal
8 Discard output
9 Padding after carriage return
10 Line folding
11 DTE speed
12 Flow control of the PAD
13 Linefeed insertion after carriage return
14 Padding after linefeed
15 Editing
16 Character delete
17 Line delete
18 Line display
19 Editing PAD service signals
20 Echo mask
21 Parity treatment
22 Page wait
References
External links
X.3 standard at ITU site
Cisco Web Page Definition of X.3 parameters
Networking standards
X.25
|
https://en.wikipedia.org/wiki/Noweb
|
Noweb, stylised in lowercase as noweb, is a literate programming tool, created in 1989–1999 by Norman Ramsey, and designed to be simple, easily extensible and language independent.
As in WEB and CWEB, the main components of Noweb are two programs: "notangle", which extracts 'machine' source code from the source texts, and "noweave", which produces nicely-formatted printable documentation.
Noweb supports TeX, LaTeX, HTML, and troff back ends and works with any programming language. Besides simplicity, this is the main advantage over WEB, which needs different versions to support programming languages other than Pascal. (Thus the necessity of CWEB, which supports C and similar languages.)
Noweb's input
A Noweb input text contains program source code interleaved with documentation. It consists of so-called chunks that are either documentation chunks or code chunks.
A documentation chunk begins with a line that starts with an at sign (@) followed by a space or newline. A documentation chunk has no name. Documentation chunks normally contain LaTeX, but Noweb is also used with HTML, plain TeX, and troff.
Code chunks are named. A code chunk begins with
<<chunk name>>=
on a line by itself. The double left angle bracket (<<) must be in the first column.
Each chunk is terminated by the beginning of another chunk. If the first line in the file does not mark the beginning of a chunk, it is assumed to be the first line of a documentation chunk.
Code chunks aren't treated specially by Noweb's tools—they may be placed in any order and, when needed, they are just concatenated. Chunk references in code are dereferenced and the whole requested source code is extracted.
Example of a simple Noweb program
This is an example of a "hello world" program with documentation:
\section{Hello world}
Today I awoke and decided to write
some code, so I started to write Hello World in \textsf C.
<<hello.c>>=
/*
<<license>>
*/
#include <stdio.h>
int main(int argc, char *argv[]) {
|
https://en.wikipedia.org/wiki/Churchill%20Barriers
|
The Churchill Barriers are four causeways in the Orkney islands with a total length of . They link the Orkney Mainland in the north to the island of South Ronaldsay via Burray and the two smaller islands of Lamb Holm and Glimps Holm.
The barriers were built between May 1940 and September 1944, primarily as naval defences to protect the anchorage at Scapa Flow, but since 12 May 1945 they serve as road links between the islands. The two southern barriers, Glimps Holm to Burray and Burray to South Ronaldsay, are Category A listed.
History
On 14 October 1939, the Royal Navy battleship HMS Royal Oak was sunk at her moorings within the natural harbour of Scapa Flow by the German submarine U-47 under the command of Günther Prien. U-47 had entered Scapa Flow through Holm Sound, one of several eastern entrances to Scapa Flow.
The eastern passages were protected by measures including sunken block ships, booms and anti-submarine nets, but U-47 entered at night at high tide by navigating between the block ships.
To prevent further attacks, First Lord of the Admiralty Winston Churchill ordered the construction of permanent barriers. Work began in May 1940 and the barriers were completed in September 1944 but were not officially opened until 12 May 1945, four days after Victory in Europe Day.
Construction
The contract for building the barriers was awarded to Balfour Beatty, although part of the southernmost barrier (between Burray and South Ronaldsay) was sub-contracted to William Tawse & Co. The first Resident Superintending Civil Engineer was E K Adamson, succeeded in 1942 by G Gordon Nicol.
Preparatory work on the site began in May 1940, while experiments on models for the design were undertaken at Whitworth Engineering Laboratories at the University of Manchester.
The bases of the barriers were built from gabions enclosing 250,000 tonnes of broken rock, from quarries on Orkney. The gabions were dropped into place from overhead cableways into waters up to deep. The bases were then
|
https://en.wikipedia.org/wiki/Higgs%20sector
|
In particle physics, the Higgs sector is the collection of quantum fields and/or particles that are responsible for the Higgs mechanism, i.e. for the spontaneous symmetry breaking of the Higgs field. The word "sector" refers to a subgroup of the total set of fields and particles.
See also
Higgs boson
Hidden sector
References
Standard Model
Symmetry
|
https://en.wikipedia.org/wiki/Recursion%20%28computer%20science%29
|
In computer science, recursion is a method of solving a computational problem where the solution depends on solutions to smaller instances of the same problem. Recursion solves such recursive problems by using functions that call themselves from within their own code. The approach can be applied to many types of problems, and recursion is one of the central ideas of computer science.
Most computer programming languages support recursion by allowing a function to call itself from within its own code. Some functional programming languages (for instance, Clojure) do not define any looping constructs but rely solely on recursion to repeatedly call code. It is proved in computability theory that these recursive-only languages are Turing complete; this means that they are as powerful (they can be used to solve the same problems) as imperative languages based on control structures such as while and for.
Repeatedly calling a function from within itself may cause the call stack to have a size equal to the sum of the input sizes of all involved calls. It follows that, for problems that can be solved easily by iteration, recursion is generally less efficient, and, for large problems, it is fundamental to use optimization techniques such as tail call optimization.
Recursive functions and algorithms
A common algorithm design tactic is to divide a problem into sub-problems of the same type as the original, solve those sub-problems, and combine the results. This is often referred to as the divide-and-conquer method; when combined with a lookup table that stores the results of previously solved sub-problems (to avoid solving them repeatedly and incurring extra computation time), it can be referred to as dynamic programming or memoization.
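A small illustration of the divide-and-conquer-plus-lookup-table idea (a memoized Fibonacci function in Python; an independent example, with lru_cache acting as the lookup table):

from functools import lru_cache

@lru_cache(maxsize=None)
def fibonacci(n: int) -> int:
    if n < 2:            # base cases: F(0) = 0, F(1) = 1
        return n
    # Recursive case: the problem is split into two smaller instances,
    # and lru_cache stores previously solved sub-problems.
    return fibonacci(n - 1) + fibonacci(n - 2)

if __name__ == "__main__":
    print(fibonacci(40))   # 102334155, computed without exponential blow-up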
Base case
A recursive function definition has one or more base cases, meaning input(s) for which the function produces a result trivially (without recurring), and one or more recursive cases, meaning input(s) for which the program recurs (calls itsel
|
https://en.wikipedia.org/wiki/Programming%20model
|
A programming model is an execution model coupled to an API or a particular pattern of code. In this style, there are actually two execution models in play: the execution model of the base programming language and the execution model of the programming model. An example is Spark where Java is the base language, and Spark is the programming model. Execution may be based on what appear to be library calls. Other examples include the POSIX Threads library and Hadoop's MapReduce. In both cases, the execution model of the programming model is different from that of the base language in which the code is written. For example, the C programming language has no behavior in its execution model for input/output or thread behavior. But such behavior can be invoked from C syntax, by making what appears to be a call to a normal C library.
What distinguishes a programming model from a normal library is that the behavior of the call cannot be understood in terms of the language the program is written in. For example, the behavior of calls to the POSIX thread library cannot be understood in terms of the C language. The reason is that the call invokes an execution model that is different from the execution model of the language. This invocation of an outside execution model is the defining characteristic of a programming model, in contrast to a programming language.
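A small Python illustration of the same point (an independent sketch): the behaviour of the code below depends on the threading library's concurrent execution model, which cannot be explained by the base language's sequential semantics alone.

import threading

counter = 0
lock = threading.Lock()

def worker(iterations: int) -> None:
    global counter
    for _ in range(iterations):
        with lock:           # correctness depends on the threading execution model,
            counter += 1     # not on anything in the language's own sequential semantics

threads = [threading.Thread(target=worker, args=(10_000,)) for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)               # 40000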
In parallel computing, the execution model often must expose features of the hardware in order to achieve high performance. The large amount of variation in parallel hardware causes a concurrent need for a similarly large number of parallel execution models. It is impractical to make a new language for each execution model, hence it is a common practice to invoke the behaviors of the parallel execution model via an API. So, most of the programming effort is done via parallel programming models rather than parallel languages. The terminology around such programming models tends to focus on the details of the hardwar
|