https://en.wikipedia.org/wiki/The%20Elements%20of%20Programming%20Style
|
The Elements of Programming Style, by Brian W. Kernighan and P. J. Plauger, is a study of programming style, advocating the notion that computer programs should be written not only to satisfy the compiler or personal programming "style", but also for "readability" by humans, specifically software maintenance engineers, programmers and technical writers. It was originally published in 1974.
The book pays explicit homage, in title and tone, to The Elements of Style, by Strunk & White and is considered a practical template promoting Edsger Dijkstra's structured programming discussions. It has been influential and has spawned a series of similar texts tailored to individual languages, such as The Elements of C Programming Style, The Elements of C# Style, The Elements of Java(TM) Style, The Elements of MATLAB Style, etc.
The book is built on short examples from actual, published programs in programming textbooks. This results in a practical treatment rather than an abstract or academic discussion. The style is diplomatic and generally sympathetic in its criticism, and unabashedly honest as well— some of the examples with which it finds fault are from the authors' own work (one example in the second edition is from the first edition).
Lessons
Its lessons are summarized at the end of each section in pithy maxims, such as "Let the machine do the dirty work":
Write clearly – don't be too clever.
Say what you mean, simply and directly.
Use library functions whenever feasible.
Avoid too many temporary variables.
Write clearly – don't sacrifice clarity for efficiency.
Let the machine do the dirty work.
Replace repetitive expressions by calls to common functions.
Parenthesize to avoid ambiguity.
Choose variable names that won't be confused.
Avoid unnecessary branches.
If a logical expression is hard to understand, try transforming it.
Choose a data representation that makes the program simple.
Write first in easy-to-understand pseudo language; then translate into
|
https://en.wikipedia.org/wiki/List%20of%20battery%20sizes
|
This is a list of the sizes, shapes, and general characteristics of some common primary and secondary battery types in household, automotive and light industrial use.
The complete nomenclature for a battery specifies size, chemistry, terminal arrangement, and special characteristics. The same physically interchangeable cell size or battery size may have widely different characteristics; physical interchangeability is not the sole factor in substituting a battery.
The full battery designation identifies not only the size, shape and terminal layout of the battery but also the chemistry (and therefore the voltage per cell) and the number of cells in the battery. For example, a CR123 battery is always LiMnO2 ('Lithium') chemistry, in addition to its unique size.
The following tables give the common battery chemistry types for the current common sizes of batteries. See Battery chemistry for a list of other electrochemical systems.
Cylindrical batteries
Rectangular batteries
Camera batteries
As well as other types, digital and film cameras often use specialized primary batteries to produce a compact product. Flashlights and portable electronic devices may also use these types.
Button cells – coin, watch
Lithium cells
Coin-shaped cells are thin compared to their diameter. Polarity is usually stamped on the metal casing.
The IEC prefix "CR" denotes lithium manganese dioxide chemistry. Since LiMnO2 cells produce 3 volts, there are no widely available alternative chemistries for a lithium coin battery. The "BR" prefix indicates a round lithium/carbon monofluoride cell. See lithium battery for discussion of the different performance characteristics. One LiMnO2 cell can replace two alkaline or silver-oxide cells.
IEC designation numbers indicate the physical dimensions of the cylindrical cell. Cells less than one centimeter in height are assigned four-digit numbers, where the first two digits are the diameter in millimeters, while the last two digits are the height in
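As a small illustration of this designation scheme, the following sketch parses a four-digit coin-cell code. It assumes the common convention that the last two digits give the nominal height in tenths of a millimetre (so CR2032 is roughly 20 mm in diameter and 3.2 mm tall); the function name is purely illustrative.

```python
import re

def parse_coin_cell(designation: str) -> dict:
    """Parse an IEC-style coin-cell designation such as 'CR2032' or 'BR1225'.

    Assumes the common convention: a chemistry prefix followed by four digits,
    where the first two digits give the nominal diameter in millimetres and
    the last two give the nominal height in tenths of a millimetre.
    """
    m = re.fullmatch(r"([A-Z]{1,2})(\d{2})(\d{2})", designation.upper())
    if not m:
        raise ValueError(f"not a four-digit coin-cell designation: {designation}")
    prefix, diameter, height = m.groups()
    return {
        "chemistry_prefix": prefix,       # e.g. CR = lithium manganese dioxide
        "diameter_mm": int(diameter),     # e.g. 20 for CR20xx
        "height_mm": int(height) / 10.0,  # e.g. 32 -> 3.2 mm
    }

print(parse_coin_cell("CR2032"))
# {'chemistry_prefix': 'CR', 'diameter_mm': 20, 'height_mm': 3.2}
```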
|
https://en.wikipedia.org/wiki/Glaisher%E2%80%93Kinkelin%20constant
|
In mathematics, the Glaisher–Kinkelin constant or Glaisher's constant, typically denoted A, is a mathematical constant, related to the K-function and the Barnes G-function. The constant appears in a number of sums and integrals, especially those involving gamma functions and zeta functions. It is named after mathematicians James Whitbread Lee Glaisher and Hermann Kinkelin.
Its approximate value is:
A = 1.28242712... .
The Glaisher–Kinkelin constant can be given by the limit:
A = lim_{n→∞} H(n) / (n^(n²/2 + n/2 + 1/12) · e^(−n²/4)),
where H(n) = 1^1 · 2^2 · 3^3 ⋯ n^n is the hyperfactorial. This formula displays a similarity between A and √(2π) which is perhaps best illustrated by noting Stirling's formula:
√(2π) = lim_{n→∞} n! / (n^(n + 1/2) · e^(−n)),
which shows that just as √(2π) is obtained from approximation of the factorials, A can also be obtained from a similar approximation to the hyperfactorials.
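A quick numerical sanity check of this limit can be done in log space, so that the huge intermediate quantities never overflow; this is only an illustrative sketch of the formula quoted above.

```python
import math

def glaisher_estimate(n: int) -> float:
    """Estimate Glaisher's constant A from the hyperfactorial limit
    A = lim H(n) / (n^(n^2/2 + n/2 + 1/12) * e^(-n^2/4)),
    evaluated in log space to avoid overflow for large n."""
    log_hyperfactorial = sum(k * math.log(k) for k in range(1, n + 1))
    log_denominator = (n * n / 2 + n / 2 + 1 / 12) * math.log(n) - n * n / 4
    return math.exp(log_hyperfactorial - log_denominator)

for n in (10, 100, 1000):
    print(n, glaisher_estimate(n))  # approaches 1.2824271...
```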
An equivalent definition for A can also be given in terms of the Barnes G-function and the gamma function Γ.
The Glaisher–Kinkelin constant also appears in evaluations of the derivatives of the Riemann zeta function, such as:
ζ′(−1) = 1/12 − ln A
ζ′(2) = (π²/6)·(γ + ln(2π) − 12 ln A),
where γ is the Euler–Mascheroni constant. The latter formula leads directly to the following product found by Glaisher:
∏_{k=1}^∞ k^(1/k²) = (A¹² / (2π e^γ))^(π²/6).
An alternative product formula, defined over the prime numbers, reads
∏_{k=1}^∞ p_k^(1/(p_k² − 1)) = A¹² / (2π e^γ),
where p_k denotes the kth prime number.
The following are some integrals that involve this constant:
A series representation for this constant follows from a series for the Riemann zeta function given by Helmut Hasse.
References
(Provides a variety of relationships.)
External links
The Glaisher–Kinkelin constant to 20,000 decimal places
Mathematical constants
Number theory
Glaisher family
|
https://en.wikipedia.org/wiki/AES%20key%20schedule
|
AES uses a key schedule to expand a short key into a number of separate round keys. The three AES variants have a different number of rounds. Each variant requires a separate 128-bit round key for each round plus one more. The key schedule produces the needed round keys from the initial key.
Round constants
The round constant rcon_i for round i of the key expansion is the 32-bit word:
rcon_i = [rc_i 00_16 00_16 00_16],
where rc_i is an eight-bit value defined as:
rc_i = 1 if i = 1; rc_i = 2·rc_{i−1} if i > 1 and rc_{i−1} < 80_16; rc_i = (2·rc_{i−1}) XOR 11B_16 if i > 1 and rc_{i−1} ≥ 80_16,
where XOR denotes the bitwise exclusive-or operator and constants such as 80_16 and 11B_16 are given in hexadecimal. Equivalently:
rc_i = x^(i−1),
where the bits of rc_i are treated as the coefficients of an element of the finite field GF(2)[x]/(x^8 + x^4 + x^3 + x + 1), so that e.g. rc_10 = 36_16 = 00110110_2 represents the polynomial x^5 + x^4 + x^2 + x.
AES uses up to rcon_10 for AES-128 (as 11 round keys are needed), up to rcon_8 for AES-192, and up to rcon_7 for AES-256.
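A minimal sketch of the rc_i recurrence follows; the function name is illustrative.

```python
def aes_round_constants(count: int = 10) -> list:
    """Generate rc_1 .. rc_count for the AES key schedule: rc_1 = 0x01, and each
    subsequent value is the previous one doubled, reduced by XOR with 0x11B
    whenever the previous value was >= 0x80."""
    rc = [0x01]
    while len(rc) < count:
        prev = rc[-1]
        rc.append((prev << 1) ^ 0x11B if prev & 0x80 else prev << 1)
    return rc

print([hex(v) for v in aes_round_constants()])
# ['0x1', '0x2', '0x4', '0x8', '0x10', '0x20', '0x40', '0x80', '0x1b', '0x36']
```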
The key schedule
Define:
N as the length of the key in 32-bit words: 4 words for AES-128, 6 words for AES-192, and 8 words for AES-256
K_0, K_1, ..., K_{N−1} as the 32-bit words of the original key
R as the number of round keys needed: 11 round keys for AES-128, 13 keys for AES-192, and 15 keys for AES-256
W_0, W_1, ..., W_{4R−1} as the 32-bit words of the expanded key
Also define RotWord as a one-byte left circular shift:
RotWord([b_0 b_1 b_2 b_3]) = [b_1 b_2 b_3 b_0]
and SubWord as an application of the AES S-box to each of the four bytes of the word:
SubWord([b_0 b_1 b_2 b_3]) = [S(b_0) S(b_1) S(b_2) S(b_3)]
Then for i = 0, ..., 4R − 1:
W_i = K_i, if i < N
W_i = W_{i−N} XOR SubWord(RotWord(W_{i−1})) XOR rcon_{i/N}, if i ≥ N and i ≡ 0 (mod N)
W_i = W_{i−N} XOR SubWord(W_{i−1}), if i ≥ N, N > 6, and i ≡ 4 (mod N)
W_i = W_{i−N} XOR W_{i−1}, otherwise.
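The following sketch implements this expansion for AES-128 only (N = 4). To stay self-contained it derives the S-box from its finite-field definition instead of hard-coding the table, and it omits the extra SubWord rule that only AES-256 (N = 8) uses; all names are illustrative. With an all-zero key, the first derived word is 62 63 63 63, since SubWord maps 0x00 to 0x63 and the first round constant flips the low bit of the first byte.

```python
def gf_mul(a: int, b: int) -> int:
    """Multiply two bytes in GF(2^8) with the AES polynomial x^8+x^4+x^3+x+1."""
    product = 0
    for _ in range(8):
        if b & 1:
            product ^= a
        carry = a & 0x80
        a = (a << 1) & 0xFF
        if carry:
            a ^= 0x1B
        b >>= 1
    return product

def gf_inv(a: int) -> int:
    """Multiplicative inverse in GF(2^8), by brute force (inverse of 0 is 0)."""
    return 0 if a == 0 else next(x for x in range(1, 256) if gf_mul(a, x) == 1)

def sbox_entry(a: int) -> int:
    """AES S-box: inversion in GF(2^8) followed by the standard affine map."""
    b, result = gf_inv(a), 0
    for i in range(8):
        bit = ((b >> i) ^ (b >> ((i + 4) % 8)) ^ (b >> ((i + 5) % 8))
               ^ (b >> ((i + 6) % 8)) ^ (b >> ((i + 7) % 8)) ^ (0x63 >> i)) & 1
        result |= bit << i
    return result

SBOX = [sbox_entry(a) for a in range(256)]
RCON = [0x01]
for _ in range(9):
    RCON.append(gf_mul(RCON[-1], 2))

def expand_key_128(key: bytes) -> list:
    """AES-128 key schedule: expand a 16-byte key into 44 four-byte words."""
    n, rounds = 4, 11
    words = [list(key[4 * i:4 * i + 4]) for i in range(n)]
    for i in range(n, 4 * rounds):
        temp = list(words[i - 1])
        if i % n == 0:
            temp = temp[1:] + temp[:1]          # RotWord
            temp = [SBOX[b] for b in temp]      # SubWord
            temp[0] ^= RCON[i // n - 1]         # add round constant
        words.append([a ^ b for a, b in zip(words[i - n], temp)])
    return words

round_keys = expand_key_128(bytes(16))               # all-zero test key
print(" ".join(f"{b:02x}" for b in round_keys[4]))   # 62 63 63 63
```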
Notes
References
FIPS PUB 197: the official AES standard (PDF file)
External links
Description of Rijndael's key schedule
Schematic view of the key schedule for 128 and 256 bit keys
Schematic view of the key schedule for 160-bit keys on Cryptography Stack Exchange
Advanced Encryption Standard
Key management
|
https://en.wikipedia.org/wiki/Baire%20space%20%28set%20theory%29
|
In set theory, the Baire space is the set of all infinite sequences of natural numbers with a certain topology. This space is commonly used in descriptive set theory, to the extent that its elements are often called "reals". It is denoted N^N or ω^ω; the latter notation is not to be confused with the countable ordinal obtained by ordinal exponentiation.
The Baire space is defined to be the Cartesian product of countably infinitely many copies of the set of natural numbers, and is given the product topology (where each copy of the set of natural numbers is given the discrete topology). The Baire space is often represented using the tree of finite sequences of natural numbers.
The Baire space can be contrasted with Cantor space, the set of infinite sequences of binary digits.
Topology and trees
The product topology used to define the Baire space can be described more concretely in terms of trees. The basic open sets of the product topology are cylinder sets, here characterized as:
If any finite set of natural number coordinates I={i} is selected, and for each i a particular natural number value vi is selected, then the set of all infinite sequences of natural numbers that have value vi at position i is a basic open set. Every open set is a countable union of a collection of these.
Using more formal notation, one can define the individual cylinders as
C_n(v) = { (a_0, a_1, ...) ∈ ω^ω : a_n = v }
for a fixed integer location n and integer value v. The cylinders are then the generators for the cylinder sets: the cylinder sets consist of all intersections of a finite number of cylinders. That is, given any finite set of natural number coordinates i_1, ..., i_k and corresponding natural number values v_1, ..., v_k, one considers the intersection of cylinders
C_{i_1}(v_1) ∩ C_{i_2}(v_2) ∩ ... ∩ C_{i_k}(v_k).
This intersection is called a cylinder set, and the set of all such cylinder sets provides a basis for the product topology. Every open set is a countable union of such cylinder sets.
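The cylinder construction is easy to illustrate in code: only the finitely many constrained coordinates ever need to be inspected, so an infinite sequence can be modelled as a function from N to N. This is purely illustrative; the names are not standard.

```python
from typing import Callable, Dict

Sequence = Callable[[int], int]  # an element of Baire space: a map from N to N

def cylinder(constraints: Dict[int, int]) -> Callable[[Sequence], bool]:
    """Basic open set given by finitely many coordinate constraints: the set of
    all sequences a with a(i) == v for every pair (i, v) in `constraints`."""
    def contains(a: Sequence) -> bool:
        return all(a(i) == v for i, v in constraints.items())
    return contains

identity = lambda n: n   # the sequence 0, 1, 2, 3, ...
zeros = lambda n: 0      # the constant-zero sequence

basic_open = cylinder({0: 0, 2: 2})  # fixes the values at positions 0 and 2
print(basic_open(identity))  # True:  identity(0) == 0 and identity(2) == 2
print(basic_open(zeros))     # False: zeros(2) == 0, not 2
```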
By moving to a different basis for the same topology, an alternate charac
|
https://en.wikipedia.org/wiki/CO-OPN
|
The CO-OPN (Concurrent Object-Oriented Petri Nets) specification language is based on both algebraic specifications and algebraic Petri nets formalisms. The former formalism represents the data-structure aspects, while the latter stands for the behavioral and concurrent aspects of systems. In order to deal with large specifications, some structuring capabilities have been introduced. The object-oriented paradigm has been adopted, which means that a CO-OPN specification is a collection of objects which interact concurrently. Cooperation between the objects is achieved by means of a synchronization mechanism, i.e., each object event may request to be synchronized with some methods (parameterized events) of one or a group of partners by means of a synchronization expression.
A CO-OPN specification consists of a collection of two different kinds of modules: abstract data type modules and object modules. The abstract data type modules concern the data structure component of the specifications, and many-sorted algebraic specifications are used when describing these modules. The object modules, in turn, represent the concept of encapsulated entities that possess an internal state and provide the exterior with various services. For this second kind of module, an algebraic net formalism has been adopted. Algebraic Petri nets, a kind of high-level net, are a great improvement over plain Petri nets: the tokens are replaced with data structures which are described by means of algebraic abstract data types. For managing visibility, both abstract data type modules and object modules are composed of an interface (which allows some operations to be visible from the outside) and a body (which mainly encapsulates the operations' properties and some operations which are used for building the model). In the case of the object modules, the state and the behavior of the objects remain concealed within the body section.
To develop models using the CO-OPN language it is poss
|
https://en.wikipedia.org/wiki/Spacecraft%20design
|
The design of spacecraft covers a broad area, including the design of both robotic spacecraft (satellites and planetary probes), and spacecraft for human spaceflight (spaceships and space stations).
Origin
Spacecraft design was born as a discipline in the 1950s and 60s with the advent of the American and Soviet space exploration programs. Since then it has progressed, although typically less than comparable terrestrial technologies. This is in large part due to the challenging space environment, but also to the lack of basic R&D and to other cultural factors within the design community. Another reason for the slow pace of design for space travel is the high energy cost, and low efficiency, of achieving orbit. This cost might be seen as too high a "start-up cost."
Areas of engineering involved
Spacecraft design brings together aspects of various disciplines, namely:
Astronautics for mission design and derivation of the design requirements,
Systems engineering for maintaining the design baseline and derivation of subsystem requirements,
Communications engineering for the design of the subsystems which communicate with the ground (e.g. telemetry) and perform ranging.
Computer engineering for the design of the on-board computers and computer buses. This subsystem is mainly based on terrestrial technologies, but unlike most of them, it must cope with the space environment, be highly autonomous and provide higher fault tolerance.
It may incorporate space qualified radiation-hardened components.
Software engineering for the on-board software which runs all the on-board applications, as well as low-level control software. This subsystem is very similar to terrestrial real-time and embedded software designs,
Electrical engineering for the design of the power subsystem, which generates, stores and distributes the electrical power to all the on-board equipment,
Control theory for the design of the attitude and orbit control subsystem, which points t
|
https://en.wikipedia.org/wiki/Ettingshausen%20effect
|
The Ettingshausen effect (named for Albert von Ettingshausen) is a thermoelectric (or thermomagnetic) phenomenon that affects the electric current in a conductor when a magnetic field is present.
Ettingshausen and his PhD student Walther Nernst were studying the Hall effect in bismuth, and noticed an unexpected perpendicular current flow when one side of the sample was heated. This is known as the Nernst effect. Conversely, when applying a current (along the y-axis) and a perpendicular magnetic field (along the z-axis) a temperature gradient appears along the x-axis. This is known as the Ettingshausen effect. Because of the Hall effect, electrons are forced to move perpendicular to the applied current. Due to the accumulation of electrons on one side of the sample, the number of collisions increases and a heating of the material occurs.
This effect is quantified by the Ettingshausen coefficient P, which is defined as:
P = (dT/dx) / (J_y · B_z),
where dT/dx is the temperature gradient that results from the y-component J_y of an electric current density (in A/m²) and the z-component B_z of a magnetic field.
In most metals, like copper, silver and gold, P is extremely small and thus difficult to observe in common magnetic fields. In bismuth the Ettingshausen coefficient is several orders of magnitude larger because of its poor thermal conductivity.
See also
Hall effect
References
Walther Nernst
Thermoelectricity
Electrodynamics
|
https://en.wikipedia.org/wiki/Lemniscate%20constant
|
In mathematics, the lemniscate constant ϖ is a transcendental mathematical constant that is the ratio of the perimeter of Bernoulli's lemniscate to its diameter, analogous to the definition of π for the circle. Equivalently, the perimeter of the lemniscate is 2ϖ. The lemniscate constant is closely related to the lemniscate elliptic functions and approximately equal to 2.62205755. The symbol ϖ is a cursive variant of π; see Pi § Variant pi.
Gauss's constant, denoted by G, is equal to ϖ/π.
John Todd named two more lemniscate constants, the first lemniscate constant A = ϖ/2 and the second lemniscate constant B = π/(2ϖ).
Sometimes the quantities or are referred to as the lemniscate constant.
History
Gauss's constant is named after Carl Friedrich Gauss, who calculated it via the arithmetic–geometric mean as G = 1/M(1, √2). By 1799, Gauss had two proofs of the theorem that M(1, √2) = π/ϖ, where ϖ is the lemniscate constant.
The lemniscate constant and first lemniscate constant were proven transcendental by Theodor Schneider in 1937, and the second lemniscate constant and Gauss's constant were proven transcendental by Theodor Schneider in 1941. In 1975, Gregory Chudnovsky proved that the set is algebraically independent over Q, which implies that ϖ and π are algebraically independent as well. But the set (where the prime denotes the derivative with respect to the second variable) is not algebraically independent over Q. In fact,
Forms
Usually, ϖ is defined by the first equality below,
ϖ = 2 ∫_0^1 dt/√(1 − t⁴) = √2 · K(1/√2) = (1/2)·B(1/4, 1/2) = Γ(1/4)² / (2√(2π)),
where K is the complete elliptic integral of the first kind with modulus k, B is the beta function, Γ is the gamma function and ζ is the Riemann zeta function.
The lemniscate constant can also be computed by the arithmetic–geometric mean M,
ϖ = π / M(1, √2).
Moreover,
which is analogous to
where β is the Dirichlet beta function and ζ is the Riemann zeta function.
Gauss's constant is typically defined as the reciprocal of the arithmetic–geometric mean of 1 and the square root of 2, after his calculation of M(1, √2) published in 1800:
G = 1 / M(1, √2).
Gauss's constant is equal t
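As a numerical illustration of the arithmetic–geometric mean relations quoted above (G = 1/M(1, √2) and ϖ = π/M(1, √2)), the following short sketch converges to the stated values in a handful of iterations.

```python
import math

def agm(a: float, b: float, tol: float = 1e-15) -> float:
    """Arithmetic-geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

m = agm(1.0, math.sqrt(2.0))
print(1 / m)        # Gauss's constant G   ~ 0.8346268...
print(math.pi / m)  # lemniscate constant  ~ 2.6220575...
```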
|
https://en.wikipedia.org/wiki/Undeletion
|
Undeletion is a feature for restoring computer files which have been removed from a file system by file deletion. Deleted data can be recovered on many file systems, but not all file systems provide an undeletion feature. Recovering data without an undeletion facility is usually called data recovery, rather than undeletion. Undeletion can both help prevent users from accidentally losing data, or can pose a computer security risk, since users may not be aware that deleted files remain accessible.
Support
Not all file systems or operating systems support undeletion. Undeletion is possible on all FAT file systems, with undeletion utilities provided since MS-DOS 5.0 and DR DOS 6.0 in 1991. It is not supported by most modern UNIX file systems, though AdvFS is a notable exception. The ext2 file system has an add-on program called e2undel which allows file undeletion. The similar ext3 file system does not officially support undeletion, but utilities like ext4magic, extundelete, PhotoRec and ext3grep were written to automate the undeletion on ext3 volumes. Undelete was proposed in ext4, but is yet to be implemented. However, a trash bin feature was posted as a patch on December 4, 2006. The Trash bin feature uses undelete
attributes in ext2/3/4 and Reiser file systems.
Command-line tools
Norton Utilities
Norton UNERASE was an important component in Norton Utilities version 1.0 in 1982.
MS-DOS
Microsoft included a similar UNDELETE program in versions 5.0 to 6.22 of MS-DOS, but applied the Recycle Bin approach instead in later operating systems using FAT.
DR DOS
DR DOS 6.0 and higher support UNDELETE as well, but optionally offer additional protection utilizing the FAT snapshot utility DISKMAP and the resident DELWATCH deletion tracking component, which actively maintains deleted files' date and time stamps and keeps the contents of deleted files from being overwritten unless running out of disk space. DELWATCH also supports undeletion of remote files on file servers.
|
https://en.wikipedia.org/wiki/Cauchy%20index
|
In mathematical analysis, the Cauchy index is an integer associated to a real rational function over an interval. By the Routh–Hurwitz theorem, we have the following interpretation: the Cauchy index of
r(x) = p(x)/q(x)
over the real line is the difference between the number of roots of f(z) located in the right half-plane and those located in the left half-plane. The complex polynomial f(z) is such that
f(iy) = q(y) + ip(y).
We must also assume that p has degree less than the degree of q.
Definition
The Cauchy index was first defined for a pole s of the rational function r by Augustin-Louis Cauchy in 1837 using one-sided limits as:
I_s r = +1 if r(x) jumps from −∞ to +∞ as x increases through s, −1 if r(x) jumps from +∞ to −∞, and 0 otherwise.
A generalization over the compact interval [a, b] is direct (when neither a nor b are poles of r(x)): it is the sum of the Cauchy indices I_s r of r for each pole s located in the interval. We usually denote it by I_a^b r.
We can then generalize to intervals of type since the number of poles of r is a finite number (by taking the limit of the Cauchy index over [a,b] for a and b going to infinity).
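A rough numerical sketch of this definition is shown below, assuming NumPy is available. It approximates the one-sided limits by evaluating r just to either side of each simple real pole, so it is a heuristic check rather than a robust implementation.

```python
import numpy as np

def cauchy_index(p: np.poly1d, q: np.poly1d, a: float, b: float,
                 eps: float = 1e-6) -> int:
    """Estimate the Cauchy index of r = p/q over (a, b).

    Each simple real pole s of r in (a, b) contributes +1 if r jumps from -oo
    to +oo as x increases through s, -1 for the opposite jump, 0 otherwise."""
    index = 0
    for s in q.roots:
        if abs(s.imag) > 1e-9 or not (a < s.real < b):
            continue  # only real poles strictly inside the interval count
        s = s.real
        left = p(s - eps) / q(s - eps)
        right = p(s + eps) / q(s + eps)
        if left < 0 < right:
            index += 1
        elif right < 0 < left:
            index -= 1
    return index

# r(x) = 1/x has a single pole at 0, jumping from -oo to +oo: index +1.
print(cauchy_index(np.poly1d([1.0]), np.poly1d([1.0, 0.0]), -1.0, 1.0))  # 1
```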
Examples
Consider the rational function:
We recognize in p(x) and q(x) respectively the Chebyshev polynomials of degree 3 and 5. Therefore, r(x) has five poles, at x_k = cos((2k − 1)π/10) for k = 1, ..., 5. The indices at the four nonzero poles can be read off a plot of r(x). For the pole at zero, we have I_0 r = 0 since the left and right limits are equal (which is because p(x) also has a root at zero).
We conclude that since q(x) has only five roots, all in [−1,1]. We cannot use here the Routh–Hurwitz theorem as each complex polynomial with f(iy) = q(y) + ip(y) has a zero on the imaginary line (namely at the origin).
External links
The Cauchy Index
Mathematical analysis
|
https://en.wikipedia.org/wiki/Rail%20India%20Technical%20and%20Economic%20Service
|
RITES Ltd, formerly known as Rail India Technical and Economic Service Limited, is a Navratna Central Public Sector Undertaking under the Ministry of Railways. Incorporated on April 26, 1974, it is a multidisciplinary engineering and consultancy organization providing a comprehensive range of services, from concept to commissioning, in all facets of transport infrastructure and related technologies.
RITES is a leading player in the transport consultancy and engineering sector in India and uniquely placed in terms of diversification of services and geographical reach in various sectors such as railways, highways, urban engineering (metros) & sustainability, airports, ports, ropeways, institutional buildings, inland waterways, and renewable energy. The company is the only export arm of Indian Railways for providing rolling stock abroad, except to Thailand, Malaysia, and Indonesia.
RITES' business spans over 49 years, covering more than 55 countries across Asia, Africa, Latin America, South America, and the Middle East.
RITES became a listed company in July 2018, and it has made it to the Top-500 listed Indian companies based on its market capitalization.
Key Services
Consultancy services, which include:
(1) Techno-Economic Viability
(2) Project Management Consultancy
(3) Detailed Project Reports
(4) Construction Supervision
(5) Design Engineering
(6) Quality Assurance & Inspection Services
(7) Transaction Advisory
Export of rolling stock & spares
Turnkey Construction Projects
Locomotive Leasing Services
Urban Infrastructure and Sustainability
Railway projects
Major railway companies and projects that have engaged RITES as a consultant include:
SNTF, Algeria, Consulting
Luanda Railway (CFL), Angola, Feasibility study for rehabilitation
Bangladesh Railway, Bangladesh, Consultation
Belize Railways, Belize, Planning
Belmopan Commuter Rail, Belize, Planning
Botswana Railways, Botswana, Management Support and Consultation
Cambodian Railways, Cambodia, Rehabil
|
https://en.wikipedia.org/wiki/Carrier%20generation%20and%20recombination
|
In the solid-state physics of semiconductors, carrier generation and carrier recombination are processes by which mobile charge carriers (electrons and electron holes) are created and eliminated. Carrier generation and recombination processes are fundamental to the operation of many optoelectronic semiconductor devices, such as photodiodes, light-emitting diodes and laser diodes. They are also critical to a full analysis of p-n junction devices such as bipolar junction transistors and p-n junction diodes.
The electron–hole pair is the fundamental unit of generation and recombination in inorganic semiconductors, corresponding to an electron transitioning between the valence band and the conduction band: generation of an electron–hole pair is a transition from the valence band to the conduction band, and recombination is the reverse transition.
Overview
Like other solids, semiconductor materials have an electronic band structure determined by the crystal properties of the material. Energy distribution among electrons is described by the Fermi level and the temperature of the electrons. At absolute zero temperature, all of the electrons have energy below the Fermi level; but at non-zero temperatures the energy levels are filled following a Fermi-Dirac distribution.
In undoped semiconductors the Fermi level lies in the middle of a forbidden band or band gap between two allowed bands called the valence band and the conduction band. The valence band, immediately below the forbidden band, is normally very nearly completely occupied. The conduction band, above the Fermi level, is normally nearly completely empty. Because the valence band is so nearly full, its electrons are not mobile, and cannot flow as electric current.
However, if an electron in the valence band acquires enough energy to reach the conduction band (as a result of interaction with other electrons, holes, photons, or the vibrating crystal lattice itself), it can flow freely among the nearly empty conduction
|
https://en.wikipedia.org/wiki/Chkrootkit
|
Chkrootkit (Check Rootkit) is a widely used Unix-based utility designed to aid system administrators in examining their systems for rootkits. Operating as a shell script, it leverages common Unix/Linux tools such as the strings and grep commands. Its primary purpose is to scan core system programs for identifying signatures and to compare the data obtained from traversing the /proc filesystem with the output of the ps (process status) command, aiming to identify inconsistencies. It offers flexibility in execution, allowing it to run from a rescue disc, often a live CD, and to use an optional alternative directory for the commands it executes. These approaches reduce chkrootkit's reliance on the possibly compromised commands of the system being examined.
It's crucial to recognize the inherent limitations of any program that strives to detect compromises, including rootkits and malware. Modern rootkits might deliberately attempt to identify and target copies of the chkrootkit program, or adopt other strategies to elude detection by it.
See also
Host-based intrusion detection system comparison
Hardening (computing)
Linux malware
MalwareMustDie
rkhunter
Lynis
OSSEC
Samhain (software)
References
External links
Computer security software
Unix security-related software
Rootkit detection software
|
https://en.wikipedia.org/wiki/DShield
|
DShield is a community-based collaborative firewall log correlation system. It receives logs from volunteers worldwide and uses them to analyze attack trends. It is used as the data collection engine behind the SANS Internet Storm Center (ISC). DShield was officially launched at the end of November 2000 by Johannes Ullrich. Since then, it has grown to be a dominant attack correlation engine with worldwide coverage.
DShield is regularly used by the media to cover current events. Analysis provided by DShield has been used in the early detection of several worms, like "Ramen", Code Red, "Leaves", "SQL Snake" and more. DShield data is regularly used by researchers to analyze attack patterns.
The goal of the DShield project is to allow access to its correlated information to the public at no charge to raise awareness and provide accurate and current snapshots of internet attacks. Several data feeds are provided to users to either include in their own web sites or to use as an aide to analyze events.
See also
SANS Institute (SysAdmin, Audit, Network and Security – SANS)
Comparison of network monitoring systems
ShieldsUP
SPEWS
References
Further reading
External links
Alert measurement systems
Computer security procedures
Internet properties established in 2000
Internet safety
Web log analysis software
|
https://en.wikipedia.org/wiki/Latch-up
|
In electronics, a latch-up is a type of short circuit which can occur in an integrated circuit (IC). More specifically, it is the inadvertent creation of a low-impedance path between the power supply rails of a MOSFET circuit, triggering a parasitic structure which disrupts proper functioning of the part, possibly even leading to its destruction due to overcurrent. A power cycle is required to correct this situation.
The parasitic structure is usually equivalent to a thyristor (or SCR), a PNPN structure which acts as a PNP and an NPN transistor stacked next to each other. During a latch-up when one of the transistors is conducting, the other one begins conducting too. They both keep each other in saturation for as long as the structure is forward-biased and some current flows through it - which usually means until a power-down. The SCR parasitic structure is formed as a part of the totem-pole PMOS and NMOS transistor pair on the output drivers of the gates.
The latch-up does not have to happen between the power rails - it can happen at any place where the required parasitic structure exists. A common cause of latch-up is a positive or negative voltage spike on an input or output pin of a digital chip that exceeds the rail voltage by more than a diode drop. Another cause is the supply voltage exceeding the absolute maximum rating, often from a transient spike in the power supply. It leads to a breakdown of an internal junction. This frequently happens in circuits which use multiple supply voltages that do not come up in the required sequence on power-up, leading to voltages on data lines exceeding the input rating of parts that have not yet reached a nominal supply voltage. Latch-ups can also be caused by an electrostatic discharge event.
Another common cause of latch-ups is ionizing radiation which makes this a significant issue in electronic products designed for space (or very high-altitude) applications. A single event latch-up is a latch-up caused by a si
|
https://en.wikipedia.org/wiki/Mathematical%20psychology
|
Mathematical psychology is an approach to psychological research that is based on mathematical modeling of perceptual, thought, cognitive and motor processes, and on the establishment of law-like rules that relate quantifiable stimulus characteristics with quantifiable behavior (in practice often constituted by task performance). The mathematical approach is used with the goal of deriving hypotheses that are more exact and thus yield stricter empirical validations. There are five major research areas in mathematical psychology: learning and memory, perception and psychophysics, choice and decision-making, language and thinking, and measurement and scaling.
Although psychology, as an independent subject of science, is a more recent discipline than physics, the application of mathematics to psychology has been done in the hope of emulating the success of this approach in the physical sciences, which dates back to at least the seventeenth century. Mathematics in psychology is used extensively roughly in two areas: one is the mathematical modeling of psychological theories and experimental phenomena, which leads to mathematical psychology, the other is the statistical approach of quantitative measurement practices in psychology, which leads to psychometrics.
As quantification of behavior is fundamental in this endeavor, the theory of measurement is a central topic in mathematical psychology. Mathematical psychology is therefore closely related to psychometrics. However, where psychometrics is concerned with individual differences (or population structure) in mostly static variables, mathematical psychology focuses on process models of perceptual, cognitive and motor processes as inferred from the 'average individual'. Furthermore, where psychometrics investigates the stochastic dependence structure between variables as observed in the population, mathematical psychology almost exclusively focuses on the modeling of data obtained from experimental paradigms and is ther
|
https://en.wikipedia.org/wiki/Hardware%20overlay
|
In computing, hardware overlay, a type of video overlay, provides a method of rendering an image to a display screen with a dedicated memory buffer inside computer video hardware. The technique aims to improve the display of a fast-moving video image — such as a computer game, a DVD, or the signal from a TV card. Most video cards manufactured since about 1998 and most media players support hardware overlay.
The overlay is a dedicated buffer into which one app can render (typically video), without incurring the significant performance cost of checking for clipping and overlapping rendering by other apps. The framebuffer has hardware support for importing and rendering the buffer contents without going through the GPU.
Overview
The use of a hardware overlay is important for several reasons:
In a graphical user interface (GUI) operating system such as Windows, one display-device can typically display multiple applications simultaneously.
Consider how a display works without a hardware overlay. When each application draws to the screen, the operating system's graphical subsystem must constantly check to ensure that the objects being drawn appear on the appropriate location on the screen, and that they don't collide with overlapping and neighboring windows. The graphical subsystem must clip objects while they are being drawn when a collision occurs. This constant checking and clipping ensures that different applications can cooperate with one another in sharing a display, but also consumes a significant proportion of computing power.
A computer draws on its display by writing a bitmapped representation of the graphics into a special portion of its memory known as video memory. Without any hardware overlays, only one chunk of video memory exists which all applications must share - and the location of a given application's video memory moves whenever the user changes the position of the application's window. With shared video memory, an application must constan
|
https://en.wikipedia.org/wiki/Chapman%20function
|
A Chapman function describes the integration of atmospheric absorption along a slant path on a spherical Earth, relative to the vertical case. It applies to any quantity with a concentration decreasing exponentially with increasing altitude. To a first approximation, valid at small zenith angles, the Chapman function for optical absorption is equal to
sec(z),
where z is the zenith angle and sec denotes the secant function.
The Chapman function is named after Sydney Chapman, who introduced the function in 1931.
Definition
In an isothermal model of the atmosphere, the density varies exponentially with altitude according to the barometric formula:
ρ(z) = ρ_0 · e^(−z/H),
where ρ_0 denotes the density at sea level (z = 0) and H the so-called scale height.
The total amount of matter traversed by a vertical ray starting at altitude z_0 towards infinity is given by the integrated density ("column depth")
X_v = ∫_{z_0}^∞ ρ(z) dz = ρ_0 · H · e^(−z_0/H).
For inclined rays having a zenith angle z, the integration is not straightforward due to the non-linear relationship between altitude and path length when considering the curvature of Earth. Here, the integral reads
X_s = ∫_0^∞ ρ(h(s)) ds, with h(s) = √((R_E + z_0)² + s² + 2(R_E + z_0)·s·cos z) − R_E the altitude along the ray,
where we defined x = (R_E + z_0)/H (R_E denotes the Earth radius).
The Chapman function ch(x, z) is defined as the ratio between the slant depth X_s and the vertical column depth X_v:
ch(x, z) = X_s / X_v.
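Since the Chapman function is just the ratio of the two column depths above, it can be evaluated directly by numerical integration. The sketch below does so for an exponential atmosphere over a spherical Earth, assuming SciPy is available; the scale height and radius values are illustrative.

```python
import numpy as np
from scipy.integrate import quad

def chapman_numeric(z0_km: float, zenith_deg: float,
                    scale_height_km: float = 8.0,
                    earth_radius_km: float = 6371.0) -> float:
    """Ratio of slant to vertical column depth (a direct numerical Chapman
    function).  Along a ray of zenith angle z starting at altitude z0, the
    altitude after path length s is sqrt(r0^2 + s^2 + 2 r0 s cos z) - R,
    with r0 = R + z0."""
    H, R = scale_height_km, earth_radius_km
    r0, cz = R + z0_km, np.cos(np.radians(zenith_deg))

    def density(s):  # density relative to the sea-level value rho_0
        h = np.sqrt(r0 * r0 + s * s + 2.0 * r0 * s * cz) - R
        return np.exp(-h / H)

    slant, _ = quad(density, 0.0, np.inf, limit=200)
    vertical = H * np.exp(-z0_km / H)
    return slant / vertical

for angle in (0, 60, 85):
    print(angle, chapman_numeric(100.0, angle), 1 / np.cos(np.radians(angle)))
# For small zenith angles the result is close to sec(z); near 90 degrees it
# stays finite while sec(z) diverges.
```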
Representations
A number of different integral representations have been developed in the literature. Chapman's original representation reads
.
Huestis developed the representation
,
which does not suffer from numerical singularities present in Chapman's representation.
Special cases
For z = 90° (horizontal incidence), the Chapman function reduces to
ch(x, 90°) = x · e^x · K_1(x).
Here, K_1 refers to the modified Bessel function of the second kind of the first order. For large values of x, this can further be approximated by
ch(x, 90°) ≈ √(πx/2).
For x → ∞ and 0° ≤ z < 90°, the Chapman function converges to the secant function:
ch(x, z) → sec(z).
In practical applications related to the terrestrial atmosphere, where x is large, ch(x, z) ≈ sec(z) is a good approximation for zenith angles up to 60° to 70°, depending on the accuracy
|
https://en.wikipedia.org/wiki/KPDX
|
KPDX (channel 49) is a television station licensed to Vancouver, Washington, United States, serving the Portland, Oregon area as an affiliate of MyNetworkTV. It is the only major commercial station in Portland that is licensed to the Washington side of the market.
KPDX is owned by Gray Television alongside Fox affiliate KPTV (channel 12). Both stations share studios on NW Greenbrier Parkway in Beaverton, while KPDX's transmitter is located in the Sylvan-Highlands section of Portland. KPDX's signal is relayed in Central Oregon through translator station KUBN-LD (channel 9) in Bend, making the station available in about two-thirds of the state.
Since February 2018, KPDX has been branded as Fox 12 Plus, an extension of the branding used by KPTV.
History
As an independent station
In August 1980, the local KLRK Broadcasting Corporation filed an application to construct a new TV station on channel 49 at Vancouver. The construction permit was granted by the Federal Communications Commission (FCC) on January 5, 1981, and the station took the KLRK call letters, representing Clark County. KLRK foresaw an independent station emphasizing Southwest Washington sports and news. However, work came to a halt when KLRK ran out of money to build the facility. In late 1981, Camellia City Telecasters, the owner of KTXL-TV in Sacramento, California, filed to buy the construction permit, an action decried by newly built KECH-TV (channel 22 in Salem) and Cascade Video, applicant for a station on channel 40. Camellia's entry in the Portland market was significant because it bought rights to $10 million of films and syndicated programs, which particularly harmed KECH.
Channel 49 would miss several planned launch dates due to multiple factors. The station was forced by Multnomah County to allow other interested broadcasters to rent tower space, and Oregon Public Broadcasting's KOAP-TV and KOPB-FM used the opportunity to consolidate their transmission facilities with the new transmitter. There were a
|
https://en.wikipedia.org/wiki/B%C3%A9zout%20matrix
|
In mathematics, a Bézout matrix (or Bézoutian or Bezoutiant) is a special square matrix associated with two polynomials, named after Étienne Bézout. Bézoutian may also refer to the determinant of this matrix, which is equal to the resultant of the two polynomials. Bézout matrices are sometimes used to test the stability of a given polynomial.
Definition
Let f(x) and g(x) be two complex polynomials of degree at most n,
f(x) = Σ_{i=0}^{n} u_i x^i, g(x) = Σ_{i=0}^{n} v_i x^i.
(Note that any coefficient u_i or v_i could be zero.) The Bézout matrix of order n associated with the polynomials f and g is
B_n(f, g) = (b_{ij})_{i,j = 0, ..., n−1},
where the entries result from the identity
(f(x)·g(y) − f(y)·g(x)) / (x − y) = Σ_{i,j=0}^{n−1} b_{ij} x^i y^j.
It is an n × n complex matrix, and its entries b_{ij} are bilinear expressions in the coefficients of f and g.
To each Bézout matrix, one can associate a bilinear form, called the Bézoutian.
Examples
For n = 3, we have for any polynomials f and g of degree (at most) 3:
Let and be the two polynomials. Then:
The last row and column are all zero as f and g have degree strictly less than n (which is 4). The other zero entries are because for each , either or is zero.
Properties
B_n(f, g) is symmetric (as a matrix);
B_n(f, g) = −B_n(g, f);
B_n(f, f) = 0;
(f, g) ↦ B_n(f, g) is a bilinear function;
B_n(f, g) is a real matrix if f and g have real coefficients;
B_n(f, g) with n = max(deg f, deg g) is nonsingular if and only if f and g have no common roots;
B_n(f, g) with n = max(deg f, deg g) has a determinant which is the resultant of f and g.
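A short computational sketch of these definitions and properties, assuming SymPy is available, is given below; the polynomials and the function name are illustrative. The determinant and the resultant printed at the end should agree up to the sign convention used.

```python
import sympy as sp

x, y = sp.symbols("x y")

def bezout_matrix(f, g, n):
    """Bezout matrix of order n, built from the defining identity
    (f(x) g(y) - f(y) g(x)) / (x - y) = sum_{i,j} b_ij x^i y^j."""
    expr = sp.cancel((f * g.subs(x, y) - f.subs(x, y) * g) / (x - y))
    terms = sp.Poly(expr, x, y).as_dict()   # {(i, j): b_ij}
    return sp.Matrix(n, n, lambda i, j: terms.get((i, j), 0))

f = 3 * x**3 - x
g = 5 * x**2 + 1
B = bezout_matrix(f, g, 3)   # n = max(deg f, deg g) = 3

print(B)                     # a symmetric 3x3 matrix
print(B - B.T)               # the zero matrix, confirming symmetry
print(B.det(), sp.resultant(f, g, x))   # equal up to sign convention
```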
Applications
An important application of Bézout matrices can be found in control theory. To see this, let f(z) be a complex polynomial of degree n and denote by q and p the real polynomials such that f(iy) = q(y) + ip(y) (where y is real). We also denote by r the rank and by σ the signature of the associated Bézout matrix. Then, we have the following statements:
f(z) has n − r roots in common with its conjugate;
the remaining r roots of f(z) are located in such a way that:
(r + σ)/2 of them lie in the open left half-plane, and
(r − σ)/2 lie in the open right half-plane;
f is Hurwitz stable if and only if the associated Bézout matrix is positive definite.
The third statement gives a necessary and sufficient
|
https://en.wikipedia.org/wiki/WESH
|
WESH (channel 2) is a television station licensed to Daytona Beach, Florida, United States, serving the Orlando area as an affiliate of NBC. It is owned by Hearst Television alongside Clermont-licensed CW affiliate WKCF (channel 18). The stations share studios on North Wymore Road in Eatonville (using a Winter Park address), while WESH's transmitter is located near Christmas, Florida.
WESH formerly served as a default NBC affiliate for the Gainesville market as the station's analog transmitter provided a city-grade off-air signal in Gainesville proper (and also provided Grade B signal coverage in the fringes of the Tampa Bay and Jacksonville markets). However, since January 1, 2009, Gainesville has been served by an in-market affiliate, WNBW (channel 9); although Cox Communications continues to carry WESH on its Gainesville area system.
History
WESH-TV first signed on the air on June 11, 1956. At first, it ran as an independent station, but on October 27, 1957, it became an NBC affiliate, and has been with NBC ever since. Businessman W. Wright Esch (for whom the station is named) won the license, but sold it to Perry Publications of Palm Beach just before the station made its debut. The station's original studios were located on Corporation Street in Holly Hill, near Daytona Beach.
The station's original transmitter tower was only high, which was tiny even by 1950s' standards, and limited channel 2's signal coverage to Volusia County. As such, it shared the NBC affiliation in Central Florida with primary CBS affiliate WDBO-TV (channel 6, now WKMG-TV). It finally became the market's exclusive NBC affiliate on November 5, 1957, when WDBO-TV relinquished its secondary affiliation with the network. On that day, the station activated a new transmitter tower in Orange City. The tower was located farther north than the other major Orlando stations' transmitters because of Federal Communications Commission (FCC) rules at the time that required a station's transmitter t
|
https://en.wikipedia.org/wiki/Beck%27s%20monadicity%20theorem
|
In category theory, a branch of mathematics, Beck's monadicity theorem gives a criterion that characterises monadic functors, introduced by Jon Beck in about 1964. It is often stated in dual form for comonads. It is sometimes called the Beck tripleability theorem because of the older term triple for a monad.
Beck's monadicity theorem asserts that a functor
is monadic if and only if
U has a left adjoint;
U reflects isomorphisms (if U(f) is an isomorphism then so is f); and
C has coequalizers of U-split parallel pairs (those parallel pairs of morphisms in C, which U sends to pairs having a split coequalizer in D), and U preserves those coequalizers.
There are several variations of Beck's theorem: if U has a left adjoint then any of the following conditions ensure that U is monadic:
U reflects isomorphisms and C has coequalizers of reflexive pairs (those with a common right inverse) and U preserves those coequalizers. (This gives the crude monadicity theorem.)
Every diagram in C which is sent by U to a split coequalizer sequence in D is itself a coequalizer sequence in C. In other words, U creates (preserves and reflects) U-split coequalizer sequences.
Another variation of Beck's theorem characterizes strictly monadic functors: those for which the comparison functor is an isomorphism rather than just an equivalence of categories. For this version the definitions of what it means to create coequalizers is changed slightly: the coequalizer has to be unique rather than just unique up to isomorphism.
Beck's theorem is particularly important in its relation with the descent theory, which plays a role in sheaf and stack theory, as well as in the Alexander Grothendieck's approach to algebraic geometry. Most cases of faithfully flat descent of algebraic structures (e.g. those in FGA and in SGA1) are special cases of Beck's theorem. The theorem gives an exact categorical description of the process of 'descent', at this level. In 1970 the Grothendieck approach via fibere
|
https://en.wikipedia.org/wiki/Beck%27s%20theorem%20%28geometry%29
|
In discrete geometry, Beck's theorem is any of several different results, two of which are given below. Both appeared, alongside several other important theorems, in a well-known paper by József Beck. The two results described below primarily concern lower bounds on the number of lines determined by a set of points in the plane. (Any line containing at least two points of point set is said to be determined by that point set.)
Erdős–Beck theorem
The Erdős–Beck theorem is a variation of a classical result by L. M. Kelly and W. O. J. Moser involving configurations of n points of which at most n − k are collinear, for some 0 < k < O(√n). They showed that if n is sufficiently large, relative to k, then the configuration spans at least kn − (1/2)(3k + 2)(k − 1) lines.
Elekes and Csaba Tóth noted that the Erdős–Beck theorem does not easily extend to higher dimensions. Take, for example, a set of 2n points in R^3 all lying on two skew lines. Assume that these two lines are each incident to n points. Such a configuration of points spans only 2n planes. Thus, a trivial extension to the hypothesis for point sets in R^d is not sufficient to obtain the desired result.
This result was first conjectured by Erdős, and proven by Beck. (See Theorem 5.2 in Beck's paper.)
Statement
Let S be a set of n points in the plane. If no more than n − k points lie on any line for some 0 ≤ k < n − 2, then there exist Ω(nk) lines determined by the points of S.
Proof
Beck's theorem
Beck's theorem says that finite collections of points in the plane fall into one of two extremes; one where a large fraction of points lie on a single line, and one where a large number of lines are needed to connect all the points.
Although not mentioned in Beck's paper, this result is implied by the Erdős–Beck theorem.
Statement
The theorem asserts the existence of positive constants C, K such that given any n points in the plane, at least one of the following statements is true:
There is a line which contains at least n/C of
|
https://en.wikipedia.org/wiki/Parallels%20Workstation
|
Parallels Workstation is the first commercial software product released by Parallels, Inc., a developer of desktop and server virtualization software. The Workstation software consists of a virtual machine suite for Intel x86-compatible computers (running Microsoft Windows or Linux) (for Mac version, see Parallels Desktop for Mac) which allows the simultaneous creation and execution of multiple x86 virtual computers. The product is distributed as a download package. Parallels Workstation has been discontinued for Windows and Linux as of 2013.
Implementation
Like other virtualization software, Parallels Workstation uses hypervisor technology: a thin software layer between the primary OS and the host computer's hardware. The hypervisor directly controls some of the host machine's hardware resources and provides an interface to them for both the virtual machine monitors and the primary OS. This allows the virtualization software to reduce overhead. Parallels Workstation's hypervisor also supports hardware virtualization technologies like Intel VT-x and AMD-V.
Features
Parallels Workstation is a hardware emulation virtualization software, in which a virtual machine engine enables each virtual machine to work with its own processor, RAM, floppy drive, CD drive, I/O devices, and hard disk – everything a physical computer contains. Parallels Workstation virtualizes all devices within the virtual environment, including the video adapter, network adapter, and hard disk adapters. It also provides pass-through drivers for parallel port and USB devices.
Because all guest virtual machines use the same hardware drivers irrespective of the actual hardware on the host computer, virtual machine instances are highly portable between computers. For example, a running virtual machine can be stopped, copied to another physical computer, and restarted.
Parallels Workstation is able to virtualize a full set of standard PC hardware, including:
A 64-bit processor with NX and AES-NI instructions.
A generic
|
https://en.wikipedia.org/wiki/D-Bus
|
D-Bus (short for "Desktop Bus") is a message-oriented middleware mechanism that allows communication between multiple processes running concurrently on the same machine. D-Bus was developed as part of the freedesktop.org project, initiated by GNOME developer Havoc Pennington to standardize services provided by Linux desktop environments such as GNOME and KDE.
The freedesktop.org project also developed a free and open-source software library called libdbus, as a reference implementation of the specification. This library should not be confused with D-Bus itself, as other implementations of the D-Bus specification also exist, such as GDBus (GNOME), QtDBus (Qt/KDE), dbus-java and sd-bus (part of systemd).
Overview
D-Bus is an inter-process communication (IPC) mechanism initially designed to replace the software component communications systems used by the GNOME and KDE Linux desktop environments (CORBA and DCOP respectively). The components of these desktop environments are normally distributed across many processes, each one providing only a few services (usually just one). These services may be used by regular client applications or by other components of the desktop environment to perform their tasks.
Due to the large number of processes involved (adding up the processes providing the services and the clients accessing them), establishing one-to-one IPC between all of them becomes an inefficient and quite unreliable approach. Instead, D-Bus provides a software-bus abstraction that gathers all the communications between a group of processes over a single shared virtual channel. Processes connected to a bus do not know how it is internally implemented, but the D-Bus specification guarantees that all processes connected to the bus can communicate with each other through it.
Linux desktop environments take advantage of the D-Bus facilities by instantiating multiple buses, notably:
a single system bus, available to all users and processes of the system, that provides access to system servic
|
https://en.wikipedia.org/wiki/Navigation%20mesh
|
A navigation mesh, or navmesh, is an abstract data structure used in artificial intelligence applications to aid agents in pathfinding through complicated spaces. This approach has been known since at least the mid-1980s in robotics, where it has been called a meadow map, and was popularized in video game AI in 2000.
Description
A navigation mesh is a collection of two-dimensional convex polygons (a polygon mesh) that define which areas of an environment are traversable by agents. In other words, a character in a game could freely walk around within these areas unobstructed by trees, lava, or other barriers that are part of the environment. Adjacent polygons are connected to each other in a graph.
Pathfinding within one of these polygons can be done trivially in a straight line because the polygon is convex and traversable. Pathfinding between polygons in the mesh can be done with one of the large number of graph search algorithms, such as A*. Agents on a navmesh can thus avoid computationally expensive collision detection checks with obstacles that are part of the environment.
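The sketch below shows this graph search on a toy navmesh, assuming each polygon is summarized by its centroid and that the straight-line distance between centroids serves as both the step cost and the heuristic (a common simplification; real systems often use portal midpoints instead). Names and the example map are illustrative.

```python
import heapq
import math
from typing import Dict, List, Tuple

Point = Tuple[float, float]

def astar_over_navmesh(centroids: List[Point],
                       adjacency: Dict[int, List[int]],
                       start: int, goal: int) -> List[int]:
    """A* over a navmesh's polygon adjacency graph, returning polygon indices."""
    def dist(a: int, b: int) -> float:
        (x1, y1), (x2, y2) = centroids[a], centroids[b]
        return math.hypot(x1 - x2, y1 - y2)

    open_heap = [(dist(start, goal), start)]
    best_cost = {start: 0.0}
    came_from = {}
    while open_heap:
        _, current = heapq.heappop(open_heap)
        if current == goal:                      # reconstruct the polygon path
            path = [current]
            while current in came_from:
                current = came_from[current]
                path.append(current)
            return path[::-1]
        for neighbor in adjacency.get(current, []):
            cost = best_cost[current] + dist(current, neighbor)
            if cost < best_cost.get(neighbor, math.inf):
                best_cost[neighbor] = cost
                came_from[neighbor] = current
                heapq.heappush(open_heap,
                               (cost + dist(neighbor, goal), neighbor))
    return []  # goal not reachable

# Four convex polygons in a row, each represented only by centroid + adjacency.
centroids = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0), (2.0, 1.0)]
adjacency = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(astar_over_navmesh(centroids, adjacency, 0, 3))  # [0, 1, 2, 3]
```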
Representing traversable areas in a 2D-like form simplifies calculations that would otherwise need to be done in the "true" 3D environment, yet unlike a 2D grid it allows traversable areas that overlap above and below at different heights. The polygons of various sizes and shapes in navigation meshes can represent arbitrary environments with greater accuracy than regular grids can.
Creation
Navigation meshes can be created manually, automatically, or by some combination of the two. In video games, a level designer might manually define the polygons of the navmesh in a level editor. This approach can be quite labor intensive. Alternatively, an application could be created that takes the level geometry as input and automatically outputs a navmesh.
It is commonly assumed that the environment represented by a navmesh is static – it does not change over time – and thus the navmesh can be crea
|
https://en.wikipedia.org/wiki/Going%20up%20and%20going%20down
|
In commutative algebra, a branch of mathematics, going up and going down are terms which refer to certain properties of chains of prime ideals in integral extensions.
The phrase going up refers to the case when a chain can be extended by "upward inclusion", while going down refers to the case when a chain can be extended by "downward inclusion".
The major results are the Cohen–Seidenberg theorems, which were proved by Irvin S. Cohen and Abraham Seidenberg. These are known as the going-up and going-down theorems.
Going up and going down
Let A ⊆ B be an extension of commutative rings.
The going-up and going-down theorems give sufficient conditions for a chain of prime ideals in B, each member of which lies over a member of a longer chain of prime ideals in A, to be extended to the length of the chain of prime ideals in A.
Lying over and incomparability
First, we fix some terminology. If 𝔭 and 𝔮 are prime ideals of A and B, respectively, such that
𝔮 ∩ A = 𝔭
(note that 𝔮 ∩ A is automatically a prime ideal of A), then we say that 𝔭 lies under 𝔮 and that 𝔮 lies over 𝔭. In general, a ring extension A ⊆ B of commutative rings is said to satisfy the lying over property if every prime ideal 𝔭 of A lies under some prime ideal 𝔮 of B.
The extension A ⊆ B is said to satisfy the incomparability property if whenever 𝔮 and 𝔮′ are distinct primes of B lying over a prime 𝔭 in A, then 𝔮 ⊈ 𝔮′ and 𝔮′ ⊈ 𝔮.
Going-up
The ring extension A ⊆ B is said to satisfy the going-up property if whenever
𝔭_1 ⊆ 𝔭_2 ⊆ ⋯ ⊆ 𝔭_n
is a chain of prime ideals of A and
𝔮_1 ⊆ 𝔮_2 ⊆ ⋯ ⊆ 𝔮_m
is a chain of prime ideals of B with m < n and such that 𝔮_i lies over 𝔭_i for 1 ≤ i ≤ m, then the latter chain can be extended to a chain
𝔮_1 ⊆ 𝔮_2 ⊆ ⋯ ⊆ 𝔮_n
such that 𝔮_i lies over 𝔭_i for each 1 ≤ i ≤ n.
It can be shown that if an extension A ⊆ B satisfies the going-up property, then it also satisfies the lying-over property.
Going-down
The ring extension A ⊆ B is said to satisfy the going-down property if whenever
𝔭_1 ⊇ 𝔭_2 ⊇ ⋯ ⊇ 𝔭_n
is a chain of prime ideals of A and
𝔮_1 ⊇ 𝔮_2 ⊇ ⋯ ⊇ 𝔮_m
is a chain of prime ideals of B with m < n an
|
https://en.wikipedia.org/wiki/Slipstream%20%28computer%20science%29
|
A slipstream processor is an architecture designed to reduce the length of a running program by removing the non-essential instructions.
It is a form of speculative computing.
Non-essential instructions include such things as instructions whose results are never written to memory, or compare operations that will always return true. Also, as statistically most branch instructions will be taken, it makes sense to assume this will always be the case.
Because of the speculation involved slipstream processors are generally described as having two parallel executing streams. One is an optimized faster A-stream (advanced stream) executing the reduced code, the other is the slower R-stream (redundant stream), which runs behind the A-stream and executes the full code. The R-stream runs faster than if it were a single stream due to data being prefetched by the A-stream effectively hiding memory latency, and due to the A-stream's assistance with branch prediction. The two streams both complete faster than a single stream would. As of 2005, theoretical studies have shown that this configuration can lead to a speedup of around 20%.
The main problem with this approach is accuracy: as the A-stream becomes more accurate and less speculative, the overall system runs slower. Furthermore, a large enough distance is needed between the A-stream and the R-stream so that cache misses generated by the A-stream do not slow down the R-stream.
References
Z. Purser, K. Sundaramoorthy and E. Rotenberg, "A Study of Slipstream Processors", Proc. 33rd Ann. Int'l Symp. Microarchitecture, Monterey, CA, Dec. 2000.
Instruction processing
|
https://en.wikipedia.org/wiki/Colortrak
|
Colortrak was a trademark used on several RCA color televisions beginning in the 1970s and lasting into the 1990s. After RCA was acquired by General Electric in 1986, GE began marketing sets identical to those from RCA. GE sold both RCA and GE consumer electronics lines to Thomson SA in 1988. RCA televisions with the Colortrak branding were mid-range models; positioned above the low-end XL-100 series but below the high-end Dimensia and Colortrak 2000 series. RCA discontinued the Colortrak name in the late 1990s, with newer models badged as the Entertainment Series.
Design quirks
During the early 1980s, RCA responded to increased demand for component televisions with monitor capabilities by adding composite and S-video inputs to the Colortrak lineup. These inputs allowed owners to easily connect a stereo audio/video source, such as a video cassette recorder, LaserDisc player, or RCA SelectaVision CED videodisc player, to the television. For example, early composite-video-equipped RCA sets had to be tuned to the non-broadcast channel 91 to display a composite video signal; if a set was equipped with more than one input, subsequent inputs were assigned to channels 92 to 95, which were usually accessed from the remote control. Later models abandoned this design, favoring A/V inputs which were accessible by pressing the channel up/down buttons, or A/V inputs which were controlled by their own button.
Tuner Issues
After Thomson SA acquired the GE and RCA brand names, they began designing a new chassis for RCA and GE televisions, which debuted in 1993 models. Instead of using a tuner module soldered to the circuit board, Thomson decided to integrate the tuner into the board itself. Due to the heating and cooling cycles of the circuit board and tuner from normal use, the solder connections between the tuner and the board would fail, causing an intermittent picture or no signal from the coaxial connector. This is easily repairable by desoldering the
|
https://en.wikipedia.org/wiki/Lazarus%20%28software%29
|
Lazarus is a free, cross-platform, integrated development environment (IDE) for rapid application development (RAD) using the Free Pascal compiler. Its goal is to provide an easy-to-use development environment for programmers developing with the Object Pascal language, which is as close as possible to Delphi.
Software developers use Lazarus to create native-code console and graphical user interface (GUI) applications for the desktop, and also for mobile devices, web applications, web services, visual components and function libraries for a number of different platforms, including Mac, Linux and Windows.
A project created by using Lazarus on one platform can be compiled on any other platform that the Free Pascal compiler supports. For desktop applications a single source can target macOS, Linux, and Windows, with little or no modification. An example is the Lazarus IDE itself, created from a single code base and available on all major platforms including the Raspberry Pi.
Features
Lazarus provides a WYSIWYG development environment for the creation of rich user interfaces, application logic, and other supporting code artifacts, similar to Borland Delphi. Along with project management features, the Lazarus IDE also provides:
A visual windows layout designer
GUI widgets or visual components such as edit boxes, buttons, dialogs, menus, etc.
Non-visual components for common behaviors such as persistence of application settings
Data-connectivity components for MySQL, PostgreSQL, FireBird, Oracle, SQLite, Sybase, and others
Data-aware widget set that allows the developer to see data in visual components in the designer to assist with development
Interactive debugger
Code completion
Code templates
Syntax highlighting
Context-sensitive help
Text resource manager for internationalization
Automatic code formatting
Extensibility via custom components
Cross-platform development
Lazarus uses Free Pascal as its back-end compiler. As Free Pascal supports cross-compiling
|
https://en.wikipedia.org/wiki/OpenID
|
OpenID is an open standard and decentralized authentication protocol promoted by the non-profit OpenID Foundation. It allows users to be authenticated by co-operating sites (known as relying parties, or RP) using a third-party identity provider (IDP) service, eliminating the need for webmasters to provide their own ad hoc login systems, and allowing users to log in to multiple unrelated websites without having to have a separate identity and password for each. Users create accounts by selecting an OpenID identity provider, and then use those accounts to sign on to any website that accepts OpenID authentication. Several large organizations either issue or accept OpenIDs on their websites.
The OpenID standard provides a framework for the communication that must take place between the identity provider and the OpenID acceptor (the "relying party"). An extension to the standard (the OpenID Attribute Exchange) facilitates the transfer of user attributes, such as name and gender, from the OpenID identity provider to the relying party (each relying party may request a different set of attributes, depending on its requirements). The OpenID protocol does not rely on a central authority to authenticate a user's identity. Moreover, neither services nor the OpenID standard may mandate a specific means by which to authenticate users, allowing for approaches ranging from the common (such as passwords) to the novel (such as smart cards or biometrics).
The final version of OpenID is OpenID 2.0, finalized and published in December 2007. The term OpenID may also refer to an identifier as specified in the OpenID standard; these identifiers take the form of a unique Uniform Resource Identifier (URI), and are managed by some "OpenID provider" that handles authentication.
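For illustration, the sketch below builds the kind of checkid_setup redirect URL a relying party might send the user's browser to under OpenID 2.0. The parameter names come from the OpenID 2.0 specification; the endpoint and identifier URLs are hypothetical placeholders, not real services.

from urllib.parse import urlencode

def build_openid_auth_url(op_endpoint, claimed_id, return_to, realm):
    # Simplified OpenID 2.0 "checkid_setup" authentication request.
    # The relying party redirects the user's browser to the identity
    # provider's endpoint with these parameters; the provider
    # authenticates the user and redirects back to return_to.
    params = {
        "openid.ns": "http://specs.openid.net/auth/2.0",
        "openid.mode": "checkid_setup",
        "openid.claimed_id": claimed_id,
        "openid.identity": claimed_id,
        "openid.return_to": return_to,
        "openid.realm": realm,
    }
    return op_endpoint + "?" + urlencode(params)

# Hypothetical example values (not real endpoints):
print(build_openid_auth_url(
    "https://openid.example.org/server",
    "https://alice.example.org/",
    "https://rp.example.com/openid/return",
    "https://rp.example.com/"))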
Adoption
, there are over 1 billion OpenID-enabled accounts on the Internet (see below) and approximately 1,100,934 sites have integrated OpenID consumer support: AOL, Flickr, Google, Amazon.com, Canonical (provider
|
https://en.wikipedia.org/wiki/DrayTek
|
DrayTek () is a network equipment manufacturer of broadband CPE (Customer Premises Equipment), including firewalls, VPN devices, routers, managed switches and wireless LAN devices. The company was founded in 1997. The earliest products included ISDN based solutions, the first being the ISDN Vigor128, a USB terminal adaptor for Windows and Mac OS. This was followed by the ISDN Vigor204 ISDN terminal adaptor/PBX and the Vigor2000, its first router. The head office is located in Hsinchu, Taiwan with regional offices and distributors worldwide.
DrayTek's products cover a wide range of solutions, including firewalls, VPN devices, VoIP, xDSL/broadband devices, and management software, aimed at meeting market trends and exceeding customers' expectations.
DrayTek was one of the first manufacturers to bring VPN technology to low cost routers, increasing the viability of remote work. In 2004, DrayTek released the first of its VoIP (Voice-Over-IP) products. In 2006, new products for companies debuted, including larger-scale firewalls and Unified Threat Management (UTM) firewall products; however, the UTM firewalls did not sell in sufficient volume, and development and production of the UTM products ceased.
DrayTek's product line offers business and consumer DSL modems that support the PPPoA standard, as opposed to the more widely supported PPPoE, allowing full-featured home routers and home computers to be used without more expensive ATM hardware. PPPoA is used primarily in the UK for ADSL lines. Most Vigor routers provide a virtual private network (VPN) feature that supports LAN-to-LAN and remote dial-in connections. In 2011, DrayTek embedded SSL VPN facilities into its Vigor router series.
DrayTek's Initial Public Offering (IPO) on the Taiwan Stock Exchange occurred in 2004.
In March 2021, DrayTek released a new WiFi 6 access point, the DrayTek AP1060C.
In August 2021, DrayTek announced two new WiFi 6 routers, the DrayTek Vigor 2927ax and the DrayTek Vigor 2865ax.
References
Taiwanese companies established in 1997
Net
|
https://en.wikipedia.org/wiki/Test%20and%20test-and-set
|
In computer architecture, the test-and-set CPU instruction (or instruction sequence) is designed to implement
mutual exclusion in multiprocessor environments. Although a correct lock can be implemented with test-and-set, the test and test-and-set optimization lowers resource contention caused by bus locking, especially cache coherency protocol overhead on contended locks.
Given a lock:
boolean locked := false // shared lock variable
the entry protocol is:
procedure EnterCritical() {
do {
while ( locked == true )
skip // spin using normal instructions until the lock is free
} while ( TestAndSet(locked) == true ) // attempt actual atomic locking using the test-and-set instruction
}
and the exit protocol is:
procedure ExitCritical() {
locked := false
}
The difference to the simple test-and-set protocol is the additional spin-loop (the test in test and test-and-set) at the start of the entry protocol, which utilizes ordinary load instructions. The load in this loop executes with less overhead compared to an atomic operation (resp. a load-exclusive instruction). E.g., on a system utilizing the MESI cache coherency protocol, the cache line being loaded is moved to the Shared state, whereas a test-and-set instruction or a load-exclusive instruction moves it into the Exclusive state.
This is particularly advantageous if multiple processors are contending for the same lock: whereas an atomic instruction or load-exclusive instruction requires a coherency-protocol transaction to give that processor exclusive access to the cache line (causing that line to ping-pong between the involved processors), ordinary loads on a line in Shared state require no protocol transactions at all: processors spinning in the inner loop operate purely locally.
Cache-coherency protocol transactions are used only in the outer loop, after the initial check has ascertained that they have a reasonable likelihood of success.
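As a rough illustration of the entry protocol above, here is a didactic Python sketch. Python exposes no test-and-set instruction, so the atomic step is emulated with a threading.Lock; the class name and the small demo are illustrative only, not a production lock.

import threading

class TTASLock:
    # Didactic test-and-test-and-set spinlock. The atomic test-and-set is
    # *simulated* with a threading.Lock; on real hardware it would be a
    # single atomic instruction.
    def __init__(self):
        self.locked = False
        self._atomic = threading.Lock()   # stands in for hardware atomicity

    def _test_and_set(self):
        with self._atomic:                # atomically read the old value and set True
            old = self.locked
            self.locked = True
            return old

    def acquire(self):
        while True:
            while self.locked:            # inner loop: plain reads only (the extra "test")
                pass                      # spin until the lock looks free
            if not self._test_and_set():  # outer loop: attempt the (simulated) atomic test-and-set
                return                    # lock acquired

    def release(self):
        self.locked = False

counter = 0
lock = TTASLock()

def work():
    global counter
    for _ in range(100):
        lock.acquire()
        counter += 1                      # critical section
        lock.release()

threads = [threading.Thread(target=work) for _ in range(2)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)                            # expected: 200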
If the programming language used support
|
https://en.wikipedia.org/wiki/Direct%20sum
|
The direct sum is an operation between structures in abstract algebra, a branch of mathematics. It is defined differently, but analogously, for different kinds of structures. To see how the direct sum is used in abstract algebra, consider a more elementary kind of structure, the abelian group. The direct sum of two abelian groups A and B is another abelian group A ⊕ B consisting of the ordered pairs (a, b) where a ∈ A and b ∈ B. To add ordered pairs, we define the sum (a, b) + (c, d) to be (a + c, b + d); in other words addition is defined coordinate-wise. For example, the direct sum ℝ ⊕ ℝ, where ℝ is real coordinate space, is the Cartesian plane, ℝ². A similar process can be used to form the direct sum of two vector spaces or two modules.
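A minimal sketch of this coordinate-wise addition, using Python tuples to stand in for elements of A ⊕ B (the function name is illustrative, not a library API).

def direct_sum_add(p, q):
    # Add two elements of A ⊕ B, represented as ordered pairs (a, b).
    (a1, b1), (a2, b2) = p, q
    return (a1 + a2, b1 + b2)   # addition is defined coordinate-wise

# Example in R ⊕ R, the Cartesian plane:
print(direct_sum_add((1.0, 2.0), (0.5, -2.0)))   # (1.5, 0.0)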
We can also form direct sums with any finite number of summands, for example A ⊕ B ⊕ C, provided A, B, and C are the same kinds of algebraic structures (e.g., all abelian groups, or all vector spaces). This relies on the fact that the direct sum is associative up to isomorphism. That is, (A ⊕ B) ⊕ C ≅ A ⊕ (B ⊕ C) for any algebraic structures A, B, and C of the same kind. The direct sum is also commutative up to isomorphism, i.e. A ⊕ B ≅ B ⊕ A for any algebraic structures A and B of the same kind.
The direct sum of finitely many abelian groups, vector spaces, or modules is canonically isomorphic to the corresponding direct product. This is false, however, for some algebraic objects, like nonabelian groups.
In the case where infinitely many objects are combined, the direct sum and direct product are not isomorphic, even for abelian groups, vector spaces, or modules. As an example, consider the direct sum and direct product of (countably) infinitely many copies of the integers. An element in the direct product is an infinite sequence, such as (1,2,3,...) but in the direct sum, there is a requirement that all but finitely many coordinates be zero, so the sequence (1,2,3,...) would be an element of the direct product but not of the direct sum, while (1,2,0,0,0,...) would be an element of both. Often, if a + sign is used, all but finitely many c
|
https://en.wikipedia.org/wiki/Diffusing%20update%20algorithm
|
The diffusing update algorithm (DUAL) is the algorithm used by Cisco's EIGRP routing protocol to ensure that a given route is recalculated globally whenever it might cause a routing loop. It was developed by J.J. Garcia-Luna-Aceves at SRI International. The full name of the algorithm is DUAL finite-state machine (DUAL FSM). EIGRP is responsible for the routing within an autonomous system, and DUAL responds to changes in the routing topology and dynamically adjusts the routing tables of the router automatically.
EIGRP uses a feasibility condition to ensure that only loop-free routes are ever selected. The feasibility condition is conservative: when the condition is true, no loops can occur, but the condition might under some circumstances reject all routes to a destination although some are loop-free.
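A simplified sketch of this condition as it is usually stated for EIGRP: a neighbour can serve as a loop-free (feasible) successor only if the distance it reports is strictly less than the router's current feasible distance to the destination. The topology, names, and metric values below are illustrative.

def feasible_successors(neighbors, feasible_distance):
    # neighbors: dict mapping neighbor name -> (reported_distance, link_cost)
    # feasible_distance: best metric this router currently knows for the destination
    result = []
    for name, (reported_distance, link_cost) in neighbors.items():
        if reported_distance < feasible_distance:   # feasibility condition (loop-free guarantee)
            total_metric = reported_distance + link_cost
            result.append((total_metric, name))
    return sorted(result)

# Feasible distance to the destination is 40 in this illustration.
neighbors = {"R1": (30, 15), "R2": (25, 30), "R3": (45, 10)}
print(feasible_successors(neighbors, feasible_distance=40))
# R1 and R2 satisfy the condition (reported distance < 40); R3 is rejected even though its total metric is finite.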
When no feasible route to a destination is available, the DUAL algorithm invokes a diffusing computation to ensure that all traces of the problematic route are eliminated from the network, at which point the normal Bellman–Ford algorithm is used to recover a new route.
Operation
DUAL uses three separate tables for the route calculation. These tables are created using information exchanged between the EIGRP routers. The information is different than that exchanged by link-state routing protocols. In EIGRP, the information exchanged includes the routes, the "metric" or cost of each route, and the information required to form a neighbor relationship (such as AS number, timers, and K values). The three tables and their functions in detail are as follows:
Neighbor table contains information on all other directly connected routers. A separate table exists for each supported protocol (IP, IPX, etc.). Each entry corresponds to a neighbour with the description of network interface and address. In addition, a timer is initialized to trigger the periodic detection of whether the connection is alive. This is achieved through "Hello" packets. If a "Hello" packet is not recei
|
https://en.wikipedia.org/wiki/Hermite%20interpolation
|
In numerical analysis, Hermite interpolation, named after Charles Hermite, is a method of polynomial interpolation, which generalizes Lagrange interpolation. Lagrange interpolation allows computing a polynomial of degree less than that takes the same value at given points as a given function. Instead, Hermite interpolation computes a polynomial of degree less than such that the polynomial and its first derivatives have the same values at given points as a given function and its first derivatives.
Hermite's method of interpolation is closely related to the Newton's interpolation method, in that both are derived from the calculation of divided differences. However, there are other methods for computing a Hermite interpolating polynomial. One can use linear algebra, by taking the coefficients of the interpolating polynomial as unknowns, and writing as linear equations the constraints that the interpolating polynomial must satisfy. For another method, see .
Statement of the problem
Hermite interpolation consists of computing a polynomial of degree as low as possible that matches an unknown function both in observed value, and the observed value of its first derivatives. This means that values
must be known. The resulting polynomial has a degree less than . (In a more general case, there is no need for to be a fixed value; that is, some points may have more known derivatives than others. In this case the resulting polynomial has a degree less than the number of data points.)
Let us consider a polynomial of degree less than with indeterminate coefficients; that is, the coefficients of are new variables. Then, by writing the constraints that the interpolating polynomial must satisfy, one gets a system of linear equations in unknowns.
In general, such a system has exactly one solution. Charles Hermite proved that this is effectively the case here, as soon as the are pairwise different, and provided a method for computing it, which is described below.
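A small numerical sketch of this linear-algebra approach, assuming only first derivatives are prescribed: it fits a polynomial of degree less than 2n to n points with given values and slopes by solving the resulting linear system with numpy (illustrative code, not part of the article).

import numpy as np

def hermite_coeffs(xs, ys, dys):
    # Coefficients (lowest degree first) of the polynomial of degree < 2n that
    # matches the values ys and first derivatives dys at the points xs,
    # obtained by solving the linear system of constraints described above.
    n = len(xs)
    m = 2 * n                                   # number of unknown coefficients
    A = np.zeros((m, m))
    b = np.zeros(m)
    powers = np.arange(m)
    for i, (x, y, dy) in enumerate(zip(xs, ys, dys)):
        A[2 * i] = x ** powers                                  # constraint p(x) = y
        A[2 * i + 1, 1:] = powers[1:] * x ** (powers[1:] - 1)   # constraint p'(x) = dy
        b[2 * i], b[2 * i + 1] = y, dy
    return np.linalg.solve(A, b)

# Interpolate sin with values and derivatives at 0 and pi/2 (cubic Hermite interpolation).
xs = np.array([0.0, np.pi / 2])
coeffs = hermite_coeffs(xs, np.sin(xs), np.cos(xs))
p = np.polynomial.Polynomial(coeffs)
print(p(np.pi / 4), np.sin(np.pi / 4))   # the two values agree closely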
|
https://en.wikipedia.org/wiki/Practical%20Action
|
Practical Action (previously known as the Intermediate Technology Development Group, ITDG) is a development charity registered in the United Kingdom which works directly in four regions of the developing world – Latin America, East Africa, Southern Africa and South Asia, with particular concentration on Peru, Bolivia, Kenya, Sudan, Zimbabwe, Bangladesh and Nepal.
In these countries, Practical Action works with poor communities to develop appropriate technologies in renewable energy, food production, agro-processing, water, sanitation, small enterprise development, building and shelter, climate change adaptation and disaster risk reduction.
History
In 1965, economist and philosopher E. F. Schumacher had an article published in The Observer, pointing out the limitations of aid based on the transfer of large-scale technologies to developing countries which did not have the resources to accommodate them. He argued that there should be a shift in emphasis towards intermediate technologies based on the needs and skills possessed by the people of developing countries.
Schumacher and a few of his associates, including George McRobie, Julia Porter, Alfred Latham-Koenig and Professor Mansur Hoda, decided to create an "advisory centre" to promote the use of efficient labour-intensive techniques, and in 1966 the Intermediate Technology Development Group (ITDG) was born.
From its origins as a technical enquiry service, ITDG began to take a greater direct involvement in local projects. Following initial successes in farming, it developed working groups on energy, building materials and rural health, and soon grew to become an international organisation. The group now has seven regional offices, working on over 100 projects around the world, with a head office in the UK.
In July 2005, ITDG changed its working name to Practical Action, and since 2008 this has been its legal name. The organisation produces a monthly magazine entitled 'Small World'.
See also
Biofuel
Hydro po
|
https://en.wikipedia.org/wiki/Flag%20%28programming%29
|
In computer programming, flag can refer to one or more bits that are used to store a binary value or a Boolean variable for signaling special code conditions, such as file empty or full queue statuses.
Flags may be found as members of a defined data structure, such as a database record, and the meaning of the value contained in a flag will generally be defined in relation to the data structure it is part of. In many cases, the binary value of a flag will be understood to represent one of several possible states or statuses. In other cases, the binary values may represent one or more attributes in a bit field, often related to abilities or permissions, such as "can be written to" or "can be deleted". However, there are many other possible meanings that can be assigned to flag values. One common use of flags is to mark or designate data structures for future processing.
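For illustration, a minimal Python sketch of attribute flags packed into a bit field; the flag names echo the examples above and are purely hypothetical.

from enum import IntFlag

class RecordFlags(IntFlag):
    NONE       = 0
    CAN_WRITE  = 0b001      # "can be written to"
    CAN_DELETE = 0b010      # "can be deleted"
    PROCESSED  = 0b100      # marked as already processed

flags = RecordFlags.CAN_WRITE | RecordFlags.CAN_DELETE   # set two flags

if flags & RecordFlags.CAN_WRITE:                         # test a single flag
    print("record can be written to")

flags &= ~RecordFlags.CAN_WRITE                           # clear a flag
print(RecordFlags.CAN_WRITE in flags)                     # False
print(RecordFlags.CAN_DELETE in flags)                    # True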
Within microprocessors and other logic devices, flags are commonly used to control or indicate the intermediate or final state or outcome of different operations. Microprocessors typically have, for example, a status register that is composed of such flags, and the flags are used to indicate various post-operation conditions, such as when there has been an arithmetic overflow. The flags can be utilized in subsequent operations, such as in processing conditional jump instructions. For example a je (Jump if Equal) instruction in the X86 assembly language will result in a jump if the Z (zero) flag was set by some previous operation.
A command line switch is also referred to as a flag. Command line programs often start with an option parser that translates command line switches into flags in the sense of this article.
See also
Bit field
Control register
Enumerated type
FLAGS register (computing)
Flag byte
Program status word
Semaphore (programming)
Status register
References
Programming idioms
Operating system technology
Central processing unit
Digital registers
|
https://en.wikipedia.org/wiki/Perpendicular%20recording
|
Perpendicular recording (or perpendicular magnetic recording, PMR), also known as conventional magnetic recording (CMR), is a technology for data recording on magnetic media, particularly hard disks. It was first proven advantageous in 1976 by Shun-ichi Iwasaki, then professor of the Tohoku University in Japan, and first commercially implemented in 2005. The first industry-standard demonstration showing unprecedented advantage of PMR over longitudinal magnetic recording (LMR) at nanoscale dimensions was made in 1998 at IBM Almaden Research Center in collaboration with researchers of Data Storage Systems Center (DSSC) – a National Science Foundation (NSF) Engineering Research Center (ERCs) at Carnegie Mellon University (CMU).
Advantages
Perpendicular recording can deliver more than three times the storage density of traditional longitudinal recording. In 1986, Maxell announced a floppy disk using perpendicular recording that could store . Perpendicular recording was later used by Toshiba in 3.5" floppy disks in 1989 to permit 2.88 MB of capacity (ED or extra-high density), but they failed to succeed in the marketplace. Since about 2005, the technology has come into use for hard disk drives. Hard disk technology with longitudinal recording has an estimated limit of due to the superparamagnetic effect, though this estimate is constantly changing. Perpendicular recording is predicted to allow information densities of up to around . , drives with densities of were available commercially. In 2016 the commercially available density was at least . In late 2021 the Seagate disk with the highest density was a consumer-targeted 2.5" BarraCuda. It used density. Other disks from the manufacturer used and .
Technology
The main challenge in designing magnetic information storage media is to retain the magnetization of the medium despite thermal fluctuations caused by the superparamagnetic limit. If the thermal energy is too high, there may be enough energy to reverse th
|
https://en.wikipedia.org/wiki/Springburn%20Museum
|
Springburn Museum was set up in the reading room of the Springburn Library, Glasgow, Scotland, as the first independent community museum in the city, presenting material on the industrial heritage of the area.
The Museum was opened by Tom Weir in 1986. It continued to provide a community based resource for historical reference throughout the 1990s. After encountering financial difficulties, the Museum closed in 2001. Subsequently, a more limited display in Springburn Library is complemented by an online entity.
Plans for redeveloping Springburn Winter Gardens announced in 2020 would include some artefacts formerly exhibited at the Museum.
References
External links
Springburn Museum website
Museums in Glasgow
Virtual museums
Defunct museums in Scotland
1988 establishments in Scotland
Museums established in 1988
2003 disestablishments in Scotland
Museums disestablished in 2003
Springburn
|
https://en.wikipedia.org/wiki/Integrated%20amplifier
|
An integrated amplifier (pre/main amp) is an electronic device containing an audio preamplifier and power amplifier in one unit, as opposed to separating the two. Most modern audio amplifiers are integrated and have several inputs for devices such as CD players, DVD players, and auxiliary sources.
Vintage integrated amplifiers commonly have dedicated inputs for phonograph, tuner, tape recorder and/or an auxiliary input. Except for the phono input, all of the inputs are line level, thus, they are interchangeable. The phono preamplifier stage provides RIAA equalization.
See also
Audiophile
High-end audio
High fidelity
Valve audio amplifier
References
Sources
Queen's University ENPH333 Notes- Prof. J.L. Mason
Audio amplifiers
|
https://en.wikipedia.org/wiki/Fibred%20category
|
Fibred categories (or fibered categories) are abstract entities in mathematics used to provide a general framework for descent theory. They formalise the various situations in geometry and algebra in which inverse images (or pull-backs) of objects such as vector bundles can be defined. As an example, for each topological space there is the category of vector bundles on the space, and to every continuous map from a topological space X to another topological space Y there is associated the pullback functor taking bundles on Y to bundles on X. Fibred categories formalise the system consisting of these categories and inverse image functors. Similar setups appear in various guises in mathematics, in particular in algebraic geometry, which is the context in which fibred categories originally appeared. Fibered categories are used to define stacks, which are fibered categories (over a site) with "descent". Fibrations also play an important role in categorical semantics of type theory, and in particular that of dependent type theories.
Fibred categories were introduced by , and developed in more detail by .
Background and motivations
There are many examples in topology and geometry where some types of objects are considered to exist on or above or over some underlying base space. The classical examples include vector bundles, principal bundles, and sheaves over topological spaces. Another example is given by "families" of algebraic varieties parametrised by another variety. Typical of these situations is that to a suitable type of map f: X → Y between base spaces there is a corresponding inverse image (also called pull-back) operation taking the considered objects defined on Y to the same type of objects on X. This is indeed the case in the examples above: for example, the inverse image of a vector bundle on Y is a vector bundle on X.
Moreover, it is often the case that the considered "objects on a base space" form a category, or in other words have maps (morphisms) between them. In
|
https://en.wikipedia.org/wiki/Ames%20strain
|
The Ames strain is one of 89 known strains of the anthrax bacterium (Bacillus anthracis). It was isolated from a diseased 14-month-old Beefmaster heifer that died in Sarita, Texas in 1981. The strain was isolated at the Texas Veterinary Medical Diagnostic Laboratory and a sample was sent to the United States Army Medical Research Institute of Infectious Diseases (USAMRIID). Researchers at USAMRIID mistakenly believed the strain came from Ames, Iowa, because the return address on the package was the USDA's National Veterinary Services Laboratories in Ames, and so they mislabeled the specimen.
The Ames strain came to wide public attention during the 2001 anthrax attacks when seven letters containing it were mailed to media outlets and US Senators on September 18, 2001, and October 9, 2001.
Because of its virulence, the Ames strain is used by the United States for developing vaccines and testing their effectiveness. Use of the Ames strain started in the 1980s, after work on weaponizing the Vollum 1B strain ended and all weaponized stocks were destroyed after the end of the U.S. biological warfare program in 1969.
Virulence
Virulence plasmids
Researchers have identified two specific virulence plasmids in B. anthracis, with the Ames strain expressing greater virulence compared to other strains. The virulence of B. anthracis results from two plasmids, pXO1 and pXO2. Plasmid pXO2 encodes an antiphagocytic poly-D-glutamic acid capsule, which allows B. anthracis to evade the host immune system. Plasmid pXO1 encodes three toxin proteins: edema factor (EF), lethal factor (LF) and protective antigen (PA). Variation in virulence can be explained by the presence or absence of plasmids; for example, isolates missing either pXO1 or pXO2 are considered attenuated, meaning they will not cause significant infection. One possible mechanism that may be responsible for the regulation of virulence is the copy number of plasmids per cell. The number of plasmids among isolates varies, with as m
|
https://en.wikipedia.org/wiki/Cowboy%20coding
|
Cowboy coding is software development where programmers have autonomy over the development process. This includes control of the project's schedule, languages, algorithms, tools, frameworks and coding style. Typically, little to no coordination exists with other developers or stakeholders.
A cowboy coder can be a lone developer or part of a group of developers working with minimal process or discipline. Usually it occurs when there is little participation by business users, or is fostered by management that controls only non-development aspects of the project, such as the broad targets, timelines, scope, and visuals (the "what", but not the "how").
"Cowboy coding" commonly sees usage as a derogatory term when contrasted with more structured software development methodologies.
Disadvantages
In cowboy coding, the lack of formal software project management methodologies may be indicative (though not necessarily) of a project's small size or experimental nature. Software projects with these attributes may exhibit:
Lack of release structure
Lack of estimation or implementation planning might cause a project to be delayed. Sudden deadlines or pushes to release software may encourage the use of "quick and dirty" techniques that will require further attention later.
Inexperienced developers
Cowboy coding can be common at the hobbyist or student level where developers might initially be unfamiliar with the technologies, such as testing, version control and/or build tools, usually more than just the basic coding a software project requires.
This can result in underestimating time required for learning, causing delays in the development process. Inexperience might also lead to disregard of accepted standards, making the project source difficult to read or causing conflicts between the semantics of the language constructs and the result of their output.
Uncertain design requirements
Custom software applications, even when using a proven development cycle, can experience prob
|
https://en.wikipedia.org/wiki/Insteon
|
Insteon is a proprietary home automation (domotics) system that enables light switches, lights, thermostats, leak sensors, remote controls, motion sensors, and other electrically powered devices to interoperate through power lines, radio frequency (RF) communications, or both. It employed a dual-mesh networking topology in which all devices are peers and each device independently transmits, receives, confirms and repeats messages. Like other home automation systems, it had been associated with the Internet of things.
In mid-April of 2022, the company appeared to have abruptly shut down.
Corporate history
Insteon-based products were launched in 2005 by Smartlabs, the company which holds the trademark for Insteon. A Smartlabs subsidiary, also named Insteon, was created by CEO Joe Dada to market the technology. Dada had previously founded Smarthome in 1992, a home automation product catalog company, and operator of the Smarthome.com e-commerce site. In the late 1990s, Dada acquired two product engineering firms which undertook extensive product development efforts to create networking technology based on both power-line and RF communications. In 2004, the company filed for patent protection for the resultant technology, called Insteon, and it was released in 2005.
In 2012, the company released the first network-controlled light bulb using Insteon-enabled technology, and at that point Dada spun Insteon off from Smarthome.
In 2017, SmartLabs and the Insteon trademark were acquired by Richmond Capital Partners.
The company produced over 200 products featuring the technology.
As of April 15 2022, there are reports that Insteon has shut down its servers and closed. On April 16, reports emerged of users finding their Insteon Hubs offline. The company forums, web servers and API servers went offline. The company's CEO Rob Lilleness appeared to have scrubbed any references to Insteon from his LinkedIn page, and other employees also appeared to indicate on their
|
https://en.wikipedia.org/wiki/Secure%20telephone
|
A secure telephone is a telephone that provides voice security in the form of end-to-end encryption for the telephone call, and in some cases also the mutual authentication of the call parties, protecting them against a man-in-the-middle attack. Concerns about massive growth of telephone tapping incidents led to growing demand for secure telephones.
The practical availability of secure telephones is restricted by several factors; notably politics, export issues, incompatibility between different products (the devices on each side of the call have to use the same protocol), and high (though recently decreasing) price of the devices.
Well-known products
The best-known product on the US government market is the STU-III family. However, this system has now been replaced by the Secure Terminal Equipment (STE) and SCIP standards, which define specifications for the design of equipment to secure both data and voice. The SCIP standard was developed by the NSA and the US DOD to achieve greater interoperability between secure communication equipment. A new family of standard secure phones has been developed based on Philip Zimmermann's VoIP encryption standard ZRTP.
VoIP and direct connection phones
As the popularity of VoIP grows, secure telephony is becoming more widely used. Many major hardware and software providers offer it as a standard feature at no extra cost.
Examples include Gizmo5 and Twinkle. Both work with offerings from the founder of PGP, Phil Zimmermann, and his secure VoIP protocol, ZRTP. ZRTP is implemented in, amongst others, Ripcord Networks' product SecurePC, with up to NSA Suite B compliant elliptic-curve math libraries. ZRTP is also being made available for mobile GSM CSD as a new standard for non-VoIP secure calls.
The U.S. National Security Agency is developing a secure phone based on Google's Android called Fishbowl.
Historically significant products
Scramblers were used to secure voice traffic during World War II, but were ofte
|
https://en.wikipedia.org/wiki/IMSI-catcher
|
An international mobile subscriber identity-catcher, or IMSI-catcher, is a telephone eavesdropping device used for intercepting mobile phone traffic and tracking location data of mobile phone users. Essentially a "fake" mobile tower acting between the target mobile phone and the service provider's real towers, it is considered a man-in-the-middle (MITM) attack. The 3G wireless standard offers some risk mitigation due to mutual authentication required from both the handset and the network. However, sophisticated attacks may be able to downgrade 3G and LTE to non-LTE network services which do not require mutual authentication.
IMSI-catchers are used in a number of countries by law enforcement and intelligence agencies, but their use has raised significant civil liberty and privacy concerns and is strictly regulated in some countries such as under the German Strafprozessordnung (StPO / Code of Criminal Procedure). Some countries do not have encrypted phone data traffic (or very weak encryption), thus rendering an IMSI-catcher unnecessary.
Overview
A virtual base transceiver station (VBTS) is a device for identifying the temporary mobile subscriber identity (TMSI), international mobile subscriber identity (IMSI) of a nearby GSM mobile phone and intercepting its calls, some are even advanced enough to detect the international mobile equipment identity (IMEI). It was patented and first commercialized by Rohde & Schwarz in 2003. The device can be viewed as simply a modified cell tower with a malicious operator, and on 4 January 2012, the Court of Appeal of England and Wales held that the patent is invalid for obviousness.
IMSI-catchers are often deployed by court order without a search warrant, the lower judicial standard of a pen register and trap-and-trace order being preferred by law enforcement. They can also be used in search and rescue operation for missing persons. Police departments have been reluctant to reveal use of these programs and contracts with vendors
|
https://en.wikipedia.org/wiki/Kruskal%E2%80%93Szekeres%20coordinates
|
In general relativity, Kruskal–Szekeres coordinates, named after Martin Kruskal and George Szekeres, are a coordinate system for the Schwarzschild geometry for a black hole. These coordinates have the advantage that they cover the entire spacetime manifold of the maximally extended Schwarzschild solution and are well-behaved everywhere outside the physical singularity. There is no misleading coordinate singularity at the horizon.
The Kruskal–Szekeres coordinates also apply to space-time around a spherical object, but in that case do not give a description of space-time inside the radius of the object. Space-time in a region where a star is collapsing into a black hole is approximated by the Kruskal–Szekeres coordinates (or by the Schwarzschild coordinates). The surface of the star remains outside the event horizon in the Schwarzschild coordinates, but crosses it in the Kruskal–Szekeres coordinates. (In any "black hole" which we observe, we see it at a time when its matter has not yet finished collapsing, so it is not really a black hole yet.) Similarly, objects falling into a black hole remain outside the event horizon in Schwarzschild coordinates, but cross it in Kruskal–Szekeres coordinates.
Definition
Kruskal–Szekeres coordinates on a black hole geometry are defined, from the Schwarzschild coordinates (t, r, θ, φ), by replacing t and r by a new timelike coordinate T and a new spacelike coordinate X, with one expression valid in the exterior region r > 2GM outside the event horizon and another in the interior region 0 < r < 2GM. Here GM is the gravitational constant multiplied by the Schwarzschild mass parameter, and this article is using units where the speed of light c = 1.
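The displayed formulas themselves do not survive in the text above; in the convention usually found in the literature (a reconstruction, with GM written out and c = 1) they read:

\[
T=\Bigl(\tfrac{r}{2GM}-1\Bigr)^{1/2} e^{r/4GM}\,\sinh\!\Bigl(\tfrac{t}{4GM}\Bigr),\qquad
X=\Bigl(\tfrac{r}{2GM}-1\Bigr)^{1/2} e^{r/4GM}\,\cosh\!\Bigl(\tfrac{t}{4GM}\Bigr)
\]
for the exterior region \(r>2GM\), and
\[
T=\Bigl(1-\tfrac{r}{2GM}\Bigr)^{1/2} e^{r/4GM}\,\cosh\!\Bigl(\tfrac{t}{4GM}\Bigr),\qquad
X=\Bigl(1-\tfrac{r}{2GM}\Bigr)^{1/2} e^{r/4GM}\,\sinh\!\Bigl(\tfrac{t}{4GM}\Bigr)
\]
for the interior region \(0<r<2GM\).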
It follows that on the union of the exterior region, the event horizon and the interior region the Schwarzschild radial coordinate r (not to be confused with the Schwarzschild radius) is determined in terms of Kruskal–Szekeres coordinates as the (unique) solution of the equation
T² − X² = (1 − r/2GM) e^(r/2GM).
Using the Lambert W function the solution is written as:
r = 2GM (1 + W((X² − T²)/e)).
Moreover one sees immediately
|
https://en.wikipedia.org/wiki/Telecentre
|
A telecentre is a public place where people can access computers, the Internet, and other digital technologies that enable them to gather information, create, learn, and communicate with others while they develop essential digital skills. Telecentres exist in almost every country, although they sometimes go by different names, including public internet access center (PIAP), village knowledge center, infocenter, Telecottage, Electronic Village Hall, community technology center (CTC), community multimedia center (CMC), multipurpose community telecentre (MCT), Common/Citizen Service Centre (CSC) and school-based telecentre. While each telecentre is different, their common focus is on the use of digital technologies to support community, economic, educational, and social development—reducing isolation, bridging the digital divide, promoting health issues, creating economic opportunities, and reaching out to youth for example.
Evolution of the telecentre movement
The telecentre movement's origins can be traced to Europe's telecottage and Electronic Village Halls (originally in Denmark) and Community Technology Centers (CTCs) in the United States, both of which emerged in the 1980s as a result of advances in computing. At a time when computers were available but not yet a common household good, public access to computers emerged as a solution. Today, although home ownership of computers is widespread in the United States and other industrialized countries, there remains a need for free public access to computing, whether it is in CTCs, telecottages or public libraries to ensure that everyone has access to technologies that have become essential.
There are also CTCs located in most of the states of Australia, they are also known as Community Resource Centres (often abbreviated to CRC) that provide technology, resources, training and educational programs to communities in regional, rural and remote areas.
Types
Beyond the differences in names, public ICT access centers
|
https://en.wikipedia.org/wiki/Kernel%20density%20estimation
|
In statistics, kernel density estimation (KDE) is the application of kernel smoothing for probability density estimation, i.e., a non-parametric method to estimate the probability density function of a random variable based on kernels as weights. KDE answers a fundamental data smoothing problem where inferences about the population are made, based on a finite data sample. In some fields such as signal processing and econometrics it is also termed the Parzen–Rosenblatt window method, after Emanuel Parzen and Murray Rosenblatt, who are usually credited with independently creating it in its current form. One of the famous applications of kernel density estimation is in estimating the class-conditional marginal densities of data when using a naive Bayes classifier, which can improve its prediction accuracy.
Definition
Let (x1, x2, ..., xn) be independent and identically distributed samples drawn from some univariate distribution with an unknown density ƒ at any given point x. We are interested in estimating the shape of this function ƒ. Its kernel density estimator is
f̂_h(x) = (1/n) Σ_i K_h(x − x_i) = (1/(nh)) Σ_i K((x − x_i)/h),
where K is the kernel — a non-negative function — and h > 0 is a smoothing parameter called the bandwidth. A kernel with subscript h is called the scaled kernel and defined as K_h(x) = (1/h) K(x/h). Intuitively one wants to choose h as small as the data will allow; however, there is always a trade-off between the bias of the estimator and its variance. The choice of bandwidth is discussed in more detail below.
A range of kernel functions are commonly used: uniform, triangular, biweight, triweight, Epanechnikov, normal, and others. The Epanechnikov kernel is optimal in a mean square error sense, though the loss of efficiency is small for the kernels listed previously. Due to its convenient mathematical properties, the normal kernel is often used, which means K(x) = ϕ(x), where ϕ is the standard normal density function.
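A short sketch that evaluates a Gaussian kernel density estimate directly from the definition above; the sample values and the bandwidth h are arbitrary illustrative choices.

import numpy as np

def gaussian_kde(x_grid, samples, h):
    # f_hat(x) = (1/(n*h)) * sum_i K((x - x_i)/h), with K the standard normal density.
    samples = np.asarray(samples, dtype=float)
    n = samples.size
    u = (x_grid[:, None] - samples[None, :]) / h      # (x - x_i)/h for every grid point
    K = np.exp(-0.5 * u**2) / np.sqrt(2.0 * np.pi)    # standard normal kernel
    return K.sum(axis=1) / (n * h)

samples = [-2.1, -1.3, -0.4, 1.9, 5.1, 6.2]
x_grid = np.linspace(-6.0, 10.0, 200)
density = gaussian_kde(x_grid, samples, h=1.0)
print(np.trapz(density, x_grid))                      # integrates to approximately 1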
The construction of a kernel density estimate finds interpretations in fields outside of density estimation. For example
|
https://en.wikipedia.org/wiki/Intragenomic%20conflict
|
Intragenomic conflict refers to the evolutionary phenomenon where genes have phenotypic effects that promote their own transmission to the detriment of the transmission of other genes that reside in the same genome. The selfish gene theory postulates that natural selection will increase the frequency of those genes whose phenotypic effects cause their transmission to new organisms, and most genes achieve this by cooperating with other genes in the same genome to build an organism capable of reproducing and/or helping kin to reproduce. The assumption of the prevalence of intragenomic cooperation underlies the organism-centered concept of inclusive fitness. However, conflict among genes in the same genome may arise both in events related to reproduction (a selfish gene may "cheat" and increase its own presence in gametes or offspring above the expected according to fair Mendelian segregation and fair gametogenesis) and altruism (genes in the same genome may disagree on how to value other organisms in the context of helping kin because coefficients of relatedness diverge between genes in the same genome).
Nuclear genes
Autosomic genes usually have the same mode of transmission in sexually reproducing species due to the fairness of Mendelian segregation, but conflicts among alleles of autosomic genes may arise when an allele cheats during gametogenesis (segregation distortion) or eliminates embryos that don't contain it (lethal maternal effects). An allele may also directly convert its rival allele into a copy of itself (homing endonucleases). Finally, mobile genetic elements completely bypass Mendelian segregation, being able to insert new copies of themselves into new positions in the genome (transposons).
Segregation distortion
In principle, the two parental alleles have equal probabilities of being present in the mature gamete. However, there are several mechanisms that lead to an unequal transmission of parental alleles from parents to offspring. One example is a gen
|
https://en.wikipedia.org/wiki/Auf%20Wiedersehen%20Monty
|
Auf Wiedersehen Monty (German for "Goodbye Monty") is a computer game for the ZX Spectrum, Commodore 64, Amstrad CPC, MSX and Commodore 16. Released in 1987, it is the fourth game in the Monty Mole series. It was written by Peter Harrap and Shaun Hollingworth with music by Rob Hubbard and Ben Daglish.
Gameplay
The player controls Monty as he travels around Europe collecting money in order to buy a Greek island - Montos, where he can safely retire. Gameplay is in the style of a flick-screen platform game, similar to many such games of the 1980s such as Technician Ted and Jet Set Willy. Some screens (such as those representing the Eiffel Tower and the Pyrenees) bear some relation to their real-life counterparts but most are just typical platform game screens.
Auf Wiedersehen Monty contains many features and peculiarities for the player to discover. Examples include being suddenly attacked by a bull's head in Spain after collecting a red cape (presumably a reference to bullfighting), a car being dropped in one of two places on entering a screen representing Düsseldorf in West Germany, a chef's hat found in Sweden (a reference to the Swedish Chef of Muppets fame; also, the two rooms representing Sweden are subtitled Bjorn and Borg), and a record in Luxembourg that when collected makes Monty breakdance to the game's title music (this may be a reference to Radio Luxembourg).
It is possible to get to areas of the game more quickly by flying from an airport using air tickets which can be collected throughout the game. Some parts of the game can only be reached in this manner.
As well as money, there are other miscellaneous objects to collect in the game for points. This was important as the player needs a certain number of points to get to Montos. These are often particular to the country Monty is visiting (such as berets in France). Bottles of wine or a glass of beer in West Germany cause Monty to briefly become drunk and his control to become slightly erratic leading
|
https://en.wikipedia.org/wiki/Fault%20tolerance
|
Fault tolerance is the property that enables a system to continue operating properly in the event of the failure of, or of one or more faults within, some of its components. If its operating quality decreases at all, the decrease is proportional to the severity of the failure, as compared to a naively designed system, in which even a small failure can cause total breakdown. Fault tolerance is particularly sought after in high-availability, mission-critical, or even life-critical systems. The ability to maintain functionality when portions of a system break down is referred to as graceful degradation.
A fault-tolerant design enables a system to continue its intended operation, possibly at a reduced level, rather than failing completely, when some part of the system fails. The term is most commonly used to describe computer systems designed to continue more or less fully operational with, perhaps, a reduction in throughput or an increase in response time in the event of some partial failure. That is, the system as a whole is not stopped due to problems either in the hardware or the software. An example in another field is a motor vehicle designed so it will continue to be drivable if one of the tires is punctured, or a structure that is able to retain its integrity in the presence of damage due to causes such as fatigue, corrosion, manufacturing flaws, or impact.
Within the scope of an individual system, fault tolerance can be achieved by anticipating exceptional conditions and building the system to cope with them, and, in general, aiming for self-stabilization so that the system converges towards an error-free state. However, if the consequences of a system failure are catastrophic, or the cost of making it sufficiently reliable is very high, a better solution may be to use some form of duplication. In any case, if the consequence of a system failure is so catastrophic, the system must be able to use reversion to fall back to a safe mode. This is similar to roll-back r
|
https://en.wikipedia.org/wiki/Nucleic%20acid%20double%20helix
|
In molecular biology, the term double helix refers to the structure formed by double-stranded molecules of nucleic acids such as DNA. The double helical structure of a nucleic acid complex arises as a consequence of its secondary structure, and is a fundamental component in determining its tertiary structure. The term entered popular culture with the publication in 1968 of The Double Helix: A Personal Account of the Discovery of the Structure of DNA by James Watson.
The DNA double helix biopolymer of nucleic acid is held together by nucleotides which base pair together. In B-DNA, the most common double helical structure found in nature, the double helix is right-handed with about 10–10.5 base pairs per turn. The double helix structure of DNA contains a major groove and minor groove. In B-DNA the major groove is wider than the minor groove. Given the difference in widths of the major groove and minor groove, many proteins which bind to B-DNA do so through the wider major groove.
History
The double-helix model of DNA structure was first published in the journal Nature by James Watson and Francis Crick in 1953, (X,Y,Z coordinates in 1954) based on the work of Rosalind Franklin and her student Raymond Gosling, who took the crucial X-ray diffraction image of DNA labeled as "Photo 51", and Maurice Wilkins, Alexander Stokes, and Herbert Wilson, and base-pairing chemical and biochemical information by Erwin Chargaff. Before this, Linus Pauling—who had already accurately characterised the conformation of protein secondary structure motifs—and his collaborator Robert Corey had posited, erroneously, that DNA would adopt a triple-stranded conformation.
The realization that the structure of DNA is that of a double-helix elucidated the mechanism of base pairing by which genetic information is stored and copied in living organisms and is widely considered one of the most important scientific discoveries of the 20th century. Crick, Wilkins, and Watson each received one-third
|
https://en.wikipedia.org/wiki/Volterra%20integral%20equation
|
In mathematics, the Volterra integral equations are a special type of integral equations. They are divided into two groups referred to as the first and the second kind.
A linear Volterra equation of the first kind is
f(t) = ∫_a^t K(t, s) x(s) ds,
where f is a given function and x is an unknown function to be solved for. A linear Volterra equation of the second kind is
x(t) = f(t) + ∫_a^t K(t, s) x(s) ds.
In operator theory, and in Fredholm theory, the corresponding operators are called Volterra operators. A useful method to solve such equations, the Adomian decomposition method, is due to George Adomian.
A linear Volterra integral equation is a convolution equation if the kernel depends only on the difference of its arguments, that is, K(t, s) = k(t − s).
The function in the integral is called the kernel. Such equations can be analyzed and solved by means of Laplace transform techniques.
For a weakly singular kernel of the form with , Volterra integral equation of the first kind can conveniently be transformed into a classical Abel integral equation.
The Volterra integral equations were introduced by Vito Volterra and then studied by Traian Lalescu in his 1908 thesis, Sur les équations de Volterra, written under the direction of Émile Picard. In 1911, Lalescu wrote the first book ever on integral equations.
Volterra integral equations find application in demography as Lotka's integral equation, the study of viscoelastic materials,
in actuarial science through the renewal equation, and in fluid mechanics to describe the flow behavior near finite-sized boundaries.
Conversion of Volterra equation of the first kind to the second kind
A linear Volterra equation of the first kind can always be reduced to a linear Volterra equation of the second kind, assuming that K(t, t) ≠ 0. Taking the derivative of the first kind Volterra equation gives us:
f′(t) = K(t, t) x(t) + ∫_a^t ∂K(t, s)/∂t · x(s) ds.
Dividing through by K(t, t) yields:
x(t) = f′(t)/K(t, t) − ∫_a^t (1/K(t, t)) ∂K(t, s)/∂t · x(s) ds.
Defining f̃(t) = f′(t)/K(t, t) and K̃(t, s) = −(1/K(t, t)) ∂K(t, s)/∂t completes the transformation of the first kind equation into a linear Volterra equation of the second kind.
Numerical solution using trapezoidal rule
A standard method for computing the numerical solution of a linear Volterra equation
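The description above is cut off; the following sketch shows the usual trapezoidal-rule discretisation of a second-kind equation x(t) = f(t) + ∫_0^t K(t, s) x(s) ds on a uniform grid. The test kernel and right-hand side are chosen so the exact solution exp(t) is known; they are illustrative, not taken from the article.

import numpy as np

def volterra2_trapezoid(f, K, t_end, n):
    # Solve x(t) = f(t) + int_0^t K(t, s) x(s) ds on [0, t_end]
    # with n uniform steps, using the trapezoidal rule at each step.
    h = t_end / n
    t = np.linspace(0.0, t_end, n + 1)
    x = np.zeros(n + 1)
    x[0] = f(t[0])                                       # the integral term vanishes at t = 0
    for i in range(1, n + 1):
        s = 0.5 * K(t[i], t[0]) * x[0]                   # trapezoidal weights: 1/2 at the end points
        s += sum(K(t[i], t[j]) * x[j] for j in range(1, i))
        x[i] = (f(t[i]) + h * s) / (1.0 - 0.5 * h * K(t[i], t[i]))   # solve for the implicit endpoint value
    return t, x

# Test problem: K = 1 and f = 1 give the exact solution x(t) = exp(t).
t, x = volterra2_trapezoid(lambda t: 1.0, lambda t, s: 1.0, t_end=1.0, n=100)
print(x[-1], np.exp(1.0))                                # numerical vs exact value at t = 1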
|
https://en.wikipedia.org/wiki/Comb%20drive
|
Comb-drives are microelectromechanical actuators, often used as linear actuators, which utilize electrostatic forces that act between two electrically conductive combs. Comb drive actuators typically operate at the micro- or nanometer scale and are generally manufactured by bulk micromachining or surface micromachining a silicon wafer substrate.
The attractive electrostatic forces are created when a voltage is applied between the static and moving combs causing them to be drawn together. The force developed by the actuator is proportional to the change in capacitance between the two combs, increasing with driving voltage and the number of comb teeth, and decreasing with the gap between the teeth. The combs are arranged so that they never touch (because then there would be no voltage difference). Typically the teeth are arranged so that they can slide past one another until each tooth occupies the slot in the opposite comb.
Restoring springs, levers, and crankshafts can be added if the motor's linear operation is to be converted to rotation or other motions.
The force can be derived by first starting with the energy stored in a capacitor and then differentiating in the direction of the force. The energy in a capacitor is given by:
U = ½ C V².
Using the capacitance for a parallel plate capacitor, the force is:
F = ½ (∂C/∂x) V² = n εr ε0 t V² / (2g), where
V = applied electric potential,
εr = relative permittivity of dielectric,
ε0 = permittivity of free space (8.85 pF/m),
n = total number of fingers on both sides of electrodes,
t = thickness in the out-of-plane direction of the electrodes,
g = gap between electrodes.
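Plugging representative numbers into this expression gives a feel for the scale of the forces involved; the parameter values below are arbitrary MEMS-scale choices, and the exact prefactor depends on how fingers and gaps are counted.

EPS0 = 8.85e-12        # permittivity of free space, F/m

def comb_drive_force(V, n, t, g, eps_r=1.0):
    # Electrostatic comb-drive force F = n * eps_r * eps0 * t * V^2 / (2 g).
    return n * eps_r * EPS0 * t * V**2 / (2.0 * g)

# Illustrative values: 100 fingers, 2 um structural thickness, 2 um gap, 30 V drive.
F = comb_drive_force(V=30.0, n=100, t=2e-6, g=2e-6)
print(F)               # about 4e-7 N, i.e. a fraction of a micronewton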
Structure of Comb-drives
• rows of interlocking teeth
• half fixed
• half part of movable assembly
• electrically isolated
• electrostatic attraction/repulsion
– CMOS drive voltage
• many teeth increased force
– typically 10μm long and strong
Scaling Issues
Comb drives cannot scale to large gap distances (equivalently actuation distance), since development of effective forces at large gaps distances would requir
|
https://en.wikipedia.org/wiki/Megahertz%20myth
|
The megahertz myth, or in more recent cases the gigahertz myth, refers to the misconception of only using clock rate (for example measured in megahertz or gigahertz) to compare the performance of different microprocessors. While clock rates are a valid way of comparing the performance of different speeds of the same model and type of processor, other factors such as an amount of execution units, pipeline depth, cache hierarchy, branch prediction, and instruction sets can greatly affect the performance when considering different processors. For example, one processor may take two clock cycles to add two numbers and another clock cycle to multiply by a third number, whereas another processor may do the same calculation in two clock cycles. Comparisons between different types of processors are difficult because performance varies depending on the type of task. A benchmark is a more thorough way of measuring and comparing computer performance.
The myth started around 1984 when comparing the Apple II with the IBM PC. The argument was that the IBM computer was five times faster than the Apple II, as its Intel 8088 processor had a clock speed roughly 4.7 times the clock speed of the MOS Technology 6502 used in the latter. However, what really matters is not how finely divided a machine's instructions are, but how long it takes to complete a given task. Consider the LDA # (Load Accumulator Immediate) instruction. On a 6502 that instruction requires two clock cycles, or 2 μs at 1 MHz. Although the 4.77 MHz 8088's clock cycles are shorter, the LDA # needs at least 4 of them, so it takes 4 / 4.77 MHz = 0.84 μs at least. So, at best, that instruction runs only a little more than 2 times as fast on the original IBM PC than on the Apple II.
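The arithmetic in this example, spelled out with the figures quoted in the text:

# Time for one LDA # (Load Accumulator Immediate) on each processor,
# using the cycle counts and clock rates quoted above.
t_6502 = 2 / 1.0e6       # 2 cycles at 1 MHz    -> 2.0 microseconds
t_8088 = 4 / 4.77e6      # 4 cycles at 4.77 MHz -> about 0.84 microseconds
print(t_6502 / t_8088)   # roughly 2.4x, not the ~4.7x the clock rates alone suggest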
History
Background
The x86 CISC based CPU architecture which Intel introduced in 1978 was used as the standard for the DOS based IBM PC, and developments of it still continue to dominate the Microsoft Windows market. An IBM RISC based arch
|
https://en.wikipedia.org/wiki/Sparklies
|
Sparklies is a form of interference on analogue satellite television transmissions.
Sparklies are black or white 'hard' interference dots (as opposed to the 'soft' interference patterns of terrestrial television), caused either by too weak or too strong a signal. When within the satellite's rated reception footprint, sparklies are most likely to be caused by a misaligned dish, or LNBs which are too high- or too low-gain for the dish and receiver.
The term "sparklies" is used by British Sky Broadcasting (BSkyB) and a number of hardware makers including Amstrad and Pace.
Sparklies do not occur on digital satellite systems; similar problems with digital signals cause MPEG artifacts.
See also
Salt and pepper noise
References
Satellite broadcasting
Television terminology
|
https://en.wikipedia.org/wiki/Radical%20axis
|
In Euclidean geometry, the radical axis of two non-concentric circles is the set of points whose powers with respect to the circles are equal. For this reason the radical axis is also called the power line or power bisector of the two circles. In detail:
For two circles c1, c2 with centers M1, M2 and radii r1, r2 the powers of a point P with respect to the circles are
Π1(P) = |PM1|² − r1²,  Π2(P) = |PM2|² − r2².
Point P belongs to the radical axis, if
Π1(P) = Π2(P).
If the circles have two points in common, the radical axis is the common secant line of the circles.
If point P is outside the circles, P has equal tangential distance to both the circles.
If the radii are equal, the radical axis is the perpendicular bisector of the line segment M1M2.
In any case the radical axis is a line perpendicular to the line M1M2 through the centers.
On notations
The notation radical axis was used by the French mathematician M. Chasles as axe radical.
J.V. Poncelet used .
J. Plücker introduced the term .
J. Steiner called the radical axis the line of equal powers, which led to the term power line.
Properties
Geometric shape and its position
Let $\mathbf{m}_1, \mathbf{m}_2$ be the position vectors of the points $M_1, M_2$. Then the defining equation of the radical line can be written as:
$|\mathbf{x} - \mathbf{m}_1|^2 - r_1^2 = |\mathbf{x} - \mathbf{m}_2|^2 - r_2^2.$
From the right equation one gets
$2\,\mathbf{x} \cdot (\mathbf{m}_2 - \mathbf{m}_1) = \mathbf{m}_2^2 - \mathbf{m}_1^2 + r_1^2 - r_2^2.$
The pointset of the radical axis is indeed a line and is perpendicular to the line through the circle centers.
($\mathbf{m}_2 - \mathbf{m}_1$ is a normal vector to the radical axis!)
Dividing the equation by $2\,|\mathbf{m}_2 - \mathbf{m}_1|$, one gets the Hessian normal form. Inserting the position vectors of the centers yields the distances of the centers to the radical axis:
$d_1 = \frac{d^2 + r_1^2 - r_2^2}{2d}, \qquad d_2 = \frac{d^2 + r_2^2 - r_1^2}{2d},$
with $d = |M_1 M_2|$.
($d_i$ may be negative if the foot of the radical axis on the center line is not between $M_1, M_2$.)
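As a small illustration (our own sketch, not part of the article), the coefficients of the radical axis $ax + by = c$ follow directly from expanding the defining power equation above:

#include <iostream>

struct Circle { double x, y, r; };   // center (x, y) and radius r
struct Line   { double a, b, c; };   // the line a*x + b*y = c

// Expand |P - M1|^2 - r1^2 = |P - M2|^2 - r2^2 and collect terms.
Line radical_axis(const Circle& c1, const Circle& c2) {
    Line l;
    l.a = 2.0 * (c2.x - c1.x);
    l.b = 2.0 * (c2.y - c1.y);
    l.c = (c2.x * c2.x + c2.y * c2.y) - (c1.x * c1.x + c1.y * c1.y)
        + c1.r * c1.r - c2.r * c2.r;
    return l;
}

int main() {
    // Two unit circles centered at (0,0) and (2,0): the radical axis is x = 1.
    Line l = radical_axis({0.0, 0.0, 1.0}, {2.0, 0.0, 1.0});
    std::cout << l.a << "x + " << l.b << "y = " << l.c << "\n";   // prints: 4x + 0y = 4
}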
If the circles are intersecting at two points, the radical line runs through the common points. If they only touch each other, the radical line is the common tangent line.
Special positions
The radical axis of two intersecting circles is their common secant line.
The radical axis of two touching circles is their common tangent.
The radical axis of two non intersecting circles is the common secant of two convenient equipower circles (see below).
Orthogonal circles
|
https://en.wikipedia.org/wiki/Fortinet
|
Fortinet is a cybersecurity company with headquarters in Sunnyvale, California. The company develops and sells security solutions like firewalls, endpoint security and intrusion detection systems. Fortinet has offices located all over the world.
Brothers Ken Xie and Michael Xie founded Fortinet in 2000. The company's first and main product was FortiGate, a physical firewall. The company later added wireless access points, sandbox and messaging security. The company went public in November 2009.
History
Early history
In 2000, Ken Xie and his brother Michael Xie co-founded Appligation Inc. The company was renamed ApSecure in December 2000 and later renamed again to Fortinet, based on the phrase "Fortified Networks."
Fortinet introduced its first product, FortiGate, in 2002, followed by anti-spam and anti-virus software. The company raised $13 million in private funding from 2000 to early 2003. Fortinet's first channel program was established in October 2003. The company began distributing its products in Canada in December 2003 and in the UK in February 2004. By 2004, Fortinet had offices in Asia, Europe, and North America.
In April 2005, a German court issued a preliminary injunction against Fortinet's UK subsidiary in relation to source code for its GPL-licensed elements. The dispute ended a month later after Fortinet agreed to make the source code available upon request.
Growth and expansion
Fortinet became profitable in the third quarter of 2008. Later that year, the company acquired the intellectual property of IPLocks, a database security and auditing company. In August 2009, Fortinet acquired the intellectual property and other assets of Woven Systems, an Ethernet switching company.
According to market research firm IDC, by November 2009, Fortinet held over 15 percent of the unified threat management market. Also in 2009, CRN Magazine's survey-based annual report card placed Fortinet first in network security hardware, up from seventh in 2007. In November
|
https://en.wikipedia.org/wiki/Power%20of%20a%20point
|
In elementary plane geometry, the power of a point is a real number that reflects the relative distance of a given point from a given circle. It was introduced by Jakob Steiner in 1826.
Specifically, the power $\Pi(P)$ of a point $P$ with respect to a circle $c$ with center $O$ and radius $r$ is defined by
$\Pi(P) = |PO|^2 - r^2.$
If $P$ is outside the circle, then $\Pi(P) > 0$,
if $P$ is on the circle, then $\Pi(P) = 0$, and
if $P$ is inside the circle, then $\Pi(P) < 0$.
Due to the Pythagorean theorem the number $\Pi(P)$ has a simple geometric meaning: for a point $P$ outside the circle, $\Pi(P)$ is the squared tangential distance $|PT|^2$ of point $P$ to the circle $c$.
Points with equal power, isolines of $\Pi(P)$, are circles concentric to circle $c$.
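For instance (a worked example, not from the article): for a circle $c$ with center $O$ and radius $r = 3$, and a point $P$ at distance $|PO| = 5$ from the center, $\Pi(P) = 5^2 - 3^2 = 16$, so the tangential distance from $P$ to $c$ is $\sqrt{16} = 4$.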
Steiner used the power of a point for proofs of several statements on circles, for example:
Determination of a circle, that intersects four circles by the same angle.
Solving the Problem of Apollonius
Construction of the Malfatti circles: For a given triangle determine three circles, which touch each other and two sides of the triangle each.
Spherical version of Malfatti's problem: The triangle is a spherical one.
Essential tools for investigations on circles are the radical axis of two circles and the radical center of three circles.
The power diagram of a set of circles divides the plane into regions within which the circle minimizing the power is constant.
More generally, French mathematician Edmond Laguerre defined the power of a point with respect to any algebraic curve in a similar way.
Geometric properties
Besides the properties mentioned in the lead there are further properties:
Orthogonal circle
For any point $P$ outside of the circle $c$ there are two tangent points $T_1, T_2$ on circle $c$, which have equal distance to $P$. Hence the circle $o$ with center $P$ through $T_1$ passes through $T_2$, too, and intersects $c$ orthogonally:
The circle with center $P$ and radius $\sqrt{\Pi(P)}$ intersects circle $c$ orthogonally.
If the radius $\rho$ of the circle centered at $P$ is different from $\sqrt{\Pi(P)}$, one gets the angle of intersection between the two circles applying the Law of c
|
https://en.wikipedia.org/wiki/Xinu
|
Xinu Is Not Unix (Xinu, a recursive acronym), is an operating system for embedded systems, originally developed by Douglas Comer for educational use at Purdue University in the 1980s. The name is both recursive, and is Unix spelled backwards. It has been ported to many hardware platforms, including the DEC PDP-11 and VAX systems, Motorola 68k (Sun-2 and Sun-3 workstations, AT&T UNIX PC, MECB), Intel x86, PowerPC G3, MIPS, ARM architecture and AVR (atmega328p/Arduino). Xinu was also used for some models of Lexmark printers.
Despite its name suggesting some similarity to Unix, Xinu is a different type of operating system, written with no knowledge of the Unix source code and no compatibility goals. It uses different abstractions and system calls, some with names matching those of Unix but with different semantics.
History
Xinu first ran on the LSI-11 platform. A Motorola 68000 port was done by Derrick Burns in 1984. A VAX port was done in 1986 by Comer and Tom Stonecypher, an IBM PC compatible port in 1988 by Comer and Timothy Fossum, a second Motorola 68000 (Sun 3) port circa 1988 by Shawn Ostermann and Steve Chapin, a Macintosh platform port in 1989 by Comer and Steven Munson, an Intel 80486 version by John Lin in 1995, a SPARC port by Jim Griffioen, and a PowerPC port in 2005 and MIPS port of Embedded Xinu in 2006 by Dennis Brylow.
Later developments
Dennis Brylow at Marquette University has ported Xinu to both the PowerPC and MIPSEL processor architectures. Porting Xinu to reduced instruction set computing (RISC) architectures greatly simplified its implementation, increasing its ability to be used as a tool for teaching and research.
MIPSEL was chosen as a target architecture due to the proliferation of the MIPSEL-based WRT54GL router and the cool incentive that motivates some students to become involved in projects. The first embedded Xinu systems laboratory based on the WRT54GL router was developed at Marquette University. In collaboration with the Marquette Xinu
|
https://en.wikipedia.org/wiki/Abel%20polynomials
|
The Abel polynomials are a sequence of polynomials named after Niels Henrik Abel, defined by the following equation:
$p_n(x) = x(x - an)^{n-1}.$
This polynomial sequence is of binomial type: conversely, every polynomial sequence of binomial type may be obtained from the Abel sequence using umbral calculus.
Examples
For $a = 1$, the polynomials are $p_0(x) = 1;\ p_1(x) = x;\ p_2(x) = x(x-2);\ p_3(x) = x(x-3)^2;\ p_4(x) = x(x-4)^3;\ \ldots$
For $a = 2$, the polynomials are $p_0(x) = 1;\ p_1(x) = x;\ p_2(x) = x(x-4);\ p_3(x) = x(x-6)^2;\ p_4(x) = x(x-8)^3;\ \ldots$
References
External links
Polynomials
|
https://en.wikipedia.org/wiki/Wastebasket%20taxon
|
Wastebasket taxon (also called a wastebin taxon, dustbin taxon or catch-all taxon) is a term used by some taxonomists to refer to a taxon that has the purpose of classifying organisms that do not fit anywhere else. They are typically defined by either their designated members' often superficial similarity to each other, or their lack of one or more distinct character states or by their not belonging to one or more other taxa. Wastebasket taxa are by definition either paraphyletic or polyphyletic, and are therefore not considered valid taxa under strict cladistic rules of taxonomy. The name of a wastebasket taxon may in some cases be retained as the designation of an evolutionary grade, however.
The term was coined in a 1985 essay by Stephen Jay Gould.
Examples
There are many examples of paraphyletic groups, but true "wastebasket" taxa are those that are known not to, and perhaps not intended to, represent natural groups, but are nevertheless used as convenient groups of organisms. The acritarchs are perhaps the most famous example. Wastebasket taxa are often old (and perhaps not described with the systematic rigour and precision that is possible in the light of accumulated knowledge of diversity) and populous; further characteristics are reviewed by.
The Flacourtiaceae, a now-defunct family of flowering plants – the Angiosperm Phylogeny Group has placed its tribes and genera in various other families, especially the Achariaceae and Salicaceae.
The obsolete kingdom Protista is composed of all eukaryotes that are not animals, plants or fungi, leaving to the protists all single-celled eukaryotes.
The Tricholomataceae is a fungal group, at one point composed of the white-, yellow-, or pink-spored genera in the Agaricales not already classified as belonging to the Amanitaceae, Lepiotaceae, Hygrophoraceae, Pluteaceae, or Entolomataceae.
Carnosauria and Thecodontia are fossil groups, banded together back when the limited fossil record did not allow for a more deta
|
https://en.wikipedia.org/wiki/Oligodynamic%20effect
|
The oligodynamic effect (from Greek oligos, "few", and dynamis, "force") is a biocidal effect of metals, especially heavy metals, that occurs even in low concentrations.
In modern times, the effect was observed by Carl Nägeli, although he did not identify the cause. Brass doorknobs and silverware both exhibit this effect to an extent.
Mechanism
The metals react with thiol (-SH) or amine (-NH(1,2,3)) groups of proteins, a mode of action to which microorganisms may develop resistance. Such resistance may be transmitted by plasmids.
List of uses
Aluminium
Aluminium triacetate (Burow's solution) is used as an astringent mild antiseptic.
Antimony
Orthoesters of diarylstibinic acids are fungicides and bactericides, used in paints, plastics, and fibers. Trivalent organic antimony was used in therapy for schistosomiasis.
Arsenic
For many decades, arsenic was used medicinally to treat syphilis. It is still used in sheep dips, rat poisons, wood preservatives, weed killers, and other pesticides. Arsenic is poisonous if it enters the human body.
Barium
Barium polysulfide is a fungicide and acaricide used in fruit and grape growing.
Bismuth
Bismuth compounds have been used because of their astringent, antiphlogistic, bacteriostatic, and disinfecting actions. In dermatology bismuth subgallate is still used in vulnerary salves and powders as well as in antimycotics. In the past, bismuth has also been used to treat syphilis and malaria.
Boron
Boric acid esters derived from glycols (example, organo-borate formulation, Biobor JF) are being used for the control of microorganisms in fuel systems containing water.
Copper
Brass vessels release a small amount of copper ions into stored water, thereby killing fecal bacteria even at counts as high as 1 million bacteria per milliliter.
Copper sulfate mixed with lime (Bordeaux mixture) is used as a fungicide and antihelminthic. Copper sulfate is used chiefly to destroy green algae (algicide) that grow in reservoirs, stock ponds, swi
|
https://en.wikipedia.org/wiki/MIL-STD-498
|
MIL-STD-498, Military Standard Software Development and Documentation, was a United States military standard whose purpose was to "establish uniform requirements for software development and documentation." It was released Nov. 8, 1994, and replaced DOD-STD-2167A, DOD-STD-2168, DOD-STD-7935A, and DOD-STD-1703. It was meant as an interim standard, to be in effect for about two years until a commercial standard was developed.
Unlike previous efforts like the seminal DOD-STD-2167A which was mainly focused on the risky new area of software development, MIL-STD-498 was the first attempt at comprehensive description of the systems development life-cycle. MIL-STD-498 was the baseline for certain ISO and IEEE standards that followed it. It also contains much of the material that the subsequent professionalization of project management covered in the Project Management Body of Knowledge (PMBOK). The document "MIL-STD-498 Overview and Tailoring Guidebook" is 98 pages. The "MIL-STD-498 Application and Reference Guidebook" is 516 pages. Associated to these were document templates, or Data Item Descriptions, described below, bringing documentation and process order that could scale to projects of the size humans were then conducting (aircraft, battleships, canals, dams, factories, satellites, submarines, etcetera).
It was one of the few military standards that survived the "Perry Memo", then U.S. Secretary of Defense William Perry's 1994 memorandum commanding the discontinuation of defense standards. However, it was canceled on May 27, 1998, and replaced by the essentially identical demilitarized version EIA J-STD-016 as a process example guide for IEEE 12207. Several programs outside of the U.S. military continued to use the standard due to familiarity and perceived advantages over alternative standards, such as free availability of the standards documents and presence of process detail including contractually-usable Data Item Descriptions.
In military airborne softwar
|
https://en.wikipedia.org/wiki/Lie%20algebra%20cohomology
|
In mathematics, Lie algebra cohomology is a cohomology theory for Lie algebras. It was first introduced in 1929 by Élie Cartan to study the topology of Lie groups and homogeneous spaces by relating cohomological methods of Georges de Rham to properties of the Lie algebra. It was later extended by Claude Chevalley and Samuel Eilenberg to coefficients in an arbitrary Lie module.
Motivation
If $G$ is a compact simply connected Lie group, then it is determined by its Lie algebra, so it should be possible to calculate its cohomology from the Lie algebra. This can be done as follows. Its cohomology is the de Rham cohomology of the complex of differential forms on $G$. Using an averaging process, this complex can be replaced by the complex of left-invariant differential forms. The left-invariant forms, meanwhile, are determined by their values at the identity, so that the space of left-invariant differential forms can be identified with the exterior algebra of the Lie algebra, with a suitable differential.
The construction of this differential on an exterior algebra makes sense for any Lie algebra, so it is used to define Lie algebra cohomology for all Lie algebras. More generally one uses a similar construction to define Lie algebra cohomology with coefficients in a module.
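Concretely, for a $k$-form $\omega$ with values in a $\mathfrak{g}$-module $M$ the resulting differential (the Chevalley–Eilenberg differential) can be written, up to sign conventions, as
$(d\omega)(x_1,\ldots,x_{k+1}) = \sum_{i}(-1)^{i+1}\,x_i \cdot \omega(x_1,\ldots,\hat{x}_i,\ldots,x_{k+1}) + \sum_{i<j}(-1)^{i+j}\,\omega([x_i,x_j],x_1,\ldots,\hat{x}_i,\ldots,\hat{x}_j,\ldots,x_{k+1}),$
where a hat marks an omitted argument; for the trivial module the first sum vanishes.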
If $G$ is a simply connected noncompact Lie group, the Lie algebra cohomology of the associated Lie algebra does not necessarily reproduce the de Rham cohomology of $G$. The reason for this is that the passage from the complex of all differential forms to the complex of left-invariant differential forms uses an averaging process that only makes sense for compact groups.
Definition
Let $\mathfrak{g}$ be a Lie algebra over a commutative ring R with universal enveloping algebra $U\mathfrak{g}$, and let M be a representation of $\mathfrak{g}$ (equivalently, a $U\mathfrak{g}$-module). Considering R as a trivial representation of $\mathfrak{g}$, one defines the cohomology groups
$\mathrm{H}^n(\mathfrak{g};\, M) := \mathrm{Ext}^n_{U\mathfrak{g}}(R, M)$
(see Ext functor for the definition of Ext). Equivalently, these are the right derived functors of the left exact invariant submodule functor
$M \mapsto M^{\mathfrak{g}} := \{\, m \in M \mid x m = 0 \text{ for all } x \in \mathfrak{g} \,\}.$
A
|
https://en.wikipedia.org/wiki/Cyberinfrastructure
|
United States federal research funders use the term cyberinfrastructure to describe research environments that support advanced data acquisition, data storage, data management, data integration, data mining, data visualization and other computing and information processing services distributed over the Internet beyond the scope of a single institution. In scientific usage, cyberinfrastructure is a technological and sociological solution to the problem of efficiently connecting laboratories, data, computers, and people with the goal of enabling derivation of novel scientific theories and knowledge.
Origin
The term National Information Infrastructure had been popularized by Al Gore in the 1990s. This use of the term "cyberinfrastructure" evolved from the same thinking that produced Presidential Decision Directive NSC-63 on Protecting America's Critical Infrastructures (PDD-63). PDD-63 focuses on the security and vulnerability of the nation's "cyber-based information systems" as well as the critical infrastructures on which America's military strength and economic well-being depend, such as the electric power grid, transportation networks, potable water and wastewater infrastructures.
The term "cyberinfrastructure" was used in a press briefing on PDD-63 on May 22, 1998 with Richard A. Clarke, then national coordinator for security, infrastructure protection, and counter-terrorism, and Jeffrey Hunker, who had just been named director of the critical infrastructure assurance office. Hunker stated:
"One of the key conclusions of the President's commission that laid the intellectual framework for the President's announcement today was that while we certainly have a history of some real attacks, some very serious, to our cyber-infrastructure, the real threat lay in the future. And we can't say whether that's tomorrow or years hence. But we've been very successful as a country and as an economy in wiring together our critical infrastructures. This is a development that's
|
https://en.wikipedia.org/wiki/Beam%27s%20eye%20view
|
Beam's eye view (abbreviated BEV) is an imaging technique used in radiation therapy for quality assurance and planning of external beam radiotherapy (EBRT). These are primarily used to ensure that the relative orientation of the patient and the treatment machine are correct. The BEV image will typically include the images of the patient's anatomy and the beam modifiers, such as jaws or multi-leaf collimators (MLCs).
Generation of Beam's Eye Views
Physical Construction:
BEVs can be generated by exposing a high-energy film (similar to photographic film) or an Electronic Portal Imaging Device (EPID) with the treatment beam itself after it passes through the patient and any beam modifiers (such as blocks). Although this type of image is an excellent indication of the basic quality of the treatment plan, the quality of film images can be poor.
A BEV can be created using a radiation therapy simulator which mimics the treatment geometry (couch angle, gantry angle, etc.) using an X-ray source instead of the higher energy treatment source. The jaws and blocks can be imaged on the same film as the patient's landmarks.
Artificial Reconstruction: The BEV can be created using a Digitally Reconstructed Radiograph (or DRR) that is created from a computed tomography (or CT) data set. This image would contain the same treatment plan information, but the patient image is reconstructed from the CT image data using a physics model.
References
Faiz Khan and Roger Potish (Eds.) (1998). Treatment Planning in Radiation Oncology. Williams & Wilkins.
Jacob Van Dyk (Ed.) (1999). The Modern Technology of Radiation Oncology. Medical Physics Publishing. .
Ross I. Berbeco (Ed.) (2018). Beam's Eye View Imaging in Radiation Oncology. CRC Press. .
Louis Lemieux, Roger Jagoe, David R. Fish, Neil D. Kitchen, David G. Thomas, A patient-to-computed-tomography image registration method based on digitally reconstructed radiographs. Med. Phys. 21(11), 1749–1760 (1994). https://doi.org/10.1118/
|
https://en.wikipedia.org/wiki/Miller%20Puckette
|
Miller Smith Puckette (born 1959) is the associate director of the Center for Research in Computing and the Arts as well as a professor of music at the University of California, San Diego, where he has been since 1994.
Puckette is known for authoring Max, a graphical development environment for music and multimedia synthesis, which he developed while working at IRCAM in the late 1980s. He is also the author of Pure Data (Pd), a real-time performing platform for audio, video and graphical programming language for the creation of interactive computer music and multimedia works, written in the 1990s with input from many others in the computer music and free software communities.
Biography
An alumnus of St. Andrew's-Sewanee School in Tennessee, Miller Puckette got involved in computer music in 1979 at MIT with Barry Vercoe. In 1979 he became a Putnam Fellow.
He earned a Ph.D. in mathematics from Harvard University in 1986 after completing an undergraduate degree at MIT in 1980. He was a member of the MIT Media Lab from its opening in 1985 until 1987 before continuing his research at IRCAM, and since 1997 has been a part of the Global Visual Music project.
Max was first used to realize Pluton, the second piece in Philippe Manoury's series Sonus ex Machina.
He is the 2008 SEAMUS Award Recipient.
On May 11, 2011, he received the title of Doctor Honoris Causa from the University of Mons.
On July 21, 2012, he received an Honorary Degree from Bath Spa University in recognition of his extraordinary contribution to computer music research.
He was the recipient of the Gold Medal at the 1975 Math Olympiads and the Silver Medal at the 1976 Math Olympiads.
Selected publications
For a full list, see: http://msp.ucsd.edu/publications.html
Puckette, Miller (2004) “Who Owns our Software?: A first-person case study” Proceedings, ISEA, pp. 200–202, republished in September 2009 issue of Montréal: Communauté électroacoustique canadienne / Canadian Electro
|
https://en.wikipedia.org/wiki/Hazard%20pointer
|
In a multithreaded computing environment, hazard pointers are one approach to solving the problems posed by dynamic memory management of the nodes in a lock-free data structure. These problems generally arise only in environments that don't have automatic garbage collection.
Any lock-free data structure that uses the compare-and-swap primitive must deal with the ABA problem. For example, in a lock-free stack represented as an intrusively linked list, one thread may be attempting to pop an item from the front of the stack (A → B → C). It remembers the second-from-top value "B", and then performs compare_and_swap(target=&head, newvalue=B, expected=A). Unfortunately, in the middle of this operation, another thread may have done two pops and then pushed A back on top, resulting in the stack (A → C). The compare-and-swap succeeds in swapping `head` with `B`, and the result is that the stack now contains garbage (a pointer to the freed element "B").
Furthermore, any lock-free algorithm containing code of the form
Node* currentNode = this->head; // assume the load from "this->head" is atomic
Node* nextNode = currentNode->next; // assume this load is also atomic
suffers from another major problem, in the absence of automatic garbage collection. In between those two lines, it is possible that another thread may pop the node pointed to by this->head and deallocate it, meaning that the memory access through currentNode on the second line reads deallocated memory (which may in fact already be in use by some other thread for a completely different purpose).
Hazard pointers can be used to address both of these problems. In a hazard-pointer system, each thread keeps a list of hazard pointers indicating which nodes the thread is currently accessing. (In many systems this "list" may be limited to only one or two elements.) Nodes on the hazard pointer list must not be modified or deallocated by any other thread.
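A minimal sketch of the read side in C++11 (our own illustration, assuming a single hazard slot per thread; the names Node, head and my_hazard are placeholders, not from any particular library):

#include <atomic>

struct Node { int value; Node* next; };

std::atomic<Node*> head;                    // shared top of the list
thread_local std::atomic<Node*> my_hazard;  // this thread's single hazard pointer

Node* protect_head() {
    Node* p;
    do {
        p = head.load();          // candidate node
        my_hazard.store(p);       // publish the hazard pointer
        // re-check: the node may have been removed before the hazard became visible
    } while (p != head.load());
    return p;                     // safe to dereference until my_hazard is cleared
}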
When a thread wishes to remove a node, it places
|
https://en.wikipedia.org/wiki/Neighbor%20Discovery%20Protocol
|
The Neighbor Discovery Protocol (NDP), or simply Neighbor Discovery (ND), is a protocol of the Internet protocol suite used with Internet Protocol Version 6 (IPv6). It operates at the link layer of the Internet model, and is responsible for gathering various information required for network communication, including the configuration of local connections and the domain name servers and gateways.
The protocol defines five ICMPv6 packet types to perform functions for IPv6 similar to the Address Resolution Protocol (ARP) and Internet Control Message Protocol (ICMP) Router Discovery and Router Redirect protocols for IPv4. It provides many improvements over its IPv4 counterparts (RFC 4861, section 3.1). For example, it includes Neighbor Unreachability Detection (NUD), thus improving robustness of packet delivery in the presence of failing routers or links, or mobile nodes.
The Inverse Neighbor Discovery (IND) protocol extension (RFC 3122) allows nodes to determine and advertise an IPv6 address corresponding to a given link-layer address, similar to Reverse ARP for IPv4.
The Secure Neighbor Discovery Protocol (SEND), a security extension of NDP, uses Cryptographically Generated Addresses (CGA) and the Resource Public Key Infrastructure (RPKI) to provide an alternative mechanism for securing NDP with a cryptographic method that is independent of IPsec. Neighbor Discovery Proxy (ND Proxy) (RFC 4389) provides a service similar to IPv4 Proxy ARP and allows bridging multiple network segments within a single subnet prefix when bridging cannot be done at the link layer.
Functions
NDP defines five ICMPv6 packet types for the purpose of router solicitation, router advertisement, neighbor solicitation, neighbor advertisement, and network redirects.
Router Solicitation (Type 133) Hosts inquire with Router Solicitation messages to locate routers on an attached link. Routers which forward packets not addressed to them generate Router Advertisements immediately upon receipt of thi
|
https://en.wikipedia.org/wiki/T-norm
|
In mathematics, a t-norm (also T-norm or, unabbreviated, triangular norm) is a kind of binary operation used in the framework of probabilistic metric spaces and in multi-valued logic, specifically in fuzzy logic. A t-norm generalizes intersection in a lattice and conjunction in logic. The name triangular norm refers to the fact that in the framework of probabilistic metric spaces t-norms are used to generalize the triangle inequality of ordinary metric spaces.
Definition
A t-norm is a function T: [0, 1] × [0, 1] → [0, 1] that satisfies the following properties:
Commutativity: T(a, b) = T(b, a)
Monotonicity: T(a, b) ≤ T(c, d) if a ≤ c and b ≤ d
Associativity: T(a, T(b, c)) = T(T(a, b), c)
The number 1 acts as identity element: T(a, 1) = a
Since a t-norm is a binary algebraic operation on the interval [0, 1], infix algebraic notation is also common, with the t-norm usually denoted by $*$.
The defining conditions of the t-norm are exactly those of a partially ordered abelian monoid on the real unit interval [0, 1]. (Cf. ordered group.) The monoidal operation of any partially ordered abelian monoid L is therefore by some authors called a triangular norm on L.
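For illustration (our own sketch, not from the article), the three most prominent continuous t-norms can be written down directly from their formulas:

#include <algorithm>
#include <cassert>

double t_min(double a, double b)  { return std::min(a, b); }             // minimum (Gödel) t-norm
double t_prod(double a, double b) { return a * b; }                      // product t-norm
double t_luk(double a, double b)  { return std::max(0.0, a + b - 1.0); } // Łukasiewicz t-norm

int main() {
    // 1 acts as the identity element for each of them
    assert(t_min(0.5, 1.0)  == 0.5);
    assert(t_prod(0.5, 1.0) == 0.5);
    assert(t_luk(0.5, 1.0)  == 0.5);
}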
Classification of t-norms
A t-norm is called continuous if it is continuous as a function, in the usual interval topology on $[0, 1]^2$. (Similarly for left- and right-continuity.)
A t-norm is called strict if it is continuous and strictly monotone.
A t-norm is called nilpotent if it is continuous and each x in the open interval (0, 1) is nilpotent, that is, there is a natural number n such that $x * \dots * x$ (n times) equals 0.
A t-norm is called Archimedean if it has the Archimedean property, that is, if for each x, y in the open interval (0, 1) there is a natural number n such that $x * \dots * x$ (n times) is less than or equal to y.
The usual partial ordering of t-norms is pointwise, that is,
T1 ≤ T2 if T1(a, b) ≤ T2(a, b) for all a, b in [0, 1].
As functions, pointwise larger t-norms are sometimes call
|
https://en.wikipedia.org/wiki/List%20of%20common%20household%20pests
|
This is a list of common household pests – undesired animals that have a history of living, invading, causing damage, eating human foods, acting as disease vectors or causing other harms in human habitation.
Mammals
Mice
Field mice
House mice
Possums
Brushtail possum
Ringtail possum
Rats
Black rats
Brown rats
Wood rats
Cotton rats
Invertebrates
Ants
Argentine ants
Carpenter ants
Fire ants
Odorous house ants
Pharaoh ants
Thief ants
Bed bugs
Beetles
Woodworms
Death watch beetles
Furniture beetles
Weevils
Maize weevil
Rice weevil
Carpet beetles
Fur beetles
Varied carpet beetles
Spider beetles
Mealworm beetles
Centipedes
House centipedes
Cockroaches
Brown-banded cockroaches
German cockroaches
American cockroaches
Oriental cockroaches
Dust mites
Earwigs
Crickets
House crickets
Flies
Bottle flies
Blue bottle flies
Green bottle flies
House flies
Fruit flies
Mosquitoes
Moths
Almond moths
Indianmeal moths
Clothes moths
Common clothes moths
Brown house moths
Paper Lice
Red spiders
Silverfish
Spiders
Termites
Dampwood termites
Subterranean termites
Woodlouse
See also
Home-stored product entomology
List of notifiable diseases
Noxious weed
Pest (organism)
References
Pests (organism)
Nature-related lists
|
https://en.wikipedia.org/wiki/Saprobiont
|
Saprobionts are organisms that digest their food externally and then absorb the products. This process is called saprotrophic nutrition. Fungi are examples of saprobiontic organisms, which are a type of decomposer.
Saprobiontic organisms feed off dead and/or decaying biological materials. Digestion is accomplished by the secretion of digestive enzymes which break down cell tissues, allowing saprobionts to extract the nutrients they need while leaving the indigestible waste. This is called extracellular digestion. This process is very important in ecosystems as part of the nutrient cycle.
Saprobionts should not be confused with detritivores, another class of decomposers which digest internally.
These organisms can be good sources of extracellular enzymes for industrial processes such as the production of fruit juice. For instance, the fungus Aspergillus niger is used to produce pectinase, an enzyme which is used to break down pectin in juice concentrates, making the juice appear more translucent.
References
Ecology
Dead wood
|
https://en.wikipedia.org/wiki/Mimic%20function
|
A mimic function changes a file $A$ so it assumes the statistical properties of another file $B$. That is, if $p(t, B)$ is the probability of some substring $t$ occurring in $B$, then a mimic function $f$ recodes $A$ so that $p(t, f(A))$ approximates $p(t, B)$ for all strings $t$ of length less than some $n$. It is commonly considered to be one of the basic techniques for hiding information, often called steganography.
The simplest mimic functions use simple statistical models to pick the symbols in the output. If the statistical model says that item $x_1$ occurs with probability $p_1$ and item $x_2$ occurs with probability $p_2$, then a random number is used to choose between outputting $x_1$ or $x_2$ with probability $p_1$ or $p_2$, respectively.
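A minimal sketch of this selection step in C++ (our own illustration; a real mimic function would drive the choices with the hidden data, for example through a Huffman or arithmetic decoder, rather than with pure randomness):

#include <map>
#include <random>
#include <string>
#include <vector>

// Emit `length` symbols so that each symbol appears with (approximately) the
// probability it has in the file being mimicked.
std::string mimic(const std::map<char, double>& target_probs, std::size_t length) {
    std::vector<char> symbols;
    std::vector<double> weights;
    for (const auto& [symbol, prob] : target_probs) {
        symbols.push_back(symbol);
        weights.push_back(prob);
    }
    std::mt19937 rng(std::random_device{}());
    std::discrete_distribution<std::size_t> pick(weights.begin(), weights.end());

    std::string out;
    for (std::size_t i = 0; i < length; ++i)
        out.push_back(symbols[pick(rng)]);   // choose each symbol with its target probability
    return out;
}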
Even more sophisticated models use reversible Turing machines.
References
Steganography
|
https://en.wikipedia.org/wiki/Multi-adjoint%20logic%20programming
|
Multi-adjoint logic programming defines the syntax and semantics of a logic program in such a way that the underlying mathematical structure justifying the results is a residuated lattice and/or an MV-algebra.
The definition of a multi-adjoint logic program is given, as usual in fuzzy logic programming, as a set of weighted rules and facts of a given formal language F. Notice that we are allowed to use different implications in our rules.
Definition: A multi-adjoint logic program is a set P of rules of the form <(A ←i B), δ> such that:
1. The rule (A ←i B) is a formula of F;
2. The confidence factor δ is an element (a truth-value) of L;
3. The head A is an atom;
4. The body B is a formula built from atoms B1, …, Bn (n ≥ 0) by the use of conjunctors, disjunctors, and aggregators.
5. Facts are rules with body ⊤.
6. A query (or goal) is an atom intended as a question ?A prompting the system.
Implementations
Implementations of Multi-adjoint logic programming:
Rfuzzy,
Floper,
and others.
Programming languages
|
https://en.wikipedia.org/wiki/Waterproofing
|
Waterproofing is the process of making an object or structure waterproof or water-resistant so that it remains relatively unaffected by water or resists the ingress of water under specified conditions. Such items may be used in wet environments or underwater to specified depths.
Water-resistant and waterproof often refer to resistance to penetration of water in its liquid state and possibly under pressure, whereas damp proof refers to resistance to humidity or dampness. Permeation of water vapour through a material or structure is reported as a moisture vapor transmission rate (MVTR).
The hulls of boats and ships were once waterproofed by applying tar or pitch. Modern items may be waterproofed by applying water-repellent coatings or by sealing seams with gaskets or o-rings.
Waterproofing is used in reference to building structures (such as basements, decks, or wet areas), watercraft, canvas, clothing (raincoats or waders), electronic devices and paper packaging (such as cartons for liquids).
In construction
In construction, a building or structure is waterproofed with the use of membranes and coatings to protect contents and structural integrity. The waterproofing of the building envelope in construction specifications is listed under 07 - Thermal and Moisture Protection within MasterFormat 2004, by the Construction Specifications Institute, and includes roofing and waterproofing materials.
In building construction, waterproofing is a fundamental aspect of creating a building envelope, which is a controlled environment. The roof covering materials, siding, foundations, and all of the various penetrations through these surfaces must be water-resistant and sometimes waterproof. Roofing materials are generally designed to be water-resistant and shed water from a sloping roof, but in some conditions, such as ice damming and on flat roofs, the roofing must be waterproof. Many types of waterproof membrane systems are available, including felt paper or tar paper wi
|
https://en.wikipedia.org/wiki/Attribute-oriented%20programming
|
Attribute-oriented programming (@OP) is a technique for embedding metadata, namely attributes, within program code.
Attribute-oriented programming in various languages
Java
With the inclusion of Metadata Facility for Java (JSR-175) into the J2SE 5.0 release it is possible to utilize attribute-oriented programming right out of the box.
XDoclet library makes it possible to use attribute-oriented programming approach in earlier versions of Java.
C#
The C# language has supported attributes from its very first release. These attributes are used to give run-time information and are not used by a preprocessor. Currently, with source generators, attributes can also be used to drive generation of additional code at compile time.
UML
The Unified Modeling Language (UML) supports a kind of attribute called stereotypes.
Hack
The Hack programming language supports attributes. Attributes can be attached to various program entities, and information about those attributes can be retrieved at run-time via reflection.
Tools
Annotation Processing Tool (apt)
Spoon, an Annotation-Driven Java Program Transformer
XDoclet, a Javadoc-Driven Program Generator
References
External links
Don Schwarz. Peeking Inside the Box: Attribute-Oriented Programming with Java5
Sun JSR 175
Attributes and Reflection - sample chapter from Programming C# book
Modeling Turnpike Project
Fraclet : An annotation-based programming model for the Fractal component model
Attribute Enabled Software Development book
Programming paradigms
|
https://en.wikipedia.org/wiki/How%20to%20Lie%20with%20Statistics
|
How to Lie with Statistics is a book written by Darrell Huff in 1954, presenting an introduction to statistics for the general reader. Not a statistician, Huff was a journalist who wrote many how-to articles as a freelancer.
The book is a brief, breezy illustrated volume outlining the misuse of statistics and errors in the interpretation of statistics, and how errors create incorrect conclusions.
In the 1960s and 1970s, it became a standard textbook introduction to the subject of statistics for many college students. It has become one of the best-selling statistics books in history, with over one and a half million copies sold in the English-language edition. It has also been widely translated.
Themes of the book include "Correlation does not imply causation" and "Using random sampling". It also shows how statistical graphs can be used to distort reality, for example by truncating the bottom of a line or bar chart, so that differences seem larger than they are, or by representing one-dimensional quantities on a pictogram by two- or three-dimensional objects to compare their sizes, so that the reader forgets that the images do not scale the same way the quantities do.
The original edition contained illustrations by artist Irving Geis. In a UK edition, Geis' illustrations were replaced by cartoons by Mel Calman.
See also
Lies, damned lies, and statistics
Notes
References
Darrell Huff, (1954) How to Lie with Statistics (illust. I. Geis), Norton, New York,
External links
1954 non-fiction books
Statistics books
Misuse of statistics
|
https://en.wikipedia.org/wiki/Spermatheca
|
The spermatheca (pronounced plural: spermathecae ), also called receptaculum seminis (plural: receptacula seminis), is an organ of the female reproductive tract in insects, e.g. ants, bees, some molluscs, oligochaeta worms and certain other invertebrates and vertebrates. Its purpose is to receive and store sperm from the male or, in the case of hermaphrodites, the male component of the body. Spermathecae can sometimes be the site of fertilisation when the oocytes are sufficiently developed.
Some species of animal have multiple spermathecae. For example, certain species of earthworms have four pairs of spermathecae—one pair each in the 6th, 7th, 8th, and 9th segments. The spermathecae receive and store the spermatozoa of another earthworm during copulation. They are lined with epithelium and are variable in shape: some are thin, heavily coiled tubes, while others are vague outpocketings from the main reproductive tract. It is one of the many variations in sexual reproduction.
The nematode Caenorhabditis elegans has two spermathecae, one at the end of each gonad. The C. elegans spermatheca is made up of 24 smooth muscle-like cells that form a stretchable tubular structure. Actin filaments line the spermatheca in a circumferential manner. The C. elegans spermatheca is used as a model to study mechanotransduction.
An apiculturist may examine the spermatheca of a dead queen bee to find out whether it had received sperm from a male. In many species of stingless bees, especially Melipona bicolor, the queen lays her eggs during the provisioning and oviposition process and the spermatheca fertilizes the egg as it passes along the oviduct. The haplo-diploid system of sex determination makes it possible for the queen to choose the sex of the egg.
See also
Cyphopods, sperm receptacles in female millipedes
Female sperm storage
Reproductive system of gastropods
References
Animal reproductive system
Sexual anatomy
Animal anatomy
Reproduction
|
https://en.wikipedia.org/wiki/Directory%20System%20Agent
|
A Directory System Agent (DSA) is the element of an X.500 directory service that provides User Agents with access to a portion of the directory (usually the portion associated with a single Organizational Unit). X.500 is an international standard developed by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU-T). The model and function of a directory system agent are specified in ITU-T Recommendation X.501.
Active Directory
In Microsoft's Active Directory the DSA is a collection of servers and daemon processes that run on Windows Server systems that provide various means for clients to access the Active Directory data store.
Clients connect to an Active Directory DSA using various communications protocols:
LDAP version 3.0—used by Windows 2000 and Windows XP clients
LDAP version 2.0
Security Account Manager (SAM) interface—used by Windows NT clients
MAPI RPC interface—used by Microsoft Exchange Server and other MAPI clients
A proprietary RPC interface—used by Active Directory DSAs to communicate with one another and replicate data amongst themselves
References
RFCs
RFC 2148 — Deployment of the Internet White Pages Service
Computer networking
Identity management
|
https://en.wikipedia.org/wiki/Linear%20actuator
|
A linear actuator is an actuator that creates linear motion (i.e., in a straight line), in contrast to the circular motion of a conventional electric motor. Linear actuators are used in machine tools and industrial machinery, in computer peripherals such as disk drives and printers, in valves and dampers, and in many other places where linear motion is required. Hydraulic or pneumatic cylinders inherently produce linear motion. Many other mechanisms are used to generate linear motion from a rotating motor.
Types
Mechanical actuators
Mechanical linear actuators typically operate by conversion of rotary motion into linear motion. Conversion is commonly made via a few simple types of mechanism:
Screw: leadscrew, screw jack, ball screw and roller screw actuators all operate on the principle of the simple machine known as the screw. Rotating the actuator's nut causes the screw shaft to move in a line.
Wheel and axle: Hoist, winch, rack and pinion, chain drive, belt drive, rigid chain and rigid belt actuators operate on the principle of the wheel and axle. A rotating wheel moves a cable, rack, chain or belt to produce linear motion.
Cam: Cam actuators function on a principle similar to that of the wedge, but provide relatively limited travel. As a wheel-like cam rotates, its eccentric shape provides thrust at the base of a shaft.
Some mechanical linear actuators only pull, such as hoists, chain drive and belt drives. Others only push (such as a cam actuator). Pneumatic and hydraulic cylinders, or lead screws can be designed to generate force in both directions.
Mechanical actuators typically convert rotary motion of a control knob or handle into linear displacement via screws and/or gears to which the knob or handle is attached. A jackscrew or car jack is a familiar mechanical actuator. Another family of actuators are based on the segmented spindle. Rotation of the jack handle is converted mechanically into the linear motion of the jack head. Mechanical actuators ar
|
https://en.wikipedia.org/wiki/Extension%20by%20new%20constant%20and%20function%20names
|
In mathematical logic, a theory can be extended with
new constants or function names under certain conditions with assurance that the extension will introduce
no contradiction. Extension by definitions is perhaps the best-known approach, but it requires
unique existence of an object with the desired property. Addition of new names can also be done
safely without uniqueness.
Suppose that a closed formula
$\exists x_1 \ldots \exists x_m\,\varphi(x_1,\ldots,x_m)$
is a theorem of a first-order theory $T$. Let $T_1$ be a theory obtained from $T$ by extending its language with new constants
$a_1,\ldots,a_m$
and adding a new axiom
$\varphi(a_1,\ldots,a_m)$.
Then $T_1$ is a conservative extension of $T$, which means that the theory $T_1$ has the same set of theorems in the original language (i.e., without constants $a_1,\ldots,a_m$) as the theory $T$.
Such a theory can also be conservatively extended by introducing a new functional symbol:
Suppose that a closed formula $\forall x_1 \ldots \forall x_n\,\exists y\,\varphi(y, x_1,\ldots,x_n)$ is a theorem of a first-order theory $T$, where we denote $\vec{x} = (x_1,\ldots,x_n)$. Let $T_1$ be a theory obtained from $T$ by extending its language with a new functional symbol $f$ (of arity $n$) and adding a new axiom $\forall x_1 \ldots \forall x_n\,\varphi(f(\vec{x}), x_1,\ldots,x_n)$. Then $T_1$ is a conservative extension of $T$, i.e. the theories $T$ and $T_1$ prove the same theorems not involving the functional symbol $f$.
Shoenfield states the theorem in the form for a new function name, and constants are the same as functions
of zero arguments. In formal systems that admit ordered tuples, extension by multiple constants as shown here
can be accomplished by addition of a new constant tuple and the new constant names
having the values of elements of the tuple.
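For a simple illustration (our own example): if a theory $T$ of arithmetic proves $\exists x\,(x + x = x)$, then the theory $T_1$ obtained by adding a new constant $c$ and the axiom $c + c = c$ is a conservative extension of $T$; every theorem of $T_1$ that does not mention $c$ is already a theorem of $T$.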
See also
Conservative extension
Extension by definition
References
Mathematical logic
Theorems in the foundations of mathematics
Proof theory
|
https://en.wikipedia.org/wiki/Gumstix
|
Gumstix is an American multinational corporation headquartered in Redwood City, California. It develops and manufactures small system boards comparable in size to a stick of gum. In 2003, when it was first fully functional, it used ARM architecture system on a chip (SoC) and an operating system based on Linux 2.6 kernel. It has an online tool called Geppetto that allows users to design their own boards. In August 2013 it started a crowd-funding service to allow a group of users that want to get a custom design manufactured to share the setup costs.
See also
Arduino
Embedded system
Raspberry Pi
Stick PC
References
External links
Gumstix users wiki
Gumstix mailing list archives on nabble
Embedded Linux
Linux-based devices
Computer companies of the United States
Companies based in Redwood City, California
Network computer (brand)
Motherboard form factors
Motherboard companies
Privately held companies based in California
Single-board computers
|
https://en.wikipedia.org/wiki/Code%20point
|
A code point, codepoint or code position is a unique position in a quantized n-dimensional space that has been assigned a semantic meaning.
In other words, a code point is a particular position in a table, where the position has been assigned a meaning. The table has discrete positions (1, 2, 3, 4, but not fractions) and may be one dimensional (a column), two dimensional (like cells in a spreadsheet), three dimensional (sheets in a workbook), etc... in any number of dimensions.
Code points are used in a multitude of formal information processing and telecommunication standards. For example ITU-T Recommendation T.35 contains a set of country codes for telecommunications equipment (originally fax machines) which allow equipment to indicate its country of manufacture or operation. In T.35, Argentina is represented by the code point 0x07, Canada by 0x20, Gambia by 0x41, etc.
In character encoding
Code points are commonly used in character encoding, where a code point is a numerical value that maps to a specific character. In character encoding code points usually represent a single grapheme—usually a letter, digit, punctuation mark, or whitespace—but sometimes represent symbols, control characters, or formatting. The set of all possible code points within a given encoding/character set make up that encoding's codespace.
For example, the character encoding scheme ASCII comprises 128 code points in the range 0hex to 7Fhex, Extended ASCII comprises 256 code points in the range 0hex to FFhex, and Unicode comprises code points in the range 0hex to 10FFFFhex. The Unicode code space is divided into seventeen planes (the basic multilingual plane, and 16 supplementary planes), each with 65,536 (= 2^16) code points. Thus the total size of the Unicode code space is 17 × 65,536 = 1,114,112.
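As a small illustration (our own snippet, not from any standard), a Unicode code point can be stored in a C++ char32_t, whose numeric value is the code point itself:

#include <iostream>

int main() {
    char32_t a    = U'A';       // code point U+0041
    char32_t euro = U'\u20AC';  // code point U+20AC, the euro sign
    std::cout << static_cast<unsigned long>(a) << " "       // prints 65
              << static_cast<unsigned long>(euro) << "\n";  // prints 8364
}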
In Unicode
For Unicode, the particular sequence of bits is called a code unit – for the UCS-4 encoding, any code point is encoded as 4-byte (octet) binary numbers, while in the UTF-8 encoding, different code
|
https://en.wikipedia.org/wiki/Orthomode%20transducer
|
An orthomode transducer (OMT) is a waveguide component that is commonly referred to as a polarisation duplexer. Orthomode is a contraction of orthogonal mode. Orthomode transducers serve either to combine or to separate two orthogonally polarized microwave signal paths. One of the paths forms the uplink, which is transmitted over the same waveguide as the received signal path, or downlink path. Such a device may be part of a very small aperture terminal (VSAT) antenna feed or a terrestrial microwave radio feed; for example, OMTs are often used with a feed horn to isolate orthogonal polarizations of a signal and to transfer transmit and receive signals to different ports.
VSAT and satellite Earth station applications
For VSAT modems the transmission and reception paths are at 90° to each other, or in other words, the signals are orthogonally polarized with respect to each other. This orthogonal shift between the two signal paths provides an isolation of approximately 40 dB in the Ku band and Ka band radio frequency bands.
Hence this device serves in an essential role as the junction element of the outdoor unit (ODU) of a VSAT modem. It protects the receiver front-end element (the low-noise block converter, LNB) from burn-out by the power of the output signal generated by the block up converter (BUC). The BUC is also connected to the feed horn through a wave guide port of the OMT junction device.
Orthomode transducers are used in dual-polarized VSATs, in sparsely populated areas, radar antennas, radiometers, and communications links. They are usually connected to the antenna's down converter or LNB and to the high-power amplifier (HPA), attached to a transmitting antenna.
When the transmitted and received radio signal to and from the antenna have two different polarizations (horizontal and vertical), they are said to be orthogonal. This means that the modulation planes of the two radio signal waves are at 90 degrees to each other. The OMT device is used to separ
|
https://en.wikipedia.org/wiki/Lamina%20%28anatomy%29
|
Lamina is a general anatomical term meaning "plate" or "layer". It is used in both gross anatomy and microscopic anatomy to describe structures.
Some examples include:
The laminae of the thyroid cartilage: two leaf-like plates of cartilage that make up the walls of the structure.
The vertebral laminae: plates of bone that form the posterior walls of each vertebra, enclosing the spinal cord.
The laminae of the thalamus: the layers of thalamus tissue.
The lamina propria: a connective tissue layer under the epithelium of an organ.
The nuclear lamina: a dense fiber network inside the nucleus of cells.
The lamina affixa: a layer of epithelium growing on the surface of the thalamus.
The lamina cribrosa, a term with two different meanings.
References
Anatomy
|
https://en.wikipedia.org/wiki/Cursor%20%28user%20interface%29
|
In human–computer interaction, a cursor is an indicator used to show the current position on a computer monitor or other display device that will respond to input.
Etymology
Cursor is Latin for 'runner'. A cursor is a name given to the transparent slide engraved with a hairline used to mark a point on a slide rule. The term was then transferred to computers through analogy.
On 14 November 1963, while attending a conference on computer graphics in Reno, Nevada, Douglas Engelbart of Augmentation Research Center (ARC) first expressed his thoughts to pursue his objective of developing both hardware and software computer technology to "augment" human intelligence by pondering how to adapt the underlying principles of the planimeter to inputting X- and Y-coordinate data, and envisioned something like the cursor of a mouse he initially called a "bug", which, in a "3-point" form, could have a "drop point and 2 orthogonal wheels". He wrote that the "bug" would be "easier" and "more natural" to use, and unlike a stylus, it would stay still when let go, which meant it would be "much better for coordination with the keyboard."
According to Roger Bates, a young hardware designer at ARC under Bill English, the cursor on the screen was for some unknown reason also referred to as "CAT" at the time, which led to calling the new pointing device a "mouse" as well.
Text cursor
In most command-line interfaces or text editors, the text cursor, also known as a caret, is an underscore, a solid rectangle, or a vertical line, which may be flashing or steady, indicating where text will be placed when entered (the insertion point). In text mode displays, it was not possible to show a vertical bar between characters to show where the new text would be inserted, so an underscore or block cursor was used instead. In situations where a block was used, the block was usually created by inverting the pixels of the character using the boolean math exclusive or function. On text editors and word
|
https://en.wikipedia.org/wiki/Two-photon%20excitation%20microscopy
|
Two-photon excitation microscopy (TPEF or 2PEF) is a fluorescence imaging technique that is particularly well-suited to image scattering living tissue of up to about one millimeter in thickness. Unlike traditional fluorescence microscopy, where the excitation wavelength is shorter than the emission wavelength, two-photon excitation requires simultaneous excitation by two photons with longer wavelength than the emitted light. The laser is focused onto a specific location in the tissue and scanned across the sample to sequentially produce the image. Due to the non-linearity of two-photon excitation, mainly fluorophores in the micrometer-sized focus of the laser beam are excited, which results in the spatial resolution of the image. This contrasts with confocal microscopy, where the spatial resolution is produced by the interaction of excitation focus and the confined detection with a pinhole.
Two-photon excitation microscopy typically uses near-infrared (NIR) excitation light which can also excite fluorescent dyes. Using infrared light minimizes scattering in the tissue because infrared light is scattered less in typical biological tissues. Due to the multiphoton absorption, the background signal is strongly suppressed. Both effects lead to an increased penetration depth for this technique. Two-photon excitation can be a superior alternative to confocal microscopy due to its deeper tissue penetration, efficient light detection, and reduced photobleaching.
Concept
Two-photon excitation employs two-photon absorption, a concept first described by Maria Goeppert Mayer (1906–1972) in her doctoral dissertation in 1931, and first observed in 1961 in a CaF2:Eu2+ crystal using laser excitation by Wolfgang Kaiser. Isaac Abella showed in 1962 in caesium vapor that two-photon excitation of single atoms is possible.
Two-photon excitation fluorescence microscopy has similarities to other confocal laser microscopy techniques such as laser scanning confocal microscopy and Raman m
|
https://en.wikipedia.org/wiki/Eastern%20equine%20encephalitis
|
Eastern equine encephalitis (EEE), commonly called Triple E or sleeping sickness (not to be confused with African trypanosomiasis), is a disease caused by a zoonotic mosquito-vectored Togavirus that is present in North, Central, and South America, and the Caribbean. EEE was first recognized in Massachusetts, United States, in 1831, when 75 horses died mysteriously of viral encephalitis.
Epizootics in horses have continued to occur regularly in the United States. It can also be identified in donkeys and zebras. Due to the rarity of the disease, its occurrence can cause economic impact beyond the cost of horses and poultry. EEE is found today in the eastern part of the United States and is often associated with coastal plains. It can most commonly be found in East Coast and Gulf Coast states. In Florida, about one to two human cases are reported a year, although over 60 cases of equine encephalitis are reported. In years in which conditions are favorable for the disease, the number of equine cases is over 200. Diagnosing equine encephalitis is challenging because many of the symptoms are shared with other illnesses and patients can be asymptomatic. Confirmations may require a sample of cerebral spinal fluid or brain tissue, although CT scans and MRI scans are used to detect encephalitis. This could be an indication that the need to test for EEE is necessary. If a biopsy of the cerebral spinal fluid is taken, it is sent to a specialized laboratory for testing.
Eastern equine encephalitis virus (EEEV) is closely related to Venezuelan equine encephalitis virus and western equine encephalitis virus.
Signs and symptoms
The incubation period for Eastern equine encephalitis virus (EEEV) disease ranges from 4 to 10 days. The illness can progress either systematically or encephalitically, depending on the person's age. Encephalitic disease involves swelling of the brain and can be asymptomatic, while the systemic illness occurs very abruptly. Those with the systemic illnes
|
https://en.wikipedia.org/wiki/Cyclically%20reduced%20word
|
In mathematics, a cyclically reduced word is a concept of combinatorial group theory.
Let F(X) be a free group. Then a word w in F(X) is said to be cyclically reduced if and only if every cyclic permutation of the word is reduced.
Properties
The cyclic shifts and the inverse of a cyclically reduced word are again cyclically reduced.
Every word is conjugate to a cyclically reduced word. The cyclically reduced words are minimal-length representatives of the conjugacy classes in the free group. This representative is not uniquely determined, but it is unique up to cyclic shifts (since every cyclic shift is a conjugate element).
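Equivalently, a word is cyclically reduced exactly when it is reduced and its first letter is not the inverse of its last letter, which makes the property straightforward to check. A minimal Python sketch follows; the convention that an uppercase letter denotes the inverse of the corresponding lowercase generator is an assumption made only for this example.

def are_inverse(x, y):
    # Two letters are mutually inverse iff they are the same generator in opposite cases.
    return x != y and x.lower() == y.lower()

def is_reduced(word):
    # A word is reduced if no letter is immediately followed by its inverse.
    return all(not are_inverse(word[i], word[i + 1]) for i in range(len(word) - 1))

def is_cyclically_reduced(word):
    # Equivalent to "every cyclic permutation is reduced": the word itself is reduced
    # and its first and last letters are not mutually inverse.
    if len(word) <= 1:
        return True
    return is_reduced(word) and not are_inverse(word[0], word[-1])

# "abA" (i.e. a b a^-1) is reduced but not cyclically reduced: it is conjugate to "b".
assert is_reduced("abA") and not is_cyclically_reduced("abA")
assert is_cyclically_reduced("ab")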
References
Combinatorial group theory
Combinatorics on words
|
https://en.wikipedia.org/wiki/ResKnife
|
ResKnife is an open-source resource editor for the Apple Macintosh platform. It supports reading and writing resource maps to any fork (data, resource or otherwise) and has basic template-based and hexadecimal editing functionality. ResKnife can export resource data to flat files and supports third-party plug-in editors.
See also
ResEdit
External links
- (Source code and documentation)
ResKnife at CNet Downloads (PPC Binary Download)
ResKnife Lion Compile (OSX 10.7 (Lion) compatible version)
C++ software
Objective-C software
MacOS-only free software
Programming tools
|
https://en.wikipedia.org/wiki/Master%20of%20Mathematics
|
A Master of Mathematics (or MMath) degree is a specific advanced integrated Master's degree for courses in the field of mathematics.
United Kingdom
In the United Kingdom, the MMath is the internationally recognized standard qualification after a four-year course in mathematics at a university.
The MMath programme was set up by most leading universities after the Neumann Report in 1992. It is classed as a level 7 qualification in the Frameworks of Higher Education Qualifications of UK Degree-Awarding Bodies. The UCAS course codes for the MMath degrees start at G100 upwards, most courses taking the codes G101 - G104.
Universities which offer MMath degrees include:
Aberystwyth University
University of Bath
University of Bristol (MSci)
Brunel University
University of Birmingham (MSci)
Cardiff University
University of Cambridge
City University London
University of Central Lancashire
University of Dundee
University of Durham
University of East Anglia
University of Edinburgh
University of Essex
University of Exeter
University of Glasgow
Heriot-Watt University
University of Hull
University of Keele
University of Kent
Lancaster University
University of Leeds
University of Leicester
University of Lincoln
University of Liverpool
Liverpool Hope University
Loughborough University
University of Manchester
Manchester Metropolitan University
Middlesex University (from 2014)
Newcastle University
Northumbria University
University of Nottingham
Nottingham Trent University
Open University (until 2007)
Oxford Brookes University
University of Oxford
University of Plymouth
University of Portsmouth
University of Reading
University of St Andrews
University of Sheffield
University of Southampton
University of Strathclyde
University of Surrey
University of Sussex
Swansea University
University of Warwick
University of York
Notes
Canada
In Canada, the MMath is a graduate degree offered by the University of Waterloo. The length of the MMath degree program is typically between one and two ye
|
https://en.wikipedia.org/wiki/Subcloning
|
In molecular biology, subcloning is a technique used to move a particular DNA sequence from a parent vector to a destination vector.
Subcloning is not to be confused with molecular cloning, a related technique.
Procedure
Restriction enzymes are used to excise the gene of interest (the insert) from the parent vector. The insert is purified in order to isolate it from other DNA molecules; a common purification method is gel isolation. The number of copies of the gene is then amplified using the polymerase chain reaction (PCR).
Simultaneously, the same restriction enzymes are used to digest (cut) the destination vector. The idea behind using the same restriction enzymes is to create complementary sticky ends, which will facilitate ligation later on. A phosphatase, commonly calf-intestinal alkaline phosphatase (CIAP), is also added to prevent self-ligation of the destination vector. The digested destination vector is then isolated and purified.
The insert and the destination vector are then mixed together with DNA ligase. A typical molar ratio of insert genes to destination vectors is 3:1; by increasing the insert concentration, self-ligation is further decreased. After letting the reaction mixture sit for a set amount of time at a specific temperature (dependent upon the size of the strands being ligated; for more information see DNA ligase), the insert should become successfully incorporated into the destination plasmid.
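In practice the molar ratio is converted into DNA masses before setting up the ligation, since insert and vector are measured in nanograms. The short Python sketch below shows the usual back-of-the-envelope conversion; the fragment lengths and vector mass in the example are hypothetical, not taken from any particular protocol.

def insert_mass_ng(vector_mass_ng, vector_len_bp, insert_len_bp, molar_ratio=3.0):
    # For double-stranded DNA, mass is proportional to length, so an n:1 molar ratio
    # of insert to vector corresponds to n * (insert length / vector length) by mass.
    return vector_mass_ng * (insert_len_bp / vector_len_bp) * molar_ratio

# Hypothetical example: 50 ng of a 3000 bp destination vector and a 750 bp insert.
print(insert_mass_ng(50, 3000, 750))  # 37.5 ng of insert for a 3:1 molar ratio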
Amplification of product plasmid
The plasmid is often transformed into a bacterium such as E. coli. Ideally, when the bacterium divides, the plasmid is also replicated; in the best case, each bacterial cell carries several copies of the plasmid. After a sufficient number of bacterial colonies have grown, they can be miniprepped to harvest the plasmid DNA.
Selection
In order to ensure growth of only transformed bacteria (which carry the desired plasmids to be harvested), a marker gene is used in the destination vector for selection. Typical marker g
|
https://en.wikipedia.org/wiki/KJZZ-TV
|
KJZZ-TV (channel 14) is an independent television station in Salt Lake City, Utah, United States. It is owned by Sinclair Broadcast Group alongside CBS affiliate KUTV (channel 2) and MyNetworkTV affiliate KMYU (channel 12) in St. George. The stations share studios on South Main Street in downtown Salt Lake City, while KJZZ-TV's transmitter is located on Farnsworth Peak in the Oquirrh Mountains, southwest of Salt Lake City. KJZZ-TV is the ATSC 3.0 (Next Gen TV) host station for the Salt Lake City market; in turn, other stations broadcast its subchannels on its behalf.
The station went on the air as KXIV in 1989. It functioned as the second independent station for the Salt Lake City area. In 1993, Larry H. Miller, the then-owner of the Utah Jazz of the NBA, purchased the station and renamed it KJZZ-TV; it also became the new TV home of the basketball team for 16 seasons. During Miller's ownership, the station affiliated for five years with UPN, with the station's decision not to renew leading to accusations of racism against management; in the latter years, operations and programming were outsourced in turn to two other Salt Lake stations. Sinclair purchased KJZZ-TV from the Miller family in 2016. The station airs syndicated programming and local newscasts from KUTV.
Under a new rights agreement between current Jazz owner Ryan Smith and Sinclair, preseason and regular-season Jazz games were set to return to the station in 2023.
History
"Real TV"
An original construction permit was granted by the Federal Communications Commission (FCC) on December 6, 1984, to American Television of Utah, Inc., a subsidiary of Salt Lake City-based American Stores Company, for a full-power television station on UHF channel 14 to serve Salt Lake City and the surrounding area. American Stores had filed for the construction permit in 1979; its original intention for the station was to broadcast subscription television programming, as it would eventually do on a microwave distribution system k
|
https://en.wikipedia.org/wiki/Remote%20keyless%20system
|
A remote keyless system (RKS), also known as remote keyless entry (RKE) or remote central locking, is an electronic lock that controls access to a building or vehicle by using an electronic remote control (activated by a handheld device or automatically by proximity). RKS largely and quickly superseded keyless entry, an earlier technology that restricted locking and unlocking functions to vehicle-mounted keypads.
Widely used in automobiles, an RKS performs the functions of a standard car key without physical contact. When within a few yards of the car, pressing a button on the remote can lock or unlock the doors, and may perform other functions.
A remote keyless system can include both remote keyless entry (RKE), which unlocks the doors, and remote keyless ignition (RKI), which starts the engine.
History
Remote keyless entry was patented in 1981 by Paul Lipschultz, who worked for Niemans (a supplier of security components to the car industry) and had developed a number of automotive security devices. His electrically actuated lock system could be controlled by a handheld fob that streamed infrared data. Filed in 1979 and granted in 1981, the patent described a "coded pulse signal generator and battery-powered infra-red radiation emitter." In some geographic areas, the system is called a PLIP system, or Plipper, after Lipschultz. Infrared technology was superseded in 1995 when a European frequency was standardised.
Remote keyless systems using a handheld transmitter first appeared on the French-made Renault Fuego in 1982, and as an option on several American Motors vehicles in 1983, including the Renault Alliance. The feature gained its first widespread availability in the U.S. on several General Motors vehicles in 1989.
Prior to remote keyless entry, a number of non-remote keyless entry systems were introduced, including Ford's 1980 system introduced on the Ford Thunderbird, Mercury Cougar, Lincoln Continental
|
https://en.wikipedia.org/wiki/Requirements%20management
|
Requirements management is the process of documenting, analyzing, tracing, prioritizing and agreeing on requirements and then controlling change and communicating to relevant stakeholders. It is a continuous process throughout a project. A requirement is a capability to which a project outcome (product or service) should conform.
Overview
The purpose of requirements management is to ensure that an organization documents, verifies, and meets the needs and expectations of its customers and internal or external stakeholders. Requirements management begins with the analysis and elicitation of the objectives and constraints of the organization. It further includes planning for requirements, integrating requirements, organizing the attributes used to work with them, maintaining relationships to other information delivered against the requirements, and managing changes to all of these.
The traceability thus established is used in managing requirements to report back the fulfilment of company and stakeholder interests in terms of compliance, completeness, coverage, and consistency. Traceability also supports change management within requirements management by helping to understand the impacts of changes through requirements or other related elements (e.g., functional impacts through relations to the functional architecture) and by facilitating the introduction of those changes.
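As a toy illustration of the coverage and change-impact aspects of traceability, the links can be thought of as a mapping from requirements to the artifacts that verify them; the requirement and test-case identifiers below are invented for this sketch, and real requirements-management tools store far richer attributes and relations.

# Hypothetical requirement IDs traced to the test cases that verify them.
traceability = {
    "REQ-001": ["TC-101", "TC-102"],
    "REQ-002": ["TC-103"],
    "REQ-003": [],  # no verifying artifact yet -> a coverage gap
}

uncovered = [req for req, artifacts in traceability.items() if not artifacts]
coverage = 1 - len(uncovered) / len(traceability)
print(f"coverage: {coverage:.0%}, uncovered: {uncovered}")

# A change-impact query is the reverse lookup: which requirements are affected
# if test case TC-103 (or the behaviour it verifies) changes?
impacted = [req for req, artifacts in traceability.items() if "TC-103" in artifacts]
print(impacted)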
Requirements management involves communication between the project team members and stakeholders, and adjustment to requirements changes throughout the course of the project. To prevent one class of requirements from overriding another, constant communication among members of the development team is critical. For example, in software development for internal applications, the business has such strong needs that it may ignore user requirements, or believe that in creating use cases, the user requirements are being taken care of.
Traceability
Requirements traceability is concerne
|
https://en.wikipedia.org/wiki/Virtual%20routing%20and%20forwarding
|
In IP-based computer networks, virtual routing and forwarding (VRF) is a technology that allows multiple instances of a routing table to co-exist within the same router at the same time. One or more logical or physical interfaces may be assigned to a VRF, and VRFs do not share routes; therefore, packets are only forwarded between interfaces in the same VRF. VRFs are the TCP/IP layer-3 equivalent of a VLAN. Because the routing instances are independent, the same or overlapping IP addresses can be used without conflicting with each other. Network functionality is improved because network paths can be segmented without requiring multiple routers.
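Conceptually, each VRF is simply an independent routing table selected by the VRF to which the ingress interface belongs, which is why overlapping prefixes in different VRFs do not conflict. The Python sketch below illustrates the idea; the VRF names, prefixes, and next hops are made up for this example and do not reflect any particular vendor's implementation.

import ipaddress

# One routing table per VRF; the same prefix may appear in several VRFs.
vrf_tables = {
    "customer-a": {ipaddress.ip_network("10.0.0.0/24"): "192.0.2.1"},
    "customer-b": {ipaddress.ip_network("10.0.0.0/24"): "198.51.100.1"},
}

def lookup(vrf, destination):
    # Longest-prefix match restricted to the routing table of the given VRF.
    dest = ipaddress.ip_address(destination)
    matches = [net for net in vrf_tables[vrf] if dest in net]
    if not matches:
        return None
    best = max(matches, key=lambda net: net.prefixlen)
    return vrf_tables[vrf][best]

# The same destination resolves differently depending on the VRF of the ingress
# interface, because the routing tables are independent.
print(lookup("customer-a", "10.0.0.5"))  # 192.0.2.1
print(lookup("customer-b", "10.0.0.5"))  # 198.51.100.1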
Simple implementation
The simplest form of VRF implementation is VRF-Lite. In this implementation, each router within the network participates in the virtual routing environment in a peer-based fashion. While simple to deploy and appropriate for small to medium enterprises and shared data centers, VRF-Lite does not scale to the size required by global enterprises or large carriers, as there is the need to implement each VRF instance on every router, including intermediate routers. VRFs were initially introduced in combination with Multiprotocol Label Switching (MPLS), but VRF proved to be so useful that it eventually evolved to live independently of MPLS. This is the historical explanation of the term VRF-Lite: usage of VRFs without MPLS.
Full implementation
The scaling limitations of VRF Lite are resolved by the implementation of IP VPNs. In this implementation, a core backbone network is responsible for the transmission of data across the wide area between VRF instances at each edge location. IP VPNs have been traditionally deployed by carriers to provide a shared wide-area backbone network for multiple customers. They are also appropriate in the large enterprise, multi-tenant and shared data center environments.
In a typical deployment, customer edge (CE) routers handle local routing in a traditional fashion and dissemi
|