https://en.wikipedia.org/wiki/Telegrapher%27s%20equations
|
The telegrapher's equations (or just telegraph equations) are a set of two coupled, linear equations that predict the voltage and current distributions on a linear electrical transmission line. The equations are important because they allow transmission lines to be analyzed using circuit theory. The equations and their solutions are applicable from 0 Hz up to frequencies at which the transmission line structure can support higher-order non-TEM modes. The equations can be expressed in both the time domain and the frequency domain. In the time domain the independent variables are distance and time, and the resulting time-domain equations are partial differential equations of both time and distance. In the frequency domain the independent variables are distance and either frequency or complex frequency. The frequency-domain variables can be taken as the Laplace transform or Fourier transform of the time-domain variables, or they can be taken to be phasors. The resulting frequency-domain equations are ordinary differential equations of distance. An advantage of the frequency-domain approach is that differential operators in the time domain become algebraic operations in the frequency domain.
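In the standard notation (assumed here) with series resistance R and inductance L, and shunt conductance G and capacitance C, all per unit length of line, the coupled time-domain pair reads:
$$\frac{\partial V(x,t)}{\partial x} = -R\,I(x,t) - L\,\frac{\partial I(x,t)}{\partial t}$$
$$\frac{\partial I(x,t)}{\partial x} = -G\,V(x,t) - C\,\frac{\partial V(x,t)}{\partial t}$$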
The equations come from Oliver Heaviside, who developed the transmission line model starting with an August 1876 paper, On the Extra Current. The model demonstrates that electromagnetic waves can be reflected on the wire, and that wave patterns can form along the line. Originally developed to describe telegraph wires, the theory can also be applied to radio frequency conductors, audio frequency signals (such as telephone lines), low frequency signals (such as power lines), and pulses of direct current.
Distributed components
The telegrapher's equations, like all other equations describing electrical phenomena, result from Maxwell's equations. In a more practical approach, one assumes that the conductors are composed of an infinite series of two-port elementary components, each representing an in
|
https://en.wikipedia.org/wiki/Ethics%20of%20terraforming
|
The ethics of terraforming has constituted a philosophical debate within biology, ecology, and environmental ethics as to whether terraforming other worlds is an ethical endeavor.
Support
On the pro-terraforming side of the argument, there are those like Robert Zubrin and Richard L. S. Taylor who believe that it is humanity's moral obligation to make other worlds suitable for Terran life, as a continuation of the history of life transforming the environments around it on Earth. They also point out that Earth will eventually be destroyed as nature takes its course, so that humanity faces a very long-term choice between terraforming other worlds or allowing all Earth life to become extinct. Zubrin further argues that even if native microbes have arisen on Mars, for example, the fact that they have not progressed beyond the microbe stage by this point, halfway through the lifetime of the Sun, is a strong indicator that they never will; and that if microbial life exists on Mars, it is likely related to Earth life through a common origin on one of the two planets, which spread to the other as an example of panspermia. Since Mars life would then not be fundamentally unrelated to Earth life, it would not be unique, and competition with such life would not be fundamentally different from competing against microbes on Earth.
Richard Taylor exemplified this point of view more succinctly with the slogan, "move over microbe".
Some critics label this argument as an example of anthropocentrism. They view the homocentric position as not only geocentric but short-sighted, tending to favour human interests to the detriment of ecological systems. They argue that an anthropocentrically driven approach could lead to the extinction of indigenous extraterrestrial life, or to interplanetary contamination.
Martyn J. Fogg rebutted these ideas by delineating four potential rationales on which to evaluate the ethics of terraforming—anthropoc
|
https://en.wikipedia.org/wiki/Distributed%20System%20Security%20Architecture
|
Distributed System Security Architecture (DSSA) is a computer security architecture that provides a suite of functions including login, authentication, and access control in a distributed system. Unlike other similar architectures, DSSA offers the ability to access all these functions without the trusted server (known as a certificate authority) being active.
In DSSA, security objects are handled by their owners, and access is controlled by the central, universally trusted certificate authority.
DSSA/SPX
DSSA/SPX is the authentication protocol of DSSA. The CDC (certificate distribution center) is a certificate-granting server, while a certificate is a ticket signed by the CA which contains the public key of the party being certified. Since the CDC is merely distributing previously signed certificates, it is not necessary for it to be trusted.
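A minimal sketch of the underlying idea, not of DSSA's actual protocol or certificate format: because certificates are signed ahead of time by the CA, any party holding the CA's public key can verify them with the CA offline. The Ed25519 primitives come from the Python `cryptography` package; the certificate body below is made up.

```python
# Sketch only: offline certificate verification, the property DSSA relies on.
from cryptography.hazmat.primitives.asymmetric import ed25519
from cryptography.exceptions import InvalidSignature

ca_key = ed25519.Ed25519PrivateKey.generate()   # CA signs certificates offline
ca_public = ca_key.public_key()                 # distributed to all parties

cert_body = b"subject=alice;pubkey=..."         # hypothetical ticket contents
signature = ca_key.sign(cert_body)              # done once, ahead of time

# Later, any verifier checks the certificate without contacting the CA:
try:
    ca_public.verify(signature, cert_body)
    print("certificate accepted")
except InvalidSignature:
    print("certificate rejected")
```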
External links
Tromsø University
Gasser (1989)
Cryptographic protocols
|
https://en.wikipedia.org/wiki/HRU%20%28security%29
|
The HRU security model (Harrison, Ruzzo, Ullman model) is an operating system level computer security model which deals with the integrity of access rights in the system. It is an extension of the Graham-Denning model, based around the idea of a finite set of procedures being available to edit the access rights of a subject on an object. It is named after its three authors, Michael A. Harrison, Walter L. Ruzzo and Jeffrey D. Ullman.
Along with presenting the model, Harrison, Ruzzo and Ullman also discussed the possibilities and limitations of proving the safety of systems using an algorithm.
Description of the model
The HRU model defines a protection system consisting of a set of generic rights R and a set of commands C. An instantaneous description of the system is called a configuration and is defined as a tuple (S, O, P) of current subjects S, current objects O and an access matrix P. Since the subjects are required to be part of the objects, the access matrix contains one row for each subject and one column for each subject and object. An entry P[s, o] for subject s and object o is a subset of the generic rights R.
The commands are composed of primitive operations and can additionally have a list of preconditions that require certain rights to be present for pairs of subjects and objects.
The primitive operations can modify the access matrix by adding or removing access rights for a pair consisting of a subject and an object, and by adding or removing subjects or objects. Creation of a subject or object requires that it not already exist in the current configuration, while deletion requires that it have existed prior to deletion. In a complex command, a sequence of operations is executed only as a whole: a failing operation in the sequence makes the whole sequence fail, a form of database transaction.
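The following is a minimal sketch of such a protection system, with illustrative names and only a reduced subset of the paper's six primitive operations, showing a command with a precondition executed transactionally:

```python
import copy

class Configuration:
    """Configuration (S, O, P): subjects, objects, and the access matrix."""
    def __init__(self, generic_rights):
        self.R = set(generic_rights)
        self.S, self.O = set(), set()
        self.P = {}                      # (subject, object) -> set of rights

    # --- primitive operations (subset; names are illustrative) ---
    def create_subject(self, s):
        assert s not in self.O           # must not already exist
        self.S.add(s); self.O.add(s)

    def create_object(self, o):
        assert o not in self.O
        self.O.add(o)

    def enter_right(self, r, s, o):
        assert r in self.R and s in self.S and o in self.O
        self.P.setdefault((s, o), set()).add(r)

    def delete_right(self, r, s, o):
        self.P.get((s, o), set()).discard(r)

def run_command(config, precondition, operations):
    """Run a command transactionally: all operations apply, or none do."""
    if not precondition(config):
        return config                    # preconditions not met; no effect
    trial = copy.deepcopy(config)
    try:
        for op in operations:
            op(trial)
    except AssertionError:
        return config                    # one step failed: whole sequence fails
    return trial

# Example command: an owner grants read access to another subject.
cfg = Configuration({"own", "read"})
cfg.create_subject("alice"); cfg.create_subject("bob"); cfg.create_object("file")
cfg.enter_right("own", "alice", "file")
cfg = run_command(
    cfg,
    precondition=lambda c: "own" in c.P.get(("alice", "file"), set()),
    operations=[lambda c: c.enter_right("read", "bob", "file")],
)
print(cfg.P[("bob", "file")])            # {'read'}
```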
Discussion of safety
Harrison, Ruzzo and Ullman discussed whether there is an algorithm that takes an arbitrary initial configuration and answers the following que
|
https://en.wikipedia.org/wiki/TUNIS
|
TUNIS (Toronto University System) was a Unix-like operating system, developed at the University of Toronto in the early 1980s. TUNIS was a portable operating system compatible with Unix V7, but with a completely redesigned kernel, written in Concurrent Euclid. Programs that ran under Unix V7 could be run under TUNIS with no modification.
TUNIS was designed for teaching, and was intended to provide a model for the design of well-structured, highly portable, easily understood Unix-like operating systems. It made extensive use of Concurrent Euclid modules to isolate machine dependencies and provide a clean internal structure through information hiding. TUNIS also made use of Concurrent Euclid's built-in processes and synchronization features to make it easy to understand and maintain.
TUNIS targeted the PDP-11, Motorola 6809 and 68000, and National Semiconductor 32016 architectures, and supported distribution across multiple CPUs using Concurrent Euclid's synchronization features.
References
R.C. Holt (1982). TUNIS: a Unix look-alike written in Concurrent Euclid (abstract). ACM SIGOPS Operating Systems Review 16(1):4–5.
Unix variants
Discontinued operating systems
|
https://en.wikipedia.org/wiki/Biodiversity%20and%20drugs
|
Biodiversity plays a vital role in maintaining human and animal health. Numerous plants, animals, and fungi are used in medicine, as well as to produce vital vitamins, painkillers, and other compounds. Natural products have been recognized and used as medicines by ancient cultures all around the world. Many animals are also known to self-medicate using plants and other materials available to them. More than 60% of the world's population relies almost entirely on plant medicine for primary health care.
About 119 pure chemicals extracted from fewer than 90 species of higher plants are used as medicines throughout the world; examples include caffeine, methyl salicylate, and quinine.
Antibiotics
Streptomycin, neomycin, and erythromycin are derived from tropical soil actinomycetes (filamentous bacteria historically grouped with fungi).
Plant drugs
Many plant species have been studied thoroughly for their potential value as a source of drugs, and some may yield drugs against high blood pressure, AIDS, or heart disease.
In China, Japan, India, and Germany, there is a great deal of interest in and support for the search for new drugs from higher plants.
Sweet wormwood
Each species carries unique genetic material in its DNA and in the chemical factory that responds to these genetic instructions. For example, in the valleys of central China grows a fernlike endangered weed called sweet wormwood, the only source of artemisinin, a drug that is nearly 100 percent effective against malaria. If this plant were lost to extinction, the ability to control malaria, even today a potent killer, would diminish.
Zoopharmacognosy
Zoopharmacognosy is the study of how animals use plants, insects, and other materials in self-medication. For example, apes have been observed selecting a particular part of a medicinal plant by taking off leaves and breaking the stem to suck out the juice. In an interview with the late Neil Campbell, Eloy Rodriguez descri
|
https://en.wikipedia.org/wiki/AT%26T%20UNIX%20PC
|
The AT&T UNIX PC is a Unix desktop computer originally developed by Convergent Technologies (later acquired by Unisys), and marketed by AT&T Information Systems in the mid- to late-1980s. The system was codenamed "Safari 4" and is also known as the PC 7300, and often dubbed the "3B1". Despite the latter name, the system had little in common with AT&T's line of 3B series computers. The system was tailored for use as a productivity tool in office environments and as an electronic communication center.
Hardware configuration
10 MHz Motorola 68010 (16-bit external bus, 32-bit internal) with custom, discrete MMU
Internal MFM hard drive, originally 10 MB, later models with up to 67 MB
Internal 5-1/4" floppy drive
At least 512 KB RAM on main board (1 MB or 2 MB were also options), expandable up to an additional 2 MB via expansion cards (4 MB max total)
32 KB VRAM
16 KB ROM (up to 32 KB ROM supported using 2x 27128 EPROMs)
2 KB SRAM (for MMU page table)
Monochrome green phosphor monitor
Internal 300/1200 bit/s modem
RS-232 serial port
Centronics parallel port
3 S4BUS expansion slots
3 phone jacks
PC 7300
The initial PC 7300 model offered a modest 512 KB of memory and a small, low-performance 10 MB hard drive. This model, although progressive in offering a Unix system for desktop office operation, was underpowered and produced considerable fan and drive bearing noise even when idling. The modern-looking "wedge" design by Mike Nuttall was innovative, and the machine gained wide visibility, appearing in numerous movies and TV shows as the token "computer".
AT&T 3B/1
An enhanced model, the "3B/1", was introduced in October 1985. The cover was redesigned to accommodate a full-height 67 MB hard drive, which added a 'hump' to the case; the model also expanded onboard memory to 1 or 2 MB and added a better power supply.
S/50
Convergent Technologies offered an S/50 which was a re-badged PC 7300.
Olivetti AT&T 3B1
British Olivetti released the "Olivetti A
|
https://en.wikipedia.org/wiki/Screen%20protector
|
A screen protector is an additional sheet of material—commonly polyurethane or laminated glass—that can be attached to the screen of an electronic device and protect it against physical damage.
History
The first screen protector was designed and patented by Herbert Schlegel in 1968 for use on television screens.
Screen protectors first entered the mobile-device market after the rise of personal digital assistants (PDAs). Since PDAs were often operated via a stylus, the tip of the stylus could scratch the sensitive LCD screen surface; screen protectors provided sacrificial protection against this damage. Since then, the ubiquity of mobile devices has seen the screen protector become more widely used.
Materials
Screen protectors are made of either plastics, such as polyethylene terephthalate (PET) or thermoplastic polyurethane (TPU), or of laminated tempered glass, similar to the device's original screen they are meant to protect. Plastic screen protectors cost less than glass and are thinner and more flexible. At the same price, glass resists scratches better than plastic and feels more like the device's screen, though higher-priced plastic protectors may be better than the cheapest tempered glass models, since glass will shatter or crack with sufficient impact force.
A screen protector's surface can be glossy or matte. Glossy protectors retain the display's original clarity, while a matte ("anti-glare") surface improves readability in bright environments and mitigates smudges such as fingerprints.
Disadvantages
Screen protectors have been known to interfere with the operation of some touchscreens.
Also, an existing oleophobic coating of a touchscreen will be covered, although some tempered glass screen protectors come with their own oleophobic coating.
On some devices, the thickness of screen protectors can affect the look and feel of the device.
Function
The main function of this screen protector is
|
https://en.wikipedia.org/wiki/Waste%20heat
|
Waste heat is heat produced by a machine or other process that uses energy, as a byproduct of doing work. All such processes give off some waste heat as a fundamental result of the laws of thermodynamics. Waste heat has lower utility (or in thermodynamics lexicon a lower exergy or higher entropy) than the original energy source. Sources of waste heat include all manner of human activities, natural systems, and all organisms; for example, incandescent light bulbs get hot, a refrigerator warms the room air, a building gets hot during peak hours, an internal combustion engine generates high-temperature exhaust gases, and electronic components get warm when in operation.
Instead of being "wasted" by release into the ambient environment, sometimes waste heat (or cold) can be used by another process (such as using hot engine coolant to heat a vehicle), or a portion of heat that would otherwise be wasted can be reused in the same process if make-up heat is added to the system (as with heat recovery ventilation in a building).
Thermal energy storage, which includes technologies both for short- and long-term retention of heat or cold, can create or improve the utility of waste heat (or cold). One example is waste heat from air conditioning machinery stored in a buffer tank to aid in night time heating. Another is seasonal thermal energy storage (STES) at a foundry in Sweden. The heat is stored in the bedrock surrounding a cluster of heat exchanger equipped boreholes, and is used for space heating in an adjacent factory as needed, even months later. An example of using STES to use natural waste heat is the Drake Landing Solar Community in Alberta, Canada, which, by using a cluster of boreholes in bedrock for interseasonal heat storage, obtains 97 percent of its year-round heat from solar thermal collectors on the garage roofs. Another STES application is storing winter cold underground, for summer air conditioning.
On a biological scale, all organisms reject w
|
https://en.wikipedia.org/wiki/Math%20circle
|
A math circle is a learning space where participants engage in the depths and intricacies of mathematical thinking, propagate the culture of doing mathematics, and create knowledge. To reach these goals, participants partake in problem-solving, mathematical modeling, the practice of art, and philosophical discourse. Some circles involve competition, while others do not.
Characteristics
Math circles can have a variety of styles. Some are very informal, with the learning proceeding through games, stories, or hands-on activities. Others are more traditional enrichment classes but without formal examinations. Some have a strong emphasis on preparing for Olympiad competitions; some avoid competition as much as possible. Models can use any combination of these techniques, depending on the audience, the mathematician, and the environment of the circle. Athletes have sports teams through which to deepen their involvement with sports; math circles can play a similar role for kids who like to think. Two features all math circles have in common are (1) that they are composed of students who want to be there, because they either like math or want to like math, and (2) that they give students a social context in which to enjoy mathematics.
History
Mathematical enrichment activities in the United States have been around since sometime before 1977, in the form of residential summer programs, math contests, and local school-based programs. The concept of a math circle, on the other hand, with its emphasis on convening professional mathematicians and secondary school students regularly to solve problems, appeared in the U.S. in 1994 with Robert and Ellen Kaplan at Harvard University. This form of mathematical outreach made its way to the U.S. most directly from the former Soviet Union and present-day Russia and Bulgaria. They first appeared in the Soviet Union during the 1930s; they have existed in Bulgaria since sometime before 1907. The tradition arrived in the U.S. with émigrés who had r
|
https://en.wikipedia.org/wiki/Error%20exponent
|
In information theory, the error exponent of a channel code or source code over the block length of the code is the rate at which the error probability decays exponentially with the block length of the code. Formally, it is defined as the limiting ratio of the negative logarithm of the error probability to the block length of the code for large block lengths. For example, if the probability of error $P_{\mathrm{error}}$ of a decoder drops as $e^{-n\alpha}$, where $n$ is the block length, the error exponent is $\alpha$. In this example, $-\frac{\ln P_{\mathrm{error}}}{n}$ approaches $\alpha$ for large $n$. Many of the information-theoretic theorems are of asymptotic nature; for example, the channel coding theorem states that for any rate less than the channel capacity, the probability of error of the channel code can be made to go to zero as the block length goes to infinity. In practical situations, there are limitations to the delay of the communication and the block length must be finite. Therefore, it is important to study how the probability of error drops as the block length becomes large.
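As a concrete illustration (not from the article), consider majority-vote decoding of an n-bit repetition code over a binary symmetric channel with crossover probability p. The block error probability can be computed exactly, and $-\ln(P_{\mathrm{error}})/n$ approaches the exponent $\ln\frac{1}{2\sqrt{p(1-p)}}$:

```python
# Illustrative check: exact block error probability of an n-bit repetition code
# with majority decoding over a BSC(p), and the ratio -ln(P_error)/n, which
# approaches the exponent ln(1/(2*sqrt(p*(1-p)))) for large n.
import math

p = 0.1
def p_error(n):                      # n odd, so the majority vote is unambiguous
    return sum(math.comb(n, k) * p**k * (1 - p)**(n - k)
               for k in range(n // 2 + 1, n + 1))

limit = math.log(1 / (2 * math.sqrt(p * (1 - p))))
for n in (11, 51, 301):
    print(n, -math.log(p_error(n)) / n, "->", limit)
```

The printed ratios decrease toward the limiting exponent as the block length grows, matching the definition above.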
Error exponent in channel coding
For time-invariant DMC's
The channel coding theorem states that for any ε > 0 and for any rate less than the channel capacity, there is an encoding and decoding scheme that can be used to ensure that the probability of block error is less than ε for a sufficiently long message block X. Also, for any rate greater than the channel capacity, the probability of block error at the receiver goes to one as the block length goes to infinity.
Assume a channel coding setup as follows: the channel can transmit any of $M$ messages by transmitting the corresponding codeword (which is of length $n$). Each component in the codebook is drawn i.i.d. according to some probability distribution with probability mass function Q. At the decoding end, maximum likelihood decoding is done.
Let $X_1^n(i)$ be the $i$-th random codeword in the codebook, where $i$ goes from $1$ to $M$. Suppose the first message is selected, so codeword $X_1^n(1)$ is transmitted. Given that $y_1^n$ is rece
|
https://en.wikipedia.org/wiki/Scratchpad%20memory
|
Scratchpad memory (SPM), also known as scratchpad, scratchpad RAM or local store in computer terminology, is an internal memory, usually high-speed, used for temporary storage of calculations, data, and other work in progress. In reference to a microprocessor (or CPU), scratchpad refers to a special high-speed memory used to hold small items of data for rapid retrieval. It is similar in usage and size to a scratchpad in everyday life: a pad of paper for preliminary notes, sketches, or writings. When the scratchpad is a hidden portion of the main memory it is sometimes referred to as bump storage.
In some systems it can be considered similar to the L1 cache in that it is the next closest memory to the ALU after the processor registers, with explicit instructions to move data to and from main memory, often using DMA-based data transfer. In contrast to a system that uses caches, a system with scratchpads is a system with non-uniform memory access (NUMA) latencies, because the memory access latencies to the different scratchpads and the main memory vary. Another difference from a system that employs caches is that a scratchpad commonly does not contain a copy of data that is also stored in the main memory.
Scratchpads are employed for simplification of caching logic, and to guarantee a unit can work without main memory contention in a system employing multiple processors, especially in multiprocessor system-on-chip for embedded systems. They are mostly suited for storing temporary results (such as would be found in the CPU stack) that typically would not need to be committed to main memory; however, when fed by DMA, they can also be used in place of a cache for mirroring the state of slower main memory. The same issues of locality of reference apply to efficiency of use, although some systems allow strided DMA to access rectangular data sets. Another difference is that scratchpads are explicitly manipulated by applications. They may be useful
|
https://en.wikipedia.org/wiki/Pattern%20hair%20loss
|
Pattern hair loss (also known as androgenetic alopecia (AGA)) is a hair loss condition that primarily affects the top and front of the scalp. In male-pattern hair loss (MPHL), the hair loss typically presents itself as either a receding front hairline, loss of hair on the crown (vertex) of the scalp, or a combination of both. Female-pattern hair loss (FPHL) typically presents as a diffuse thinning of the hair across the entire scalp.
Male pattern hair loss seems to be due to a combination of oxidative stress, the microbiome of the scalp, genetics, and circulating androgens, particularly dihydrotestosterone (DHT). Men with early-onset androgenic alopecia (before the age of 35) have been deemed the male phenotypic equivalent of polycystic ovary syndrome (PCOS). As an early clinical expression of insulin resistance and metabolic syndrome, AGA is associated with an increased risk of cardiovascular diseases, glucose metabolism disorders, type 2 diabetes, and enlargement of the prostate.
The cause of female pattern hair loss remains unclear; androgenetic alopecia in women is associated with an increased risk of polycystic ovary syndrome (PCOS).
Management may include simply accepting the condition or shaving one's head to improve the aesthetic aspect of the condition. Otherwise, common medical treatments include minoxidil, finasteride, dutasteride, or hair transplant surgery. Use of finasteride and dutasteride in women is not well-studied and may result in birth defects if taken during pregnancy.
Pattern hair loss by the age of 50 affects about half of males and a quarter of females. It is the most common cause of hair loss. Both males aged 40–91 and younger male patients with early-onset AGA (before the age of 35) had a higher likelihood of metabolic syndrome (MetS) and insulin resistance. In younger males, studies found metabolic syndrome at approximately four times the usual frequency, which is clinically deemed significant. Abdominal obesity, hypertensi
|
https://en.wikipedia.org/wiki/Robert%20A.%20Jarrow
|
Robert Alan Jarrow is the Ronald P. and Susan E. Lynch Professor of Investment Management at the Johnson Graduate School of Management, Cornell University. Professor Jarrow is a co-creator of the Heath–Jarrow–Morton framework for pricing interest rate derivatives, a co-creator of the reduced form Jarrow–Turnbull credit risk models employed for pricing credit derivatives, and the creator of the forward price martingale measure. These tools and models are now the standards utilized for pricing and hedging in major investment and commercial banks.
He is on the advisory board of Mathematical Finance – a journal he co-started in 1989. He is also an associate or advisory editor for numerous other journals and serves on the board of directors of several firms and professional societies. He is currently both an IAFE senior fellow and an FDIC senior fellow. He has served as the Director for Research of Kamakura Corporation since 1995.
Professor Jarrow has been the recipient of numerous prizes and awards including the CBOE Pomerance Prize for Excellence in the Area of Options Research, the Graham and Dodd Scrolls Award, and the 1997 International Association of Financial Engineers IAFE/SunGard Financial Engineer of the Year Award. He is included in both the Fixed Income Analysts Society Hall of Fame and Risk Magazine's 50 member Hall of Fame.
Publications include five books - Options Pricing, Finance Theory, Modeling Fixed Income Securities and Interest Rate Options (second edition), Derivative Securities (second edition), and An Introduction to Derivative Securities, Financial Markets, and Risk Management - as well as over 100 publications in leading finance and economics journals.
He graduated magna cum laude from Duke University in 1974 with a major in mathematics, received an MBA from the Tuck School of Business at Dartmouth College in 1976 with highest distinction, and in 1979 he obtained a PhD in finance from the MIT Sloan School of Management under Robert C. M
|
https://en.wikipedia.org/wiki/BrainMaps
|
BrainMaps is an NIH-funded interactive zoomable high-resolution digital brain atlas and virtual microscope that is based on more than 140 million megapixels (140 terabytes) of scanned images of serial sections of both primate and non-primate brains and that is integrated with a high-speed database for querying and retrieving data about brain structure and function over the internet.
Currently featured are complete brain atlas datasets for 16 species; a few of which are: Macaca mulatta, Chlorocebus aethiops, Felis silvestris catus, Mus musculus, Rattus norvegicus, and Tyto alba.
The project's principal investigator was UC Davis neuroscientist Ted Jones from 2005 through 2011, after which the role was taken by W. Martin Usrey.
Description
BrainMaps uses multiresolution image formats for representing massive brain images, and a DHTML/JavaScript front-end user interface for image navigation, both similar to the way Google Maps works for geospatial data.
BrainMaps is one of the most massive online neuroscience databases and image repositories and features the highest-resolution whole brain atlas ever constructed.
Extensions to interactive 3-dimensional visualization have been developed through OpenGL-based desktop applications. Freely available image analysis tools enable end-users to datamine online images at the sub-neuronal level. BrainMaps has been used in both research and didactic settings.
Additional images
See also
List of neuroscience databases
Human Brain Project
NeuroNames
Mouse brain
References
External links
BrainMaps.org (official website)
BrainMaps featured in Science Magazine
BrainMaps images featured in Discover Magazine article, "10 Unsolved Mysteries Of The Brain"
BrainMaps-related Publications (archived 9 May 2012)
Online databases
Anatomy websites
Biological databases
|
https://en.wikipedia.org/wiki/Frequency%20band
|
A frequency band is an interval in the frequency domain, delimited by a lower frequency and an upper frequency. The term may refer to a radio band (such as wireless communication standards set by the International Telecommunication Union) or an interval of some other spectrum.
The frequency range of a system is the range over which it is considered to provide satisfactory performance, such as a useful level of signal with acceptable distortion characteristics. A listing of the upper and lower frequency limits of a system is not useful without a criterion for what the range represents.
Many systems are characterized by the range of frequencies to which they respond. For example:
Musical instruments produce different ranges of notes within the hearing range.
The electromagnetic spectrum can be divided into many different ranges such as visible light, infrared or ultraviolet radiation, radio waves, X-rays and so on, and each of these ranges can in turn be divided into smaller ranges.
A radio communications signal must occupy a range of frequencies carrying most of its energy, called its bandwidth. A frequency band may represent one communication channel or be subdivided into many. Allocation of radio frequency ranges to different uses is a major function of radio spectrum allocation.
See also
References
Signal processing
Telecommunication theory
|
https://en.wikipedia.org/wiki/Langevin%20dynamics
|
In physics, Langevin dynamics is an approach to the mathematical modeling of the dynamics of molecular systems. It was originally developed by French physicist Paul Langevin. The approach is characterized by the use of simplified models while accounting for omitted degrees of freedom by the use of stochastic differential equations. Langevin dynamics simulations are a kind of Monte Carlo simulation.
Overview
A real world molecular system is unlikely to be present in vacuum. Jostling of solvent or air molecules causes friction, and the occasional high velocity collision will perturb the system. Langevin dynamics attempts to extend molecular dynamics to allow for these effects. Also, Langevin dynamics allows temperature to be controlled like with a thermostat, thus approximating the canonical ensemble.
Langevin dynamics mimics the viscous aspect of a solvent. It does not fully model an implicit solvent; specifically, the model accounts neither for electrostatic screening nor for the hydrophobic effect. For denser solvents, hydrodynamic interactions are not captured via Langevin dynamics.
For a system of $N$ particles with masses $M$, with coordinates $X = X(t)$ that constitute a time-dependent random variable, the resulting Langevin equation is
$$M\,\ddot{X} = -\nabla U(X) - \gamma\,M\,\dot{X} + \sqrt{2\,M\,\gamma\,k_B T}\,R(t),$$
where $U(X)$ is the particle interaction potential; $\nabla$ is the gradient operator such that $-\nabla U(X)$ is the force calculated from the particle interaction potentials; the dot is a time derivative such that $\dot{X}$ is the velocity and $\ddot{X}$ is the acceleration; $\gamma$ is the damping constant (units of reciprocal time), also known as the collision frequency; $T$ is the temperature; $k_B$ is Boltzmann's constant; and $R(t)$ is a delta-correlated stationary Gaussian process with zero mean, satisfying
$$\langle R(t) \rangle = 0, \qquad \langle R(t)\,R(t') \rangle = \delta(t - t').$$
Here, $\delta$ is the Dirac delta.
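A minimal numerical sketch (illustrative parameter values, not from the article): Euler-Maruyama integration of the equation above for one particle in a 1-D harmonic potential, with an equipartition check that the thermostat drives $m\langle v^2\rangle$ toward $k_B T$, approximating the canonical ensemble as described above.

```python
import numpy as np

# Euler-Maruyama discretization of the Langevin equation above for
# U(x) = 0.5*k*x^2 in one dimension; all parameter values are illustrative.
rng = np.random.default_rng(0)
m, gamma, kB, T, k = 1.0, 1.0, 1.0, 1.0, 1.0
dt, n_steps = 1e-3, 200_000

x, v = 1.0, 0.0
v_samples = []
for _ in range(n_steps):
    force = -k * x                                   # -dU/dx
    v += (force / m - gamma * v) * dt \
         + np.sqrt(2 * gamma * kB * T / m * dt) * rng.standard_normal()
    x += v * dt
    v_samples.append(v)

# Equipartition check: in the canonical ensemble, m*<v^2> should approach kB*T.
print("m<v^2> =", m * np.mean(np.square(v_samples[n_steps // 2:])))
```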
If the main objective is to control temperature, care should be exercised to use a small damping constant $\gamma$. As $\gamma$ grows, it spans from the inertial all the way to the diffusive (Brownian) regime. The Langevin dynamics limit of non-inertia is commonly descr
|
https://en.wikipedia.org/wiki/Choi%27s%20theorem%20on%20completely%20positive%20maps
|
In mathematics, Choi's theorem on completely positive maps is a result that classifies completely positive maps between finite-dimensional (matrix) C*-algebras. An infinite-dimensional algebraic generalization of Choi's theorem is known as Belavkin's "Radon–Nikodym" theorem for completely positive maps.
Statement
Choi's theorem. Let $\Phi : \mathbb{C}^{n\times n} \to \mathbb{C}^{m\times m}$ be a linear map. The following are equivalent:
(i) $\Phi$ is $n$-positive (i.e. $(I_n \otimes \Phi)(A)$ is positive whenever $A$ is positive).
(ii) The matrix with operator entries
$$C_\Phi = \left(I_n \otimes \Phi\right)\left(\sum_{ij} E_{ij} \otimes E_{ij}\right) = \sum_{ij} E_{ij} \otimes \Phi(E_{ij}) \in \mathbb{C}^{nm \times nm}$$
is positive, where $E_{ij} \in \mathbb{C}^{n\times n}$ is the matrix with 1 in the $(i,j)$-th entry and 0s elsewhere. (The matrix $C_\Phi$ is sometimes called the Choi matrix of $\Phi$.)
(iii) $\Phi$ is completely positive.
Proof
(i) implies (ii)
We observe that if
$$E = \sum_{ij} E_{ij} \otimes E_{ij},$$
then $E = E^*$ and $E^2 = nE$, so $E = n^{-1} E E^*$, which is positive. Therefore $C_\Phi = (I_n \otimes \Phi)(E)$ is positive by the $n$-positivity of $\Phi$.
(iii) implies (i)
This holds trivially.
(ii) implies (iii)
This mainly involves chasing the different ways of looking at $\mathbb{C}^{nm\times nm}$:
Let the eigenvector decomposition of $C_\Phi$ be
$$C_\Phi = \sum_{i=1}^{nm} \lambda_i v_i v_i^*,$$
where the vectors $v_i$ lie in $\mathbb{C}^{nm}$. By assumption, each eigenvalue $\lambda_i$ is non-negative, so we can absorb the eigenvalues into the eigenvectors and redefine $v_i$ so that
$$C_\Phi = \sum_{i=1}^{nm} v_i v_i^*.$$
The vector space $\mathbb{C}^{nm}$ can be viewed as the direct sum $\bigoplus_{k=1}^{n} \mathbb{C}^m$, compatibly with the above identification and the standard basis of $\mathbb{C}^n$.
If $P_k \in \mathbb{C}^{m\times nm}$ is projection onto the $k$-th copy of $\mathbb{C}^m$, then $P_k^* \in \mathbb{C}^{nm\times m}$ is the inclusion of $\mathbb{C}^m$ as the $k$-th summand of the direct sum and
$$\Phi(E_{kl}) = P_k C_\Phi P_l^*.$$
Now if the operators $V_i \in \mathbb{C}^{m\times n}$ are defined on the $k$-th standard basis vector $e_k$ of $\mathbb{C}^n$ by
$$V_i e_k = P_k v_i,$$
then
$$\Phi(E_{kl}) = P_k C_\Phi P_l^* = \sum_{i=1}^{nm} P_k v_i (P_l v_i)^* = \sum_{i=1}^{nm} V_i E_{kl} V_i^*.$$
Extending by linearity gives us
$$\Phi(A) = \sum_{i=1}^{nm} V_i A V_i^*$$
for any $A \in \mathbb{C}^{n\times n}$. Any map of this form is manifestly completely positive: the map $A \mapsto V_i A V_i^*$ is completely positive, and the sum (across $i$) of completely positive operators is again completely positive. Thus $\Phi$ is completely positive, the desired result.
The above is essentially Choi's original proof; alternative proofs are also known.
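A numerical companion to the proof (a sketch under illustrative choices, not from the article): build the Choi matrix of a map known to be completely positive, here a qubit depolarizing channel, check its positivity, and read Kraus operators off the eigendecomposition exactly as in the (ii) implies (iii) step.

```python
import numpy as np

n = 2  # input (and here also output) dimension; phi is a depolarizing channel
def phi(A, p=0.25):
    return (1 - p) * A + p * np.trace(A) * np.eye(n) / n

# Choi matrix: C = sum_ij E_ij (tensor) phi(E_ij)
C = np.zeros((n * n, n * n), dtype=complex)
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n)); E[i, j] = 1.0
        C += np.kron(E, phi(E))

w, V = np.linalg.eigh(C)            # Hermitian eigendecomposition
assert w.min() > -1e-12             # positivity of C_Phi, as the theorem requires

# Absorb eigenvalues into eigenvectors; each reshaped vector gives a Kraus
# operator V_i with V_i e_k = P_k v_i (k-th block of v_i = k-th column of V_i).
kraus = [np.sqrt(max(lam, 0.0)) * V[:, i].reshape(n, n).T
         for i, lam in enumerate(w)]

# Verify phi(A) == sum_i V_i A V_i^* on a random test matrix
A = np.random.randn(n, n) + 1j * np.random.randn(n, n)
rebuilt = sum(Vi @ A @ Vi.conj().T for Vi in kraus)
assert np.allclose(rebuilt, phi(A))
print("Kraus reconstruction verified")
```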
Consequences
Kraus operators
In the context of quantum information theory, the operators {Vi} are called the Kraus operators (af
|
https://en.wikipedia.org/wiki/Sorbitan%20monostearate
|
Sorbitan monostearate is an ester of sorbitan (a sorbitol derivative) and stearic acid and is sometimes referred to as a synthetic wax.
Uses
Sorbitan monostearate is used in the manufacture of food and healthcare products as a non-ionic surfactant with emulsifying, dispersing, and wetting properties. It is also employed to create synthetic fibers, metal machining fluid, and as a brightener in the leather industry. Sorbitans are also known as "Spans".
Sorbitan monostearate has been approved by the European Union for use as a food additive (emulsifier) (E number: E 491). It is also approved for use by the British Pharmacopoeia.
See also
Polysorbate
Sorbitan tristearate (Span 65)
References
Food additives
Non-ionic surfactants
E-number additives
Stearate esters
|
https://en.wikipedia.org/wiki/Container%20%28abstract%20data%20type%29
|
In computer science, a container is a class or a data structure whose instances are collections of other objects. In other words, they store objects in an organized way that follows specific access rules.
The size of the container depends on the number of objects (elements) it contains. Underlying (inherited) implementations of various container types may vary in size, complexity and type of language, but in many cases they provide flexibility in choosing the right implementation for any given scenario.
Container data structures are commonly used in many types of programming languages.
Function and properties
Containers can be characterized by the following three properties:
access, that is the way of accessing the objects of the container. In the case of arrays, access is done with the array index. In the case of stacks, access is done according to the LIFO (last in, first out) order and in the case of queues it is done according to the FIFO (first in, first out) order;
storage, that is the way of storing the objects of the container;
traversal, that is the way of traversing the objects of the container.
Container classes are expected to implement CRUD-like methods to do the following:
create an empty container (constructor);
insert objects into the container;
delete objects from the container;
delete all the objects in the container (clear);
access the objects in the container;
access the number of objects in the container (count).
Containers are sometimes implemented in conjunction with iterators.
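A minimal sketch of a single-value container exposing the CRUD-like methods listed above, together with an iterator for traversal (all names are illustrative):

```python
class Bag:
    def __init__(self):            # create an empty container (constructor)
        self._items = []

    def insert(self, obj):         # insert an object into the container
        self._items.append(obj)

    def delete(self, obj):         # delete one matching object
        self._items.remove(obj)

    def clear(self):               # delete all the objects (clear)
        self._items.clear()

    def __contains__(self, obj):   # access the objects (membership test)
        return obj in self._items

    def __len__(self):             # access the number of objects (count)
        return len(self._items)

    def __iter__(self):            # traversal via an iterator
        return iter(self._items)

bag = Bag()
bag.insert("x"); bag.insert("y")
assert len(bag) == 2 and "x" in bag
for item in bag:                   # language loop construct over the container
    print(item)
```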
Types
Containers may be classified as either single-value containers or associative containers.
Single-value containers store each object independently. Objects may be accessed directly, by a language loop construct (e.g. for loop) or with an iterator.
An ass
|
https://en.wikipedia.org/wiki/Butson-type%20Hadamard%20matrix
|
In mathematics, a complex Hadamard matrix H of size N, with all its columns (rows) mutually orthogonal, belongs to the Butson type H(q, N) if all its elements are powers of the q-th root of unity:
$$\left(H_{jk}\right)^q = 1 \quad \text{for } j, k = 1, 2, \dots, N.$$
Existence
If p is prime and $N > 1$, then $H(p, N)$ can exist only for $N = mp$ with integer m, and it is conjectured that such matrices exist for all such cases with $p \ge 3$. For $p = 2$, the corresponding conjecture is existence for all N that are multiples of 4.
In general, the problem of finding all sets $\{q, N\}$ such that Butson-type matrices $H(q, N)$ exist remains open.
Examples
$H(2, N)$ contains real Hadamard matrices of size N.
$H(4, N)$ contains Hadamard matrices composed of $\pm 1, \pm i$ - such matrices were called by Turyn complex Hadamard matrices.
In the limit $q \to \infty$ one can approximate all complex Hadamard matrices.
Fourier matrices belong to the Butson type, $F_N \in H(N, N)$, while $[F_N \otimes F_N] \in H(N, N^2)$, where
$$F_N(j, k) = \exp\left[2\pi i\,(j-1)(k-1)/N\right], \qquad j, k = 1, 2, \dots, N.$$
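A quick numerical check (illustrative) that the Fourier matrix defined above is of Butson type H(N, N): every entry is an N-th root of unity, and the columns are mutually orthogonal.

```python
import numpy as np

def fourier_matrix(N):
    # F(j, k) = exp(2*pi*i*j*k/N) with 0-based indices, equivalent to the
    # 1-based formula above.
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing="ij")
    return np.exp(2j * np.pi * j * k / N)

N = 6
F = fourier_matrix(N)
assert np.allclose(F.conj().T @ F, N * np.eye(N))   # columns orthogonal: F*F = N*I
assert np.allclose(F**N, np.ones((N, N)))           # every entry z satisfies z^N = 1
print("F_6 is Butson-type H(6, 6)")
```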
References
A. T. Butson, Generalized Hadamard matrices, Proc. Am. Math. Soc. 13, 894–898 (1962).
A. T. Butson, Relations among generalized Hadamard matrices, relative difference sets, and maximal length linear recurring sequences, Can. J. Math. 15, 42–48 (1963).
R. J. Turyn, Complex Hadamard matrices, pp. 435–437 in Combinatorial Structures and their Applications, Gordon and Breach, London (1970).
External links
Complex Hadamard Matrices of Butson type - a catalogue, by Wojciech Bruzda, Wojciech Tadej and Karol Życzkowski, retrieved October 24, 2006
Matrices
|
https://en.wikipedia.org/wiki/Pierce%20oscillator
|
The Pierce oscillator is a type of electronic oscillator particularly well-suited for use in piezoelectric crystal oscillator circuits. Named for its inventor, George W. Pierce (1872–1956), the Pierce oscillator is a derivative of the Colpitts oscillator. Virtually all digital IC clock oscillators are of Pierce type, as the circuit can be implemented using a minimum of components: a single digital inverter, one resistor, two capacitors, and the quartz crystal, which acts as a highly selective filter element. The low manufacturing cost of this circuit and the outstanding frequency stability of the quartz crystal give it an advantage over other designs in many consumer electronics applications.
Operation
If the circuit consists of perfect lossless components, the signal on C1 and C2 will be proportional to the impedance of each, and the ratio of the signal voltages at C1 and C2 will be C2/C1. With C1 and C2 of equal size (a common configuration), the currents in C1 and C2 would be exactly equal but out of phase, requiring no current and no voltage gain from the amplifier, and allowing a high output impedance amplifier, or the use of an isolating series resistance in the amplifier output. Normal crystals are lossless enough to make this a reasonable approximation: the amplifier does not drive the resonant circuit, but merely stays in sync with it, providing enough power to match losses.
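A quick numeric check of the voltage-ratio claim under the lossless assumption; the frequency and capacitor values below are illustrative, not from the article.

```python
# For series capacitors carrying the same current, the voltage across each is
# proportional to its impedance 1/(j*w*C), so |V_C1| / |V_C2| = C2 / C1.
import numpy as np

w = 2 * np.pi * 10e6               # 10 MHz, a typical crystal frequency
C1, C2 = 22e-12, 33e-12            # picofarad-range load capacitors
Z1, Z2 = 1 / (1j * w * C1), 1 / (1j * w * C2)
I = 1e-3                           # same current flows through both (series loop)
print(abs(I * Z1) / abs(I * Z2), C2 / C1)   # both print 1.5
```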
A series resistor is occasionally shown in the amplifier output. When used, a series resistor reduces loop gain, and amplifier gain must be increased to restore total loop gain to unity. The purpose of using such a resistor in the amplifier circuit is to increase phase shift at startup, or when the crystal circuit is pulled out of phase by loading, and to eliminate the effects of amplifier non-linearity and of crystal overtones or spurious modes. It is not part of the basic operation of the Pierce topology.
Biasing resistor
R1 acts as a feedback resistor, biasing the i
|
https://en.wikipedia.org/wiki/Medically%20unexplained%20physical%20symptoms
|
Medically unexplained physical symptoms (MUPS or MUS) are symptoms for which a treating physician or other healthcare providers have found no medical cause, or whose cause remains contested. In its strictest sense, the term simply means that the cause for the symptoms is unknown or disputed—there is no scientific consensus. Not all medically unexplained symptoms are influenced by identifiable psychological factors. However, in practice, most physicians and authors who use the term consider that the symptoms most likely arise from psychological causes. Typically, the possibility that MUPS are caused by prescription drugs or other drugs is ignored. It is estimated that between 15% and 30% of all primary care consultations are for medically unexplained symptoms. A large Canadian community survey revealed that the most common medically unexplained symptoms are musculoskeletal pain, ear, nose, and throat symptoms, abdominal pain and gastrointestinal symptoms, fatigue, and dizziness. The term MUPS can also be used to refer to syndromes whose etiology remains contested, including chronic fatigue syndrome, fibromyalgia, multiple chemical sensitivity and Gulf War illness.
The term medically unexplained symptoms is in some cases treated as synonymous to older terms such as psychosomatic symptoms, conversion disorders, somatic symptoms, somatisations or somatoform disorders; as well as contemporary terms such as functional disorders, bodily distress, and persistent physical symptoms. The plethora of terms reflects imprecision and uncertainty in their definition, controversy, and care taken to avoid stigmatising affected people. Risk factors for medically unexplained symptoms are complex and include both psychological and organic features, and such symptoms are often accompanied by other somatic symptoms attributable to organic disease. As such it is recognised that the boundary defining symptoms as medically unexplained is increasingly becoming blurred.
Women are signific
|
https://en.wikipedia.org/wiki/Scalar%20field%20theory
|
In theoretical physics, scalar field theory can refer to a relativistically invariant classical or quantum theory of scalar fields. A scalar field is invariant under any Lorentz transformation.
The only fundamental scalar quantum field that has been observed in nature is the Higgs field. However, scalar quantum fields feature in the effective field theory descriptions of many physical phenomena. An example is the pion, which is actually a pseudoscalar.
Since they do not involve polarization complications, scalar fields are often the easiest fields through which to appreciate second quantization. For this reason, scalar field theories are often used to introduce novel concepts and techniques.
The signature of the metric employed below is $(+, -, -, -)$.
Classical scalar field theory
A general reference for this section is Ramond, Pierre (2001). Field Theory: A Modern Primer (second edition). Westview Press, Ch. 1.
Linear (free) theory
The most basic scalar field theory is the linear theory. Through the Fourier decomposition of the fields, it represents the normal modes of an infinity of coupled oscillators, where the continuum limit of the oscillator index i is now denoted by $x$. The action for the free relativistic scalar field theory is then
$$S = \int \mathrm{d}^3x\,\mathrm{d}t\,\mathcal{L} = \int \mathrm{d}^3x\,\mathrm{d}t \left[ \frac{1}{2}\dot\phi^2 - \frac{1}{2}\delta^{ij}\partial_i\phi\,\partial_j\phi - \frac{1}{2} m^2\phi^2 \right],$$
where $\mathcal{L}$ is known as a Lagrangian density; $\mathrm{d}^3x$ for the three spatial coordinates; $\delta^{ij}$ is the Kronecker delta function; and $\partial_\rho = \partial/\partial x^\rho$ for the $\rho$-th coordinate $x^\rho$.
This is an example of a quadratic action, since each of the terms is quadratic in the field $\phi$. The term proportional to $m^2$ is sometimes known as a mass term, due to its subsequent interpretation, in the quantized version of this theory, in terms of particle mass.
The equation of motion for this theory is obtained by extremizing the action above. It takes the following form, linear in $\phi$:
$$\ddot\phi - \nabla^2\phi + m^2\phi = 0,$$
where $\nabla^2$ is the Laplace operator. This is the Klein–Gordon equation, with the interpretation as a classical field equation, rather than as a quantum-mechanical wave equation.
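Spelling out the extremization step (the standard Euler–Lagrange computation, using the Lagrangian density above):
$$0 = \frac{\partial \mathcal{L}}{\partial \phi} - \partial_t\frac{\partial \mathcal{L}}{\partial \dot\phi} - \partial_i\frac{\partial \mathcal{L}}{\partial(\partial_i \phi)} = -m^2\phi - \ddot\phi + \delta^{ij}\partial_i\partial_j\phi,$$
which, since $\delta^{ij}\partial_i\partial_j = \nabla^2$, is precisely $\ddot\phi - \nabla^2\phi + m^2\phi = 0$.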
Nonlinear (int
|
https://en.wikipedia.org/wiki/Beta-alumina%20solid%20electrolyte
|
Beta-alumina solid electrolyte (BASE) is a fast ion conductor material used as a membrane in several types of molten salt electrochemical cell. Currently there is no known substitute available. β-Alumina exhibits an unusual layered crystal structure which enables very fast ion transport. β-Alumina is not an isomorphic form of aluminium oxide (Al2O3), but a sodium polyaluminate. It is a hard polycrystalline ceramic, which, when prepared as an electrolyte, is complexed with a mobile ion, such as Na+, K+, Li+, Ag+, H+, Pb2+, Sr2+ or Ba2+ depending on the application. β-Alumina is a good conductor of its mobile ion yet allows no non-ionic (i.e., electronic) conductivity. The crystal structure of the β-alumina provides an essential rigid framework with channels along which the ionic species of the solid can migrate. Ion transport involves hopping from site to site along these channels. Since the 1970s this technology has been thoroughly developed, resulting in interesting applications. Its combination of fast ionic conduction and negligible electronic conductivity makes this material extremely interesting in the field of energy storage.
Solid electrolyte
β-alumina is a solid electrolyte. Solid-state electrolytes are solids with high ionic conductivity, comparable to those of molten salts. Solid-state electrolytes have applications in electrical energy storage and various sensors. They can be used in supercapacitors, fuel cells and solid-state batteries, substituting liquid electrolytes used in for example the lithium-ion battery. The solid electrolyte contains highly mobile ions, allowing the movement of ions. The ions move by hopping through the otherwise rigid crystal. The main advantage of solid electrolytes over liquid ones are increased safety and higher power density.
History
BASE was first developed by researchers at the Ford Motor Company, in the search for a storage device for electric vehicles while developing the sodium–sulfur battery. The compound β-alumina was already
|
https://en.wikipedia.org/wiki/Computable%20ordinal
|
In mathematics, specifically computability and set theory, an ordinal $\alpha$ is said to be computable or recursive if there is a computable well-ordering of a computable subset of the natural numbers having the order type $\alpha$.
It is easy to check that $\omega$ is computable. The successor of a computable ordinal is computable, and the set of all computable ordinals is closed downwards.
The supremum of all computable ordinals is called the Church–Kleene ordinal, the first nonrecursive ordinal, and denoted by $\omega_1^{CK}$. The Church–Kleene ordinal is a limit ordinal. An ordinal is computable if and only if it is smaller than $\omega_1^{CK}$. Since there are only countably many computable relations, there are also only countably many computable ordinals. Thus, $\omega_1^{CK}$ is countable.
The computable ordinals are exactly the ordinals that have an ordinal notation in Kleene's $\mathcal{O}$.
See also
Arithmetical hierarchy
Large countable ordinal
Ordinal analysis
Ordinal notation
References
Hartley Rogers Jr., The Theory of Recursive Functions and Effective Computability, 1967. Reprinted 1987, MIT Press (paperback).
Gerald Sacks, Higher Recursion Theory. Perspectives in Mathematical Logic, Springer-Verlag, 1990.
Set theory
Computability theory
Ordinal numbers
|
https://en.wikipedia.org/wiki/Ricoh%202A03
|
The Ricoh 2A03 or RP2A03 (NTSC version) / Ricoh 2A07 or RP2A07 (PAL version) is an 8-bit microprocessor manufactured by Ricoh for the Nintendo Entertainment System video game console. It was also used as a sound chip and secondary CPU by Nintendo's arcade games Punch-Out!! and Donkey Kong 3.
Technical details
The Ricoh 2A03 contains a second-sourced MOS Technology 6502 core, modified to disable the 6502's binary-coded decimal mode (possibly to avoid a MOS Technology patent). It also integrates a programmable sound generator (also known as the APU, featuring twenty-two memory-mapped I/O registers), rudimentary DMA, and game controller polling.
Sound hardware
The Ricoh 2A03's sound hardware has 5 channels, separated into two APUs (Audio Processing Units). The first APU contains two general-purpose pulse channels with 4 duty cycles, and the second contains a triangle wave generator, an LFSR-based noise generator, and a 1-bit delta-modulation-encoded PCM (DPCM) channel. While the majority of the NES library uses only 4 channels, later games used the 5th DPCM channel as cartridge memory expansions became cheaper. For example, Super Mario Bros. 3 uses the DPCM channel for simple drum sounds, while Journey to Silius uses it for sampled basslines. An interesting quirk of the DPCM channel is that its bit order is reversed compared to what is normally expected for 1-bit PCM. Many developers were unaware of this detail, causing samples to be distorted during playback.
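A toy decoder sketching the quirk (simplified; the step size, clamping bounds, and starting level below reflect the channel's 7-bit counter behavior but this is not a cycle-accurate model):

```python
# Toy DPCM decoder: bits are consumed least-significant bit first (the
# "reversed" order noted above); each 1/0 bit steps a 7-bit level up/down by 2.
def decode_dpcm(data, level=64):
    out = []
    for byte in data:
        for bit in range(8):              # LSB first
            if (byte >> bit) & 1:
                if level <= 125:          # clamp at the top of the 7-bit range
                    level += 2
            else:
                if level >= 2:            # clamp at the bottom
                    level -= 2
            out.append(level)
    return out

print(decode_dpcm(bytes([0b00001111])))   # rises for 4 bits, then falls
```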
The output of each channel is mixed non-linearly in their respective APU before being combined. On Famicom and Dendy systems, expansion sound chips may add their own sound to the output via a pin on the game cartridge. Expansion audio capabilities were removed from international NES systems, but can be restored by modifying the expansion port located on the bottom of the system.
Regional variations
PAL versions of the NES (sold in Europe, Asia, and Australia) use the Ricoh 2A07 or RP2A07 processor, whic
|
https://en.wikipedia.org/wiki/Roundness
|
Roundness is the measure of how closely the shape of an object approaches that of a mathematically perfect circle. Roundness applies in two dimensions, such as the cross sectional circles along a cylindrical object such as a shaft or a cylindrical roller for a bearing. In geometric dimensioning and tolerancing, control of a cylinder can also include its fidelity to the longitudinal axis, yielding cylindricity. The analogue of roundness in three dimensions (that is, for spheres) is sphericity.
Roundness is dominated by the shape's gross features rather than the definition of its edges and corners, or the surface roughness of a manufactured object. A smooth ellipse can have low roundness if its eccentricity is large. Regular polygons increase their roundness with increasing numbers of sides, even though they are still sharp-edged.
In geology and the study of sediments (where three-dimensional particles are most important), roundness is considered to be the measurement of surface roughness and the overall shape is described by sphericity.
Simple definitions
The ISO definition of roundness is the ratio of the radii of inscribed and circumscribed circles, i.e. the maximum and minimum sizes for circles that are just sufficient to fit inside and to enclose the shape.
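For a regular N-gon, for instance, the two circles in this definition are the incircle and the circumcircle, whose radius ratio is cos(π/N); it approaches 1 as the number of sides grows, matching the observation above. A quick illustrative computation:

```python
# Roundness (ISO-style radius ratio) of regular N-gons: inscribed radius over
# circumscribed radius equals cos(pi/N), approaching 1 as N grows.
import math

for N in (3, 4, 6, 12, 100):
    print(f"{N:>3}-gon roundness: {math.cos(math.pi / N):.4f}")
```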
Diameter
Having a constant diameter, measured at varying angles around the shape, is often considered to be a simple measurement of roundness. This is misleading.
Although constant diameter is a necessary condition for roundness, it is not a sufficient condition for roundness: shapes exist that have constant diameter but are far from round. Mathematical shapes such as the Reuleaux triangle and, an everyday example, the British 50p coin demonstrate this.
Radial displacements
Roundness does not describe radial displacements of a shape from some notional centre point, merely the overall shape.
This is important in manufacturing, such as for crankshafts and similar objects, where not only the roundnes
|
https://en.wikipedia.org/wiki/Newisys
|
Newisys was an American technology company. At various times it sold computers for data centers (known as servers), and computer data storage products.
It operated as a subsidiary of Sanmina Corporation from 2004.
History
Newisys was founded in July 2000 by Claymon A. Cipione and Phillip Doyce Hester, both from IBM. It was originally based in Austin, Texas.
By the end of 2000, almost $28 million in venture capital funding was obtained from New Enterprise Associates, Austin Ventures, and Advanced Micro Devices (AMD).
By 2002, the company gave demonstrations of servers using 64-bit AMD processors.
Another round of about $23 million funding was announced in November 2002, increased to $25 million in February 2003.
In July 2003, Sanmina-SCI (which had been a manufacturing partner) announced it would acquire Newisys for an undisclosed amount.
Newisys became an original design manufacturer for Sanmina.
In 2005, Hester left to become the chief technical officer of AMD until 2008, and Cipione also left to join AMD to become chief information officer.
In August 2005, a network-attached storage server product called the NA-1400 was announced, although shipments were reported to be delayed. It used an XScale 80219 processor from Intel.
In November 2005, Newisys announced an integrated circuit called the AMD Horus, which allowed servers to be built with large numbers of AMD Opteron processors.
In January 2006, the company acquired the block storage division of Adaptec, located in Colorado Springs, Colorado.
In May 2007, the server portion of the company was shut down, leaving storage (developed in Colorado) as the main focus.
Newisys returned to the server market in 2013 by adding Intel based servers into their storage products.
References
Companies based in Austin, Texas
Computer storage companies
Defunct computer companies of the United States
|
https://en.wikipedia.org/wiki/Piston%20motion%20equations
|
The reciprocating motion of a non-offset piston connected to a rotating crank through a connecting rod (as would be found in internal combustion engines) can be expressed by equations of motion. This article shows how these equations of motion can be derived using calculus as functions of angle (angle domain) and of time (time domain).
Crankshaft geometry
The geometry of the system consisting of the piston, rod and crank is represented as shown in the following diagram:
Definitions
From the geometry shown in the diagram above, the following variables are defined:
$l$ - rod length (distance between piston pin and crank pin)
$r$ - crank radius (distance between crank center and crank pin, i.e. half stroke)
$A$ - crank angle (from cylinder bore centerline at TDC)
$x$ - piston pin position (distance upward from crank center along cylinder bore centerline)
The following variables are also defined:
$v$ - piston pin velocity (upward from crank center along cylinder bore centerline)
$a$ - piston pin acceleration (upward from crank center along cylinder bore centerline)
$\omega$ - crank angular velocity (in the same direction/sense as crank angle $A$)
Angular velocity
The frequency (Hz) of the crankshaft's rotation is related to the engine's speed (revolutions per minute) as follows:
$$\nu = \frac{\text{RPM}}{60}$$
So the angular velocity (radians/s) of the crankshaft is:
$$\omega = 2\pi\nu = \frac{2\pi \cdot \text{RPM}}{60}$$
Triangle relation
As shown in the diagram, the crank pin, crank center and piston pin form triangle NOP.
By the cosine law it is seen that:
$$l^2 = r^2 + x^2 - 2\,r\,x\cos A,$$
where $l$ and $r$ are constant and $x$ varies as $A$ changes.
Equations with respect to angular position (Angle Domain)
Angle domain equations are expressed as functions of angle.
Deriving angle domain equations
The angle domain equations of the piston's reciprocating motion are derived from the system's geometry equations as follows.
Position
Position with respect to crank angle (from the triangle relation, completing the square, utilizing the Pythagorean identity, and rearranging):
$$x = r\cos A + \sqrt{l^2 - r^2\sin^2 A}$$
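A short sketch evaluating the position formula above for illustrative geometry, with the time derivative obtained numerically via the chain rule (dx/dt = ω · dx/dA); the rod length, crank radius, and engine speed are example values, not from the article.

```python
import numpy as np

l, r = 0.150, 0.045                 # rod length and crank radius in metres
rpm = 3000.0
omega = 2 * np.pi * rpm / 60.0      # crank angular velocity, rad/s

A = np.linspace(0, 2 * np.pi, 721)  # crank angle over one revolution
x = r * np.cos(A) + np.sqrt(l**2 - (r * np.sin(A))**2)   # piston pin position

dxdA = np.gradient(x, A)            # numerical dx/dA
v = dxdA * omega                    # piston pin velocity via the chain rule
print("stroke =", x.max() - x.min())          # equals 2*r for a non-offset crank
print("peak piston speed =", abs(v).max(), "m/s")
```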
Velocity
Velocity with respect to crank angle (take
|
https://en.wikipedia.org/wiki/Conserved%20name
|
A conserved name or nomen conservandum (plural nomina conservanda, abbreviated as nom. cons.) is a scientific name that has specific nomenclatural protection. That is, the name is retained, even though it violates one or more rules which would otherwise prevent it from being legitimate. Nomen conservandum is a Latin term, meaning "a name to be conserved". The terms are often used interchangeably, such as by the International Code of Nomenclature for Algae, Fungi, and Plants (ICN), while the International Code of Zoological Nomenclature favours the term "conserved name".
The process for conserving botanical names is different from that for zoological names. Under the botanical code, names may also be "suppressed", nomen rejiciendum (plural nomina rejicienda or nomina utique rejicienda, abbreviated as nom. rej.), or rejected in favour of a particular conserved name, and combinations based on a suppressed name are also listed as "nom. rej.".
Botany
Conservation
In botanical nomenclature, conservation is a nomenclatural procedure governed by Article 14 of the ICN. Its purpose is
"to avoid disadvantageous nomenclatural changes entailed by the strict application of the rules, and especially of the principle of priority [...]" (Art. 14.1).
Conservation is possible only for names at the rank of family, genus or species.
It may effect a change in original spelling, type, or (most commonly) priority.
Conserved spelling (orthographia conservanda, orth. cons.) allows spelling usage to be preserved even if the name was published with another spelling: Euonymus (not Evonymus), Guaiacum (not Guajacum), etc. (see orthographical variant).
Conserved types (typus conservandus, typ. cons.) are often made when it is found that a type in fact belongs to a different taxon from the description, when a name has subsequently been generally misapplied to a different taxon, or when the type belongs to a small group separate from the monophyletic bulk of a taxon.
Conservation of a nam
|
https://en.wikipedia.org/wiki/Oryx/Pecos
|
Oryx/Pecos is a proprietary operating system developed from scratch by Bell Labs beginning in 1978 for the express purpose of running AT&T's large-scale PBX switching equipment. The operating system was first used with AT&T's flagship System 75 and was used in all variations up through and including Definity G3 (Generic 3) switches, later manufactured by AT&T/Lucent Technologies spinoff Avaya. The last system based on Oryx/Pecos was the Avaya G3 CSI running release 13.1 Definity software; the formal end of sale was February 5, 2007. Although widely believed to be a Unix-like variant developed directly by Bell Labs, it is not based on any version of Unix.
Description
Oryx/Pecos consists of a kernel (Oryx), and the associated processes running on top of it (Pecos). The system is named for Pecos Street, which bounds the Westminster, CO campus of then AT&T's Colorado Bell Labs location, while Oryx was the last word alphabetically before OS in the office dictionary and the Oryx was purportedly the origin of the unicorn myth. The system is loosely based on Thoth (developed at the University of Waterloo) and DEMOS (developed at Los Alamos Scientific Labs).
Features normally found in commercial operating systems are not found in Oryx/Pecos. Such features include:
A documented API structure
Dynamic application execution capability where additional applications can be loaded and executed without a need to compile and link them directly to the operating system
A Disk-Operating System compatible with standard file systems used today
Dynamically-linked libraries
Memory management for strong separation of applications and operating system processes
A commercially available development package
There is one historical link between Oryx/Pecos and Unix: the authors of the above article proposed as a future development the implementation of a UNIX execution environment on top of Oryx/Pecos, and in fact, such a project was underta
|
https://en.wikipedia.org/wiki/Plasma%20cleaning
|
Plasma cleaning is the removal of impurities and contaminants from surfaces through the use of an energetic plasma or dielectric barrier discharge (DBD) plasma created from gaseous species. Gases such as argon and oxygen, as well as mixtures such as air and hydrogen/nitrogen are used. The plasma is created by using high frequency voltages (typically kHz to >MHz) to ionise the low pressure gas (typically around 1/1000 atmospheric pressure), although atmospheric pressure plasmas are now also common.
Methods
In plasma, gas atoms are excited to higher energy states and also ionized. As the atoms and molecules 'relax' to their normal, lower energy states, they release photons of light; this results in the characteristic "glow" or light associated with plasma. Different gases give different colors. For example, oxygen plasma emits a light blue color.
A plasma’s activated species include atoms, molecules, ions, electrons, free radicals, metastables, and photons in the short wave ultraviolet (vacuum UV, or VUV for short) range. This mixture then interacts with any surface placed in the plasma.
If the gas used is oxygen, the plasma is an effective, economical, environmentally safe method for critical cleaning. The VUV energy is very effective at breaking most organic bonds (i.e., C–H, C–C, C=C, C–O, and C–N) of surface contaminants, which helps to break apart high molecular weight contaminants. A second cleaning action is carried out by the oxygen species created in the plasma (O2+, O2−, O3, O, O+, O−, ionised ozone, metastable excited oxygen, and free electrons). These species react with organic contaminants to form H2O, CO, CO2, and lower molecular weight hydrocarbons. These compounds have relatively high vapor pressures and are evacuated from the chamber during processing, leaving an ultra-clean surface.
If the part consists of easily oxi
|
https://en.wikipedia.org/wiki/MSU%20Faculty%20of%20Mechanics%20and%20Mathematics
|
The MSU Faculty of Mechanics and Mathematics () is a faculty of Moscow State University.
History
Although lectures in mathematics had been delivered since Moscow State University was founded in 1755, the mathematical and physical department was founded only in 1804. The Mathematics and Mechanics Department was founded on 1 May 1933 and comprised the mathematics, mechanics and astronomy departments (the latter passed to the Physics Department in 1956). In 1953 the department moved to a new building on the Sparrow Hills, and the current division into mathematics and mechanics branches was established. In 1970, the Department of Computational Mathematics and Cybernetics split off from the department as research in computer science expanded.
A 2014 article entitled "Math as a tool of anti-semitism" in The Mathematics Enthusiast discussed antisemitism in the Moscow State University’s Department of Mathematics during the 1970s and 1980s.
Current state
Today the Department comprises 26 chairs (17 in the mathematical and 9 in the mechanics branch) and 14 research laboratories. Around 350 professors, assistant professors and researchers work at the department. Around 2000 students and 450 postgraduates study at the department. The education lasts 5 years (6 years from 2011).
Notable alumni
Notable faculty (past and present)
Algebra – O. U. Schmidt, A. G. Kurosh, Yu. I. Manin
Number theory – B. N. Delaunay, A. I. Khinchin, L. G. Shnirelman, A. O. Gelfond
Topology – P. S. Alexandrov, A. N. Tychonoff, L. S. Pontryagin, Lev Tumarkin
Real analysis – D. E. Menshov, A. I. Khinchin, N. K. Bari, A. N. Kolmogorov, S. B. Stechkin
Complex analysis – I. I. Privalov, M. A. Lavrentiev, A. O. Gelfond, M. V. Keldysh
Ordinary differential equations – V. V. Stepanov, V. V. Nemitski, V. I. Arnold, N. N. Nekhoroshev
Partial differential equations – I. G. Petrovsky, S. L. Sobolev, E. M. Landis
Mathematical logic and Theory of algorithms – A. A. Markov (Jr.), A. N. Kolmogorov, V. A. Melnikov, V. A. Uspensky
|
https://en.wikipedia.org/wiki/Jean%20Bartik
|
Jean Bartik ( Betty Jean Jennings; December 27, 1924 – March 23, 2011) was one of the original six programmers for the ENIAC computer.
Bartik studied mathematics in school then began work at the University of Pennsylvania, first manually calculating ballistics trajectories and then using ENIAC to do so. The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence. Bartik and her colleagues developed and codified many of the fundamentals of programming while working on the ENIAC, since it was the first computer of its kind.
After her work on ENIAC, Bartik went on to work on BINAC and UNIVAC, and spent time at a variety of technical companies as a writer, manager, engineer and programmer. She spent her later years as a real estate agent and died in 2011 from congestive heart failure complications.
Content-management framework Drupal's default theme, Bartik, is named in her honor.
Early life and education
Born Betty Jean Jennings in Gentry County, Missouri in 1924, she was the sixth of seven children. Her father, William Smith Jennings (1893–1971), was from Alanthus Grove, where he was a schoolteacher as well as a farmer. Her mother, Lula May Spainhower (1887–1988), was from Alanthus. Jennings had three older brothers, William (January 10, 1915), Robert (March 15, 1918), and Raymond (January 23, 1922); two older sisters, Emma (August 11, 1916) and Lulu (August 22, 1919); and one younger sister, Mable (December 15, 1928).
In her childhood, she would ride on horseback to visit her grandmother, who bought the young girl a newspaper to read every day and became a role model for the rest of her life. She began her education at a local one-room school, and gained local attention for her softball skill. In order to attend high school, she lived with her older sister in the neighboring town, where the school was located, and then began to drive every day despite being only 14. She graduated from Stanberry High
|
https://en.wikipedia.org/wiki/Chi%20Epsilon
|
Chi Epsilon () is an American collegiate civil engineering honor society. It honors engineering students who have exemplified the "principles of scholarship, character, practicality, and sociability...in the civil engineering profession." As of 2023, there are 141 chapters, of which 137 are active, where over 125,000 members have been inducted.
History
In early 1922, two local civil engineering student groups–Chi Epsilon and Chi Delta Chi–formed independently at the University of Illinois at Urbana–Champaign and petitioned for university recognition. Once the two groups learned of each other, they merged under the Chi Epsilon name. The university approved Chi Epsilon on May 20, 1922, recognized by the society as its founding date. The group had 25 founding members.
Chi Epsilon is "dedicated to the purpose of maintaining and promoting the status of civil engineering as an ideal profession." Its objective and purpose are to uphold competence, sound engineering, good moral judgment, and a commitment to society to improve the civil engineering profession.
The society received a certificate of incorporation from the State of Illinois on February 23, 1923.
Chi Epsilon sent letters to other engineering programs, inviting students to found a chapter. A second chapter was chartered at the Armour Institute of Technology on March 29, 1923.
The society is overseen by student officers at each chapter who act through a National Council. Its headquarters is located at the University of Texas at Arlington.
Symbols
The society's motto is "Conception, Design, Construction", suggested by the Greek letters Chi Delta Chi, the proposed name of one of Chi Epsilon's predecessor groups.
The colors of Chi Epsilon are purple and white. Its badge is a key made in the likeness of the front of a theodolite or engineer's transit, the instrument of a surveyor. Its publication is The Transit, published semi-annually in the spring and fall of each year.
Membership
Male and female undergr
|
https://en.wikipedia.org/wiki/IntelliTXT
|
IntelliTXT is a keyword advertising platform developed by Vibrant Media. Web page publishers insert a script into their pages which calls the IntelliTXT platform when a viewer views the page. This script then finds keywords on the page and double underlines them. When holding the mouse over the double underlined link, an advertisement associated with that word will pop up. Advertisers pay to have their particular words associated to their advertisements.
Customers
According to Vibrant Media, more than 4500 publishers use the IntelliTXT system. Nike, Sony and Microsoft are advertising on the platform, their ads reaching more than 100 million unique users in the US and 170 million internationally each month.
Competitors
Adbrite
References
External links
The IntelliTXT website
Vibrant media privacy statement
Disable text ads user script for Greasemonkey and Opera's UserJS
Online advertising services and affiliate networks
|
https://en.wikipedia.org/wiki/Automatic%20acoustic%20management
|
Automatic acoustic management (AAM) is a method for reducing acoustic emanations in AT Attachment (ATA) mass storage devices for computer data storage, such as ATA hard disk drives and ATAPI optical disc drives. AAM is an optional feature set for ATA/ATAPI devices; when a device supports AAM, the acoustic management parameters are adjustable through a software or firmware user interface.
Details
The ATA/ATAPI sub-command for setting the level of AAM operation is an 8-bit value from 0 to 255. Most modern drives ship with the vendor-defined value of 0x00 in the acoustic management setting. This often translates to the max-performance value of 254 stated in the standard. Values between 128 and 254 (0x80 - 0xFE) enable the feature and select most-quiet to most-performance settings along that range. Though hard drive manufacturers may support the whole range of values, the settings are allowed to be banded, so many values could provide the same acoustic performance.
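The value banding described above can be summarized in a short illustrative Python helper (an interpretation sketch of the ranges in the text, not part of any ATA tooling):

def describe_aam_level(level):
    # Interpret the 8-bit AAM setting per the ranges described above;
    # the exact acoustic effect of any value is vendor-defined and may be banded.
    if not 0 <= level <= 255:
        raise ValueError("AAM level is an 8-bit value")
    if level == 0x00:
        return "vendor-defined default (often maximum performance)"
    if 0x80 <= level <= 0xFE:
        return "enabled: 0x80 is most quiet, 0xFE is maximum performance"
    return "outside the 0x80-0xFE acoustic management band"

print(describe_aam_level(0x80))  # the most-quiet setting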
Although there is no definition of the function implemented to provide acoustic management in the ATA standard, most drives use power control of the head-positioning servo to reduce vibration induced by the head positioning mechanism. Western Digital calls this IntelliSeek™ which uses only enough head acceleration to position the head at the target track and sector "just in time" to access data. Previous seek mechanisms used maximum power and acceleration to position the head. This operation induced the familiar clicking vibration emanating from a seeking hard drive. Western Digital provides a demonstration flash movie illustrating just-in-time head positioning on their web site.
To provide best acoustic performance, some drive manufacturers may limit the maximum seek velocity of the heads for AAM operation. This degrades performance by increasing the average seek time: some head movements are forced to wait an additional disk rotation before accessing data because the head was unable to move to the
|
https://en.wikipedia.org/wiki/RP-570
|
RP-570 is a communications protocol used in industrial environments to communicate between a front-end computer and the substation to be controlled.
It is a SCADA legacy protocol and is based on the low-level protocol IEC TC57, format class 1.2.
RP-570 stands for:
"RTU Protocol based on IEC 57 part 5-1 (present IEC 870) version 0 or 1"
External links
Details may be found here:
RP 570 Protocol Description
Overview of supported protocol features in RP 570
RP 570/1 Master & Slave OPC Server
Network protocols
|
https://en.wikipedia.org/wiki/Cold%20vapour%20atomic%20fluorescence%20spectroscopy
|
Cold vapour atomic fluorescence spectroscopy (CVAFS) is a subset of the analytical technique known as atomic fluorescence spectroscopy (AFS).
Use for mercury detection
Used in the measurement of trace amounts of volatile heavy metals such as mercury, cold vapour AFS makes use of the unique characteristic of mercury that allows vapour measurement at room temperature. Free mercury atoms in a carrier gas are excited by a collimated ultraviolet light source at a wavelength of 253.7 nanometres. The excited atoms re-radiate their absorbed energy (fluoresce) at this same wavelength. Unlike the directional excitation source, the fluorescence is omnidirectional and may thus be detected using a photomultiplier tube or UV photodiode.
Gold coated traps may be used to collect mercury in ambient air or other media. The traps are then heated, releasing the mercury from the gold while passing argon through the cartridge. This preconcentrates the mercury, increasing sensitivity, and also transfers the mercury into an inert gas.
Transportable analysers
A number of companies have commercialized mercury detection via CVAFS and produced transportable analysers capable of measuring mercury in ambient air. These devices can measure levels in the low parts per quadrillion range (10⁻¹⁵).
EPA-approved methods
Various analytical methods approved by the United States Environmental Protection Agency (EPA) for measuring mercury in wastewater are in common use. EPA Methods 245.7 and 1631 are commonly used for measurement of industrial wastewater using CVAFS.
See also
Other analytical techniques suitable for analyzing heavy metals in air or water:
Inductively coupled plasma mass spectrometry
Atomic absorption spectroscopy
References
Spectroscopy
Fluorescence techniques
|
https://en.wikipedia.org/wiki/SMS%20Magdeburg
|
SMS ("His Majesty's Ship ") was a lead ship of the of light cruisers in the German (Imperial Navy). Her class included three other ships: , , and . was built at the AG Weser shipyard in Bremen from 1910 to August 1912, when she was commissioned into the High Seas Fleet. The ship was armed with a main battery of twelve 10.5 cm SK L/45 guns and had a top speed of . was used as a torpedo test ship after her commissioning until the outbreak of World War I in August 1914, when she was brought to active service and deployed to the Baltic.
In the Baltic, Magdeburg fired the first shots of the war against the Russians on 2 August, when she shelled the port of Libau. She participated in a series of bombardments of Russian positions until late August. On the 26th, she participated in a sweep of the entrance to the Gulf of Finland; while steaming off the Estonian coast, she ran aground off the island of Odensholm and could not be freed. A pair of Russian cruisers appeared and seized the ship. Fifteen crew members were killed in the brief engagement. The Russians recovered three intact German code books, one of which they passed to the British. The ability to decrypt German wireless signals provided the British with the ability to ambush German units on several occasions during the war, including the Battle of Jutland. The Russians partially scrapped Magdeburg while she remained grounded before completely destroying the wreck.
Design
was long overall and had a beam of and a draft of forward. She displaced normally and up to at full load. Her propulsion system consisted of three sets of Bergmann steam turbines driving three screw propellers. They were designed to give , but reached in service. These were powered by sixteen coal-fired Marine-type water-tube boilers, although they were later altered to use fuel oil that was sprayed on the coal to increase its burn rate. These gave the ship a top speed of . carried of coal, and an additional of oil that gave her a range of approximately a
|
https://en.wikipedia.org/wiki/Modulo-N%20code
|
Modulo-N code is a lossy compression algorithm used to compress correlated data sources using modular arithmetic.
Compression
When applied to two nodes in a network whose data are in close range of each other, modulo-N code requires one node (say, the odd node) to send its value $D_o$ as raw data; the even node is required to send its coded value $M_e = D_e \bmod N$. Hence the name modulo-N code.
Since at least $\lceil \log_2 K \rceil$ bits are required to represent a number $K$ in binary, the modulo-coded data of the two nodes requires $\lceil \log_2 D_o \rceil + \lceil \log_2 N \rceil$ bits. As we can generally expect $N \le D_e$ (the modulus is chosen smaller than the raw values), fewer bits are needed after coding. This is how compression is achieved.
The compression ratio achieved is
$$\frac{\lceil \log_2 D_o \rceil + \lceil \log_2 D_e \rceil}{\lceil \log_2 D_o \rceil + \lceil \log_2 N \rceil}.$$
Decompression
At the receiver, joint decoding completes the process of extracting the data and rebuilding the original values. The value from the even node is reconstructed under the assumption that it must be close to the data from the odd node. Hence the decoding algorithm retrieves the even node data as the value congruent to $M_e$ modulo $N$ that lies closest to $D_o$:
$$D_e = \operatorname{CLOSEST}(D_o,\ N \cdot k + M_e) \quad (k \in \mathbb{Z}).$$
The decoder essentially finds the integer $k$ for which $N \cdot k + M_e$ is the closest match to $D_o$, and that value is declared as the decoded $D_e$.
Example
For a mod-8 code, we have
Encoder
D_o = 43, D_e = 47
M_o = D_o = 43, M_e = D_e mod 8 = 47 mod 8 = 7
Decoder
received M_o = 43, M_e = 7
D_o = 43, D_e = CLOSEST(43, 8·k + 7)
D_o = 43, D_e = 47
Modulo-N decoding is similar to phase unwrapping and has the same limitation: if the difference from one node to the next is more than N/2 (if the phase changes from one sample to the next by more than $\pi$), then decoding leads to an incorrect value.
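A minimal Python sketch of the scheme (the function names are illustrative, not from any standard library). Note that the pair (43, 47) in the worked example differs by exactly N/2 = 4, the ambiguity boundary just described, so the demonstration below uses 45 to stay safely inside the decodable range:

def encode(value, n):
    # The "even" node transmits only its residue modulo n.
    return value % n

def decode(reference, residue, n):
    # Rebuild the even node's value: the number congruent to `residue`
    # (mod n) that lies closest to the odd node's raw value `reference`.
    base = (reference - residue) // n
    candidates = [n * k + residue for k in (base, base + 1)]
    return min(candidates, key=lambda c: abs(c - reference))

print(decode(43, encode(45, 8), 8))  # -> 45 (difference 2 < N/2, so decodable)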
See also
DISCUS is a more sophisticated technique for compressing correlated data sources.
Delta encoding is a related algorithm used in lossless compression algorithms designed for correlated data sources.
Information theory
Data compression
Wireless sensor network
|
https://en.wikipedia.org/wiki/Princess%20Tomato%20in%20the%20Salad%20Kingdom
|
is a video game by Hudson Soft originally released in 1984 for the NEC PC-8801, NEC PC-6001, FM-7 and MSX Japanese home computers.
It was ported on May 27, 1988, to the Famicom, and February 8, 1991 for the Nintendo Entertainment System in North America. It was also released on the Wii's Virtual Console in Japan on January 19, 2010, and in North America on February 8.
The characters are primarily cartoon-like anthropomorphic fruits and vegetables, though the game does contain some human characters, including Princess Tomato's sister, Lisa, and the villainous Farmies.
Plot
Taking the role of Sir Cucumber, a knight, the player is assigned by King Broccoli (now deceased) to defeat the evil Minister Pumpkin, who has kidnapped Princess Tomato. Early on, Sir Cucumber gains a sidekick, Percy the baby persimmon, who offers advice and helps throughout the quest (and always refers to Sir Cucumber as "Boss").
Gameplay
Princess Tomato in the Salad Kingdom plays similarly to a text adventure, though due to the NES's lack of a keyboard accessory, the possible commands are represented by buttons which line both sides of the screen. The commands are fixed and do not change during gameplay. Primarily, the game consists of still screens, with the exception of the "finger wars", mazes and occasional animated character, such as the octoberry and fernbirds. Players can issue commands to the game's protagonist. While the player may run into difficulty determining which actions will advance the game, the only way to "lose" is by failing to defeat the end-game boss, Minister Pumpkin, in a final game of "finger wars".
Legacy
Princess Tomato makes an appearance in Super Bomberman R as a playable DLC character named "Princess Tomato Bomber". She was added in the 2.0 update released in November 2017.
See also
List of Nintendo Entertainment System games
List of Hudson Soft games
References
External links
1984 video games
1991 video games
Adventure games
Fictional princesses
FM-7
|
https://en.wikipedia.org/wiki/Born%20coordinates
|
In relativistic physics, the Born coordinate chart is a coordinate chart for (part of) Minkowski spacetime, the flat spacetime of special relativity. It is often used to analyze the physical experience of observers who ride on a ring or disk rigidly rotating at relativistic speeds, so called Langevin observers. This chart is often attributed to Max Born, due to his 1909 work on the relativistic physics of a rotating body. For overview of the application of accelerations in flat spacetime, see Acceleration (special relativity) and proper reference frame (flat spacetime).
Based on experience with inertial scenarios (i.e. measurements in inertial frames), Langevin observers synchronize their clocks by the standard Einstein convention or by slow clock synchronization, respectively (both internal synchronizations). For a given Langevin observer this method works perfectly: within its immediate vicinity clocks are synchronized and light propagates isotropically in space. But the experience when the observers try to synchronize their clocks along a closed path in space is puzzling: there are always at least two neighboring clocks which have different times. To remedy the situation, the observers agree on an external synchronization procedure (coordinate time t — or for ring-riding observers, a proper coordinate time for a fixed radius r). By this agreement, Langevin observers riding on a rigidly rotating disk will conclude from measurements of small distances between themselves that the geometry of the disk is non-Euclidean. Regardless of which method they use, they will conclude that the geometry is well approximated by a certain Riemannian metric, namely the Langevin–Landau–Lifschitz metric. This is in turn very well approximated by the geometry of the hyperbolic plane (with the negative curvatures −3ω² and −3ω²r², respectively). But if these observers measure larger distances, they will obtain different results, depending upon which method of measurement they use! In all
|
https://en.wikipedia.org/wiki/Command%20language
|
A command language is a language for job control in computing. It is a domain-specific and interpreted language; common examples of a command language are shell or batch programming languages.
These languages can be used directly at the command line, but can also automate tasks that would normally be performed manually at the command line. They share this domain—lightweight automation—with scripting languages, though a command language usually has stronger coupling to the underlying operating system. Command languages often have either very simple grammars or syntaxes very close to natural language, to flatten the learning curve, as with many other domain-specific languages.
See also
Command-line interface
In the Beginning... Was the Command Line
Batch processing
Job (computing)
Notes
External links
A longer definition.
Programming language topics
|
https://en.wikipedia.org/wiki/Network%20Chemistry
|
Network Chemistry was a Wi-Fi security startup based in Redwood City, California. The firm was founded in 2002 by several co-founders including Gary Ramah, Rob Markovich and Dr. Christopher Waters and is backed by venture capital firms such as San Francisco-based Geneva Venture Partners, Innovacom and In-Q-Tel, the investment arm of the CIA.
The company sold products such as RFprotect Distributed, a wireless intrusion detection system; RFprotect Endpoint, a laptop security product; and RFprotect Mobile, a portable tool for analyzing network security. The final product was RFprotect Scanner, a wired-side rogue access point detection and mitigation system utilizing patent-pending device fingerprinting technology.
Network Chemistry also created the Wireless Vulnerabilities and Exploits database, which is the result of a collaborative industry effort to catalog and define exploits and vulnerabilities specifically related to the use of wireless technologies in IT networks.
The wireless security business of Network Chemistry was sold to Aruba Networks (NASDAQ: ARUN) in July 2007.
External links
“Network Chem Gets $6 million” April 2005 article on RedHerring.com
American companies established in 2002
American companies disestablished in 2007
Computer companies established in 2002
Computer companies disestablished in 2007
Defunct computer hardware companies
Defunct computer companies of the United States
Networking hardware companies
|
https://en.wikipedia.org/wiki/Word%20error%20rate
|
Word error rate (WER) is a common metric of the performance of a speech recognition or machine translation system.
The general difficulty of measuring performance lies in the fact that the recognized word sequence can have a different length from the reference word sequence (supposedly the correct one). The WER is derived from the Levenshtein distance, working at the word level instead of the phoneme level. The WER is a valuable tool for comparing different systems as well as for evaluating improvements within one system. This kind of measurement, however, provides no details on the nature of translation errors and further work is therefore required to identify the main source(s) of error and to focus any research effort.
This problem is solved by first aligning the recognized word sequence with the reference (spoken) word sequence using dynamic string alignment. This issue has been examined through a theory called the power law, which states a correlation between perplexity and word error rate.
Word error rate can then be computed as:
$$\mathit{WER} = \frac{S + D + I}{N} = \frac{S + D + I}{S + D + C}$$
where
S is the number of substitutions,
D is the number of deletions,
I is the number of insertions,
C is the number of correct words,
N is the number of words in the reference (N=S+D+C)
The intuition behind 'deletion' and 'insertion' is how to get from the reference to the hypothesis. So if we have the reference "This is wikipedia" and hypothesis "This _ wikipedia", we call it a deletion.
When reporting the performance of a speech recognition system, sometimes word accuracy (WAcc) is used instead:
$$\mathit{WAcc} = 1 - \mathit{WER} = \frac{N - S - D - I}{N}$$
Note that since N is the number of words in the reference, the word error rate can be larger than 1.0, and thus, the word accuracy can be smaller than 0.0.
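A compact Python sketch of the word-level edit-distance computation described above (a standard dynamic-programming implementation, not tied to any particular speech toolkit):

def word_error_rate(reference, hypothesis):
    ref, hyp = reference.split(), hypothesis.split()
    # d[i][j] = minimum edits turning the first i reference words
    # into the first j hypothesis words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i                       # i deletions
    for j in range(len(hyp) + 1):
        d[0][j] = j                       # j insertions
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            sub = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + sub)   # substitution or match
    return d[len(ref)][len(hyp)] / len(ref)

print(word_error_rate("this is wikipedia", "this wikipedia"))  # one deletion: 1/3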
Experiments
It is commonly believed that a lower word error rate shows superior accuracy in recognition of speech, compared with a higher word error rate. However, at least one study has shown that this may not be true. In a Microsoft Research experiment, it was shown
|
https://en.wikipedia.org/wiki/Aladdin%20Knowledge%20Systems
|
Aladdin Knowledge Systems (formerly and ) was a company that produced software for digital rights management and Internet security. The company was acquired by Safenet Inc, in 2009. Its corporate headquarters are located in Belcamp, MD.
History
Aladdin Knowledge Systems was founded in 1985 by Jacob (Yanki) Margalit, when he was 23 years old; he was soon joined by his brother Danny Margalit, who took the responsibility for product development at the age of 18, while at the same time completing a Mathematics and Computer Science degree at Tel Aviv University. In its early years the company developed two product lines, an artificial intelligence package (which was dropped early on) and a hardware product to prevent unauthorized software copying, similar to digital rights management. Margalit raised just $10,000 as an initial capital for the company.
The digital rights management product became a success and by 1993 generated sales of $4,000,000. The same year that company had an initial public offering on NASDAQ raising $7,900,000. In 2004 the company's shares were also listed on the Tel Aviv Stock Exchange. By 2007 the company's annual revenues reached over $105 million.
In mid-2008, Vector Capital was attempting to purchase Aladdin. Vector initially offered $14.50 per share, but Aladdin's founder Margalit refused the offer arguing that the company was worth more. Aladdin's shareholders agreed on the merger in February 2009 at $11.50 per share, in cash. In March 2009, Vector Capital acquired Aladdin and officially merged it with SafeNet.
Corporate timeline
1985 – Aladdin Knowledge Systems was established
1993 – Aladdin held an initial public offering
1996 – Aladdin acquired the German company FAST
1998 – Aladdin patented USB smart card-based authentication tokens
December 1998 – Aladdin acquired the software protection business of EliaShim
1999 – Aladdin acquired the eSafe "content security" business of EliaShim
2000 – Aladdin acquired 10% of Comsec
2001 –
|
https://en.wikipedia.org/wiki/StICQ
|
stICQ is an ICQ client for mobile phones running the Symbian OS.
StICQ was written by the Russian programmer Sergey Taldykin. StICQ is a native Symbian application (.SIS) for instant messaging over Internet for the ICQ network (using the OSCAR protocol).
It supports all main statuses including "Not Available", "Invisible" etc., contact search using ICQ UID, black lists, multi-user support, sound announcements and even SMS sending using default ICQ server.
Its distinguishing features are its small size, low memory usage and relatively stable operation. One of the key features of the client is its ability to suspend outgoing data until GPRS coverage is available. It also preserves the user's status during such outages, whereas most other mobile clients report a connection problem and drop the user offline.
Currently, stICQ does not support smiley pictures but has a unique feature of quick emoticon input using the call button (a special plugin is required).
Notably, stICQ supports the yellow "Ready to chat" extended status, while "Depressive", as well as "At home", "At work", etc., are displayed as "Offline". This has led some to call stICQ an "anti-depressive ICQ".
The source code has been sold to the development team of the Quiet Internet Pager messenger, and version 1.01 of QIP for Symbian has since been released.
StICQ is free for download, as are a wide variety of mods changing status icons and menu text.
Keypad shortcuts
Pressing asterisk in a contact list window allows you to maximize the program window. It will also affect message windows.
Pressing the green button inserts smiley templates, once the templates file (stICQ.tpl) has been downloaded and installed.
Known bugs
StICQ is known to drop out when receiving large amounts of text in one message. Thus, users should warn their interlocutors not to send messages that exceed 10–15 phone display lines.
When using StICQ with a T9 dictionary, users should press any cursor key to get rid of "Previous" command for right button after sending a message or the
|
https://en.wikipedia.org/wiki/List%20of%20food%20additives
|
Food additives are substances added to food to preserve flavor or enhance its taste, appearance, or other qualities.
Purposes
Additives are used for many purposes but the main uses are:
Acids Food acids are added to make flavors "sharper", and also act as preservatives and antioxidants. Common food acids include vinegar, citric acid, tartaric acid, malic acid, folic acid, fumaric acid, and lactic acid.
Acidity regulators Acidity regulators are used to change or otherwise control the acidity and alkalinity of foods.
Anticaking agents Anticaking agents keep powders such as milk powder from caking or sticking.
Antifoaming agents Antifoaming agents reduce or prevent foaming in foods.
Antioxidants Antioxidants such as vitamin C act as preservatives by inhibiting the effects of oxygen on food, and can be beneficial to health.
Bulking agents Bulking agents such as starch are additives that increase the bulk of a food without affecting its nutritional value.
Food coloring Colorings are added to food to replace colors lost during preparation, or to make food look more attractive.
Color retention agents In contrast to colorings, color retention agents are used to preserve a food's existing color.
Emulsifiers Emulsifiers allow water and oils to remain mixed together in an emulsion, as in mayonnaise, ice cream, and homogenized milk.
Flavors Flavors are additives that give food a particular taste or smell, and may be derived from natural ingredients or created artificially.
Flavor enhancers Flavor enhancers enhance a food's existing flavors. They may be extracted from natural sources (through distillation, solvent extraction, maceration, among other methods) or created artificially.
Flour treatment agents Flour treatment agents are added to flour to improve its color or its use in baking.
Glazing agents Glazing agents provide a shiny appearance or protective coating to foods.
Humectants Humectants prevent foods from drying out.
Tracer gas Tracer gas allow for pac
|
https://en.wikipedia.org/wiki/Web%20Services%20Conversation%20Language
|
The Web Service Conversation Language (WSCL) proposal defines the overall input and output message sequences for one web service using a finite state automaton (FSA) over the alphabet of message types.
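As an informal illustration of the idea (the states and message types below are invented for this example, not taken from the WSCL specification), a conversation can be validated against such an automaton:

transitions = {
    # (state, message type) -> next state: a toy purchasing conversation
    ("Start", "CatalogRequest"): "Browsing",
    ("Browsing", "PurchaseOrder"): "Ordered",
    ("Ordered", "Confirmation"): "End",
}

def conversation_is_valid(messages, state="Start"):
    # A message sequence is valid if it drives the automaton from the
    # start state to the final state using only defined transitions.
    for message in messages:
        if (state, message) not in transitions:
            return False
        state = transitions[(state, message)]
    return state == "End"

print(conversation_is_valid(["CatalogRequest", "PurchaseOrder", "Confirmation"]))  # True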
External links
Web Service Conversation Language (WSCL) proposal
Web service specifications
World Wide Web Consortium standards
XML-based standards
Web services
|
https://en.wikipedia.org/wiki/MySociety
|
mySociety is a UK-based registered charity, previously named UK Citizens Online Democracy. It began as a UK-focused organisation with the aim of making online democracy tools for UK citizens. However, those tools were open source, so that the code could be (and soon was) redeployed in other countries.
mySociety went on to simplify and internationalise its code and through the now dormant Poplus project, encouraged others to share open source code that would minimise the amount of duplication in civic tech coding.
Like many non-profits, mySociety sustains itself with a mixture of grant funding and commercial work, providing software and development services to local government and other organisations.
mySociety was founded by Tom Steinberg in September 2003, and started activity after receiving a £250,000 grant in September 2004. Steinberg says that it was inspired by a collaboration with his then-flatmate James Crabtree which spawned Crabtree's article "Civic hacking: a new agenda for e-democracy".
In March 2015, Steinberg announced his decision to stand down as executive director of mySociety. In July of that year, Mark Cridge became the organisation's new CEO.
Projects
TheyWorkForYou is a parliamentary monitoring website which aims to make it easier for UK citizens to understand what is going on in Westminster as well as Scottish Parliament, the Welsh Assembly and the Northern Ireland Assembly. It also helps create accountability for UK politicians by publishing a complete archive of every word spoken in Parliament, along with a voting record and other details for each MP, past and present.
FixMyStreet platform is free and open source software which enables anyone to run a map based website and app that helps people inform their local authority of problems needing their attention, such as potholes, broken streetlamps, etc. The UK version is FixMyStreet.com. mySociety also provide FixMyStreet as a report making system for several local and transport authorit
|
https://en.wikipedia.org/wiki/Lyceum%20TV
|
The RCA Lyceum TV was a commercial monitor/receiver with a large input/output panel on the back and a long grounded plug. During the mid-1980s, RCA released the Colortrak 2000, a television identical to the Dimensia table-top model. Even though the Colortrak was considered the mid-range model, those bearing the name Colortrak 2000 were considered high-end, along with the Dimensia. The Lyceum TV, Dimensia, and Colortrak 2000 models all had essentially the same chassis (a wood grain veneer, or black laminate for some Dimensias, and fabric-covered speakers on the sides of the cabinet).
Many Colortrak 2000, Lyceum, and Dimensia TVs came packaged with a very large remote control, the Digital Command Center. There are several different versions of the Digital Command Center, but a main feature was that it could control an array of selected RCA components, all with the one remote – a universal remote only for RCA products, so to speak. The Dimensia version of the remote was called the "Dimensia-Intelligent Audio Video" and had identical buttons to the Digital Command Center.
See also
RCA Dimensia
Colortrak 2000
References
cedmagic.com
External links
RCA Dimensia Channel on YouTube
RCA Dimensia Facebook page
RCA brands
Television technology
|
https://en.wikipedia.org/wiki/Darboux%20derivative
|
The Darboux derivative of a map between a manifold and a Lie group is a variant of the standard derivative. It is arguably a more natural generalization of the single-variable derivative. It allows a generalization of the single-variable fundamental theorem of calculus to higher dimensions, in a different vein than the generalization that is Stokes' theorem.
Formal definition
Let $G$ be a Lie group, and let $\mathfrak{g}$ be its Lie algebra. The Maurer–Cartan form, $\omega_G$, is the smooth $\mathfrak{g}$-valued $1$-form on $G$ (cf. Lie algebra valued form) defined by
$$\omega_G(v) = (dL_{g^{-1}})_g(v)$$
for all $g \in G$ and $v \in T_g G$. Here $L_g$ denotes left multiplication by the element $g$ and $(dL_{g^{-1}})_g$ is its derivative at $g$.
Let $f\colon M \to G$ be a smooth map from a smooth manifold $M$ to $G$. Then the Darboux derivative of $f$ is the smooth $\mathfrak{g}$-valued $1$-form
$$\omega_f := f^{*}\omega_G,$$
the pullback of $\omega_G$ by $f$. The map $f$ is called an integral or primitive of $\omega_f$.
More natural?
The reason that one might call the Darboux derivative a more natural generalization of the derivative of single-variable calculus is this. In single-variable calculus, the derivative of a function assigns to each point in the domain a single number. According to the more general manifold ideas of derivatives, the derivative assigns to each point in the domain a linear map from the tangent space at the domain point to the tangent space at the image point. This derivative encapsulates two pieces of data: the image of the domain point and the linear map. In single-variable calculus, we drop some information. We retain only the linear map, in the form of a scalar multiplying agent (i.e. a number).
One way to justify this convention of retaining only the linear map aspect of the derivative is to appeal to the (very simple) Lie group structure of under addition. The tangent bundle of any Lie group can be trivialized via left (or right) multiplication. This means that every tangent space in may be identified with the tangent space at the identity, , which is the Lie algebra of . In this case, left and right multiplication are simply translation. By po
|
https://en.wikipedia.org/wiki/Exception%20safety
|
Exception safety is the state of code working correctly when exceptions are thrown. To aid in ensuring exception safety, C++ standard library developers have devised a set of exception safety levels, contractual guarantees of the behavior of a data structure's operations with regards to exceptions. Library implementers and clients can use these guarantees when reasoning about exception handling correctness. The exception safety levels apply equally to other languages and error-handling mechanisms.
History
As David Abrahams writes, "nobody ever spoke of 'error-safety' before C++ had exceptions." The term appeared as the topic of publications in JTC1/SC22/WG21, the C++ standard committee, as early as 1994. Exception safety for the C++ standard library was first formalized for STLport by Abrahams, establishing the basic safety/strong safety distinction. This was extended to the modern basic/strong/nothrow guarantees in a later proposal.
Background
Exceptions provide a form of non-local control flow, in that an exception may "bubble up" from a called function. This bubbling can cause an exception safety bug by breaking invariants of a mutable data structure, as follows:
A step of an operation on a mutable data structure modifies the data and breaks an invariant.
An exception is thrown and control "bubbles up", skipping the rest of the operation's code that would restore the invariant
The exception is caught and recovered from, or a finally block is entered
The data structure with broken invariant is used by code that assumes the invariant, resulting in a bug
Code with a bug such as the above can be said to be "exception unsafe".
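The guarantees were formulated for the C++ standard library, but the failure sequence above can be sketched in any language with exceptions. The following contrived Python example (invented for illustration) shows steps 1 through 4 breaking a two-list container's invariant:

class PairStore:
    """Invariant: `keys` and `values` always have the same length."""
    def __init__(self):
        self.keys, self.values = [], []

    def add_unsafe(self, key, value):
        self.keys.append(key)              # step 1: invariant broken here
        self.values.append(check(value))   # step 2: may raise before restoring it

    def add_safe(self, key, value):
        checked = check(value)             # do everything that can throw first,
        self.keys.append(key)              # then mutate: the basic guarantee
        self.values.append(checked)

def check(value):
    if value is None:
        raise ValueError("value must not be None")
    return value

store = PairStore()
try:
    store.add_unsafe("k", None)
except ValueError:                          # step 3: exception caught and "handled"
    pass
print(len(store.keys), len(store.values))   # step 4: prints "1 0" -- invariant broken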
Classification
The C++ standard library provides several levels of exception safety (in decreasing order of safety):
No-throw guarantee, also known as failure transparency: Operations are guaranteed to succeed and satisfy all requirements even in exceptional situations. If an exception occurs, it will be handled internally and n
|
https://en.wikipedia.org/wiki/Profile%20%28engineering%29
|
In standardization, a profile is a subset internal to a specification. Aspects of a complex technical specification may necessarily have more than one interpretation, and there are probably many optional features. These aspects constitute a profile of the standard. Two implementations engineered from the same description may not interoperate because they implement different profiles of the standard. Vendors can even ignore features that they view as unimportant and yet prevail in the long run.
The use of profiles in these ways can force one interpretation, or create de facto standards from official standards. Engineers can design or procure by using a profile to ensure interoperability. For example, the International Standard Profile, ISP, is used by the ISO in their ISO ISP series of standards; in the context of OSI networking, Britain uses the UK-GOSIP profile and the US uses US-GOSIP; there are also various mobile profiles adopted by the W3C for web standards. In particular, implementations of standards on mobile devices often have significant limitations compared to their traditional desktop implementations, even if the standard which governs both permits such limitations.
In structural engineering a profile means a hot-rolled structural steel shape like an I-beam.
In civil engineering, a profile consists of a plotted line which indicates grades and distances (and typically depths of cut and/or elevations of fill) for excavation and grading work. Constructors of roadways, railways (and similar works) normally chart the profile along the centerline. A profile can also indicate the vertical slope(s) (changes in elevation) in a pipeline or similar structure. Civil engineers always depict profile as a side (cross section) view (as opposed to an overhead (plan) view).
Material fabrication
In fabricating, a profile consists of the more-or-less complex outline of a shape to be cut in a sheet of material such as laminated plastic, aluminium alloy or steel plate. In modern
|
https://en.wikipedia.org/wiki/Ergun%20equation
|
The Ergun equation, derived by the Turkish chemical engineer Sabri Ergun in 1952, expresses the friction factor in a packed column as a function of the modified Reynolds number.
Equation
$$f_p = \frac{150}{\mathrm{Gr}_p} + 1.75$$
where $f_p$ and $\mathrm{Gr}_p$ are defined as
$$f_p = \frac{\Delta p}{L}\,\frac{D_p}{\rho\, v_s^2}\left(\frac{\epsilon^3}{1-\epsilon}\right)$$
and
$$\mathrm{Gr}_p = \frac{\rho\, v_s\, D_p}{(1-\epsilon)\,\mu}$$
where:
$\mathrm{Gr}_p$ is the modified Reynolds number,
$f_p$ is the packed bed friction factor,
$\Delta p$ is the pressure drop across the bed,
$L$ is the length of the bed (not the column),
$D_p$ is the equivalent spherical diameter of the packing,
$\rho$ is the density of the fluid,
$\mu$ is the dynamic viscosity of the fluid,
$v_s$ is the superficial velocity (i.e. the velocity that the fluid would have through the empty tube at the same volumetric flow rate),
$\epsilon$ is the void fraction (porosity) of the bed,
$\mathrm{Re}$ is the particle Reynolds number (based on superficial velocity), related to the above by
$$\mathrm{Re} = \frac{\rho\, v_s\, D_p}{\mu} = \mathrm{Gr}_p\,(1-\epsilon).$$
Extension
To calculate the pressure drop in a given reactor, the following equation may be deduced:
$$\frac{\Delta p}{L} = \frac{150\,\mu\,(1-\epsilon)^2\, v_s}{\epsilon^3 D_p^2} + \frac{1.75\,(1-\epsilon)\,\rho\, v_s^2}{\epsilon^3 D_p}$$
This arrangement of the Ergun equation makes clear its close relationship to the simpler Kozeny-Carman equation which describes laminar flow of fluids across packed beds via the first term on the right hand side. On the continuum level, the second order velocity term demonstrates that the Ergun equation also includes the pressure drop due to inertia, as described by the Darcy–Forchheimer equation.
The extension of the Ergun equation to fluidized beds, where the solid particles flow with the fluid, is discussed by Akgiray and Saatçı (2001).
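A direct transcription of the deduced pressure-drop equation into Python (SI units; the sample bed parameters are illustrative only):

def ergun_pressure_drop(length, d_p, rho, mu, v_s, eps):
    # Pressure drop (Pa) across a packed bed, per the extended Ergun equation:
    # a viscous (Kozeny-Carman-like) term plus an inertial term.
    viscous = 150.0 * mu * (1.0 - eps) ** 2 * v_s / (eps ** 3 * d_p ** 2)
    inertial = 1.75 * (1.0 - eps) * rho * v_s ** 2 / (eps ** 3 * d_p)
    return (viscous + inertial) * length

# 1 m bed of 5 mm spheres, water-like fluid, porosity 0.4, superficial velocity 0.1 m/s
print(ergun_pressure_drop(length=1.0, d_p=5e-3, rho=1000.0, mu=1e-3, v_s=0.1, eps=0.4))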
See also
Hagen–Poiseuille equation
Kozeny–Carman equation
References
Ergun, Sabri. "Fluid flow through packed columns." Chem. Eng. Prog. 48 (1952).
Ö. Akgiray and A. M. Saatçı, Water Science and Technology: Water Supply, Vol:1, Issue:2, pp. 65–72, 2001.
Equations
Chemical process engineering
Fluid dynamics
|
https://en.wikipedia.org/wiki/Potentially%20Hazardous%20Food
|
Potentially Hazardous Food is a term used by food safety organizations to classify foods that require time-temperature control to keep them safe for human consumption. A PHF is a food that:
Contains moisture – usually regarded as a water activity greater than 0.85
Contains protein
Is neutral to slightly acidic – typically having a pH between 4.6 and 7.5
US FDA Definition
Potentially Hazardous Food has been redefined by the US Food and Drug Administration in the 2013 FDA Food Code as "Time/Temperature Control for Safety Food". Pages 22 and 23 (PDF pages 54 and 55) state the following:
"Time/temperature control for safety food" means a FOOD that requires time/temperature control for safety (TCS) to limit pathogenic microorganism growth or toxin formation.
"Time/temperature control for safety food" includes:
An animal FOOD that is raw or heat-treated; a plant FOOD that is heat-treated or consists of raw seed sprouts, cut melons, cut leafy greens, cut tomatoes or mixtures of cut tomatoes that are not modified in a way so that they are unable to support pathogenic microorganism growth or toxin formation, or garlic-in-oil mixtures that are not modified in a way so that they are unable to support pathogenic microorganism growth or toxin formation; and
Except as specified in Subparagraph (3)(d) of this definition, a FOOD that because of the interaction of its AW and pH values is designated as Product Assessment Required (PA) in Table A or B of this definition:
"Time/temperature control for safety food" does not include:
An air-cooled hard-boiled EGG with shell intact, or an EGG with shell intact that is not hard-boiled, but has been pasteurized to destroy all viable salmonellae;
A FOOD in an unopened HERMETICALLY SEALED CONTAINER that is commercially processed to achieve and maintain commercial sterility under conditions of non-refrigerated storage and distribution;
A FOOD that because of its pH or AW value, or interaction of AW and pH values, is designated as a
|
https://en.wikipedia.org/wiki/Test%20CD
|
A test CD is a compact disc containing tracks of musical and technical tests and demonstrations. Most of the tracks are made of electronic signals and pure frequencies. The purpose of these specialized compact discs is to make accurate tests and calibrate audio equipment.
Releases
A wide variety of CD-DA test discs have been produced in the past, and a few are still in production:
Stereophile Test CD 2
CD-CHECK Test Disc
NAB Broadcast & Audio Test CD, Vol 1 (still in production)
NAB Broadcast & Audio Test CD, Vol 2 (still in production)
Precision Test Signals
The Ultimate Test CD
CBS Records CD-1 Standard Test Disc (with EIA standard signals)
EIAJ CD-1 Standard Test Disc (YGDS 13) (EIAJ Standard CP-308)
Denon Audio Technical CD 38C39-7147
Japan Audio Society Audio Test CD-1 (YDDS-2)
Philips Test Sample 4a
Sony Test CD Type 3 YEDS-7
Technics CD Test Disc SH-CD001
See also
Audio equipment testing
Super Audio CD
References
Compact disc
Audio electronics
|
https://en.wikipedia.org/wiki/Efficient%20coding%20hypothesis
|
The efficient coding hypothesis was proposed by Horace Barlow in 1961 as a theoretical model of sensory coding in the brain. Within the brain, neurons communicate with one another by sending electrical impulses referred to as action potentials or spikes. One goal of sensory neuroscience is to decipher the meaning of these spikes in order to understand how the brain represents and processes information about the outside world. Barlow hypothesized that the spikes in the sensory system formed a neural code for efficiently representing sensory information. By efficient, Barlow meant that the code minimized the number of spikes needed to transmit a given signal. This is somewhat analogous to transmitting information across the internet, where different file formats can be used to transmit a given image. Different file formats require different numbers of bits for representing the same image at a given distortion level, and some are better suited for representing certain classes of images than others. According to this model, the brain is thought to use a code which is suited for representing visual and audio information representative of an organism's natural environment.
Efficient coding and information theory
The development of Barlow's hypothesis was influenced by information theory, introduced by Claude Shannon only a decade before. Information theory provides the mathematical framework for analyzing communication systems. It formally defines concepts such as information, channel capacity, and redundancy. Barlow's model treats the sensory pathway as a communication channel where neuronal spiking is an efficient code for representing sensory signals. The spiking code aims to maximize available channel capacity by minimizing the redundancy between representational units. Barlow was not the first to introduce the idea: it already appears in a 1954 article written by F. Attneave.
A key prediction of the efficient coding hypothesis is that sensory
|
https://en.wikipedia.org/wiki/Ruth%20Teitelbaum
|
Ruth Teitelbaum ( Lichterman; February 1, 1924 – August 9, 1986) was one of the first computer programmers in the world. Teitelbaum was one of the original programmers for the ENIAC computer.
The other five ENIAC programmers were Jean Bartik, Betty Holberton, Kathleen Antonelli, Marlyn Meltzer, and Frances Spence.
Early life and education
Teitelbaum was born Ruth Lichterman in The Bronx, New York, on February 1, 1924. She was the elder of two children, and the only daughter, of Sarah and Simon Lichterman, a teacher. Her parents were Jewish immigrants from Russia. She graduated from Hunter College with a B.Sc. in Mathematics.
Career
Teitelbaum was hired by the Moore School of Electrical Engineering at the University of Pennsylvania to compute ballistics trajectories. The Moore School was funded by the US Army during the Second World War. Here a group of about 80 women worked manually calculating ballistic trajectories - complex differential calculations.
In June 1943, the Army decided to fund an experimental project - the first all-electronic digital computer called the Electronic Numerical Integrator and Computer (ENIAC). The computer was a huge machine with 40 black 8-foot panels. The programmers had to physically program it using 3000 switches, and telephone switching cords in a dozen trays, to route the data, and the program, through the machine. This is the reason why these women were called "computers".
Along with Marlyn Meltzer, Teitelbaum was part of a special area of the ENIAC project to calculate ballistic trajectory equations using analog technology. They taught themselves and others certain functions of the ENIAC and helped prepare the ballistics software. In 1946, the ENIAC computer was unveiled before the public and the press. These six women were the only generation of programmers to program the ENIAC.
After the war, Teitelbaum traveled with ENIAC to the Ballistics Research Laboratory at the Aberdeen Proving Ground where she remained for two mo
|
https://en.wikipedia.org/wiki/Pastry%20%28DHT%29
|
Pastry is an overlay network and routing network for the implementation of a distributed hash table (DHT) similar to Chord. The key–value pairs are stored in a redundant peer-to-peer network of connected Internet hosts. The protocol is bootstrapped by supplying it with the IP address of a peer already in the network and from then on via the routing table which is dynamically built and repaired. It is claimed that because of its redundant and decentralized nature there is no single point of failure and any single node can leave the network at any time without warning and with little or no chance of data loss. The protocol is also capable of using a routing metric supplied by an outside program, such as ping or traceroute, to determine the best routes to store in its routing table.
Overview
Although the distributed hash table functionality of Pastry is almost identical to other DHTs, what sets it apart is the routing overlay network built on top of the DHT concept. This allows Pastry to realize the scalability and fault tolerance of other networks, while reducing the overall cost of routing a packet from one node to another by avoiding the need to flood packets. Because the routing metric is supplied by an external program based on the IP address of the target node, the metric can be easily switched to shortest hop count, lowest latency, highest bandwidth, or even a general combination of metrics.
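A highly simplified Python sketch of the prefix-routing decision (hex node IDs and a toy routing table; a real Pastry node also maintains leaf and neighborhood sets, and applies the external routing metric when filling the table):

def shared_prefix_len(a, b):
    # Number of leading hex digits two IDs have in common.
    n = 0
    for x, y in zip(a, b):
        if x != y:
            break
        n += 1
    return n

def next_hop(self_id, key, routing_table):
    # Forward to a known node whose ID shares a longer prefix with the key
    # than our own ID does; otherwise fall back to the numerically closest.
    own = shared_prefix_len(self_id, key)
    better = [n for n in routing_table if shared_prefix_len(n, key) > own]
    if better:
        return max(better, key=lambda n: shared_prefix_len(n, key))
    return min(routing_table, key=lambda n: abs(int(n, 16) - int(key, 16)))

table = ["a1b2", "37ff", "3a91", "39c0"]
print(next_hop("3000", "3a77", table))  # -> "3a91", the longest shared prefix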
The hash table's key-space is taken to be circular, like the key-space in the Chord system, and node IDs are 128-bit unsigned integers representing position in the circular key-space. Node IDs are chosen randomly and uniformly so peers who are adjacent in node ID are geographically diverse. The routing overlay network is formed on top of the hash table by each peer discovering and exchanging state information consisting of a list of leaf nodes, a neighborhood list, and a routing table. The leaf node list consists of the L/2 closest peers by node ID in each direction around
|
https://en.wikipedia.org/wiki/Comparison%20of%20open-source%20wireless%20drivers
|
Wireless network cards for computers require control software to make them function (firmware, device drivers). This is a list of the status of some open-source drivers for 802.11 wireless network cards.
Linux
Status
Driver capabilities
DragonFly BSD
FreeBSD
Status
Driver capabilities
NetBSD
OpenBSD
The following is an incomplete list of supported wireless devices:
Status
Driver capabilities
Solaris and OpenSolaris
Darwin, OpenDarwin and macOS
Notes
References
http://support.intel.com/support/notebook/sb/CS-006408.htm
The SourceForge IPW websites (ipw 2100,ipw2200 and ipw3945)
The FSF website for the Ralink and Realtek cards
Kerneltrap for the list of OpenBSD drivers
The OpenSolaris website for the list of OpenSolaris and Solaris drivers
https://web.archive.org/web/20070927014705/http://rt2x00.serialmonkey.com/phpBB2/viewtopic.php?t=2084
https://web.archive.org/web/20060908050351/http://rt2x00.serialmonkey.com/wiki/index.php/Rt2x00_beta
http://www.hpl.hp.com/personal/Jean_Tourrilhes/Linux/Wireless.html
rt2x00 README from cvs
https://lkml.org/lkml/2007/2/9/323
External links
Seattle Wireless Linux drivers
Seattle Wireless Mac OS drivers
wireless.kernel.org Wiki
Current Stable Linux kernel: Wireless
Open Documentation for Hardware, a 2006 presentation by Theo de Raadt
Free software lists and comparisons
Wireless networking
Wireless drivers
Linux drivers
|
https://en.wikipedia.org/wiki/Stress%20relaxation
|
In materials science, stress relaxation is the observed decrease in stress in response to strain generated in the structure. This is primarily due to keeping the structure in a strained condition for some finite interval of time, hence causing some amount of plastic strain. This should not be confused with creep, which is a constant state of stress with an increasing amount of strain.
Since relaxation relieves the state of stress, it has the effect of also relieving the equipment reactions. Thus, relaxation has the
same effect as cold springing, except it occurs over a longer period of time.
The amount of relaxation which takes place is a function of time, temperature and stress level, thus the actual effect it has on the system is not precisely known, but can be bounded.
Stress relaxation describes how polymers relieve stress under constant strain. Because they are viscoelastic, polymers behave in a nonlinear, non-Hookean fashion. This nonlinearity is described by both stress relaxation and a phenomenon known as creep, which describes how polymers strain under constant stress. Experimentally, stress relaxation is determined by step strain experiments, i.e. by applying a sudden one-time strain and measuring the build-up and subsequent relaxation of stress in the material (see figure), in either extensional or shear rheology.
Viscoelastic materials have the properties of both viscous and elastic materials and can be modeled by combining elements that represent these characteristics. One viscoelastic model, called the Maxwell model predicts behavior akin to a spring (elastic element) being in series with a dashpot (viscous element), while the Voigt model places these elements in parallel. Although the Maxwell model is good at predicting stress relaxation, it is fairly poor at predicting creep. On the other hand, the Voigt model is good at predicting creep but rather poor at predicting stress relaxation (see viscoelasticity).
The extracellular matrix and most tissu
|
https://en.wikipedia.org/wiki/Stirling%20radioisotope%20generator
|
A Stirling radioisotope generator (SRG) is a type of radioisotope generator based on a Stirling engine powered by a large radioisotope heater unit. The hot end of the Stirling converter reaches high temperature and heated helium drives the piston, with heat being rejected at the cold end of the engine. A generator or alternator converts the motion into electricity. Given the very constrained supply of plutonium, the Stirling converter is notable for producing about four times as much electric power from the plutonium fuel as compared to a radioisotope thermoelectric generator (RTG).
The Stirling generators were extensively tested on Earth by NASA, but their development was cancelled in 2013 before they could be deployed on actual spacecraft missions. A similar NASA project still under development, called Kilopower, also utilizes Stirling engines, but uses a small uranium fission reactor as the heat source.
History
Stirling and Brayton-cycle technology development has been conducted at NASA Glenn Research Center (formerly NASA Lewis) since the early 1970s. The Space Power Demonstrator Engine (SPDE) was the earliest 12.5 kWe per cylinder engine that was designed, built and tested. A later engine of this size, the Component Test Power Converter (CTPC), used a "Starfish" heat-pipe heater head, instead of the pumped loop used by the SPDE. In the 1992–93 time period, this work was stopped due to the termination of the related SP-100 nuclear power system work and NASA's new emphasis on "better, faster, cheaper" systems and missions.
In 2020, a free-piston Stirling power converter reached 15 years of maintenance-free and degradation-free cumulative operation in the Stirling Research Laboratory at NASA Glenn. This duration equals the operational design life of the MMRTG, and is representative of typical mission concepts designed to explore the outer planets or even more distant Kuiper Belt Objects. This unit, called the Technology Demonstration Converter (TDC) #13, is the old
|
https://en.wikipedia.org/wiki/Crossed%20ladders%20problem
|
The crossed ladders problem is a puzzle of unknown origin that has appeared in various publications and regularly reappears in Web pages and Usenet discussions.
The problem
Two ladders of lengths a and b lie oppositely across an alley, as shown in the figure. The ladders cross at a height of h above the alley floor. What is the width of the alley?
Martin Gardner presents and discusses the problem in his book of mathematical puzzles published in 1979 and cites references to it as early as 1895. The crossed ladders problem may appear in various forms, with variations in name, using various lengths and heights, or requesting unusual solutions such as cases where all values are integers. Its charm has been attributed to a seeming simplicity which can quickly devolve into an "algebraic mess" (characterization attributed by Gardner to D. F. Church).
Solution
The problem description implies that $w > 0$ and $h > 0$, that $A > h$ and that $B > h$, where $A$ and $B$ are the heights of the walls where sides of lengths $b$ and $a$ respectively lean (as in the above graph).
Both solution methods below rely on the property that $A$, $B$, and $h$ satisfy the optic equation, i.e. $\frac{1}{A} + \frac{1}{B} = \frac{1}{h}$, which can be seen as follows:
Divide the baseline into two parts at the point where it meets $h$, and call the left and right parts $w_1$ and $w_2$, respectively. The angle where $a$ meets $w$ is common to two similar triangles with bases $w$ and $w_1$ respectively. The angle where $b$ meets $w$ is common to two similar triangles with bases $w$ and $w_2$ respectively. This tells us that
$$\frac{h}{B} = \frac{w_1}{w} \quad\text{and}\quad \frac{h}{A} = \frac{w_2}{w},$$
which we can then re-arrange (using $w_1 + w_2 = w$) to get
$$\frac{1}{A} + \frac{1}{B} = \frac{1}{h}.$$
First method
Two statements of the Pythagorean theorem (see figure above),
$$A^2 + w^2 = b^2 \quad\text{and}\quad B^2 + w^2 = a^2,$$
can be subtracted one from the other to eliminate $w$, giving $A^2 - B^2 = b^2 - a^2$. The result can be combined with $\frac{1}{A} + \frac{1}{B} = \frac{1}{h}$ with alternately $A$ or $B$ solved out to yield the quartic equations
$$A^4 - 2hA^3 - (b^2 - a^2)A^2 + 2h(b^2 - a^2)A - h^2(b^2 - a^2) = 0,$$
$$B^4 - 2hB^3 - (a^2 - b^2)B^2 + 2h(a^2 - b^2)B - h^2(a^2 - b^2) = 0.$$
These can be solved algebraically or numerically for the wall heights $A$ and $B$, and the Pythagorean theorem on one of the triangles can be used to solve for the width $w$.
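A short numerical sketch of this method (assuming the quartic in $A$ reconstructed above; the classic integer instance $a = 119$, $b = 70$, $h = 30$ has $w = 56$):

import numpy as np

def alley_width(a, b, h):
    """Numerically solve the crossed ladders problem for the width w,
    using the quartic in A above with d = b^2 - a^2."""
    d = b**2 - a**2
    # A^4 - 2h A^3 - d A^2 + 2hd A - h^2 d = 0, coefficients highest first:
    roots = np.roots([1.0, -2.0 * h, -d, 2.0 * h * d, -(h**2) * d])
    for A in roots:
        if abs(A.imag) > 1e-9 or A.real <= h:
            continue                          # need a real wall height above h
        A = A.real
        w_sq = b**2 - A**2
        if w_sq <= 0 or a**2 <= w_sq:
            continue
        w, B = np.sqrt(w_sq), np.sqrt(a**2 - w_sq)
        if abs(1 / A + 1 / B - 1 / h) < 1e-6:  # optic-equation consistency check
            return w
    raise ValueError("no valid configuration for these inputs")

print(alley_width(119, 70, 30))               # prints 56.0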
Second method
The problem m
|
https://en.wikipedia.org/wiki/WISPr
|
WISPr (pronounced "whisper") or Wireless Internet Service Provider roaming is a draft protocol submitted to the Wi-Fi Alliance that allows users to roam between wireless internet service providers in a fashion similar to that which allows cellphone users to roam between carriers. A RADIUS server is used to authenticate the subscriber's credentials.
It covers best practices for authenticating users via 802.1X or the Universal Access Method (UAM), the latter being another name for browser-based login at a captive portal hotspot. It requires that RADIUS be used for AAA and defines the required RADIUS attributes. For authentication by smart-clients, Appendix D defines the Smart Client to Access Gateway Interface Protocol, which is an XML-based protocol for authentication. Smart-client software (and devices that use it) use this so-called WISPr XML to seamlessly login to HotSpots without the need for the user to interact with a captive portal.
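A rough sketch of the smart-client detection step, assuming the common WISPr 1.0 convention of embedding XML (with elements such as LoginURL and ResponseCode) in the captive portal's redirect page; the probe URL is a placeholder:

import re
import urllib.request

PROBE_URL = "http://example.com/"   # placeholder: any plain-HTTP page works

def find_wispr_login_url(html: str):
    """Pull the portal's LoginURL out of WISPr XML embedded in the page
    (hotspots typically hide it inside an HTML comment)."""
    m = re.search(r"<LoginURL>(.*?)</LoginURL>", html, re.S)
    return m.group(1).strip() if m else None

body = urllib.request.urlopen(PROBE_URL, timeout=5).read().decode("utf-8", "replace")
login_url = find_wispr_login_url(body)
if login_url:
    print("Captive portal detected; WISPr login endpoint:", login_url)
    # A smart client would now POST the subscriber's credentials to this
    # URL and check the ResponseCode in the returned XML for success.
else:
    print("No WISPr XML found; probably already online.")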
The draft WISPr specification is no longer available from the Wi-Fi Alliance. It was submitted in a manner that does not conform with current IPR policies within the Wi-Fi Alliance.
Intel and others have started a similar proposal — IRAP, which has now been rolled into ETSI Telecommunications and Internet converged Services and Protocols for Advanced Networking (TISPAN); TS 183 019 and TS 183 020.
The WISPr 2.0 specification was published by the Wireless Broadband Alliance in March 2010.
See also
IEEE 802.11
Wireless Internet service provider
ETSI
References
External links
Firefox HotSpot Extension with WISPr XML support
HandyWi application which supports WISPr XML format for Nokia devices
Wi-Fi
|
https://en.wikipedia.org/wiki/Data%20and%20object%20carousel
|
In digital video broadcasting (DVB), a data and object carousel is used for repeatedly delivering data in a continuous cycle. Carousels allow data to be pushed from a broadcaster to multiple receivers by transmitting a data set repeatedly in a standard format. A set-top box receiver may tune to the data stream at any time and is able to reconstitute the data into a virtual file system. The carousel may therefore be considered as a transport file system or file broadcasting system that allows data files to be transmitted from the broadcaster to multiple receivers or clients simultaneously.
In a unidirectional broadcast environment, the receiver is unable to request the retransmission of any data that was missed or received incorrectly. Repeated retransmission of data allows the receiver to cope with random tuning to a channel at an unpredictable time, for instance as the user changes a channel.
The carousel cycle period generally determines the maximum time required for a receiver to acquire an application or specific datum. It is possible to reduce the access time for commonly used files by broadcasting some data more often than others.
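As a back-of-the-envelope illustration (the module names, sizes, and repeat counts below are invented), weighting the repetition rate trades total cycle length against per-module acquisition time:

modules = {                # name -> (size in slots, copies per cycle)
    "boot_app": (1, 4),    # small and hot: broadcast four times per cycle
    "menu_gfx": (2, 2),
    "help_txt": (6, 1),    # large and cold: broadcast once per cycle
}

cycle_slots = sum(size * copies for size, copies in modules.values())
print("one full cycle =", cycle_slots, "slots")

for name, (size, copies) in modules.items():
    # Worst case: tune in just after a copy started, wait for the next
    # copy to come around, then receive it in full (assumes copies are
    # spaced evenly through the cycle).
    worst = cycle_slots / copies + size
    print(f"{name}: worst-case acquisition ~ {worst:.1f} slots")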
An individual object carousel is also called a service domain in some documents. To be precise, a service domain is a group of related DSM-CC objects. In broadcast systems, there is no difference between an object carousel and a service domain except for the terminology: an object carousel is a service domain, and vice versa.
Usage and applications
Data and object carousels are most commonly used in DVB, which has standards for broadcasting digital television content using carousels. The standard format for a carousel is defined in the Digital Storage Media Command and Control (DSM-CC) toolkit in ISO/IEC 13818-6 and is part of the Digital Audio Video Council (DAVIC) DVB standard for digital video broadcasting. The specification provides support for a variety of communication models, including provision for interactive transport
|
https://en.wikipedia.org/wiki/Conjugate%20coding
|
Conjugate coding is a cryptographic tool, introduced by Stephen Wiesner in the late 1960s. It is one of the two applications Wiesner described for quantum coding, along with a method for creating fraud-proof banking notes. The application on which the concept was based was a method of transmitting multiple messages in such a way that reading one destroys the others. This is called quantum multiplexing and it uses photons polarized in conjugate bases as "qubits" to pass information. Conjugate coding is also a simple extension of a random number generator.
At the behest of Charles Bennett, Wiesner published the manuscript explaining the basic idea of conjugate coding with a number of examples, but it was not embraced because it was significantly ahead of its time. After its initial rejection, the idea was reintroduced to the world of public-key cryptography in the 1980s as oblivious transfer, first by Michael Rabin and then by Shimon Even. It is used in the field of quantum computing. The initial concept of quantum cryptography developed by Bennett and Gilles Brassard was also based on this concept.
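A toy simulation can illustrate the destructive-readout idea (a conceptual sketch only, not Wiesner's actual multiplexing scheme): a bit prepared in one of two conjugate bases reads back faithfully only when measured in the matching basis, while measuring in the conjugate basis yields a coin flip:

import random

def measure(bit, prep_basis, meas_basis):
    """Read a stored bit: faithful when bases match, a coin flip when the
    conjugate basis is used (and the original state is disturbed)."""
    if prep_basis == meas_basis:
        return bit
    return random.randint(0, 1)

random.seed(1)
message = [random.randint(0, 1) for _ in range(10)]
bases = [random.choice("+x") for _ in range(10)]    # '+' rectilinear, 'x' diagonal

right = [measure(b, p, p) for b, p in zip(message, bases)]
wrong = [measure(b, p, "x" if p == "+" else "+") for b, p in zip(message, bases)]
print("sent:            ", message)
print("matching basis:  ", right)   # recovers the message exactly
print("conjugate basis: ", wrong)   # ~50% errors; the information is destroyed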
References
Cryptography
|
https://en.wikipedia.org/wiki/MSX-DOS
|
MSX-DOS is a discontinued disk operating system developed by Microsoft for the 8-bit home computer standard MSX, and is a cross between MS-DOS v1.25 and CP/M-80 v2.2.
MSX-DOS
MSX-DOS and the extended BASIC with 3½-inch floppy disk support were simultaneously developed by Microsoft and ASCII Corporation as a software and hardware standard for the MSX home computer standard, to add disk capabilities to BASIC and to give the system a cheaper software medium than Memory Cartridges, and a more powerful storage system than cassette tape. The standard BIOS of an unexpanded MSX computer had no built-in disk support, but provided hooks for a disk extension, so the additional floppy disk expansion system came with its own BIOS extension ROM (built-in on the disk controller) called the BDOS.
This BIOS not only added floppy disk support commands to MSX BASIC, but also a booting system, with which it was possible to boot a real disk operating system.
MSX-DOS was binary compatible with CP/M-80, allowing the MSX computer to easily have access to its vast library of software available for a very small cost for the time.
Boot processing
Once MSX-DOS has been loaded, the system searches the MSX-DOS disk for the COMMAND.COM file and loads it into memory. When booting this way, the BDOS bypassed the BASIC ROMs, so that the whole 64 KB of address space of the Z80 microprocessor inside the MSX computer could be used for the DOS or for other bootable disks, for example disk-based games. At the same time, the original BIOS ROMs could still be accessed through a "memory bank switch" mechanism, so that DOS-based software could still use BIOS calls to control the hardware and other software mechanisms the main ROMs supplied. Also, due to the BDOS ROM, basic file access capabilities were available even without a command interpreter, by using extended BASIC commands.
At initial startup, COMMAND.COM looks for an optional batch file named AUTOEXEC.BAT and, if it exists, executes the commands specif
|
https://en.wikipedia.org/wiki/Variable%20Assembly%20Language
|
Variable Assembly Language (VAL) is a computer-based control system and language designed specifically for use with Unimation Inc. industrial robots.
The VAL robot language is permanently stored as a part of the VAL system. This includes the programming language used to direct the system for individual applications. The VAL language has an easy to understand syntax. It uses a clear, concise, and generally self-explanatory instruction set. All commands and communications with the robot consist of easy to understand word and number sequences. Control programs are written on the same computer that controls the robot. As a real-time system, VAL's continuous trajectory computation permits complex motions to be executed quickly, with efficient use of system memory and reduction in overall system complexity. The VAL system continuously generates robot control commands, and can simultaneously interact with a human operator, permitting on-line program generation and modification.
A convenient feature of VAL is the ability to use libraries of manipulation routines. Thus, complex operations may be easily and quickly programmed by combining predefined subtasks.
The VAL language consists of monitor commands and program instructions.
The monitor commands are used to prepare the system for execution of user-written programs. Program instructions provide the repertoire necessary to create VAL programs for controlling robot actions.
Terminology
The following terms are frequently used in VAL related operations.
Monitor
The VAL monitor is an administrative computer program that oversees operation of a system. It accepts user input and initiates the appropriate response; follows instructions from user-written programs to direct the robot; and performs the computations necessary to control the robot.
Editor
The VAL editor is an aid for entering information into a computer system, and modifying existing text. It is used to enter and modify robot control programs. It has a list of i
|
https://en.wikipedia.org/wiki/Computational%20mechanics
|
Computational mechanics is the discipline concerned with the use of computational methods to study phenomena governed by the principles of mechanics. Before the emergence of computational science (also called scientific computing) as a "third way" besides theoretical and experimental sciences, computational mechanics was widely considered to be a sub-discipline of applied mechanics. It is now considered to be a sub-discipline within computational science.
Overview
Computational mechanics (CM) is interdisciplinary. Its three pillars are mechanics, mathematics, and computer science.
Mechanics
Computational fluid dynamics, computational thermodynamics, computational electromagnetics, computational solid mechanics are some of the many specializations within CM.
Mathematics
The areas of mathematics most related to computational mechanics are partial differential equations, linear algebra and numerical analysis. The most popular numerical methods used are the finite element, finite difference, and boundary element methods in order of dominance. In solid mechanics finite element methods are far more prevalent than finite difference methods, whereas in fluid mechanics, thermodynamics, and electromagnetism, finite difference methods are almost equally applicable. The boundary element technique is in general less popular, but has a niche in certain areas including acoustics engineering, for example.
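As a minimal illustration of the finite difference method named above (a sketch with a manufactured right-hand side, so the exact solution is known and the error can be checked): solve $-u''(x) = f(x)$ on $[0,1]$ with $u(0) = u(1) = 0$ on a uniform grid.

import numpy as np

n = 50                                   # interior grid points
h = 1.0 / (n + 1)
x = np.linspace(h, 1 - h, n)
f = np.pi**2 * np.sin(np.pi * x)         # chosen so the exact solution is sin(pi x)

# Tridiagonal matrix from the 3-point second-difference stencil for -u''.
A = (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)) / h**2
u = np.linalg.solve(A, f)

print("max error:", np.max(np.abs(u - np.sin(np.pi * x))))   # O(h^2), small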
Computer Science
With regard to computing, computer programming, algorithms, and parallel computing play a major role in CM. The most widely used programming language in the scientific community, including computational mechanics, is Fortran. Recently, C++ has increased in popularity. The scientific computing community has been slow in adopting C++ as the lingua franca. Because of its very natural way of expressing mathematical computations, and its built-in visualization capacities, the proprietary language/environment MATLAB is also widely used, especially for ra
|
https://en.wikipedia.org/wiki/Max%20Skladanowsky
|
Max Skladanowsky (30 April 1863 – 30 November 1939) was a German inventor and early filmmaker. Along with his brother Emil, he invented the Bioscop, an early movie projector the Skladanowsky brothers used to display a moving picture show to a paying audience on 1 November 1895, shortly before the public debut of the Lumière Brothers' Cinématographe in Paris on 28 December 1895.
Career
Born as the fourth child of glazier Carl Theodor Skladanowsky (1830–1897) and Luise Auguste Ernestine Skladanowsky, Max Skladanowsky was apprenticed as a photographer and glass painter, which led to an interest in magic lanterns. In 1879, he began to tour Germany and Central Europe with his father Carl and elder brother Emil, giving dissolving magic lantern shows. While Emil mostly took care of promotion, Max was mostly involved with the technology and for instance developed special multi-lens devices that allowed simultaneous projection of up to nine separate image sequences. Carl retired from this show business, but Max and Emil continued and added other attractions, including a type of naumachia that involved electro-mechanical effects and pyrotechnics.
Max would later claim to have constructed their first film camera on 20 August 1892, but this more likely happened in the summer or autumn of 1894. He also single-handedly constructed the Bioskop projector. Partially based on the dissolving view lantern, it featured two lenses and two separate film reels, one frame being projected alternately from each. It was hand-cranked to transport 44.5mm-wide unperforated Eastman-Kodak film-stock, which was carefully cut, perforated and re-assembled by hand and coated with an emulsion developed by Max. The projector was placed behind a screen, which was made properly transparent by keeping it wet to show the images optimally.
The Skladanowsky brothers shot several films in May 1895. Their first film recorded Emil performing overstated movements on a rooftop with a panorama of Berlin in the
|
https://en.wikipedia.org/wiki/Illumination%20%28image%29
|
Illumination is an important concept in visual arts.
The illumination of the subject of a drawing or painting is a key element in creating an artistic piece, and the interplay of light and shadow is a valuable method in the artist's toolbox. The placement of the light sources can make a considerable difference in the type of message that is being presented. Multiple light sources can wash out any wrinkles in a person's face, for instance, and give a more youthful appearance. In contrast, a single light source, such as harsh daylight, can serve to highlight any texture or interesting features.
Processing of illumination is an important concept in computer vision and computer graphics.
See also
Chiaroscuro
Artistic techniques
Lighting
Computer graphics
Image processing
|
https://en.wikipedia.org/wiki/DictyBase
|
dictyBase is an online bioinformatics database for the model organism Dictyostelium discoideum.
Tools
dictyBase offers many ways of searching and retrieving data from the database:
dictyMart - a tool for retrieving varied information on many genes (or the sequences of those genes).
Genome Browser - browse the genes of D. discoideum in their genomic context.
References
External links
dictyBase
Developmental biology
Model organism databases
Mycetozoa
|
https://en.wikipedia.org/wiki/Megamax%20C
|
Megamax C is a K&R C-based development system originally written for Macintosh and ported to the Atari ST and Apple IIGS computers. Sold by Megamax, Inc., based in Richardson, Texas, the package includes a one-pass compiler, linker, text editor, resource construction kit, and documentation. Megamax C was written by Michael Bunnell with Eric Parker providing the linker and most of the standard library. A circa-1988 version of the compiler was renamed Laser C, while the company remained Megamax.
In the early days of the Atari ST, Megamax C was the primary competitor to the Alcyon C compiler from Digital Research included in the official developer kit from Atari Corporation, and the documentation covers Atari-specific features. The company advertised that Megamax C could be used on a 520 ST with a single floppy drive. The ST version includes the source code and assets for Megaroids, a clone of the Asteroids video game, written by Mike Bunnell with sound effects by Mitch Bunnell.
Technical details
On both the Atari ST and Macintosh, the size of a compiled module is limited to 32K of code, and arrays have the same 32K restriction. The limitation stems from a requirement on the Macintosh which was carried over to the Atari. This is despite the Motorola 68000 CPU in both machines having a 24-bit address range.
Reception
According to a review of the Atari ST version in Antic by Mike Fleishman, Megamax C compiled a small benchmark program six times faster than Digital Research's compiler. In a comparison of C compilers for the Atari ST, STart magazine wrote, "For a development compiler, Megamax C is, without question, the best available on the Atari. It will reduce your compile/test turn-around time by at least a factor of five." They also pointed out that the $200 price may be steep for hobbyists and students.
The compiler was used for development by Batteries Included and FTL Games.
References
C (programming language) compilers
Atari ST software
Classic Mac OS softwa
|
https://en.wikipedia.org/wiki/Slowly%20changing%20dimension
|
A slowly changing dimension (SCD) in data management and data warehousing is a dimension which contains relatively static data which can change slowly but unpredictably, rather than according to a regular schedule. Some examples of typical slowly changing dimensions are entities such as names of geographical locations, customers, or products.
Some scenarios can cause referential integrity problems.
For example, a database may contain a fact table that stores sales records. This fact table would be linked to dimensions by means of foreign keys. One of these dimensions may contain data about the company's salespeople: e.g., the regional offices in which they work. However, the salespeople are sometimes transferred from one regional office to another. For historical sales reporting purposes it may be necessary to keep a record of the fact that a particular sales person had been assigned to a particular regional office at an earlier date, whereas that sales person is now assigned to a different regional office.
Dealing with these issues involves SCD management methodologies referred to as Type 0 through 6. Type 6 SCDs are also sometimes called Hybrid SCDs.
Type 0: retain original
The Type 0 dimension attributes never change and are assigned to attributes that have durable values or are described as 'Original'. Examples: Date of Birth, Original Credit Score. Type 0 applies to most date dimension attributes.
Type 1: overwrite
This method overwrites old with new data, and therefore does not track historical data.
Example of a supplier table:

  Supplier_Key  Supplier_Code  Supplier_Name   Supplier_State
  123           ABC            Acme Supply Co  CA
In the above example, Supplier_Code is the natural key and Supplier_Key is a surrogate key. Technically, the surrogate key is not necessary, since the row will be unique by the natural key (Supplier_Code).
If the supplier relocates its headquarters to Illinois, the record would be overwritten:

  Supplier_Key  Supplier_Code  Supplier_Name   Supplier_State
  123           ABC            Acme Supply Co  IL
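A minimal sketch of this Type 1 overwrite (using Python's sqlite3 for portability; the table and column names follow the example above, everything else is illustrative):

import sqlite3

con = sqlite3.connect(":memory:")
con.execute("""CREATE TABLE dim_supplier (
    Supplier_Key   INTEGER PRIMARY KEY,   -- surrogate key
    Supplier_Code  TEXT UNIQUE,           -- natural key
    Supplier_Name  TEXT,
    Supplier_State TEXT)""")
con.execute("INSERT INTO dim_supplier VALUES (123, 'ABC', 'Acme Supply Co', 'CA')")

# Type 1: overwrite in place. History is lost, and existing fact rows keep
# pointing at surrogate key 123, which now reflects only the new state.
con.execute("UPDATE dim_supplier SET Supplier_State = 'IL' WHERE Supplier_Code = 'ABC'")
print(con.execute("SELECT * FROM dim_supplier").fetchall())
# [(123, 'ABC', 'Acme Supply Co', 'IL')]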
The disadvantage of the Type 1 method is that there is no history in the data warehouse. It has the advantage however that it's e
|
https://en.wikipedia.org/wiki/Indianapolis%20Prize
|
The Indianapolis Prize is a biennial prize awarded by the Indianapolis Zoo to individuals for "extraordinary contributions to conservation efforts" affecting one or more animal species.
Overview
The Indianapolis Prize was established by the Indianapolis Zoo to recognize and reward individuals who have achieved significant successes in the conservation of animal species.
Every two years, nominations of deserving individuals for the Indianapolis Prize are accepted. From those nominations, a group of conservation experts from around the world select six finalists. A second group of conservation experts, aided by representatives from the Indianapolis Zoo and the city of Indianapolis, serve as jurors to review the work of the six finalists and select the winner.
From 2006 through 2012, winners received an unrestricted cash award of US$100,000, which was increased to US$250,000 for 2014 and subsequent years. In addition, beginning in 2023, the five other finalists each receive a US$50,000 unrestricted cash award.
Many renowned conservationists and scientists have served on the nominating committee and jury, including E.O. Wilson, John Terborgh, Peter Raven, and Stuart Pimm. New nominating committee and jury members are chosen each two-year prize cycle.
The Eli Lilly and Company Foundation provides funding for the prize. In addition to the US$250,000 award, the winner also receives the Lilly Medal. The obverse of the Lilly Medal features a shepherd surrounded by nature and the rising sun. On the reverse is inscribed a quote from naturalist John Muir, "When we try to pick out anything by itself, we find it hitched to everything else in the Universe."
Indianapolis Prize Gala
The winner and finalists are celebrated at the Indianapolis Prize Gala held in downtown Indianapolis. It is designed to inspire guests to care more about animal conservation and place these dedicated heroes on the pedestal usually reserved for sports and entertainment stars.
Jane Alexander Global
|
https://en.wikipedia.org/wiki/Engineering%20change%20order
|
An engineering change order (ECO), also called an engineering change notice (ECN), engineering change (EC), or engineering release notice (ERN), is an artifact used to implement changes to components or end products. The ECO is used to control and coordinate changes to product designs that evolve over time.
The need for an engineering change may be triggered by a number of events and varies by industry. Typical engineering change categories are:
Product Evolution - a change resulting in applying an existing part to a new application and maintaining backwards compatibility
Cost Reduction - a change resulting in lower overall cost to produce or maintain
Product Performance - a change that improves the capabilities of the item
Safety - a change required to enhance the safety to those using or interacting with the item
Usage and contents
An ECO is defined as "[A] document approved by the design activity that describes and authorizes the implementation of an engineering change to the product and its approved configuration documentation".
In product development the need for change is caused by:
Correction of a design error that does not become evident until testing, modeling, or customer use reveals it.
A change in the customers' requirements necessitating the redesign of part of the product
A change in material or manufacturing method. This can be caused by a lack of material availability, a change in vendor, or to compensate for a design error.
An ECO must contain at least this information:
Identification of what needs to be changed. This should include the part number and name of the component and reference to the drawings that show the component in detail or assembly.
Reason(s) for the change.
Description of the change. This includes a drawing of the component before and after the change. Generally, these drawings are only of the detail affected by the change.
List of documents and departments affected by the change. The most important p
|
https://en.wikipedia.org/wiki/Lemniscate
|
In algebraic geometry, a lemniscate is any of several figure-eight or ∞-shaped curves. The word comes from the Latin lēmniscātus, meaning "decorated with ribbons", from the Greek λημνίσκος (lēmnískos), meaning "ribbon", or which alternatively may refer to the wool from which the ribbons were made.
Curves that have been called a lemniscate include three quartic plane curves: the hippopede or lemniscate of Booth, the lemniscate of Bernoulli, and the lemniscate of Gerono. The study of lemniscates (and in particular the hippopede) dates to ancient Greek mathematics, but the term "lemniscate" for curves of this type comes from the work of Jacob Bernoulli in the late 17th century.
History and examples
Lemniscate of Booth
The consideration of curves with a figure-eight shape can be traced back to Proclus, a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or "hippopede" in Greek. The name "lemniscate of Booth" for this curve dates to its study by the 19th-century mathematician James Booth.
The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial $(x^2 + y^2)^2 - cx^2 - dy^2$, when the parameter $d$ is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of $d$ one instead obtains the oval of Booth.
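A short numerical sketch (with illustrative parameter values) samples the curve from the polar form $r^2 = c\cos^2\theta + d\sin^2\theta$ implied by the quartic above:

import numpy as np

c, d = 1.0, -0.5                                     # d < 0: figure-eight branch
theta = np.linspace(0, 2 * np.pi, 2000)
r_sq = c * np.cos(theta)**2 + d * np.sin(theta)**2   # from (x^2+y^2)^2 = c x^2 + d y^2
mask = r_sq >= 0                                     # the curve exists only where r^2 >= 0
r = np.sqrt(r_sq[mask])
x, y = r * np.cos(theta[mask]), r * np.sin(theta[mask])
print(mask.sum(), "sample points; max |x| =", round(float(np.abs(x).max()), 3))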
Lemniscate of Bernoulli
In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. Under very particular circumstances (when the half-distance between the points is
|
https://en.wikipedia.org/wiki/Sync%20%28Unix%29
|
sync is a standard system call in the Unix operating system, which commits all data from the kernel filesystem buffers to non-volatile storage, i.e., data which has been scheduled for writing via low-level I/O system calls. Higher-level I/O layers such as stdio may maintain separate buffers of their own.
As a function in C, the sync() call is typically declared as void sync(void) in <unistd.h>. The system call is also available via a command line utility also called sync, and similarly named functions in other languages such as Perl and Node.js (in the fs module).
The related system call fsync() commits just the buffered data relating to a specified file descriptor. fdatasync() is also available to write out just the changes made to the data in the file, and not necessarily the file's related metadata.
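A minimal sketch of the pattern these calls support, using Python's thin wrappers over the same system calls (the path is a placeholder; os.sync is Unix-only):

import os

def durable_write(path: str, data: bytes) -> None:
    """Write data and flush it through to stable storage with fsync."""
    fd = os.open(path, os.O_WRONLY | os.O_CREAT | os.O_TRUNC, 0o644)
    try:
        os.write(fd, data)
        os.fsync(fd)      # flush this one file's buffered data to disk
    finally:
        os.close(fd)

durable_write("/tmp/journal.log", b"committed record\n")   # placeholder path
os.sync()                 # flush *all* dirty kernel buffers, like sync(2)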
Some Unix systems run a kind of flush or update daemon, which calls the sync function on a regular basis. On some systems, the cron daemon does this, and on Linux it was handled by the pdflush daemon which was replaced by a new implementation and finally removed from the Linux kernel in 2012. Buffers are also flushed when filesystems are unmounted or remounted read-only, for example prior to system shutdown.
Some applications, such as LibreOffice, also call the sync function at regular intervals to save recovery information.
Database use
In order to provide proper durability, databases need to use some form of sync in order to make sure the information written has made it to non-volatile storage rather than just being stored in a memory-based write cache that would be lost if power failed. PostgreSQL for example may use a variety of different sync calls, including fsync() and fdatasync(), in order for commits to be durable. Unfortunately, for any single client writing a series of records, a rotating hard drive can only commit once per rotation (for example, a 7,200 RPM drive rotates 120 times per second), which makes for at best a few hundred such commits per second. Turning off the fsync requirement can therefore greatly improve
|
https://en.wikipedia.org/wiki/CoSign%20single%20sign%20on
|
Cosign is an open-source project originally designed by the Research Systems Unix Group to provide the University of Michigan with a secure single sign-on web authentication system.
Cosign authenticates a user on the web server and then provides an environment variable for the user's name. When the user accesses a part of the site that requires authentication, the presence of that variable allows access without having to sign on again.
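A hedged sketch of the application side of this pattern: web-SSO filters of this kind conventionally expose the authenticated name as the REMOTE_USER variable (an assumption here; check a given deployment's configuration), and the application simply trusts its presence:

def app(environ, start_response):
    # The SSO filter in front of this app (e.g. the web server's cosign
    # module) authenticates the user and sets an environment variable;
    # the application only has to check for it.
    user = environ.get("REMOTE_USER")
    if user:
        start_response("200 OK", [("Content-Type", "text/plain")])
        return [f"Hello, {user}\n".encode()]
    start_response("401 Unauthorized", [("Content-Type", "text/plain")])
    return [b"Not signed in\n"]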
Cosign is part of the National Science Foundation Middleware Initiative (NMI) EDIT software release.
See also
Central Authentication Service
Pubcookie
Stanford WebAuth
University of Minnesota CookieAuth
Shibboleth (Internet2)
References
External links
cosign homepage
cosign Wiki
Computer security software
Federated identity
|
https://en.wikipedia.org/wiki/OpenVZ
|
OpenVZ (Open Virtuozzo) is an operating-system-level virtualization technology for Linux. It allows a physical server to run multiple isolated operating system instances, called containers, virtual private servers (VPSs), or virtual environments (VEs). OpenVZ is similar to Solaris Containers and LXC.
OpenVZ compared to other virtualization technologies
While virtualization technologies such as VMware, Xen and KVM provide full virtualization and can run multiple operating systems and different kernel versions, OpenVZ uses a single Linux kernel and therefore can run only Linux. All OpenVZ containers share the same architecture and kernel version. This can be a disadvantage in situations where guests require different kernel versions than that of the host. However, as it does not have the overhead of a true hypervisor, it is very fast and efficient.
Memory allocation with OpenVZ is soft in that memory not used in one virtual environment can be used by others or for disk caching. While old versions of OpenVZ used a common file system (where each virtual environment is just a directory of files that is isolated using chroot), current versions of OpenVZ allow each container to have its own file system.
Kernel
The OpenVZ kernel is a Linux kernel, modified to add support for OpenVZ containers. The modified kernel provides virtualization, isolation, resource management, and checkpointing. As of vzctl 4.0, OpenVZ can work with unpatched Linux 3.x kernels, with a reduced feature set.
Virtualization and isolation
Each container is a separate entity, and behaves largely as a physical server would. Each has its own:
Files: system libraries, applications, virtualized /proc and /sys, virtualized locks, etc.
Users and groups: each container has its own root user, as well as other users and groups.
Process tree: a container only sees its own processes (starting from init). PIDs are virtualized, so that the init PID is 1 as it should be.
Network: virtual network device, which allo
|
https://en.wikipedia.org/wiki/Frances%20Spence
|
Frances V. Spence ( Bilas; March 2, 1922 – July 18, 2012) was one of the original programmers for the ENIAC (the first electronic digital computer). She is considered one of the first computer programmers in history.
The other five ENIAC programmers were Betty Holberton, Ruth Teitelbaum, Kathleen Antonelli, Marlyn Meltzer, and Jean Bartik.
Early life
She was born Frances V. Bilas in Philadelphia in 1922 and was the second of five sisters. Her parents both held jobs in the education sector, her father as an engineer for the Philadelphia Public School System and her mother as a teacher.
Bilas attended the South Philadelphia High School for Girls and graduated in 1938. She originally attended Temple University, but switched to Chestnut Hill College after being awarded a scholarship. She majored in mathematics with a minor in physics and graduated in 1942. While there, she met Kathleen Antonelli, who later also became an ENIAC programmer.
Personal life
In 1947, she married Homer W. Spence, an Army electrical engineer from the Aberdeen Proving Grounds who had been assigned to the ENIAC project and later became head of the Computer Research Branch. They had three sons (Joseph, Richard, and William).
Frances Spence had continued working on the ENIAC in the years after the war, but shortly after her marriage, she resigned to raise a family.
ENIAC career
The ENIAC project was a classified project by the US Army to construct the first all-electronic digital computer. While its hardware was primarily built by a team of men, its computational development was led by a team of six programmers (called "Computers"), all women from similar backgrounds as Spence. Despite her importance as one of the original programmers of the ENIAC, the role that she and the other female programmers took on was largely downplayed at the time due to the stigma that women were not interested in technology.
Photos of the women working on the computer often went without credit in newspapers at t
|
https://en.wikipedia.org/wiki/Trends%20in%20International%20Mathematics%20and%20Science%20Study
|
The IEA's Trends in International Mathematics and Science Study (TIMSS) is a series of international assessments of the mathematics and science knowledge of students around the world. The participating students come from a diverse set of educational systems (countries or regional jurisdictions of countries) in terms of economic development, geographical location, and population size. In each of the participating educational systems, a minimum of 4,000 to 5,000 students is evaluated. Contextual data about the conditions in which participating students learn mathematics and science are collected from the students and their teachers, their principals, and their parents via questionnaires.
TIMSS is one of the studies established by IEA aimed at allowing educational systems worldwide to compare students' educational achievement and learn from the experiences of others in designing effective education policy. This assessment was first conducted in 1995, and has been administered every four years thereafter. Therefore, some of the participating educational systems have trend data across assessments from 1995 to 2019. TIMSS assesses 4th and 8th grade students, while TIMSS Advanced assesses students in the final year of secondary school in advanced mathematics and physics.
Definition of Terms
"Eighth grade" in the United States is approximately 13–14 years of age and equivalent to:
Year 9 (Y9) in England and Wales
2nd Year (S2) in Scotland
2nd Year in the Republic of Ireland
1st Year in South Africa
Form 2 in Hong Kong
4ème in France
Year 9 in New Zealand
Form 2 in Malaysia
"Fourth grade" in the United States is approximately equivalent to 9–10 years of age and equivalent to:
Year 5 (Y5) in England and Wales
Primary 6 (P6) in Scotland
Group 6 in the Netherlands
CM1 in France
Fourth Class in the Republic of Ireland
Standard 3 or Year 5 in New Zealand
History
A precursor to TIMSS was the First International Mathematics Study (FIMS) performed in 1964 in 11
|
https://en.wikipedia.org/wiki/Art%20in%20Bloom
|
The phrase Art In Bloom is often used as the title of various exhibits held annually, usually in spring, in art museums. The phrase was created by a Museum of Fine Arts, Boston, volunteer, Lorraine M. Pitts, who also helped found the Danforth Museum in Framingham, MA. The exhibit is composed of traditional visual art pieces and corresponding flower arrangements done by local professional florists and garden club members.
The original exhibit was held in the Museum of Fine Arts in Boston in 1976, where it is held annually; other institutions hosting such displays include the Minneapolis Institute of Arts, the Art Gallery of Greater Victoria, the North Carolina Museum of Art, and the Saint Louis Art Museum in St. Louis, Missouri. Universities such as the University of Missouri's Museum of Art and Archaeology in Columbia, Missouri also hold "Art in Bloom" exhibitions.
External links
Museum of Fine Arts, Boston – Art in Bloom 2007
Minneapolis Institute of Arts – Art in Bloom 2007
Art Gallery of Greater Victoria - Art in Bloom 2007
St. Louis Art Museum – Art in Bloom 2007
University of Missouri–Columbia Museum of Art and Archaeology – Art in Bloom 2007
Art In Bloom Show & Sale 2009 - Hamilton, Ontario, Canada
Visual arts exhibitions
Botanical art
Flower shows
Spring festivals
|
https://en.wikipedia.org/wiki/Second%20source
|
In the electronics industry, a second source is a company that is licensed to manufacture and sell components originally designed by another company (the first source).
It is common for engineers and purchasers to avoid components that are only available from a single source, to avoid the risk that a problem with the supplier would prevent a popular and profitable product from being manufactured. For simple components such as resistors and transistors, this is not usually an issue, but for complex integrated circuits, vendors often react by licensing one or more other companies to manufacture and sell the same parts as second sources. While the details of such licenses are usually confidential, they often involve cross-licensing, so that each company also obtains the right to manufacture and sell parts designed by the other.
Examples
MOS Technology licensed Rockwell and Synertek to second-source the 6502 microprocessor and its support components.
Intel licensed AMD to second-source Intel microprocessors such as the 8086 and its related support components. This second-source agreement is particularly famous for leading to much litigation between the two parties. The agreement gave AMD the rights to second-source later Intel parts, but Intel refused to provide the masks for the 386 to AMD. AMD reverse-engineered the 386, and Intel then claimed that AMD's license to the 386 microcode only allowed AMD to "use" the microcode but not to sell products incorporating it. The courts eventually decided in favor of AMD.
References
Electronics manufacturing
|
https://en.wikipedia.org/wiki/Rehabilitation%20engineering
|
Rehabilitation engineering is the systematic application of engineering sciences to design, develop, adapt, test, evaluate, apply, and distribute technological solutions to problems confronted by individuals with disabilities. These individuals may have experienced a spinal cord injury, brain trauma, or any other debilitating injury or disease (such as multiple sclerosis, Parkinson's, West Nile, ALS, etc.). Functional areas addressed through rehabilitation engineering may include mobility, communications, hearing, vision, and cognition, and activities associated with employment, independent living, education, and integration into the community.
The Rehabilitation Engineering and Assistive Technology Society of North America, the association and certifying organization for professionals in the field of rehabilitation engineering and assistive technology in North America, defines the role of a Rehabilitation Engineer, as well as the roles of a Rehabilitation Technician, Assistive Technologist, and Rehabilitation Technologist (not all the same), in its 2017 approved White Paper, available online on its website.
Qualifications
While some rehabilitation engineers have master's degrees in rehabilitation engineering, usually a subspecialty of Biomedical engineering, most rehabilitation engineers have undergraduate or graduate degrees in biomedical engineering, mechanical engineering, or electrical engineering. A Portuguese university provides an undergraduate degree and a master's degree in Rehabilitation Engineering and Accessibility. Qualification to become a Rehab Engineer in the UK is possible via a University BSc Honours Degree course such as Health Design & Technology Institute, Coventry University.
Professional registration of NHS Rehab Engineers is with the Institute of Physics and Engineering in Medicine.
Professional, Scientific and Technical Associations
Many of the Rehabilitation Engineering professionals join multidisciplinary scientific and technical ass
|
https://en.wikipedia.org/wiki/Ruby%20%28hardware%20description%20language%29
|
Ruby is a hardware description language designed by Geraint Jones and Mary Sheeran in 1986, intended to facilitate the notation and development of integrated circuits via relational algebra and functional programming.
It should not be confused with RHDL, a hardware description language based on the 1995 Ruby programming language.
References
External links
Hardware description languages
|
https://en.wikipedia.org/wiki/Methanosarcina%20acetivorans
|
Methanosarcina acetivorans is a versatile methane producing microbe which is found in such diverse environments as oil wells, trash dumps, deep-sea hydrothermal vents, and oxygen-depleted sediments beneath kelp beds. Only M. acetivorans and microbes in the genus Methanosarcina use all three known metabolic pathways for methanogenesis. Methanosarcinides, including M. acetivorans, are also the only archaea capable of forming multicellular colonies, and even show cellular differentiation. The genome of M. acetivorans is one of the largest archaeal genomes ever sequenced. Furthermore, one strain of M. acetivorans, M. a. C2A, has been identified to possess an F-type ATPase (unusual for archaea, but common for bacteria, mitochondria and chloroplasts) along with an A-type ATPase.
Metabolism
M. acetivorans has been noted for its ability to metabolize carbon monoxide to form acetate and formate. It can also oxidize carbon monoxide into carbon dioxide. The carbon dioxide can then be converted into methane in a process which M. acetivorans uses to conserve energy. It has been suggested that this pathway may be similar to metabolic pathways used by primitive cells.
However, in the presence of minerals containing iron sulfides, as might have been found in sediments in a primordial environment, acetate would be catalytically converted into acetate thioester, a sulfur-containing derivative. Primitive microbes could obtain biochemical energy in the form of adenosine triphosphate (ATP) by converting acetate thioester back into acetate using PTS and ACK, which would then be converted back into acetate thioester to complete the process. In such an environment, a primitive "protocell" could easily produce energy through this metabolic pathway, excreting acetate as waste. Furthermore, ACK catalyzes the synthesis of ATP directly. Other pathways generate energy from ATP only through complex multi-enzyme reactions involving protein pumps and osmotic imbalances across a membrane.
Hi
|
https://en.wikipedia.org/wiki/Delay%20%28audio%20effect%29
|
Delay is an audio signal processing technique that records an input signal to a storage medium and then plays it back after a period of time. When the delayed playback is mixed with the live audio, it creates an echo-like effect, whereby the original audio is heard followed by the delayed audio. The delayed signal may be played back multiple times, or fed back into the recording, to create the sound of a repeating, decaying echo.
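The core digital delay algorithm can be sketched in a few lines (an illustrative feedback comb filter, not any particular product's algorithm; parameter values are arbitrary): the output is the input plus a scaled copy of the output from one delay period earlier, which produces the repeating, decaying echo described above.

import numpy as np

def feedback_delay(x, sr, delay_s=0.3, feedback=0.4, mix=0.5):
    """Mix signal x (mono, sampled at sr Hz) with decaying echoes of itself."""
    d = int(sr * delay_s)                 # delay length in samples
    y = np.zeros(len(x) + 5 * d)          # leave room for the decaying tail
    y[:len(x)] = x
    for n in range(d, len(y)):
        y[n] += feedback * y[n - d]       # each echo is a scaled earlier output
    return (1 - mix) * np.pad(x, (0, 5 * d)) + mix * y

sr = 44100
t = np.arange(sr // 4) / sr
blip = np.sin(2 * np.pi * 440 * t) * np.exp(-30 * t)   # short test tone
out = feedback_delay(blip, sr)
print(round(len(out) / sr, 2), "seconds of output")    # ~1.75 s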
Delay effects range from a subtle echo effect to a pronounced blending of previous sounds with new sounds. Delay effects can be created using tape loops, an approach developed in the 1940s and 1950s and used by artists including Elvis Presley and Buddy Holly.
Analog effects units were introduced in the 1970s; digital effects pedals in 1984; and audio plug-in software in the 2000s.
History
The first delay effects were achieved using tape loops improvised on reel-to-reel audio tape recording systems. By shortening or lengthening the loop of tape and adjusting the read-and-write heads, the nature of the delayed echo could be controlled. This technique was most common among early composers of musique concrète such as Pierre Schaeffer, and composers such as Karlheinz Stockhausen, who had sometimes devised elaborate systems involving long tapes and multiple recorders and playback systems, collectively processing the input of a live performer or ensemble.
American producer Sam Phillips created a slapback echo effect with two Ampex 350 tape recorders in 1954. The effect was used by artists including Elvis Presley (such as on his track "Blue Moon of Kentucky") and Buddy Holly, and became one of Phillips' signatures. Guitarist and instrument designer Les Paul was an early pioneer in delay devices. According to Sound on Sound, "The character and depth of sound that was produced from tape echo on these old records is extremely lush, warm and wide."
Tape echoes became commercially available in the 1950s. Tape echo machines contain loops of tape tha
|
https://en.wikipedia.org/wiki/Acridine%20orange
|
Acridine orange is an organic compound that serves as a nucleic acid-selective fluorescent dye with cationic properties useful for cell cycle determination. Acridine orange is cell-permeable, which allows the dye to interact with DNA by intercalation, or RNA via electrostatic attractions. When bound to DNA, acridine orange is very similar spectrally to an organic compound known as fluorescein. Acridine orange and fluorescein have a maximum excitation at 502 nm and 525 nm (green). When acridine orange associates with RNA, the fluorescent dye experiences a maximum excitation shift from 525 nm (green) to 460 nm (blue). The shift in maximum excitation also produces a maximum emission of 650 nm (red). Acridine orange is able to withstand low pH environments, allowing the fluorescent dye to penetrate acidic organelles such as lysosomes and phagolysosomes that are membrane-bound organelles essential for acid hydrolysis or for producing products of phagocytosis of apoptotic cells. Acridine orange is used in epifluorescence microscopy and flow cytometry. The ability to penetrate the cell membranes of acidic organelles and cationic properties of acridine orange allows the dye to differentiate between various types of cells (i.e., bacterial cells and white blood cells). The shift in maximum excitation and emission wavelengths provides a foundation to predict the wavelength at which the cells will stain.
Optical properties
When the pH of the environment is 3.5, acridine orange becomes excited by blue light (460 nm). When acridine orange is excited by blue light, the fluorescent dye can differentially stain human cells green and prokaryotic cells orange (600 nm), allowing for rapid detection with a fluorescent microscope. The differential staining capability of acridine orange provides quick scanning of specimen smears at lower magnifications of 400x compared to Gram stains that operate at 1000x magnification. The differentiation of cells is aided by a dark background that allow
|
https://en.wikipedia.org/wiki/Friday%20Night%20Football%20%28AFL%29
|
Friday Night Football is an Australian sports broadcast series currently airing on the Seven Network.
History
Non-weekend night matches of Australian rules football first emerged in the late 1970s/early 1980s with the night series, a knock-out tournament featuring teams from across the country and run in parallel with the league seasons.
The first Victorian Football League matches on Friday nights were introduced in 1985. At this time, these games were irregularly scheduled, and all matches featured North Melbourne, but by 1987, Friday Night Football was played on a more regular basis, and other clubs began to host the games.
Friday night AFL is generally played in Melbourne, at either the Melbourne Cricket Ground or Docklands Stadium, but Sydney, Adelaide and Perth will generally host a few matches each year. It is less common for the games to be played in Sydney, Brisbane or the Gold Coast in order to avoid clashes with the National Rugby League, which is more popular in those cities. As it is the most lucrative broadcast timeslot of the weekend, matches between the better-performing clubs are scheduled on Friday night to ensure the games are of high quality. As recently as 2014, however, the Gold Coast Suns have pushed to be featured on Friday nights in 2015, citing their improved form in 2014.
Seven's commentary team includes James Brayshaw, Brian Taylor and Hamish McLachlan, with smaller roles involving Wayne Carey, Cameron Ling, Tim Watson, Matthew Richardson, Leigh Matthews and Luke Darcy.
Broadcast history
The Seven Network, which broadcast football for around 40 years before losing the rights after the 2001 season, was the first Australian network to broadcast Friday Night Football.
Between 2002 and 2006, the Nine Network had the rights to the Friday night broadcast; the network also had the rights to the NRL. During those years in Melbourne, Adelaide and Perth, the AFL match would be broadcast first live at 8:30pm, followed by Nightline (o
|
https://en.wikipedia.org/wiki/Window%20of%20opportunity
|
A window of opportunity, also called a margin of opportunity or critical window, is a period of time during which some action can be taken that will achieve a desired outcome. Once this period is over, or the "window is closed", the specified outcome is no longer possible.
Examples
Windows of opportunity include:
Biology and medicine
The critical period in neurological development, during which neuroplasticity is greatest and key functions, such as imprinting and language, are acquired which may be impossible to acquire at a later stage
The golden hour or golden time, used in emergency medicine to describe the period following traumatic injury in which life-saving treatment is most likely to be successful
Economics
Market opportunities, in which one may be positioned to take advantage of a gap in a particular market, the timing of which may depend on the activities of customers, competitors, and other market context factors
Limited time offer, a critical window for making purchases that is artificially imposed (or even falsely implied) as a marketing tactic to encourage consumer action
Other examples
Planting and harvesting seasons, in agriculture, which are generally timed to maximize crop yields
Space launch and maneuver windows, which are determined by orbital dynamics and mission goals and constrained by fuel/delta-v budgets
The theorized tipping point in climatology, after which the Earth's climate is predicted to shift to a new stable equilibrium
Various transient astronomical events, which present limited (and often unpredictable) windows for observation
Characteristics
Timing
The timing and length of a critical window may be well known and predictable (as in planetary transits) or more poorly understood (in the case of medical emergencies or climate change). In some cases, there may be multiple windows during which a goal can be achieved, as in the case of space launch windows.
In situations with very brief or unpredictable windows of opportunity, au
|
https://en.wikipedia.org/wiki/Human%20Genome%20Project
|
The Human Genome Project (HGP) was an international scientific research project with the goal of determining the base pairs that make up human DNA, and of identifying, mapping and sequencing all of the genes of the human genome from both a physical and a functional standpoint. It started in 1990 and was completed in 2003. It remains the world's largest collaborative biological project. Planning for the project started after it was adopted in 1984 by the US government, and it officially launched in 1990. It was declared complete on April 14, 2003, and included about 92% of the genome. The "complete genome" level was achieved in May 2021, with only 0.3% of bases remaining covered by potential issues. The final gapless assembly was finished in January 2022.
Funding came from the United States government through the National Institutes of Health (NIH) as well as numerous other groups from around the world. A parallel project was conducted outside the government by the Celera Corporation, or Celera Genomics, which was formally launched in 1998. Most of the government-sponsored sequencing was performed in twenty universities and research centres in the United States, the United Kingdom, Japan, France, Germany, and China, working in the International Human Genome Sequencing Consortium (IHGSC).
The Human Genome Project originally aimed to map the complete set of nucleotides contained in a human haploid reference genome, of which there are more than three billion. The "genome" of any given individual is unique; mapping the "human genome" involved sequencing samples collected from a small number of individuals and then assembling the sequenced fragments to get a complete sequence for each of 24 human chromosomes (22 autosomes and 2 sex chromosomes). Therefore, the finished human genome is a mosaic, not representing any one individual. Much of the project's utility comes from the fact that the vast majority of the human genome is the same in all humans.
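As a rough illustration of the assembly step, the following toy Python sketch greedily merges reads by their largest overlap. The fragments are invented, and real projects such as the HGP used far more sophisticated algorithms and quality controls; this only shows the basic idea of reconstructing a sequence from overlapping pieces:

```python
def overlap(a: str, b: str, min_len: int = 3) -> int:
    """Length of the longest suffix of a that is a prefix of b."""
    for n in range(min(len(a), len(b)), min_len - 1, -1):
        if a.endswith(b[:n]):
            return n
    return 0

def greedy_assemble(frags: list[str]) -> str:
    """Repeatedly merge the pair of fragments with the largest overlap."""
    frags = frags[:]
    while len(frags) > 1:
        best = (0, 0, 1)  # (overlap length, left index, right index)
        for i, a in enumerate(frags):
            for j, b in enumerate(frags):
                if i != j:
                    n = overlap(a, b)
                    if n > best[0]:
                        best = (n, i, j)
        n, i, j = best
        merged = frags[i] + frags[j][n:]
        frags = [f for k, f in enumerate(frags) if k not in (i, j)] + [merged]
    return frags[0]

reads = ["GATTACAGG", "ACAGGTTCA", "GTTCATTAG"]   # made-up reads
print(greedy_assemble(reads))                     # -> GATTACAGGTTCATTAG
```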
History
The Human G
|
https://en.wikipedia.org/wiki/Charge%20%28physics%29
|
In physics, a charge is any of many different quantities, such as the electric charge in electromagnetism or the color charge in quantum chromodynamics. Charges correspond to the time-invariant generators of a symmetry group, and specifically, to the generators that commute with the Hamiltonian. Charges are often denoted by Q, and so the invariance of the charge corresponds to the vanishing commutator [Q, H] = 0, where H is the Hamiltonian. Thus, charges are associated with conserved quantum numbers; these are the eigenvalues q of the generator Q.
Abstract definition
Abstractly, a charge is any generator of a continuous symmetry of the physical system under study. When a physical system has a symmetry of some sort, Noether's theorem implies the existence of a conserved current. The thing that "flows" in the current is the "charge"; the charge is the generator of the (local) symmetry group. This charge is sometimes called the Noether charge.
Thus, for example, the electric charge is the generator of the U(1) symmetry of electromagnetism. The conserved current is the electric current.
In the case of local, dynamical symmetries, associated with every charge is a gauge field; when quantized, the gauge field becomes a gauge boson. The charges of the theory "radiate" the gauge field. Thus, for example, the gauge field of electromagnetism is the electromagnetic field; and the gauge boson is the photon.
The word "charge" is often used as a synonym for both the generator of a symmetry, and the conserved quantum number (eigenvalue) of the generator. Thus, letting the upper-case letter Q refer to the generator, one has that the generator commutes with the Hamiltonian [Q, H] = 0. Commutation implies that the eigenvalues (lower-case) q are time-invariant: = 0.
So, for example, when the symmetry group is a Lie group, then the charge operators correspond to the simple roots of the root system of the Lie algebra; the discreteness of the root system accounting for the quantization of the ch
|
https://en.wikipedia.org/wiki/Nomen%20oblitum
|
In zoological nomenclature, a nomen oblitum (plural: nomina oblita; Latin for "forgotten name") is a disused scientific name which has been declared to be obsolete (figuratively 'forgotten') in favour of another 'protected' name.
In its present meaning, the nomen oblitum came into being with the fourth (1999) edition of the International Code of Zoological Nomenclature. After 1 January 2000, a scientific name may be formally declared a nomen oblitum when it has been shown not to have been used as a valid name within the scientific community since 1899; when it is either a senior synonym (a more recent name applies to the same taxon and is in common use) or a homonym (it is spelled the same as another name, which is in common use); and when the preferred junior synonym or homonym has been shown to be in wide use in 50 or more publications in the past few decades. Once a name has formally been declared a nomen oblitum, the now obsolete name is to be 'forgotten'. By the same act, the next available name must be declared protected under the title nomen protectum; thereafter it takes precedence.
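The conditions above amount to a simple decision rule. The following Python sketch is an illustrative paraphrase only; the field names and the 50-publication threshold are taken from the description above, not from the Code's exact wording:

```python
from dataclasses import dataclass

@dataclass
class Name:
    used_as_valid_since_1899: bool
    is_senior_synonym: bool
    is_homonym_of_name_in_use: bool
    competing_name_publication_count: int  # recent uses of the junior name

def qualifies_as_nomen_oblitum(name: Name) -> bool:
    """Rough paraphrase of the criteria described above (illustrative only)."""
    return (
        not name.used_as_valid_since_1899
        and (name.is_senior_synonym or name.is_homonym_of_name_in_use)
        and name.competing_name_publication_count >= 50
    )

# Hypothetical figures for a disused senior synonym:
candidate = Name(False, True, False, competing_name_publication_count=120)
print(qualifies_as_nomen_oblitum(candidate))   # True
```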
An example is the case of the scientific name for the leopard shark. Despite the name Mustelus felis being the senior synonym, an error in recording the dates of publication resulted in the widespread use of Triakis semifasciata as the leopard shark's scientific name. After this long-standing error was discovered, T. semifasciata was made the valid name (as a nomen protectum) and Mustelus felis was declared invalid (as a nomen oblitum).
Use in taxonomy
The designation nomen oblitum has been used relatively frequently to keep the priority of old, sometimes disused names, and, controversially, often without establishing that a name actually meets the criteria for the designation. Some taxonomists have regarded the failure to properly establish the nomen oblitum designation as a way to avoid doing taxonomic research or to retain a
|
https://en.wikipedia.org/wiki/List%20of%20centroids
|
The following is a list of centroids of various two-dimensional and three-dimensional objects. The centroid of an object X in n-dimensional space is the intersection of all hyperplanes that divide X into two parts of equal moment about the hyperplane. Informally, it is the "average" of all points of X. For an object of uniform composition, the centroid of a body is also its center of mass. In the case of two-dimensional objects shown below, the hyperplanes are simply lines.
2-D Centroids
For each two-dimensional shape below, the area and the centroid coordinates are given:
Where the centroid coordinates are marked as zero, the centroid lies at the origin: each such coordinate is found by halving the length of the corresponding included axis, which places the centre at the origin and hence gives zero.
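Entries in such a list can be cross-checked numerically. The following Python sketch, an illustration rather than part of the list itself, computes the area and centroid of a uniform simple polygon using the standard shoelace-based formula:

```python
def polygon_centroid(pts):
    """Area and centroid of a simple polygon (uniform density).

    pts: list of (x, y) vertices in order; either winding direction works,
    since the signed area cancels in the centroid quotient.
    """
    a = cx = cy = 0.0
    n = len(pts)
    for i in range(n):
        x0, y0 = pts[i]
        x1, y1 = pts[(i + 1) % n]
        cross = x0 * y1 - x1 * y0
        a += cross
        cx += (x0 + x1) * cross
        cy += (y0 + y1) * cross
    a *= 0.5                                  # signed area
    return abs(a), (cx / (6 * a), cy / (6 * a))

# A 4x2 rectangle centered at the origin: centroid should be (0, 0).
print(polygon_centroid([(-2, -1), (2, -1), (2, 1), (-2, 1)]))  # (8.0, (0.0, 0.0))
```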
3-D Centroids
For each three-dimensional body below, the volume and the centroid coordinates are given:
See also
List of moments of inertia
List of second moments of area
References
External links
http://www.engineering.com/Library/ArticlesPage/tabid/85/articleType/ArticleView/articleId/109/Centroids-of-Common-Shapes.aspx
Mechanics
Physics-related lists
Geometric centers
|
https://en.wikipedia.org/wiki/Centralized%20computing
|
Centralized computing is computing done at a central location, using terminals that are attached to a central computer. The computer itself may control all the peripherals directly (if they are physically connected to the central computer), or they may be attached via a terminal server. Alternatively, if the terminals have the capability, they may be able to connect to the central computer over the network. The terminals may be text terminals or thin clients, for example.
It offers greater security over decentralized systems because all of the processing is controlled in a central location. In addition, if one terminal breaks down, the user can simply go to another terminal and log in again, and all of their files will still be accessible. Depending on the system, they may even be able to resume their session from the point they were at before, as if nothing had happened.
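A toy, in-process Python sketch of this property follows; it is not a real terminal server, and all names are invented. The point is that session state lives only on the central computer, so any terminal can resume it:

```python
class CentralComputer:
    """All state and processing live here; terminals hold nothing."""
    def __init__(self):
        self.sessions = {}   # username -> session state

    def login(self, user):
        # Resume the existing session if one exists, else create a fresh one.
        return self.sessions.setdefault(user, {"history": []})

    def run(self, user, command):
        self.sessions[user]["history"].append(command)
        return f"ran {command!r} for {user}"

class Terminal:
    """A thin client: it only forwards input to the central computer."""
    def __init__(self, central):
        self.central = central

    def use(self, user, command):
        self.central.login(user)
        return self.central.run(user, command)

central = CentralComputer()
t1, t2 = Terminal(central), Terminal(central)
t1.use("alice", "ls")
print(t2.use("alice", "cat notes.txt"))      # works from any terminal
print(central.sessions["alice"]["history"])  # ['ls', 'cat notes.txt']
```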
This type of arrangement does have some disadvantages. The central computer performs the computing functions and controls the remote terminals. This type of system relies totally on the central computer. Should the central computer crash, the entire system will "go down" (i.e. will be unavailable).
Another disadvantage is that centralized computing relies heavily on the quality of administration and of the resources provided to its users. Should the central computer be inadequately supported by any means (e.g. size of home directories, problems regarding administration), then usage will suffer greatly. The reverse situation (i.e., a system supported beyond its users' needs) is one of the key advantages of centralized computing.
History
The very first computers did not have separate terminals as such; their primitive input/output devices were built in. However, soon it was found to be extremely useful for multiple people to be able to use a computer at the same time, for reasons of cost – early computers were very expensive, both to produce and maintain, and occupied large amounts of floo
|
https://en.wikipedia.org/wiki/Cell%20damage
|
Cell damage (also known as cell injury) is any of a variety of changes that a cell suffers as a result of stress, due to external as well as internal environmental changes. Amongst other causes, this can be due to physical, chemical, infectious, biological, nutritional or immunological factors. Cell damage can be reversible or irreversible. Depending on the extent of injury, the cellular response may be adaptive and, where possible, homeostasis is restored. Cell death occurs when the severity of the injury exceeds the cell's ability to repair itself, and is relative to both the length of exposure to a harmful stimulus and the severity of the damage caused. Cell death may occur by necrosis or apoptosis.
Causes
Physical agents such as heat or radiation can damage a cell by literally cooking or coagulating its contents.
Impaired nutrient supply, such as lack of oxygen or glucose, or impaired production of adenosine triphosphate (ATP) may deprive the cell of essential materials needed to survive.
Metabolic factors: hypoxia and ischemia
Chemical agents
Microbial agents: viruses and bacteria
Immunologic agents: allergy and autoimmune diseases such as Parkinson's and Alzheimer's disease
Genetic factors: such as Down syndrome and sickle cell anemia
Targets
The most notable components of the cell that are targets of cell damage are the DNA and the cell membrane.
DNA damage: In human cells, both normal metabolic activities and environmental factors such as ultraviolet light and other radiations can cause DNA damage, resulting in as many as one million individual molecular lesions per cell per day.
Membrane damage: Damage to the cell membrane disturbs the state of cell electrolytes, e.g. calcium, which when constantly increased, induces apoptosis.
Mitochondrial damage: May occur due to ATP decrease or change in mitochondrial permeability.
Ribosome damage: Damage to ribosomal and cellular proteins such as protein misfolding, leading to apoptotic enzyme activation.
Types of damage
Some cell da
|