source | text
---|---|
https://en.wikipedia.org/wiki/Daniel%20J.%20Shanefield
|
Daniel Jay Shanefield (April 29, 1930 – November 13, 2013) was an American ceramic engineer.
Shanefield was born in Orange, New Jersey, and earned a bachelor's degree in chemistry from Rutgers University in 1956; he went on to graduate studies at the same university, receiving his Ph.D. in physical chemistry from Rutgers in 1962. He worked from 1962 to 1967 at ITT Research Laboratories, and from 1967 to 1986 at Bell Laboratories. In 1986 he returned to Rutgers as a Professor II (a professorial rank at Rutgers that is one step above a normal full professor).
At Bell Laboratories, Shanefield was the co-inventor with Richard E. Mistler of the tape casting technique for forming thin ceramic films. He pioneered the development of a phase-change memory system based on an earlier patent of Stanford R. Ovshinsky; Shanefield's work in this area "represented the first proof of the phase change memory concept". Beginning in the mid-1970s, Shanefield was an early proponent of double-blind ABX testing of high-end audio electronics; in 1980 he reported in High Fidelity magazine that there were no audible differences between several different power amplifiers, setting off what became known in audiophile circles as "the great debate".
Shanefield is the author of two books, Organic Additives and Ceramic Processing (Kluwer, 1995; 2nd ed., Kluwer, 1996) and Industrial Electronics for Engineers, Chemists, and Technicians (William Andrew Publishing, 2001).
He was a four-time winner of the AT&T Outstanding Achievement Award and was elected as a Fellow of the American Ceramic Society in 1993.
Shanefield died in Honolulu, Hawaii, aged 83.
|
https://en.wikipedia.org/wiki/Interaction%20design%20pattern
|
Interaction design patterns are design patterns applied in the context of human-computer interaction, describing common designs for graphical user interfaces.
A design pattern is a formal way of documenting a solution to a common design problem. The idea was introduced by the architect Christopher Alexander for use in urban planning and building architecture and has been adapted for various other disciplines, including teaching and pedagogy, development organization and process, and software architecture and design.
Thus, interaction design patterns are a way to describe solutions to common usability or accessibility problems in a specific context. They document interaction models that make it easier for users to understand an interface and accomplish their tasks.
History
Patterns originated as an architectural concept by Christopher Alexander. Patterns are ways to describe best practices, explain good designs, and capture experience so that other people can reuse these solutions.
Design patterns in computer science are used by software engineers during the actual design process and when communicating designs to others. Design patterns gained popularity in computer science after the book Design Patterns: Elements of Reusable Object-Oriented Software was published. Since then a pattern community has emerged that specifies patterns for problem domains, including architectural styles and object-oriented frameworks. The proceedings of the Pattern Languages of Programming Conference (held annually since 1994) include many examples of domain-specific patterns.
Applying a pattern language approach to interaction design was first suggested in Norman and Draper's book User Centered System Design (1986). Apple Computer's Macintosh Human Interface Guidelines also cites Christopher Alexander's works in its recommended reading.
Libraries
Alexander envisioned a pattern language as a structured system in which the semantic relationships between the patterns create a whole that is greater
|
https://en.wikipedia.org/wiki/Latent%20tuberculosis
|
Latent tuberculosis (LTB), also called latent tuberculosis infection (LTBI), is a condition in which a person is infected with Mycobacterium tuberculosis but does not have active tuberculosis (TB). Active tuberculosis can be contagious while latent tuberculosis is not, so it is not possible to catch TB from someone with latent tuberculosis. The main risk is that approximately 10% of these people (5% in the first two years after infection and 0.1% per year thereafter) will go on to develop active tuberculosis. The risk is higher in particular situations, such as treatment with medication that suppresses the immune system or advancing age.
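As a rough arithmetic check of the figures just quoted (a sketch only; the 50-year horizon is an assumed remaining lifespan, not a figure from this article):

```python
# Rough check of the quoted reactivation risks (illustrative only).
# Assumption (not from the article): roughly 50 years of remaining life
# after infection, with the small yearly risks simply added together.

risk_first_two_years = 0.05     # ~5% in the first two years after infection
yearly_risk_after = 0.001       # ~0.1% per year thereafter
assumed_remaining_years = 50    # hypothetical horizon for illustration

lifetime_risk = risk_first_two_years + yearly_risk_after * (assumed_remaining_years - 2)
print(f"Approximate lifetime risk: {lifetime_risk:.1%}")  # ~9.8%, close to the quoted ~10%
```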
The identification and treatment of people with latent TB is an important part of controlling this disease. Various treatment regimens are in use for latent tuberculosis. They generally need to be taken for several months.
Transmission
Latent disease
TB Bacteria Are Spread Only from a Person with Active TB Disease ... In people who develop active TB of the lungs, also called pulmonary TB, the TB skin test will often be positive. In addition, they will show all the signs and symptoms of TB disease, and can pass the bacteria to others. So, if a person with TB of the lungs sneezes, coughs, talks, sings, or does anything that forces the bacteria into the air, other people nearby may breathe in TB bacteria. Statistics show that approximately one-third of people exposed to pulmonary TB become infected with the bacteria, but only one in ten of these infected people develops active TB disease during their lifetimes. However, exposure to tuberculosis is very unlikely to happen when one is exposed for a few minutes in a store or during a few minutes of social contact: "It usually takes prolonged exposure to someone with active TB disease for someone to become infected.
After exposure, it usually takes 8 to 10 weeks before the TB test would show if someone had become infected." Depending on ventilation and other factors, these tiny drople
|
https://en.wikipedia.org/wiki/Vertical%20handover
|
Vertical handover or vertical handoff refers to a network node changing the type of connectivity it uses to access a supporting infrastructure, usually to support node mobility. For example, a suitably equipped laptop might be able to use both high-speed wireless LAN and cellular technology for Internet access. Wireless LAN connections generally provide higher speeds, while cellular technologies generally provide more ubiquitous coverage. Thus the laptop user might want to use a wireless LAN connection whenever one is available and to revert to a cellular connection when the wireless LAN is unavailable. Vertical handovers refer to the automatic transition from one technology to another in order to maintain communication. This is different from a horizontal handover between different wireless access points that use the same technology.
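The preference described above (use the WLAN when it is usable, otherwise fall back to cellular) amounts to a small decision rule. The sketch below is illustrative only; the function and parameter names are invented for the example and do not correspond to any particular standard or API.

```python
# Illustrative-only sketch of the handover policy described above:
# prefer the WLAN interface when it is usable, otherwise stay on cellular.

def choose_interface(wlan_available: bool, wlan_signal_ok: bool) -> str:
    """Return which access technology the node should attach to."""
    if wlan_available and wlan_signal_ok:
        return "wlan"       # higher speed, preferred when reachable
    return "cellular"       # more ubiquitous coverage, used as fallback

# Example: as the user walks out of WLAN range, a vertical handover occurs.
print(choose_interface(wlan_available=True, wlan_signal_ok=True))    # wlan
print(choose_interface(wlan_available=False, wlan_signal_ok=False))  # cellular
```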
Vertical handoffs between WLAN and UMTS (WCDMA) have attracted a great deal of attention in all the research areas of the 4G wireless network, due to the benefit of utilizing the higher bandwidth and lower cost of WLAN as well as better mobility support and larger coverage of UMTS. Vertical handovers among a range of wired and wireless access technologies including WiMAX can be achieved using Media independent handover which is standardized as IEEE 802.21.
Related issues
Dual mode card
To support vertical handover, a mobile terminal needs to have a dual mode card, for example one that can work under both WLAN and UMTS frequency bands and modulation schemes.
Interworking architecture
For the vertical handover between UMTS and WLAN, there are two main interworking architectures: tight coupling and loose coupling.
The tight coupling scheme, which 3GPP adopted, introduces two additional elements: the WAG (Wireless Access Gateway) and the PDG (Packet Data Gateway). Data transferred from a WLAN AP to a correspondent node on the Internet must therefore go through the UMTS core network.
Loose coupling is more used when the WLAN is not operated by cellular o
|
https://en.wikipedia.org/wiki/Separation%20logic
|
In computer science, separation logic is an extension of Hoare logic, a way of reasoning about programs.
It was developed by John C. Reynolds, Peter O'Hearn, Samin Ishtiaq and Hongseok Yang, drawing upon early work by Rod Burstall. The assertion language of separation logic is a special case of the logic of bunched implications (BI). A CACM review article by O'Hearn charts developments in the subject to early 2019.
Overview
Separation logic facilitates reasoning about:
programs that manipulate pointer data structures—including information hiding in the presence of pointers;
"transfer of ownership" (avoidance of semantic frame axioms); and
virtual separation (modular reasoning) between concurrent modules.
Separation logic supports the developing field of research described by Peter O'Hearn and others as local reasoning, whereby specifications and proofs of a program component mention only the portion of memory used by the component, and not the entire global state of the system. Applications include automated program verification (where an algorithm checks the validity of another algorithm) and automated parallelization of software.
Assertions: operators and semantics
Separation logic assertions describe "states" consisting of a store and a heap, roughly corresponding to the state of local (or stack-allocated) variables and dynamically allocated objects in common programming languages such as C and Java. A store s is a function mapping variables to values. A heap h is a partial function mapping memory addresses to values. Two heaps h1 and h2 are disjoint (denoted h1 ⊥ h2) if their domains do not overlap (i.e., for every memory address ℓ, at least one of h1(ℓ) and h2(ℓ) is undefined).
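These definitions can be made concrete with a small executable model. The following Python sketch (an illustration, not any established tool or formalization) represents a store and a heap as dictionaries and checks heap disjointness, the points-to assertion, and the separating conjunction P ∗ Q.

```python
# Minimal illustrative model of separation-logic states: a store maps
# variables to values, a heap maps addresses to values (a partial function).

def disjoint(h1, h2):
    """Two heaps are disjoint when their domains (address sets) do not overlap."""
    return h1.keys().isdisjoint(h2.keys())

def points_to(addr, value):
    """Assertion addr |-> value: the heap consists of exactly one cell holding value."""
    return lambda store, heap: heap == {addr: value}

def sep_conj(p, q):
    """Assertion P * Q: the heap splits into two disjoint parts, one satisfying P, the other Q."""
    def holds(store, heap):
        items = list(heap.items())
        # try every way of splitting the heap into two disjoint sub-heaps
        for mask in range(2 ** len(items)):
            h1 = dict(kv for i, kv in enumerate(items) if mask & (1 << i))
            h2 = dict(kv for i, kv in enumerate(items) if not mask & (1 << i))
            if p(store, h1) and q(store, h2):
                return True
        return False
    return holds

store = {"x": 1, "y": 2}
heap = {1: 42, 2: 7}                   # address 1 holds 42, address 2 holds 7
print(disjoint({1: 42}, {2: 7}))       # True: the two single-cell heaps do not overlap
P = sep_conj(points_to(1, 42), points_to(2, 7))
print(P(store, heap))                  # True: the heap splits into the two single cells
print(points_to(1, 42)(store, heap))   # False: the heap contains an extra cell
```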
The logic allows one to prove judgements of the form s, h ⊨ P, where s is a store, h is a heap, and P is an assertion over the given store and heap. Separation logic assertions (denoted P, Q, R) contain the standard boolean connectives and, in addition, emp, e ↦ e′, P ∗ Q, and P −∗ Q, where e and e′ are expressions.
The constant emp asserts t
|
https://en.wikipedia.org/wiki/Richard%20Bornat
|
Richard Bornat (born 1944) is a British author and researcher in the field of computer science. He is professor of computer programming at Middlesex University. Previously he was at Queen Mary, University of London.
Research
Bornat's research interests include program proving in separation logic. His focus is on the proofs themselves, as opposed to their logical underpinnings. Much of the work involves discovering ways to state the properties of independent modules in a manner that makes it easy to compose them into useful systems.
Bornat (in conjunction with Bernard Sufrin of the Oxford University Computing Laboratory) developed Jape, a proof calculator; he is involved in research on the usability of this tool for exploration of novel proofs.
Richard Bornat's PhD students have included Samson Abramsky in the early 1980s.
In 2004, one of Bornat's students developed an aptitude test intended to "divide people up into programmers and non-programmers before they ever come into contact with programming." The test was first given to a group of students in 2005 during an experiment on the use of mental models in programming. In 2008 and 2014, Bornat partially retracted some of the claims, questioning the test's validity as a measure of programming capability.
Publications
Bornat published a book entitled "Understanding and Writing Compilers: A Do It Yourself Guide", which is regarded as one of the most extensive resources on compiler development. Although it has been out of print for some time, he has now made it available as an online edition.
Other publications from Bornat include:
R. Bornat; 1987; Programming from First Principles; Prentice Hall International Series in Computer Science.
Richard Bornat and Harold Thimbleby; 1989; The life and times of ded, display editor; in J.B. Long & A. Whitefield (eds); Cognitive Ergonomics and Human-Computer Interaction; Cambridge University Press; pp. 225–255.
Richard Bornat and Bernard Sufrin;1999; Animating Formal Proof at
|
https://en.wikipedia.org/wiki/Utility%20Radio
|
The Utility Radio or Wartime Civilian Receiver was a valve domestic radio receiver, manufactured in Great Britain during World War II starting in July 1944. It was designed by G.D. Reynolds of Murphy Radio. Both AC and battery-operated versions were made.
History
When war broke out in 1939, British radio manufacturers devoted their resources to producing a range of military radio equipment required for the armed forces. This resulted in a shortage of consumer radio sets and spare parts, particularly valves, as all production was for the services. The war also prompted a shortage of radio repairmen, as virtually all of them were needed in the services to maintain vital radio and radar equipment. This meant it was very difficult for the average citizen to get a radio repaired, and with very few new sets available, there was a desperate need to overcome the problem.
The government solved this by arranging for over forty radio manufacturers to produce sets to a standard design with as few components as possible consistent with the ability to source them. Earlier, the government had introduced the "Utility" brand to ensure that all clothing, which was rationed, was produced to a reasonable quality standard as, prior to its introduction, a lot of shoddy goods had appeared on the market; the brand was therefore adopted for this wartime radio.
The Utility Set had limited reception on medium wave and lacked a longwave band to simplify the design. The tuning scale listed only BBC stations. After the war a version with LW was made available and modification kits to retrofit existing sets were marketed.
About 175,000 sets were sold, at a price of £12 3s 4d each. The set is sometimes characterized as the British equivalent of the German Volksempfänger ("People's Receiver"); however, there were dissimilarities. The Volksempfänger were radio sets designed to be inexpensive enough for any German citizen to purchase one, but higher-quality consumer radios were always available to G
|
https://en.wikipedia.org/wiki/Sleeping%20Beauty%20problem
|
The Sleeping Beauty problem is a puzzle in decision theory in which whenever an ideally rational epistemic agent is awoken from sleep, they have no memory of whether they have been awoken before. Upon being told that they have been woken once or twice according to the toss of a coin, once if heads and twice if tails, they are asked their degree of belief for the coin having come up heads.
History
The problem was originally formulated in unpublished work in the mid-1980s by Arnold Zuboff (the work was later published as "One Self: The Logic of Experience") followed by a paper by Adam Elga. A formal analysis of the problem of belief formation in decision problems with imperfect recall was provided first by Michele Piccione and Ariel Rubinstein in their paper: "On the Interpretation of Decision Problems with Imperfect Recall" where the "paradox of the absent minded driver" was first introduced and the Sleeping Beauty problem discussed as Example 5. The name "Sleeping Beauty" was given to the problem by Robert Stalnaker and was first used in extensive discussion in the Usenet newsgroup rec.puzzles in 1999.
The problem
As originally published by Elga, the problem was:
Some researchers are going to put you to sleep. During the two days that your sleep will last, they will briefly wake you up either once or twice, depending on the toss of a fair coin (Heads: once; Tails: twice). After each waking, they will put you back to sleep with a drug that makes you forget that waking. When you are first awakened, to what degree ought you believe that the outcome of the coin toss is Heads?
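The protocol in this statement can be simulated directly. The following sketch (an illustration of the setup, not an argument for either of the standard answers) tallies how often the coin shows heads when counted per experiment and per awakening; the divergence between the two tallies is what makes the problem contentious.

```python
import random

# Simulate Elga's protocol: one fair coin toss per experiment,
# one awakening on heads, two awakenings on tails.
random.seed(0)
trials = 100_000
heads_experiments = 0
awakenings = 0
heads_awakenings = 0

for _ in range(trials):
    heads = random.random() < 0.5
    heads_experiments += heads
    wakes = 1 if heads else 2
    awakenings += wakes
    heads_awakenings += wakes if heads else 0

print("Fraction of experiments with heads:", heads_experiments / trials)   # ~1/2
print("Fraction of awakenings with heads:", heads_awakenings / awakenings) # ~1/3
```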
The only significant difference from Zuboff's unpublished versions is the number of potential wakings; Zuboff used a large number. Elga created a schedule within which to implement his solution, and this has become the canonical form of the problem:
Sleeping Beauty volunteers to undergo the following experiment and is told all of the following details: On Sunday she will be put to sleep. O
|
https://en.wikipedia.org/wiki/Group%20with%20operators
|
In abstract algebra, a branch of mathematics, the algebraic structure group with operators or Ω-group can be viewed as a group with a set Ω that operates on the elements of the group in a special way.
Groups with operators were extensively studied by Emmy Noether and her school in the 1920s. She employed the concept in her original formulation of the three Noether isomorphism theorems.
Definition
A group with operators (G, Ω) can be defined as a group G together with an action of a set Ω on G:
Ω × G → G : (ω, g) ↦ g^ω
that is distributive relative to the group law:
(g·h)^ω = g^ω · h^ω.
For each ω in Ω, the map g ↦ g^ω is then an endomorphism of G. From this, it results that an Ω-group can also be viewed as a group G with an indexed family (u_ω)_{ω ∈ Ω} of endomorphisms of G.
Ω is called the operator domain. The associated endomorphisms are called the homotheties of G.
Given two groups G, H with the same operator domain Ω, a homomorphism of groups with operators is a group homomorphism φ : G → H satisfying
φ(g^ω) = (φ(g))^ω for all ω in Ω and g in G.
A subgroup S of G is called a stable subgroup, Ω-subgroup or Ω-invariant subgroup if it respects the homotheties, that is,
s^ω ∈ S for all s in S and ω in Ω.
Category-theoretic remarks
In category theory, a group with operators can be defined as an object of a functor category Grp^M where M is a monoid (i.e. a category with one object) and Grp denotes the category of groups. This definition is equivalent to the previous one, provided Ω is a monoid (otherwise we may expand it to include the identity and all compositions).
A morphism in this category is a natural transformation between two functors (i.e., two groups with operators sharing the same operator domain M). Again we recover the definition above of a homomorphism of groups with operators (with φ the component of the natural transformation).
A group with operators is also a mapping
Ω → End_Grp(G), where End_Grp(G) is the set of group endomorphisms of G.
Examples
Given any group G, (G, ∅) is trivially a group with operators
Given a module M over a ring R, R acts by scalar multiplication on the underlying abelian g
|
https://en.wikipedia.org/wiki/Total%20ring%20of%20fractions
|
In abstract algebra, the total quotient ring or total ring of fractions is a construction that generalizes the notion of the field of fractions of an integral domain to commutative rings R that may have zero divisors. The construction embeds R in a larger ring, giving every non-zero-divisor of R an inverse in the larger ring. If the homomorphism from R to the new ring is to be injective, no further elements can be given an inverse.
Definition
Let R be a commutative ring and let S be the set of elements that are not zero divisors in R; then S is a multiplicatively closed set. Hence we may localize the ring R at the set S to obtain the total quotient ring S⁻¹R = Q(R).
If R is a domain, then S = R ∖ {0} and the total quotient ring is the same as the field of fractions. This justifies the notation Q(R), which is sometimes used for the field of fractions as well, since there is no ambiguity in the case of a domain.
Since S in the construction contains no zero divisors, the natural map R → Q(R) is injective, so the total quotient ring is an extension of R.
Examples
For a product ring A × B, the total quotient ring Q(A × B) is the product of total quotient rings Q(A) × Q(B). In particular, if A and B are integral domains, it is the product of their quotient fields.
For the ring of holomorphic functions on an open set D of complex numbers, the total quotient ring is the ring of meromorphic functions on D, even if D is not connected.
In an Artinian ring, all elements are units or zero divisors. Hence the set of non-zero-divisors is the group of units of the ring, R^×, and so Q(R) = (R^×)⁻¹R. But since all these elements already have inverses, Q(R) = R.
In a commutative von Neumann regular ring R, the same thing happens. Suppose a in R is not a zero divisor. Then in a von Neumann regular ring a = axa for some x in R, giving the equation a(xa − 1) = 0. Since a is not a zero divisor, xa = 1, showing a is a unit. Here again, Q(R) = R.
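The Artinian example above can be checked by brute force for a small ring such as Z/6Z. The following sketch (illustrative only) verifies that every element of Z/6Z is a unit or a zero divisor, so that localizing at the non-zero-divisors inverts only elements that are already units and Q(R) = R.

```python
# Brute-force check, for R = Z/6Z, that every element is a unit or a zero divisor,
# so the set of non-zero-divisors is exactly the unit group and Q(R) = R.

n = 6
ring = range(n)

units = {a for a in ring if any((a * b) % n == 1 for b in ring)}
zero_divisors = {a for a in ring if any(b != 0 and (a * b) % n == 0 for b in ring)}

print("units:", sorted(units))                   # [1, 5]
print("zero divisors:", sorted(zero_divisors))   # [0, 2, 3, 4]
print(units | zero_divisors == set(ring))        # True: every element is a unit or a zero divisor
```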
In algebraic geometry one considers a sheaf of total quotient rings on a scheme, and this may be used to give the definition of a Cartie
|
https://en.wikipedia.org/wiki/Beaufort%20cipher
|
The Beaufort cipher, created by Sir Francis Beaufort, is a substitution cipher similar to the Vigenère cipher, with a slightly modified enciphering mechanism and tableau. Its most famous application was in a rotor-based cipher machine, the Hagelin M-209. The Beaufort cipher is based on the Beaufort square which is essentially the same as a Vigenère square but in reverse order starting with the letter "Z" in the first row, where the first row and the last column serve the same purpose.
Using the cipher
To encrypt, first choose the plaintext character from the top row of the tableau; call this column P. Secondly, travel down column P to the corresponding key letter K. Finally, move directly left from the key letter to the left edge of the tableau; the ciphertext encryption of plaintext P with key K will be there.
For example, when encrypting the plaintext character "d" with key "m", the steps would be:
find the column with "d" on the top,
travel down that column to find key "m",
travel to the left edge of the tableau to find the ciphertext letter ("K" in this case).
To decrypt, the process is reversed. Unlike the otherwise very similar Vigenère cipher, the Beaufort cipher is a reciprocal cipher, that is, the decryption and encryption algorithms are the same. This reduces errors in handling the table, which makes it useful for encrypting larger volumes of messages by hand, for example in the manual DIANA cryptosystem used by U.S. Special Forces during the Vietnam War (compare the DIANA table in the image).
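The tableau walk can be condensed into modular arithmetic. The following sketch is a reconstruction consistent with the worked example above (plaintext "d" with key "m" gives ciphertext "K"): with letters numbered a = 0 to z = 25, plaintext, key and ciphertext satisfy p + k + c ≡ 25 (mod 26), so the same routine both encrypts and decrypts.

```python
# Reciprocal substitution matching the worked example in this article:
# letters are numbered a=0..z=25 and plaintext + key + ciphertext == 25 (mod 26).

def beaufort(text: str, key: str) -> str:
    out = []
    for i, ch in enumerate(text.lower()):
        if not ch.isalpha():
            out.append(ch)
            continue
        p = ord(ch) - ord("a")
        k = ord(key[i % len(key)].lower()) - ord("a")
        out.append(chr((25 - p - k) % 26 + ord("A")))
    return "".join(out)

print(beaufort("d", "m"))                  # K, as in the example above
print(beaufort(beaufort("d", "m"), "m"))   # D: applying the same routine decrypts
```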
In the above example in the column with "m" on top one would find in the reciprocal "d" row the ciphertext "K". The same is true for decryption where ciphertext "K" combined with key "m" results in plaintext "d" as well as combining "K" with "d" results in "m". This results in "trigram" combinations where two parts suffice to identify the third. After eliminating the identical trigrams only 126 of the initial 676 combinations remain (see below) and could be memo
|
https://en.wikipedia.org/wiki/Wigner%E2%80%93Weyl%20transform
|
In quantum mechanics, the Wigner–Weyl transform or Weyl–Wigner transform (after Hermann Weyl and Eugene Wigner) is the invertible mapping between functions in the quantum phase space formulation and Hilbert space operators in the Schrödinger picture.
Often the mapping from functions on phase space to operators is called the Weyl transform or Weyl quantization, whereas the inverse mapping, from operators to functions on phase space, is called the Wigner transform. This mapping was originally devised by Hermann Weyl in 1927 in an attempt to map symmetrized classical phase space functions to operators, a procedure known as Weyl quantization. It is now understood that Weyl quantization does not satisfy all the properties one would require for consistent quantization and therefore sometimes yields unphysical answers. On the other hand, some of the nice properties described below suggest that if one seeks a single consistent procedure mapping functions on the classical phase space to operators, the Weyl quantization is the best option: a sort of normal coordinates of such maps. (Groenewold's theorem asserts that no such map can have all the ideal properties one would desire.)
Regardless, the Weyl–Wigner transform is a well-defined integral transform between the phase-space and operator representations, and yields insight into the workings of quantum mechanics. Most importantly, the Wigner quasi-probability distribution is the Wigner transform of the quantum density matrix, and, conversely, the density matrix is the Weyl transform of the Wigner function.
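Both directions of the map can be written explicitly. The formulas below are stated in one common convention (normalizations and sign conventions vary between authors; the second line sets ħ = 1); they are standard reference forms rather than expressions quoted from this article.

```latex
% Wigner transform (operator -> phase-space function), one common convention:
A(q,p) \;=\; \int_{-\infty}^{\infty} \! dy \; e^{-i p y/\hbar}\,
  \left\langle q + \tfrac{y}{2} \right| \hat{A} \left| q - \tfrac{y}{2} \right\rangle .

% Weyl transform (phase-space function -> operator), with \hbar = 1:
\Phi[f] \;=\; \frac{1}{(2\pi)^2} \int f(q,p)\,
  e^{\,i\alpha(\hat{q}-q) + i\beta(\hat{p}-p)}\; d\alpha \, d\beta \, dq \, dp .
```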
In contrast to Weyl's original intentions in seeking a consistent quantization scheme, this map merely amounts to a change of representation within quantum mechanics; it need not connect "classical" with "quantum" quantities. For example, the phase-space function may depend explicitly on Planck's constant ħ, as it does in some familiar cases involving angular momentum. This invertible representation change then all
|
https://en.wikipedia.org/wiki/Cellular%20repeater
|
A cellular repeater (also known as cell phone signal booster or cell phone signal amplifier) is a type of bi-directional amplifier used to improve cell phone reception. A cellular repeater system commonly consists of a donor antenna that receives and transmits signal from nearby cell towers, coaxial cables, a signal amplifier, and an indoor rebroadcast antenna.
Common components
Donor antenna
A "donor antenna" is typically installed by a window or on the roof a building and used to communicate back to a nearby cell tower. A donor antenna can be any of several types, but is usually directional or omnidirectional. An omnidirectional antenna (which broadcast in all directions) is typically used for a repeater system that amplify coverage for all cellular carriers. A directional antenna is used when a particular tower or carrier needs to be isolated for improvement. The use of a highly directional antenna can help improve the donor's signal-to-noise ratio, thus improving the quality of signal redistributed inside a building.
Indoor antenna
Some cellular repeater systems can also include an omnidirectional antenna for rebroadcasting the signal indoors. The advantage of using an omnidirectional antenna is that, depending on attenuation from obstacles, the signal will be distributed roughly equally in all directions.
Motor vehicle antenna
When it is raining and the motor vehicle's windows are closed, a cell phone can lose between 50% and 100% of its reception. To rectify this, an antenna is placed outside the vehicle and wired to an amplifier and a second antenna inside the vehicle, which transmit the mobile phone signal to the cell phone inside.
Signal amplifier
Cellular repeater systems include a signal amplifier. Standard GSM channel selective repeaters (operated by telecommunication operators for coverage of large areas and big buildings) have output power around 2 W, high power repeaters have output power around 10 W. The power gain is calcul
|
https://en.wikipedia.org/wiki/Milliken%E2%80%93Taylor%20theorem
|
In mathematics, the Milliken–Taylor theorem in combinatorics is a generalization of both Ramsey's theorem and Hindman's theorem. It is named after Keith Milliken and Alan D. Taylor.
Let F denote the set of finite subsets of ℕ, and define a partial order on F by α < β if and only if max α < min β. Given a sequence of integers ⟨a_n⟩ and k ∈ ℕ, let
[FS(⟨a_n⟩)]^k_< = { {Σ_{n∈α_1} a_n, …, Σ_{n∈α_k} a_n} : α_1 < α_2 < ⋯ < α_k, with each α_i ∈ F }.
Let [S]^k denote the set of k-element subsets of a set S. The Milliken–Taylor theorem says that for any finite partition [ℕ]^k = C_1 ∪ C_2 ∪ ⋯ ∪ C_r, there exist some i ≤ r and a sequence ⟨a_n⟩ such that [FS(⟨a_n⟩)]^k_< ⊆ C_i.
For each ⟨a_n⟩ and k, call [FS(⟨a_n⟩)]^k_< an MT_k set. Then, alternatively, the Milliken–Taylor theorem asserts that the collection of MT_k sets is partition regular for each k.
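The set [FS(⟨a_n⟩)]^k_< can be made concrete by enumerating it for a short initial segment of a sequence. The following sketch (illustrative only; a finite approximation, since the theorem concerns infinite sequences) lists the k-element sets of ordered block sums.

```python
from itertools import combinations

def ordered_block_sums(a, k):
    """Enumerate the k-element sets {S(alpha_1), ..., S(alpha_k)} where S(alpha)
    is the sum of a[i] over i in alpha, and alpha_1 < ... < alpha_k are nonempty
    index sets with max(alpha_j) < min(alpha_{j+1})."""
    n = len(a)
    results = set()

    def extend(start, remaining, sums):
        if remaining == 0:
            results.add(frozenset(sums))
            return
        for size in range(1, n - start + 1):
            for block in combinations(range(start, n), size):
                extend(max(block) + 1, remaining - 1, sums + [sum(a[i] for i in block)])

    extend(0, k, [])
    return results

# Finite approximation of an MT_2 set, built from the initial segment 1, 2, 4, 8:
for s in sorted(ordered_block_sums([1, 2, 4, 8], 2), key=sorted):
    print(sorted(s))
```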
|
https://en.wikipedia.org/wiki/166%20%28number%29
|
166 (one hundred [and] sixty-six) is the natural number following 165 and preceding 167.
In mathematics
166 is an even number and a composite number. It is a centered triangular number.
Given 166, the Mertens function returns 0. 166 is a Smith number in base 10.
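The number-theoretic claims above are easy to check by computation. The following sketch (illustrative only) verifies that 166 is a centered triangular number, that the Mertens function at 166 is 0, and that 166 is a Smith number in base 10.

```python
# Verify the claims above for n = 166.

def is_centered_triangular(n):
    # centered triangular numbers have the form (3k^2 + 3k + 2) / 2
    return any((3 * k * k + 3 * k + 2) // 2 == n for k in range(n))

def prime_factors(n):
    """Prime factors with multiplicity."""
    factors, d = [], 2
    while d * d <= n:
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)
    return factors

def mobius(n):
    fs = prime_factors(n)
    return 0 if len(fs) != len(set(fs)) else (-1) ** len(fs)

def digit_sum(n):
    return sum(int(c) for c in str(n))

print(is_centered_triangular(166))                       # True (k = 10)
print(sum(mobius(k) for k in range(1, 167)))             # the Mertens function at 166
print(digit_sum(166) == sum(digit_sum(p) for p in prime_factors(166)))  # True: Smith number
```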
In astronomy
166 Rhodope is a dark main belt asteroid, in the Adeona family of asteroids
166P/NEAT is a periodic comet and centaur in the outer Solar System
HD 166 is a 6th-magnitude star in the constellation Andromeda
In the military
166th Signal Photo Company was the official photo unit in the 89th Division of George Patton's Third Army in World War II
Convoy ON-166 was the 166th of the numbered ON series of merchant ship convoys outbound from the British Isles to North America departing February 11, 1943
Marine Medium Helicopter Squadron 166 is a United States Marine Corps helicopter squadron
was a United States Coast Guard cutter during World War II
was a United States Navy yacht. She was the first American vessel lost in World War I
was a United States Navy during World War II
was a United States Navy during the World War I
was a United States Navy during World War II
was a United States Navy ship during World War II
USS Jamestown (AGTR-3/AG-166) was a United States Navy Oxford-class technical research ship following World War II
In sports
Sam Thompson’s 166 RBIs in 1887 stood as a Major League Baseball record until Babe Ruth broke the record in 1921
In transportation
British Rail Class 166
166th Street station on the now-defunct elevated IRT Third Avenue Line in the Bronx, New York
London Buses route 166
Piaggio P.166 is a twin-engined push prop-driven utility aircraft developed by the Italian aircraft manufacturer Piaggio
Banat Air Flight 166 crashed on take-off en route from Romania on December 13, 1995
Alfa Romeo 166 and 166 2.4 JTD produced from 1998 to 2007
Ferrari 166 model cars produced from 1948 to 1953
Ferrari 166 Inter (1949) Coachbuilt street coupe and cabriolet
|
https://en.wikipedia.org/wiki/Correlation%20immunity
|
In mathematics, the correlation immunity of a Boolean function is a measure of the degree to which its outputs are uncorrelated with some subset of its inputs. Specifically, a Boolean function is said to be correlation-immune of order m if every subset of m or fewer of its input variables is statistically independent of the value of its output.
Definition
A function f : {0,1}^n → {0,1} is m-th order correlation immune if, for any independent binary random variables X1, …, Xn, the random variable Z = f(X1, …, Xn) is independent of every random vector (Xi1, …, Xim) with 1 ≤ i1 < ⋯ < im ≤ n.
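For inputs that are independent and uniform on {0,1} (an assumption of this sketch, which is the usual cryptographic setting), the definition can be checked by brute force from a truth table:

```python
from itertools import combinations, product

def correlation_immunity_order(f, n):
    """Largest m such that the output of f is independent of every m-subset of
    its n inputs, assuming the inputs are independent and uniform on {0,1}."""
    inputs = list(product((0, 1), repeat=n))
    p1 = sum(f(*x) for x in inputs) / len(inputs)   # overall P(f = 1)

    def independent_of(subset):
        # for every assignment to the chosen variables, P(f=1 | assignment) must equal P(f=1)
        for values in product((0, 1), repeat=len(subset)):
            matching = [x for x in inputs if all(x[i] == v for i, v in zip(subset, values))]
            if sum(f(*x) for x in matching) / len(matching) != p1:
                return False
        return True

    m = 0
    while m < n and all(independent_of(s) for s in combinations(range(n), m + 1)):
        m += 1
    return m

# Example: f(x1, x2, x3) = x1 xor x2 is 1st-order correlation immune but not 2nd-order.
print(correlation_immunity_order(lambda x1, x2, x3: x1 ^ x2, 3))   # 1
```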
Results in cryptography
When used in a stream cipher as a combining function for linear feedback shift registers, a Boolean function with low-order correlation-immunity is more susceptible to a correlation attack than a function with correlation immunity of high order.
Siegenthaler showed that the correlation immunity m of a Boolean function of algebraic degree d of n variables satisfies m + d ≤ n; for a given set of input variables, this means that a high algebraic degree will restrict the maximum possible correlation immunity. Furthermore, if the function is balanced then m + d ≤ n − 1.
Further reading
Cusick, Thomas W. & Stanica, Pantelimon (2009). Cryptographic Boolean Functions and Applications. Academic Press.
|
https://en.wikipedia.org/wiki/Reich%20Technologies
|
Reich Technologies was one of the UML Partners, a consortium that was instrumental in the development of standards for the Unified Modeling Language (UML). The company's CEO, Georges-Pierre Reich, represented Reich Technologies on the committee and was involved in the development of the proposal. The proposal was submitted to the Object Management Group (OMG), which approved it circa late 1997.
Profile
Reich Technologies is an international group of companies, providing a coordinated suite of products and services to support object-oriented (OO) software development in large corporations. With a presence throughout Europe and North America, Reich Technologies occupies leading positions in the world markets for integrated OO CASE tools, fine-grained object repositories and OO team programming environments.
The Intelligent Software Factory (ISF) offers an integrated object-oriented CASE tool suite. It is built on the concept of model-driven development, in which the work done at the beginning of a project creates an environment for configuration management and cost containment during software maintenance. ISF was originally built by Franck Barbier, a French researcher in OO modeling.
The Intelligent Artifact Repository (IAR) provides an enterprise-wide resource for the management and reuse of information-system assets. This concept is so powerful that the development team uses ISF and IAR for production, making ISF the first CASE tool to be self-generated. Recognizing the impact of introducing tools, Reich Technologies offers success-oriented services including training, consulting and tool customization. Corporations combine tools, services and processes with their own organizations to implement a Corporate Software Ecology.
Reich Technologies worked with Alistair Cockburn (special advisor to the Central Bank of Norway) and Ralph Hodgson (founder of TopQuadrant) to flesh out the concept of Use Case and integrate it in the context of Responsibi
|
https://en.wikipedia.org/wiki/Chow%20group
|
In algebraic geometry, the Chow groups (named after Wei-Liang Chow) of an algebraic variety over any field are algebro-geometric analogs of the homology of a topological space. The elements of the Chow group are formed out of subvarieties (so-called algebraic cycles) in a similar way to how simplicial or cellular homology groups are formed out of subcomplexes. When the variety is smooth, the Chow groups can be interpreted as cohomology groups (compare Poincaré duality) and have a multiplication called the intersection product. The Chow groups carry rich information about an algebraic variety, and they are correspondingly hard to compute in general.
Rational equivalence and Chow groups
For what follows, define a variety over a field k to be an integral scheme of finite type over k. For any scheme X of finite type over k, an algebraic cycle on X means a finite linear combination of subvarieties of X with integer coefficients. (Here and below, subvarieties are understood to be closed in X, unless stated otherwise.) For a natural number i, the group Z_i(X) of i-dimensional cycles (or i-cycles, for short) on X is the free abelian group on the set of i-dimensional subvarieties of X.
For a variety W of dimension i + 1 and any rational function f on W which is not identically zero, the divisor of f is the i-cycle
div(f) = Σ_Z ord_Z(f) Z,
where the sum runs over all i-dimensional subvarieties Z of W and the integer ord_Z(f) denotes the order of vanishing of f along Z. (Thus ord_Z(f) is negative if f has a pole along Z.) The definition of the order of vanishing requires some care for W singular.
For a scheme X of finite type over k, the group of i-cycles rationally equivalent to zero is the subgroup of Z_i(X) generated by the cycles div(f) for all (i + 1)-dimensional subvarieties W of X and all nonzero rational functions f on W. The Chow group CH_i(X) of i-dimensional cycles on X is the quotient group of Z_i(X) by the subgroup of cycles rationally equivalent to zero. Sometimes one writes [Z] for the class of a subvariety Z in the Chow group, and if two subvarieties Z and Z′ have [Z] = [Z′], then Z and Z′ are said to be rationally equivalent.
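A routine small example (not taken from this article) may help fix the definitions: on the affine line, the divisor of x − a shows that every point class vanishes, so the Chow group of 0-cycles of the affine line is zero.

```latex
% On X = \mathbb{A}^1_k, take W = X (of dimension 1) and, for a closed point a,
% the rational function f = x - a.  Its divisor is the 0-cycle
\operatorname{div}(x - a) \;=\; [a],
% so [a] is rationally equivalent to zero for every point a, and hence
CH_0(\mathbb{A}^1_k) \;=\; 0 .
```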
|
https://en.wikipedia.org/wiki/SMPTE%20292
|
SMPTE 292 is a digital video transmission line standard published by the Society of Motion Picture and Television Engineers (SMPTE). This technical standard is usually referred to as HD-SDI; it is part of a family of standards that define a Serial Digital Interface based on a coaxial cable, intended to be used for transport of uncompressed digital video and audio in a television studio environment.
SMPTE 292 expands upon SMPTE 259 and SMPTE 344, allowing bit rates of 1.485 Gbit/s and 1.485/1.001 Gbit/s. These bit rates are sufficient for, and often used to transfer, uncompressed high-definition video.
Nomenclature
The "M" designator was originally introduced to signify metric dimensions. It is no longer used in listings or filenames. Units of the International System of Units (SI) are the preferred units of measurement in all SMPTE Engineering Documents.
Technical details
The SMPTE 292 standard is a nominally 1.5 Gbit/s interface. Two exact bitrates are defined; 1.485 Gbit/s, and 1.485/1.001 Gbit/s. The factor of 1/1.001 is provided to allow SMPTE 292 to support video formats with frame rates of 59.94 Hz, 29.97 Hz, and 23.98 Hz, in order to be upwards compatible with existing NTSC systems. The 1.485 Gbit/s version of the standard supports other frame rates in widespread use, including 60 Hz, 50 Hz, 30 Hz, 25 Hz, and 24 Hz.
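As a rough illustration, the 1.485 Gbit/s figure can be recovered from an assumed 1080-line raster; the 2200 × 1125 total sample counts below are the common SMPTE 274M values and are stated here as an assumption rather than quoted from this article.

```python
# Illustrative derivation of the 1.485 Gbit/s rate from an assumed 1080-line raster:
# 2200 total samples per line x 1125 total lines x 30 frames/s gives a 74.25 MHz
# sample clock; one 10-bit luma word plus one 10-bit multiplexed chroma word per
# sample period gives 20 bits.

samples_per_line = 2200      # total (active + blanking) luma samples per line
lines_per_frame = 1125       # total lines, 1080 of them active
frames_per_second = 30
bits_per_sample = 20         # 10-bit luma word + 10-bit interleaved chroma word

bit_rate = samples_per_line * lines_per_frame * frames_per_second * bits_per_sample
print(bit_rate)              # 1485000000 -> 1.485 Gbit/s
print(bit_rate / 1.001)      # the 59.94/29.97/23.98 Hz family: 1.485/1.001 Gbit/s
```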
The standard also defines nominal bitrates of 3 Gbit/s, for 50/60 frame per second 1080P applications. This version of the interface is not used (and has not been commercially implemented); instead, either a dual-link extension of SMPTE 292M known as SMPTE 372 or a version running twice as fast known as SMPTE 424 is used for e.g. 1080p60 applications.
Electrical interface
Originally, both electrical and optical interfaces were defined by SMPTE, over concerns that an electrical interface at that bitrate would be expensive or unreliable, and that an optical interface would be necessary. Such fears have not been realized, and
|
https://en.wikipedia.org/wiki/Algebraic%20expression
|
In mathematics, an algebraic expression is an expression built up from constant algebraic numbers, variables, and the algebraic operations (addition, subtraction, multiplication, division and exponentiation by an exponent that is a rational number). For example, 3x² − 2xy + c is an algebraic expression. Since taking the square root is the same as raising to the power 1/2, the following is also an algebraic expression:
√((1 − x²)/(1 + x²))
An algebraic equation is an equation involving only algebraic expressions.
By contrast, transcendental numbers like π and e are not algebraic, since they are not derived from integer constants and algebraic operations. Usually, π is constructed as a geometric relationship, and the definition of e requires an infinite number of algebraic operations.
A rational expression is an expression that may be rewritten to a rational fraction by using the properties of the arithmetic operations (commutative properties and associative properties of addition and multiplication, distributive property and rules for the operations on the fractions). In other words, a rational expression is an expression which may be constructed from the variables and the constants by using only the four operations of arithmetic. Thus,
(3x² − 2xy + c)/(y³ − 1)
is a rational expression, whereas
√((1 − x²)/(1 + x²))
is not.
A rational equation is an equation in which two rational fractions (or rational expressions) of the form
P(x)/Q(x)
are set equal to each other. These expressions obey the same rules as fractions. The equations can be solved by cross-multiplying. Division by zero is undefined, so a solution causing formal division by zero is rejected.
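A short generic worked example (not from this article) shows the cross-multiplication step and the rejection of a solution that forces division by zero:

```latex
\frac{x}{x-1} \;=\; \frac{1}{x-1}
\quad\Longrightarrow\quad x(x-1) = 1\cdot(x-1)
\quad\Longrightarrow\quad (x-1)^2 = 0
\quad\Longrightarrow\quad x = 1 ,
% but x = 1 makes both denominators zero, so this solution is rejected:
% the equation has no solution.
```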
Terminology
Algebra has its own terminology to describe parts of an expression:
1 – exponent (power), 2 – coefficient, 3 – term, 4 – operator, 5 – constant, x, y – variables
In roots of polynomials
The roots of a polynomial expression of degree n, or equivalently the solutions of a polynomial equation, can always be written as algebraic expressions if n < 5 (see quadratic formula, cubic function, an
|
https://en.wikipedia.org/wiki/Featural%20writing%20system
|
In a featural writing system, the shapes of the symbols (such as letters) are not arbitrary but encode phonological features of the phonemes that they represent. The term featural was introduced by Geoffrey Sampson to describe the Korean alphabet and Pitman shorthand.
Joe Martin introduced the term featural notation to describe writing systems that include symbols to represent individual features rather than phonemes. He asserts that "alphabets have no symbols for anything smaller than a phoneme".
A featural script represents finer detail than an alphabet. Here, symbols do not represent whole phonemes, but rather the elements (features) that make up the phonemes, such as voicing or place of articulation. In the Korean alphabet, the featural symbols are combined into alphabetic letters, and these letters are in turn joined into syllabic blocks, so the system combines three levels of phonological representation.
Some scholars (e.g., John DeFrancis) reject this class or at least object to labeling the Korean alphabet as such. Others include stenographies and constructed scripts of hobbyists and fiction writers (such as Tengwar), many of which feature advanced graphic designs corresponding to phonological properties. The basic unit of writing in these systems can map to anything from phonemes to words. It has been shown that even the Latin script has sub-character "features".
Examples of featural systems
This is a small list of examples of featural writing systems by date of creation. The languages for which each system was developed are also shown.
15th century
Hangul – Korean
19th century
Canadian Aboriginal syllabics – several Algonquian, Eskimo-Aleut and Athabaskan languages
Gregg shorthand – many languages from different families
Duployan shorthand – originally French, later English, German, Spanish, Romanian, Chinook Jargon and others
Visible Speech (a phonetic script) – no specific language; developed to aid the deaf and teach them to speak properly
20th century
Shavian a
|
https://en.wikipedia.org/wiki/Glass-ceramic-to-metal%20seals
|
A glass-ceramic-to-metal seal is a type of mechanical seal which binds glass-ceramic and metal surfaces. They are related to glass-to-metal seals, and like them are hermetic (airtight).
Properties
Glass-ceramics are polycrystalline ceramic materials prepared by the controlled crystallization of suitable glasses, normally silicates. Depending on the starting glass composition and the heat-treatment schedule adopted, glass-ceramics can be prepared with tailored thermal expansion characteristics. This makes them ideal for sealing to a variety of different metals, ranging from low expansion tungsten (W) or molybdenum (Mo) to high expansion stainless steels and nickel-based superalloys.
Glass-ceramic-to-metal seals offer superior properties over their glass equivalents including more refractory behaviour, in addition to their ability to seal successfully to many different metals and alloys. They have been used in electrical feed-through seals for such applications as vacuum interrupter envelopes and pyrotechnic actuators, in addition to many applications where a higher temperature capability than is possible with glass-to-metal seals is required, including solid oxide fuel cells.
Process
In the formation of a glass-ceramic-to-metal seal, the parts to be joined are first heated, normally under inert atmosphere, in order to melt the glass and allow it to wet and flow into the metal parts, in much the same way as when preparing a more conventional glass-to-metal seal. The temperature is then normally reduced into a temperature regime where many microscopic nuclei are formed in the glass. The temperature is then raised again into a regime where the major crystalline phases can form and grow to create the polycrystalline ceramic material with thermal expansion characteristics matched to that of the particular metal parts.
Examples
The white opaque "glue" between the panel and the funnel of a colour TV cathode ray tube is a
devitrified solder glass based on the system --.
|
https://en.wikipedia.org/wiki/Poisson%20ring
|
In mathematics, a Poisson ring is a commutative ring on which an anticommutative and distributive binary operation satisfying the Jacobi identity and the product rule is defined. Such an operation is then known as the Poisson bracket of the Poisson ring.
Many important operations and results of symplectic geometry and Hamiltonian mechanics may be formulated in terms of the Poisson bracket and, hence, apply to Poisson algebras as well. This observation is important in studying the classical limit of quantum mechanics—the non-commutative algebra of operators on a Hilbert space has the Poisson algebra of functions on a symplectic manifold as a singular limit, and properties of the non-commutative algebra pass over to corresponding properties of the Poisson algebra.
Definition
The Poisson bracket must satisfy the identities
{f, g} = −{g, f} (skew symmetry)
{f + g, h} = {f, h} + {g, h} (distributivity)
{fg, h} = f{g, h} + {f, h}g (derivation)
{f, {g, h}} + {g, {h, f}} + {h, {f, g}} = 0 (Jacobi identity)
for all f, g, h in the ring.
A Poisson algebra is a Poisson ring that is also an algebra over a field. In this case, add the extra requirement
{sf, g} = s{f, g}
for all scalars s.
For each g in a Poisson ring A, the operation ad_g defined as ad_g(f) = {f, g} is a derivation. If the set {ad_g : g ∈ A} generates the set of derivations of A, then A is said to be non-degenerate.
If a non-degenerate Poisson ring is isomorphic as a commutative ring to the algebra of smooth functions on a manifold M, then M must be a symplectic manifold and { , } is the Poisson bracket defined by the symplectic form.
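The canonical example of such a bracket is the one induced on functions of (q, p) by the standard symplectic form, {f, g} = ∂f/∂q ∂g/∂p − ∂f/∂p ∂g/∂q. The following sketch (illustrative, using SymPy) checks the derivation property and the Jacobi identity for a few polynomials.

```python
import sympy as sp

q, p = sp.symbols("q p")

def bracket(f, g):
    """Canonical Poisson bracket on functions of (q, p)."""
    return sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)

f, g, h = q**2 * p, q + p**3, q * p

# derivation (product rule) in the first argument:
print(sp.simplify(bracket(f * g, h) - (f * bracket(g, h) + bracket(f, h) * g)))  # 0

# Jacobi identity:
jacobi = bracket(f, bracket(g, h)) + bracket(g, bracket(h, f)) + bracket(h, bracket(f, g))
print(sp.simplify(jacobi))  # 0
```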
|
https://en.wikipedia.org/wiki/Dynamic%20logic%20%28digital%20electronics%29
|
In integrated circuit design, dynamic logic (or sometimes clocked logic) is a design methodology in combinational logic circuits, particularly those implemented in metal–oxide–semiconductor (MOS) technology. It is distinguished from the so-called static logic by exploiting temporary storage of information in stray and gate capacitances. It was popular in the 1970s and has seen a recent resurgence in the design of high-speed digital electronics, particularly central processing units (CPUs). Dynamic logic circuits are usually faster than static counterparts and require less surface area, but are more difficult to design. Dynamic logic has a higher average rate of voltage transitions than static logic, but the capacitive loads being transitioned are smaller so the overall power consumption of dynamic logic may be higher or lower depending on various tradeoffs. When referring to a particular logic family, the dynamic adjective usually suffices to distinguish the design methodology, e.g. dynamic CMOS or dynamic SOI design.
Besides its use of dynamic state storage via voltages on capacitances, dynamic logic is distinguished from so-called static logic in that dynamic logic uses a clock signal in its implementation of combinational logic. The usual use of a clock signal is to synchronize transitions in sequential logic circuits. For most implementations of combinational logic, a clock signal is not even needed. The static/dynamic terminology used to refer to combinatorial circuits is related to the use of the same adjectives used to distinguish memory devices, e.g. static RAM from dynamic RAM, in that dynamic RAM stores state dynamically as voltages on capacitances, which must be periodically refreshed. But there are also differences in usage; the clock can be stopped in the appropriate phase in a system with dynamic logic and static storage.
Static versus dynamic logic
The largest difference between static and dynamic logic is that in dynamic logic, a clock signa
|
https://en.wikipedia.org/wiki/Ancillary%20data
|
Ancillary data is data that has been added to given data and uses the same form of transport. Common examples are cover art images for media files or streams, or digital data added to radio or television broadcasts.
Television
Ancillary data (commonly abbreviated as ANC data), in the context of television systems, refers to a means by which non-video information (such as audio, other forms of essence, and metadata) may be embedded within the serial digital interface. Ancillary data is standardized by SMPTE as SMPTE 291M: Ancillary Data Packet and Space Formatting.
Ancillary data can be located in non-picture portions of horizontal scan lines. This is known as horizontal ancillary data (HANC). Ancillary data can also be located in non-picture regions of the frame. This is known as vertical ancillary data (VANC).
Technical details
Location
Ancillary data packets may be located anywhere within a serial digital data stream, with the following exceptions:
They should not be located in the lines identified as a switch point (which may be lost when switching sources).
They should not be located in the active picture area.
They may not cross the TRS (timing reference signal) packets.
Ancillary data packets are commonly divided into two types, depending on where they are located—specific packet types are often constrained to be in one location or another.
Ancillary packets located in the horizontal blanking region (after EAV but before SAV), regardless of line, are known as horizontal ancillary data, or HANC. HANC is commonly used for higher-bandwidth data, and/or for things that need to be synchronized to a particular line; the most common type of HANC is embedded audio.
Ancillary packets located in the vertical blanking region, and after SAV but before EAV, are known as vertical ancillary data, or VANC. VANC is commonly used for low-bandwidth data, or for things that only need be updated on a per-field or per-frame rate. Closed caption data and VPID are ge
|
https://en.wikipedia.org/wiki/Alphabet%20%28formal%20languages%29
|
In formal language theory, an alphabet, sometimes called a vocabulary, is a non-empty set of indivisible symbols/glyphs, typically thought of as representing letters, characters, digits, phonemes, or even words. Alphabets in this technical sense of a set are used in a diverse range of fields including logic, mathematics, computer science, and linguistics. An alphabet may have any cardinality ("size") and, depending on its purpose, may be finite (e.g., the alphabet of letters "a" through "z"), countable, or even uncountable.
Strings, also known as "words" or "sentences", over an alphabet are defined as sequences of symbols from the alphabet. For example, the alphabet of lowercase letters "a" through "z" can be used to form English words like "iceberg", while the alphabet of both upper and lower case letters can also be used to form proper names like "Wikipedia". A common alphabet is {0,1}, the binary alphabet, and "00101111" is an example of a binary string. Infinite sequences of symbols may be considered as well (see Omega language).
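Strings over a finite alphabet can be enumerated mechanically. The following sketch (illustrative only) builds the set of all strings of a given length over the binary alphabet and checks membership of the example string.

```python
from itertools import product

sigma = {"0", "1"}                        # the binary alphabet

def strings_of_length(alphabet, n):
    """All strings of length n over the alphabet (the set often written Sigma^n)."""
    return {"".join(w) for w in product(sorted(alphabet), repeat=n)}

print(strings_of_length(sigma, 3))        # the 2^3 = 8 binary strings of length 3
print("00101111" in strings_of_length(sigma, 8))   # True: a binary string of length 8
```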
It is often necessary for practical purposes to restrict the symbols in an alphabet so that they are unambiguous when interpreted. For instance, if the two-member alphabet is {00,0}, a string written on paper as "000" is ambiguous because it is unclear if it is a sequence of three "0" symbols, a "00" followed by a "0", or a "0" followed by a "00".
Notation
If L is a formal language, i.e. a (possibly infinite) set of finite-length strings, the alphabet of L is the set of all symbols that may occur in any string in L.
For example, if L is the set of all variable identifiers in the programming language C, L's alphabet is the set { a, b, c, ..., x, y, z, A, B, C, ..., X, Y, Z, 0, 1, 2, ..., 7, 8, 9, _ }.
Given an alphabet Σ, the set of all strings of length n over the alphabet Σ is indicated by Σ^n. The set of all finite strings (regardless of their length) is indicated by the Kleene star operator as Σ*, and is also
|
https://en.wikipedia.org/wiki/Saccharimeter
|
A saccharimeter is an instrument for measuring the concentration of sugar solutions.
This is commonly achieved using a measurement of refractive index (refractometer) or the angle of rotation of polarization of optically active sugars (polarimeter).
Saccharimeters are used in food processing industries, brewing, and the distilled alcoholic drinks industry.
External links
Historical
Bates Type Saccharimeter NIST Museum object.
|
https://en.wikipedia.org/wiki/Magnetic%20detector
|
The magnetic detector or Marconi magnetic detector, sometimes called the "Maggie", was an early radio wave detector used in some of the first radio receivers to receive Morse code messages during the wireless telegraphy era around the turn of the 20th century. Developed in 1902 by radio pioneer Guglielmo Marconi from a method invented in 1895 by New Zealand physicist Ernest Rutherford, it was used in Marconi wireless stations until around 1912, when it was superseded by vacuum tubes. It was widely used on ships because of its reliability and insensitivity to vibration. A magnetic detector was part of the wireless apparatus in the radio room of the RMS Titanic, which was used to summon help during its famous 15 April 1912 sinking.
History
The primitive spark gap radio transmitters used during the first three decades of radio (1886-1916) could not transmit audio (sound) and instead transmitted information by wireless telegraphy; the operator switched the transmitter on and off with a telegraph key, creating pulses of radio waves to spell out text messages in Morse code. So the radio receiving equipment of the time did not have to convert the radio waves into sound like modern receivers, but merely detect the presence or absence of the radio signal. The device that did this was called a detector. The first widely used detector was the coherer, invented in 1890. The coherer was a very poor detector, insensitive and prone to false triggering due to impulsive noise, which motivated much research to find better radio wave detectors.
Ernest Rutherford had first used the hysteresis of iron to detect Hertzian waves in 1896, by the demagnetization of an iron needle when a radio signal passed through a coil around the needle; however, the needle had to be remagnetized, so this was not suitable for a continuous detector. Many other wireless researchers such as E. Wilson, C. Tissot, Reginald Fessenden, John Ambrose Fleming, Lee De Forest, J.C. Balsillie, and L. Tieri
|
https://en.wikipedia.org/wiki/HelenOS
|
HelenOS is an operating system based on a multiserver microkernel design. The source code of HelenOS is written in C and published under the BSD-3-Clause license.
The system is described as a “research development open-source operating system”.
Technical overview
The microkernel handles multitasking, memory management and inter-process communication. It also provides kernel-based threads and supports symmetric multiprocessing.
As is typical of a microkernel design, file systems, networking, device drivers and the graphical user interface are isolated from each other in a collection of user-space components that communicate via a message bus.
Each process (called a task) can contain several threads (preemptively scheduled by the kernel) which, in turn, can contain several fibers scheduled cooperatively in user space. Device and file-system drivers, as well as other system services, are implemented by a collection of user-space tasks (servers), thus creating the multiserver nature of HelenOS.
Tasks communicate via HelenOS IPC, which is connection oriented and asynchronous. It can be used to send small fixed-size messages, blocks of bytes or to negotiate sharing of memory. Messages can be forwarded without copying bulk data or mapping memory to the address space of middle-men tasks.
Development
HelenOS development is community-driven. The developer community consists of a small core team, mainly staff and former and contemporary students of the Faculty of Mathematics and Physics at Charles University in Prague, and a number of contributors around the world. In 2011, 2012 and 2014, HelenOS participated in the Google Summer of Code as a mentoring organization. In 2013, the project was a mentoring organization in the ESA Summer of Code in Space 2013 program.
The source code of HelenOS is published under the BSD-3-Clause license, while some third-party components are available under the GNU General Public License. Both of these licences are free software licenses, making Hele
|
https://en.wikipedia.org/wiki/Link%20%28simplicial%20complex%29
|
The link in a simplicial complex is a generalization of the neighborhood of a vertex in a graph. The link of a vertex encodes information about the local structure of the complex at the vertex.
Link of a vertex
Given an abstract simplicial complex X and a vertex v in X, its link Lk(v, X) is the set containing every face F ∈ X such that v ∉ F and F ∪ {v} is a face of X.
In the special case in which X is a 1-dimensional complex (that is, a graph), Lk(v, X) contains all vertices u such that {u, v} is an edge in the graph; that is, the neighborhood of v in the graph.
Given a geometric simplicial complex X and a vertex v, its link Lk(v, X) is the set containing every face F such that v ∉ F and there is a simplex in X that has v as a vertex and F as a face. Equivalently, the join v ⋆ F is a face in X.
As an example, suppose v is the top vertex of the tetrahedron at the left. Then the link of v is the triangle at the base of the tetrahedron. This is because, for each edge of that triangle, the join of v with the edge is a triangle (one of the three triangles at the sides of the tetrahedron); and the join of v with the triangle itself is the entire tetrahedron.
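The tetrahedron example can be checked mechanically when the abstract simplicial complex is represented as a set of faces. The following sketch (an illustration only, with helper names invented for the example) computes a link directly from the definition given above.

```python
from itertools import combinations

def closure(maximal_faces):
    """Abstract simplicial complex: all nonempty subsets of the given maximal faces."""
    complex_ = set()
    for face in maximal_faces:
        for r in range(1, len(face) + 1):
            complex_.update(frozenset(c) for c in combinations(face, r))
    return complex_

def link(v, complex_):
    """Faces F with v not in F such that F together with {v} is also a face."""
    return {F for F in complex_ if v not in F and (F | {v}) in complex_}

# The tetrahedron on vertices 1..4 (one maximal 3-dimensional face).
tetra = closure([(1, 2, 3, 4)])
print(sorted(sorted(F) for F in link(4, tetra)))
# [[1], [1, 2], [1, 2, 3], [1, 3], [2], [2, 3], [3]] -- the opposite triangle and its faces
```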
An alternative definition is: the link of a vertex v is the graph Lk(v) constructed as follows. The vertices of Lk(v) are the edges of the complex incident to v. Two such edges are adjacent in Lk(v) iff they are incident to a common 2-cell at v.
The graph Lk(v) is often given the topology of a ball of small radius centred at v; it is an analog to a sphere centered at a point.
Link of a face
The definition of a link can be extended from a single vertex to any face.
Given an abstract simplicial complex X and any face F of X, its link Lk(F, X) is the set containing every face G such that F and G are disjoint and F ∪ G is a face of X: Lk(F, X) = { G ∈ X : F ∩ G = ∅, F ∪ G ∈ X }.
Given a geometric simplicial complex X and any face F, its link is the set containing every face G such that F and G are disjoint and there is a simplex in X that has both F and G as faces.
Examples
The link of a vertex of a tetrahedron is a triangle – the three vertices of the link correspond to the three edges incident to
|
https://en.wikipedia.org/wiki/Object-oriented%20modeling
|
Object-oriented modeling (OOM) is an approach to modeling an application that is used at the beginning of the software life cycle when using an object-oriented approach to software development.
The software life cycle is typically divided up into stages going from abstract descriptions of the problem to designs then to code and testing and finally to deployment. Modeling is done at the beginning of the process. The reasons to model a system before writing the code are:
Communication. Users typically cannot understand programming languages or code. Model diagrams can be more understandable and can allow users to give developers feedback on the appropriate structure of the system. A key goal of the object-oriented approach is to decrease the "semantic gap" between the system and the real world by using terminology that mirrors the functions that users perform. Modeling is an essential tool for achieving this goal.
Abstraction. A goal of most software methodologies is to first address "what" questions and then address "how" questions. I.e., first determine the functionality the system is to provide without consideration of implementation constraints and then consider how to take this abstract description and refine it into an implementable design and code given constraints such as technology and budget. Modeling enables this by allowing abstract descriptions of processes and objects that define their essential structure and behavior.
Object-oriented modeling is typically done via use cases and abstract definitions of the most important objects. The most common language used to do object-oriented modeling is the Object Management Group's Unified Modeling Language (UML).
See also
Object-oriented analysis and design
References
Object-oriented programming
Software design
|
https://en.wikipedia.org/wiki/Charles%20Read%20%28mathematician%29
|
Charles John Read (16 February 1958 – 14 August 2015) was a British mathematician known for his work in functional analysis. In operator theory, he is best known for his work in the 1980s on the invariant subspace problem, where he constructed operators with only trivial invariant subspaces on particular Banach spaces, especially on the sequence space ℓ1. He won the 1985 Junior Berwick Prize for his work on the invariant subspace problem.
Read has also published on Banach algebras and hypercyclicity; in particular, he constructed the first example of an amenable, commutative, radical Banach algebra.
Education and career
Read won a scholarship to study mathematics at Trinity College, Cambridge in October 1975, and was awarded a first-class degree in Mathematics in 1978. He completed his PhD thesis entitled Some Problems in the Geometry of Banach Spaces at the University of Cambridge under the supervision of Béla Bollobás. He spent the year 1981–82 at Louisiana State University. From 2000 until his death, he was a Professor of Pure Mathematics at the University of Leeds after having been a fellow of Trinity College for several years.
Personal life
Christianity
On his personal website, formerly hosted on a server at the University of Leeds, Read described himself first and foremost as a Born-Again Christian. Some biographical details could be found in what he described as his "Christian Testimony" on that site, where he described his conversion process.
He described losing his father to cancer in 1970, when he was 11 years old, and said that this loss prompted him to ask whether, and in what form, we might continue to live after we die, and whether consciousness might be independent of the body. He came to the conclusion that the conscious mind must survive after death. This also led him to believe that, since we are "immortal beings", we must always try to "do the right thing".
Some time later the article described an incident where he had pushed a smaller boy out of the
|
https://en.wikipedia.org/wiki/Perillaldehyde
|
Perillaldehyde, perillic aldehyde or perilla aldehyde, is a natural organic compound found most abundantly in the annual herb perilla, but also in a wide variety of other plants and essential oils. It is a monoterpenoid containing an aldehyde functional group.
Perillaldehyde, or volatile oils from perilla that are rich in perillaldehyde, are used as food additives for flavoring and in perfumery to add spiciness. Perillaldehyde can be readily converted to perilla alcohol, which is also used in perfumery. It has a mint-like, cinnamon odor and is primarily responsible for the flavor of perilla.
The oxime of perillaldehyde is known as perillartine or perilla sugar; it is about 2,000 times sweeter than sucrose and is used in Japan as a sweetener. Perillaldehyde is present at lower concentrations in the body odor of people suffering from Parkinson's disease.
See also
Icosane
References
External links
Aldehydes
Food additives
Perfume ingredients
Sugar substitutes
Monoterpenes
Cyclohexenes
|
https://en.wikipedia.org/wiki/COGO
|
COGO is a suite of programs used in civil engineering for modelling horizontal and vertical alignments and solving coordinate geometry problems. Cogo alignments are used as controls for the geometric design of roads, railways, and stream relocations or restorations.
COGO was originally a subsystem of MIT's Integrated Civil Engineering System (ICES), developed in the 1960s. Other ICES subsystems included STRUDL, BRIDGE, LEASE, PROJECT, ROADS and TRANSET, and the internal languages ICETRAN and CDL. Evolved versions of COGO are still widely used.
Some basic types of elements of COGO are points, Euler spirals, lines and horizontal curves (circular arcs).
More complex elements, such as alignments or chains, can be built up from combinations of points, lines, curves and spirals.
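As an illustration of the kind of coordinate-geometry problem such a system solves, the following is a minimal Python sketch (plain code, not actual COGO command syntax) of a basic "inverse" computation: the distance and azimuth between two points in a north/east grid.
import math

def inverse(n1, e1, n2, e2):
    # Distance and azimuth (clockwise from grid north) from point 1 to point 2.
    dn, de = n2 - n1, e2 - e1
    distance = math.hypot(dn, de)
    azimuth = math.degrees(math.atan2(de, dn)) % 360.0   # 0 deg = north, 90 deg = east
    return distance, azimuth

dist, az = inverse(1000.0, 2000.0, 1100.0, 2100.0)
print(f"distance = {dist:.2f}, azimuth = {az:.1f} deg")  # 141.42, 45.0 deg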
See also
Civil engineering software
References
"Engineer's Guide to ICES COGO I", R67-46, Civil Engineering Dept MIT (Aug 1967)
"An Integrated Computer System for Engineering Problem Solving", D. Roos, Proc SJCC 27(2), AFIPS (Spring 1965). Sammet 1969, pp.615-620.
Mathematical software
Surveying
History of software
|
https://en.wikipedia.org/wiki/Completeness%20%28cryptography%29
|
In cryptography, a boolean function is said to be complete if the value of each output bit depends on all input bits.
This is a desirable property in an encryption cipher, so that if one bit of the input (plaintext) is changed, every bit of the output (ciphertext) has on average a 50% probability of changing. The easiest way to see why this matters is to consider a cipher that is not complete: suppose that changing the last byte of an 8-byte plaintext block only ever affects the 8th byte of the ciphertext. An attacker who collected the 256 possible plaintext–ciphertext pairs for that byte would then always know the last byte of every 8-byte block we send (effectively 12.5% of all our data). Collecting 256 plaintext–ciphertext pairs is not hard at all on the Internet, given that standard protocols are used, and standard protocols have standard headers and commands (e.g. "get", "put", "mail from:", etc.) which the attacker can safely guess. On the other hand, if the cipher is complete (and is generally secure in other ways, too), the attacker would need to collect about 2^64 (roughly 1.8 × 10^19) plaintext–ciphertext pairs to crack the cipher in this way.
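Completeness can be checked empirically by flipping each input bit over many random inputs and recording which output bits ever change. The sketch below does this in Python for a made-up 8-bit toy function (illustrative only, not a real cipher).
import random

def toy_function(x):                      # 8 bits in, 8 bits out; not a real cipher
    x = (x * 167 + 13) & 0xFF             # an arbitrary mixing step
    return ((x << 3) | (x >> 5)) & 0xFF   # rotate left by 3 within 8 bits

def completeness_matrix(f, bits=8, trials=1000):
    # dep[i][j] is True if flipping input bit i ever changed output bit j.
    dep = [[False] * bits for _ in range(bits)]
    for _ in range(trials):
        x = random.randrange(1 << bits)
        y = f(x)
        for i in range(bits):
            d = y ^ f(x ^ (1 << i))       # output difference for a one-bit input change
            for j in range(bits):
                if d & (1 << j):
                    dep[i][j] = True
    return dep

dep = completeness_matrix(toy_function)
print("complete:", all(all(row) for row in dep))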
See also
Correlation immunity
Cryptography
|
https://en.wikipedia.org/wiki/Audio%20noise%20measurement
|
Audio noise measurement is a process carried out to assess the quality of audio equipment, such as the kind used in recording studios, broadcast engineering, and in-home high fidelity.
In audio equipment, noise is a low-level hiss or buzz that intrudes on the audio output. Every piece of equipment that the recorded signal subsequently passes through adds a certain amount of electronic noise; the process of removing this and other noises is called noise reduction.
Origins of noise – the need for weighting
Microphones, amplifiers and recording systems all add some electronic noise to the signals passing through them, generally described as hum, buzz or hiss. All buildings have low-level magnetic and electrostatic fields in and around them emanating from mains supply wiring, and these can induce hum into signal paths, typically 50 Hz or 60 Hz (depending on the country's electrical supply standard) and low-order harmonics. Shielded cables help to prevent this, and on professional equipment where longer interconnections are common, balanced signal connections (most often with XLR or phone connectors) are usually employed. Hiss is the result of random signals, often arising from the random motion of electrons in transistors and other electronic components, or the random distribution of oxide particles on analog magnetic tape. It is predominantly heard at high frequencies, sounding like steam or compressed air.
Attempts to measure noise in audio equipment as RMS voltage, using a simple level meter or voltmeter, do not produce useful results; a special noise-measuring instrument is required. This is because noise contains energy spread over a wide range of frequencies and levels, and different sources of noise have different spectral content. For measurements to allow fair comparison of different systems they must be made using a measuring instrument that responds in a way that corresponds to how we hear sounds. From this, three requirements follow. Firstly, it is important
|
https://en.wikipedia.org/wiki/Rasterscan
|
Rasterscan is a video game published in 1987 by Mastertronic for the ZX Spectrum, Commodore 64, Amstrad CPC, and MSX. It was written by Binary Design based in Parsonage Gardens, Manchester with the C64 version programmed by Phillip Allsopp.
Plot
The Rasterscan, a large damaged spacecraft, is drifting uncontrollably towards a nearby star. The Rasterscan can still be controlled and piloted to safety but only by a droid called MSB. Unfortunately, MSB is also damaged and (without help) can only repair toasters. The player needs to control MSB and, hopefully, use it to save the unfortunate spacecraft.
Gameplay
The player controls MSB, a spherical droid who can float through the interior of the ship in all directions. MSB can interact with the craft's machinery and instruments, which all serve a purpose. It also needs to solve logic-puzzles in order to open doors (different puzzles for each door) to allow it access to more parts of the spacecraft.
External links
Rasterscan at CPC WIKI
1987 video games
Amstrad CPC games
Binary Design games
Commodore 64 games
Mastertronic games
MSX games
Puzzle video games
Single-player video games
Video games developed in the United Kingdom
ZX Spectrum games
|
https://en.wikipedia.org/wiki/Lindisfarne%20Association
|
The Lindisfarne Association (1972–2012) was a nonprofit foundation and diverse group of intellectuals organized by cultural historian William Irwin Thompson for the "study and realization of a new planetary culture".
It was inspired by the philosophy of Alfred North Whitehead's idea of an integral philosophy of organism, and by Teilhard de Chardin's idea of planetization.
History
Thompson conceived the idea for the Lindisfarne association while touring spiritual sites and experimental communities around the world. The Lindisfarne Association is named for Lindisfarne Priory—a monastery, known for the Lindisfarne Gospels, founded on the British island of Lindisfarne in the 7th century.
Advertising executive Gene Fairly had just left his position at Interpublic Group of Companies and begun studying Zen Buddhism when he read a review of Thompson's At the Edge of History in the New York Times. Fairly visited Thompson at York University in Toronto to discuss forming a group for the promotion of planetary culture. Upon returning to New York he raised $150,000 from such donors as Nancy Wilson Ross and Sydney and Jean Lanier. Support from these donors served as an entrée to the Rockefeller Brothers Fund.
Incorporation and first years in New York
Lindisfarne was incorporated as a non-profit educational foundation in December 1972. It began operations at a refitted summer camp in Southampton, New York on August 31, 1973.
From 1974–1977 Lindisfarne held an annual conference "to explore the new planetary culture" with the following themes:
Planetary Culture and the New Image of Humanity, 1974
Conscious Evolution and the Evolution of Consciousness, 1975
A Light Governance for America: the Cultures and Strategies of Decentralization, 1976
Mind in Nature, 1977
Earth's Answer: Explorations of Planetary Culture at the Lindisfarne Conferences (1977) reprints some of the lectures given at the 1974 and 1975 conferences.
The Lindisfarne Association was first based in South
|
https://en.wikipedia.org/wiki/Respiratory%20quotient
|
The respiratory quotient (RQ or respiratory coefficient) is a dimensionless number used in calculations of basal metabolic rate (BMR) when estimated from carbon dioxide production. It is calculated from the ratio of carbon dioxide produced by the body to oxygen consumed by the body. Such measurements, like measurements of oxygen uptake, are forms of indirect calorimetry. It is measured using a respirometer. The respiratory quotient value indicates which macronutrients are being metabolized, as different energy pathways are used for fats, carbohydrates, and proteins. If metabolism consists solely of lipids, the respiratory quotient is approximately 0.7, for proteins it is approximately 0.8, and for carbohydrates it is 1.0. Most of the time, however, energy consumption is composed of both fats and carbohydrates. The approximate respiratory quotient of a mixed diet is 0.8. Some of the other factors that may affect the respiratory quotient are energy balance, circulating insulin, and insulin sensitivity.
It can be used in the alveolar gas equation.
Calculation
The respiratory quotient (RQ) is the ratio:
RQ = CO2 eliminated / O2 consumed
where the term "eliminated" refers to carbon dioxide (CO2) removed from the body.
In this calculation, the CO2 and O2 must be given in the same units, and in quantities proportional to the number of molecules. Acceptable inputs would be either moles, or else volumes of gas at standard temperature and pressure.
Many metabolized substances are compounds containing only the elements carbon, hydrogen, and oxygen. Examples include fatty acids, glycerol, carbohydrates, deamination products, and ethanol. For complete oxidation of such compounds, the chemical equation is
CxHyOz + (x + y/4 - z/2) O2 → x CO2 + (y/2) H2O
and thus metabolism of this compound gives an RQ of x/(x + y/4 - z/2).
For glucose, with the molecular formula C6H12O6, the complete oxidation equation is C6H12O6 + 6 O2 → 6 CO2 + 6 H2O. Thus, RQ = 6 CO2 / 6 O2 = 1.
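The general formula can be applied directly in a short calculation, sketched here in Python (the helper function is illustrative, not from any standard library).
def rq(x, y, z):
    # Respiratory quotient for complete oxidation of a compound CxHyOz.
    return x / (x + y / 4 - z / 2)

print(round(rq(6, 12, 6), 2))    # glucose C6H12O6 -> 1.0
print(round(rq(16, 32, 2), 2))   # palmitic acid C16H32O2 -> roughly 0.7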
Fo
|
https://en.wikipedia.org/wiki/Armstrong%20Audio
|
Armstrong Audio, originally called Armstrong Wireless and Television Ltd., was a British manufacturer of radios and other audio equipment based in London, England. It was founded by Claude Charles Jackson in 1932.
History
Initially created to manufacture portable radios, during World War II their factory was used to manufacture radios, public address systems, and various electronic parts. After the war, they began to produce television sets, as well as long range radios for ships, but eventually ceased production of those lines to manufacture radios, amplifiers and tuners for home consumer use. In the 1950s when the high fidelity market began to take shape, the company name was changed to Armstrong Audio and they focused their marketing and manufacturing at becoming hi-fi specialists.
During the 1960s and 1970s they were extremely successful, creating several durable radio models which are still in use by consumers today, but by the end of the 1970s their lease on their factory ran out and it was decided not to invest in a new one. The building was torn down and the owners redeveloped it. Using plans developed for a further radio model, some of the staff continued on as Armstrong Amplifiers, but due to a lack of capital and suitable manufacturing space, production did not last long.
Today, what was once Armstrong Audio is called Armstrong Hi-Fi and Video Services; it is based in Walthamstow and provides maintenance contracts to a number of retail stores.
Armstrong 521
The Armstrong 521 was a stereo hi-fi amplifier from the Armstrong Audio company and was marketed as 2 x 25W amplifier.
It employed germanium AL102 transistors in its output stages; these had a reputation for failure and are now unobtainable, although it is possible, with modification, to replace them with newer silicon transistors. The amplifier was a single-rail design and employed an electrolytic output capacitor in the output stage. The amplifier featured inputs for tape, tuner and MM g
|
https://en.wikipedia.org/wiki/SCSI%20architectural%20model
|
The SCSI architectural model provides an abstract view of the way that SCSI devices communicate. It is intended to show how the different SCSI standards are inter-related. The main concepts and terminology of the SCSI architectural model are:
Only the externally observable behavior is defined in SCSI standards.
The relationship between SCSI devices is described by a client-server service-delivery model. The client is called a SCSI initiator and the server is called a SCSI target.
A SCSI domain consists of at least one SCSI device, at least one SCSI target and at least one SCSI initiator interconnected by a service delivery subsystem.
A SCSI device has one or more SCSI ports, and a SCSI port may have an optional SCSI port identifier (SCSI ID or PID).
A SCSI device can have an optional SCSI device name which must be unique within the SCSI domain in which the SCSI device has SCSI ports. This is often called a World Wide Name. Note that the "world" may only consist of a very small number of SCSI devices.
A SCSI target consists of one or more logical units (LUNs), which are identified by logical unit numbers.
A LUN may have dependent LUNs embedded within it. This can recur up to a maximum nesting depth of four addressable levels.
There are three types of SCSI ports: initiator ports, target ports and target/initiator ports. A SCSI device may contain any combination of initiator ports, target ports and target/initiator ports.
SCSI distributed objects are considered to communicate in a three layer model:
The highest level of abstraction is the SCSI Application Layer (SAL) where an initiator and a target are considered to communicate using SCSI commands sent via the SCSI application protocol.
The SCSI Transport Protocol Layer (STPL) is where an initiator and a target are considered to communicate using a SCSI transport protocol. Examples of SCSI transport protocols are Fibre Channel, SSA, SAS, UAS, iSCSI and the SCSI Parallel Interface.
The lowest level i
|
https://en.wikipedia.org/wiki/Opportunistic%20encryption
|
Opportunistic encryption (OE) refers to any system that, when connecting to another system, attempts to encrypt communications channels, otherwise falling back to unencrypted communications. This method requires no pre-arrangement between the two systems.
Opportunistic encryption can be used to combat passive wiretapping. (An active wiretapper, on the other hand, can disrupt encryption negotiation to either force an unencrypted channel or perform a man-in-the-middle attack on the encrypted link.) It does not provide a strong level of security, as authentication may be difficult to establish and secure communications are not mandatory. However, it does make the encryption of most Internet traffic easy to implement, which removes a significant impediment to the mass adoption of Internet traffic security.
Opportunistic encryption on the Internet is described in "Opportunistic Encryption using the Internet Key Exchange (IKE)", "Opportunistic Security: Some Protection Most of the Time", and in "Opportunistic Security for HTTP/2".
Routers
The FreeS/WAN project was one of the early proponents of OE. The effort is continued by the former freeswan developers now working on Libreswan. Libreswan aims to support different authentication hooks for Opportunistic Encryption with IPsec. Version 3.16, which was released in December 2015, had support for Opportunistic IPsec using AUTH-NULL which is based on RFC 7619. The Libreswan Project is currently working on (forward) Domain Name System Security Extensions (DNSSEC) and Kerberos support for Opportunistic IPsec.
Openswan has also been ported to the OpenWrt project. Openswan used reverse DNS records to facilitate the key exchange between the systems.
It is possible to use OpenVPN and networking protocols to set up dynamic VPN links which act similar to OE for specific domains.
Linux and Unix-like systems
The FreeS/WAN and forks such as Openswan and strongSwan offer VPNs that can also operate in OE mode using IPsec-based tech
|
https://en.wikipedia.org/wiki/Reciprocal%20rule
|
In calculus, the reciprocal rule gives the derivative of the reciprocal of a function f in terms of the derivative of f. The reciprocal rule can be used to show that the power rule holds for negative exponents if it has already been established for positive exponents. Also, one can readily deduce the quotient rule from the reciprocal rule and the product rule.
The reciprocal rule states that if f is differentiable at a point x and f(x) ≠ 0, then g(x) = 1/f(x) is also differentiable at x and g'(x) = -f'(x) / f(x)^2.
Proof
This proof relies on the premise that f is differentiable at x and on the theorem that f is then also necessarily continuous there. Applying the definition of the derivative of g = 1/f at x, with f(x) ≠ 0, gives
g'(x) = lim_{h→0} (1/f(x + h) - 1/f(x)) / h = lim_{h→0} [ -((f(x + h) - f(x)) / h) · (1 / (f(x + h) f(x))) ].
The limit of this product exists and is equal to the product of the existing limits of its factors:
lim_{h→0} ( -(f(x + h) - f(x)) / h ) · lim_{h→0} ( 1 / (f(x + h) f(x)) ).
Because of the differentiability of f at x, the first limit equals -f'(x), and because of f(x) ≠ 0 and the continuity of f at x, the second limit equals 1/f(x)^2, thus yielding g'(x) = -f'(x)/f(x)^2.
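The rule is easy to check numerically; the following Python sketch (illustrative only) compares the formula against a central finite-difference derivative of 1/f for a sample function.
def f(x):  return x**3 + 2*x + 5
def fp(x): return 3*x**2 + 2              # f'(x)
def g(x):  return 1.0 / f(x)

x, h = 1.7, 1e-6
by_rule = -fp(x) / f(x)**2                # reciprocal rule
by_difference = (g(x + h) - g(x - h)) / (2 * h)
print(by_rule, by_difference)             # the two values agree to many decimal places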
A weak reciprocal rule that follows algebraically from the product rule
It may be argued that, since
f(x) · (1/f(x)) = 1,
an application of the product rule says that
f'(x) · (1/f(x)) + f(x) · (1/f)'(x) = 0,
and this may be algebraically rearranged to say
(1/f)'(x) = -f'(x) / f(x)^2.
However, this fails to prove that 1/f is differentiable at x; it is valid only when differentiability of 1/f at x is already established. In that way, it is a weaker result than the reciprocal rule proved above. However, in the context of differential algebra, in which there is nothing that is not differentiable and in which derivatives are not defined by limits, it is in this way that the reciprocal rule and the more general quotient rule are established.
Application to generalization of the power rule
Often the power rule, stating that d/dx(x^n) = n x^(n-1), is proved by methods that are valid only when n is a nonnegative integer. This can be extended to negative integers n by letting n = -m, where m is a positive integer.
Application to a proof of the quotient rule
The reciprocal rule is a special case of the quotient rule, which states that if f and g are differentiab
|
https://en.wikipedia.org/wiki/Geometric%20dynamic%20recrystallization
|
Geometric Dynamic Recrystallization (GDR) is a recrystallization mechanism that has been proposed to occur in several alloys, particularly aluminium, at high temperatures and low strain rates. It is a variant of dynamic recrystallization.
The basic mechanism is that during deformation the grains will be increasingly flattened until the boundaries on each side are separated by only a small distance. The deformation is accompanied by the serration of the grain boundaries due to surface tension effects where they are in contact with low-angle grain boundaries belonging to sub-grains.
Eventually the points of the serrations will come into contact. Since the contacting boundaries are defects of opposite 'sign', they are able to annihilate and so reduce the total energy in the system. In effect, the grain pinches off into two new grains.
The grain size is known to decrease as the applied stress is increased. However, high stresses require a high strain rate and at some point statically recrystallized grains will begin to nucleate and consume the GDRX microstructure.
There are features that are unique to GDRX:
The recrystallisation spreads throughout the specimen over a strain range (0.5-1 in Al-Mg-Mn alloys) without any change in flow stress. This is in contrast to discontinuous mechanisms where the flow stress normally decreases by ~25% as the recrystallized grains form.
GDRX results in grains that are around 3 times the sub-grain size. Statically recrystallized grains are normally 20-30 times the sub-grain size.
Metallurgy
|
https://en.wikipedia.org/wiki/Grain%20growth
|
In materials science, grain growth is the increase in size of grains (crystallites) in a material at high temperature. This occurs when recovery and recrystallisation are complete and further reduction in the internal energy can only be achieved by reducing the total area of grain boundary. The term is commonly used in metallurgy but is also used in reference to ceramics and minerals. The behavior of grain growth is analogous to the coarsening behavior of grains, which implies that grain growth and coarsening may be dominated by the same physical mechanism.
Importance of grain growth
The practical performance of polycrystalline materials is strongly affected by their microstructure, which is largely determined by grain growth behavior. For example, most materials exhibit the Hall–Petch effect at room temperature and so display a higher yield stress when the grain size is reduced (assuming abnormal grain growth has not taken place). At high temperatures the opposite is true, since the open, disordered nature of grain boundaries means that vacancies can diffuse more rapidly down boundaries, leading to more rapid Coble creep. Since boundaries are regions of high energy, they make excellent sites for the nucleation of precipitates and other second phases, e.g. Mg–Si–Cu phases in some aluminium alloys or martensite platelets in steel. Depending on the second phase in question, this may have positive or negative effects.
Rules of grain growth
Grain growth has long been studied primarily by the examination of sectioned, polished and etched samples under the optical microscope. Although such methods enabled the collection of a great deal of empirical evidence, particularly with regard to factors such as temperature or composition, the lack of crystallographic information limited the development of an understanding of the fundamental physics. Nevertheless, the following became well-established features of grain growth:
Grain growth occurs by the movement
|
https://en.wikipedia.org/wiki/Wine%20cave
|
Wine caves are subterranean structures for the storage and the aging of wine. They are an integral component of the wine industry worldwide. The design and construction of wine caves represents a unique application of underground construction techniques.
The storage of wine in extensive underground space is an extension of the culture of wine cellar rooms, both offering the benefits of energy efficiency and optimum use of limited land area. Wine caves naturally provide both high humidity and cool temperatures, which are key to the storage and aging of wine.
History
The history of wine cave construction in the United States dates back to the 1860s in Sonoma and the 1870s in the Napa Valley region. In 1857, Agoston Haraszthy founded Buena Vista Winery; its Press House was completed in 1862, and a second building, now called the Champagne Cellars, in 1864. In total, Buena Vista Winery had five caves among the two buildings in operation in 1864. Jacob Schram, a German immigrant and barber, founded Schramsberg Vineyards near Calistoga, California in 1862. Eight years later, Schram found new employment for the Chinese laborers who had recently finished constructing tunnels and grades over the Sierra Nevada Mountains for the Central Pacific Railroad's portion of the transcontinental railroad. He hired them to dig a network of caves through the soft Sonoma Volcanics Formation rock underlying his vineyard.
Another Chinese workforce took time away from their regular vineyard work to excavate a labyrinth of wine-aging caves beneath the Beringer Vineyards near St. Helena, California. These caves exceeded 1,200 ft (365 m) long, 17 ft (5 m) wide and 7 ft (2 m) high. The workers used pick-axes and shovels – and on occasion, chisel steel, double jacks and black powder – to break the soft rock. They worked by candlelight, and removed the excavated material in wicker baskets. At least 12 wine storage caves were constructed by these methods.
From the late 19th century to t
|
https://en.wikipedia.org/wiki/Brunton%20compass
|
A Brunton compass, properly known as the Brunton Pocket Transit, is a precision compass made by Brunton, Inc. of Riverton, Wyoming. The instrument was patented in 1894 by Canadian-born geologist David W. Brunton. Unlike most modern compasses, the Brunton Pocket Transit utilizes magnetic induction damping rather than fluid to damp needle oscillation. Although Brunton, Inc. makes many other types of magnetic compasses, the Brunton Pocket Transit is a specialized instrument used widely by those needing to make accurate navigational and slope-angle measurements in the field. Users are primarily geologists, but archaeologists, environmental engineers, mining engineers and surveyors also make use of the Brunton's capabilities. The United States Army has adopted the Pocket Transit as the M2 Compass for use by crew-served artillery.
Overview
The Pocket Transit may be adjusted for declination angle according to one's location on the Earth. It is used to get directional degree measurements (azimuth) through use of the Earth's magnetic field. Holding the compass at waist-height, the user looks down into the mirror and lines up the target, needle, and guide line that is on the mirror. Once all three are lined up and the compass is level, the reading for that azimuth can be made. Arguably the most frequent use for the Brunton in the field is the calculation of the strike and dip of geological features (faults, contacts, foliation, sedimentary strata, etc.). Strike is measured by leveling (with the bull's eye level) the compass along the plane being measured. Dip is taken by laying the side of the compass perpendicular to the strike measurement and rotating the clinometer level until the bubble is stable and the reading has been made. If field conditions allow, additional features of the compass allow users to measure such geological attributes from a distance.
As with most traditional compasses, directional measurements are made in reference to the Earth's magnetic field. T
|
https://en.wikipedia.org/wiki/Singular%20measure
|
In mathematics, two positive (or signed or complex) measures μ and ν defined on a measurable space (Ω, Σ) are called singular if there exist two disjoint measurable sets A and B whose union is Ω such that μ is zero on all measurable subsets of B while ν is zero on all measurable subsets of A. This is denoted by μ ⊥ ν.
A refined form of Lebesgue's decomposition theorem decomposes a singular measure into a singular continuous measure and a discrete measure. See below for examples.
Examples on Rn
As a particular case, a measure defined on the Euclidean space is called singular, if it is singular with respect to the Lebesgue measure on this space. For example, the Dirac delta function is a singular measure.
Example. A discrete measure.
The Heaviside step function on the real line, H(x) = 0 for x < 0 and H(x) = 1 for x > 0,
has the Dirac delta distribution δ0 as its distributional derivative. This is a measure on the real line, a "point mass" at 0. However, the Dirac measure δ0 is not absolutely continuous with respect to Lebesgue measure λ, nor is λ absolutely continuous with respect to δ0: λ({0}) = 0 but δ0({0}) = 1, and if U is any non-empty open set not containing 0, then λ(U) > 0 but δ0(U) = 0.
Example. A singular continuous measure.
The Cantor distribution has a cumulative distribution function that is continuous but not absolutely continuous, and indeed its absolutely continuous part is zero: it is singular continuous.
Example. A singular continuous measure on R2.
The upper and lower Fréchet–Hoeffding bounds are singular distributions in two dimensions.
See also
References
Eric W Weisstein, CRC Concise Encyclopedia of Mathematics, CRC Press, 2002. .
J Taylor, An Introduction to Measure and Probability, Springer, 1996. .
Integral calculus
Measures (measure theory)
|
https://en.wikipedia.org/wiki/Latitudinal%20gradients%20in%20species%20diversity
|
Species richness, or biodiversity, increases from the poles to the tropics for a wide variety of terrestrial and marine organisms, often referred to as the latitudinal diversity gradient. The latitudinal diversity gradient is one of the most widely recognized patterns in ecology. It has been observed to varying degrees in Earth's past. A parallel trend has been found with elevation (elevational diversity gradient), though this is less well-studied.
Explaining the latitudinal diversity gradient has been called one of the great contemporary challenges of biogeography and macroecology (Willig et al. 2003, Pimm and Brown 2004, Cardillo et al. 2005). The question "What determines patterns of species diversity?" was among the 25 key research themes for the future identified in the 125th anniversary issue of Science (July 2005). There is a lack of consensus among ecologists about the mechanisms underlying the pattern, and many hypotheses have been proposed and debated. A recent review noted that among the many conundrums associated with the latitudinal diversity gradient (or latitudinal biodiversity gradient), the causal relationship between rates of molecular evolution and speciation has yet to be demonstrated.
Understanding the global distribution of biodiversity is one of the most significant objectives for ecologists and biogeographers. Beyond purely scientific goals and satisfying curiosity, this understanding is essential for applied issues of major concern to humankind, such as the spread of invasive species, the control of diseases and their vectors, and the likely effects of global climate change on the maintenance of biodiversity (Gaston 2000). Tropical areas play prominent roles in the understanding of the distribution of biodiversity, as their rates of habitat degradation and biodiversity loss are exceptionally high.
Patterns in the past
The latitudinal diversity gradient is a noticeable pattern among modern organisms that has been described qualitatively and
|
https://en.wikipedia.org/wiki/Data%20transfer%20object
|
In the field of programming a data transfer object (DTO) is an object that carries data between processes. The motivation for its use is that communication between processes is usually done resorting to remote interfaces (e.g., web services), where each call is an expensive operation. Because the majority of the cost of each call is related to the round-trip time between the client and the server, one way of reducing the number of calls is to use an object (the DTO) that aggregates the data that would have been transferred by the several calls, but that is served by one call only.
The difference between data transfer objects and business objects or data access objects is that a DTO does not have any behavior except for storage, retrieval, serialization and deserialization of its own data (mutators, accessors, serializers and parsers). In other words,
DTOs are simple objects that should not contain any business logic but may contain serialization and deserialization mechanisms for transferring data over the wire.
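A minimal sketch of such an object, here in Python (the class and field names are invented for illustration): a flat container that aggregates the data for one remote call and knows only how to serialize and deserialize itself.
from dataclasses import dataclass, asdict
import json

@dataclass
class CustomerDTO:
    customer_id: int
    name: str
    email: str
    open_orders: int

    def to_json(self) -> str:
        return json.dumps(asdict(self))

    @classmethod
    def from_json(cls, payload: str) -> "CustomerDTO":
        return cls(**json.loads(payload))

# One call transfers everything the client needs, instead of one call per field.
dto = CustomerDTO(42, "Ada Lovelace", "ada@example.org", 3)
wire = dto.to_json()
print(CustomerDTO.from_json(wire) == dto)   # True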
This pattern is often incorrectly used outside of remote interfaces. This has triggered a response from its author where he reiterates that the whole purpose of DTOs is to shift data in expensive remote calls.
Terminology
A "Value Object" is not a DTO. The two terms have been conflated by Sun/Java community in the past.
References
External links
Summary from Fowler's book
Data Transfer Object - Microsoft MSDN Library
GeDA - generic dto assembler is an open source Java framework for enterprise level solutions
Local DTO
Architectural pattern (computer science)
Concurrent computing
Software design patterns
|
https://en.wikipedia.org/wiki/Union-closed%20sets%20conjecture
|
The union-closed sets conjecture, also known as Frankl’s conjecture, is an open problem in combinatorics posed by Péter Frankl in 1979. A family of sets is said to be union-closed if the union of any two sets from the family belongs to the family. The conjecture states: For every finite union-closed family of sets, other than the family containing only the empty set, there exists an element that belongs to at least half of the sets in the family.
Professor Timothy Gowers has called this "one of the best known open problems in combinatorics" and has said that the conjecture "feels as though it ought to be easy (and as a result has attracted a lot of false proofs over the years). A good way to understand why it isn't easy is to spend an afternoon trying to prove it. That clever averaging argument you had in mind doesn't work ..."
Example
The family of sets {∅, {1}, {2}, {1, 2}, {1, 2, 3}} consists of five different sets and is union-closed. The element 1 is contained in three of the five sets (and so is the element 2), thus the conjecture holds in this case.
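The example can be verified mechanically; the following Python sketch checks the union-closed property by brute force and counts how often each element occurs.
from collections import Counter

family = [frozenset(), frozenset({1}), frozenset({2}),
          frozenset({1, 2}), frozenset({1, 2, 3})]

union_closed = all(a | b in family for a in family for b in family)
counts = Counter(x for s in family for x in s)

print("union-closed:", union_closed)                              # True
print("element frequencies:", dict(counts))                       # {1: 3, 2: 3, 3: 1}
print("conjecture holds:", any(c >= len(family) / 2 for c in counts.values()))  # True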
Basic results
It is easy to show that if a union-closed family contains a singleton {a} (as in the example above), then the element a must occur in at least half of the sets of the family.
If there is a counterexample to the conjecture, then there is also a counterexample consisting only of finite sets. Therefore, without loss of generality, we will assume that all sets in the given union-closed family are finite.
Given a finite non-empty set U, the power set consisting of all subsets of U is union-closed. Each element of U is contained in exactly half of the subsets of U. Therefore, in general we cannot ask for an element contained in more than half of the sets of the family: the bound of the conjecture is sharp.
Equivalent forms
Intersection formulation
The union-closed set conjecture is true if and only if a set system S which is intersection-closed contains an element of U in at most half of the sets of S, where U is the universe set, i.e. the
|
https://en.wikipedia.org/wiki/MMARP
|
The Multicast MAnet Routing Protocol (MMARP) aims to provide multicast routing in Mobile Ad Hoc Networks (MANETs), taking into account interoperation with fixed IP networks that support the IGMP/MLD protocols. This is achieved by the Multicast Internet Gateway (MIG), which is an ad hoc node itself and is responsible for notifying access routers about the interest revealed by ordinary ad hoc nodes. Any of these nodes may become a MIG at any time but needs to be one hop away from the network access router. Once it configures itself as a MIG, it periodically broadcasts its address as that of the default multicast gateway. Besides this proactive advertisement, the protocol also defines a reactive component through which the ad hoc mesh is created and maintained.
When a source node has multicast traffic to send, it broadcasts a message advertising the data to potential receivers. Receivers then express interest by sending a Join message towards the source, creating a multicast shortest path. In the same way, the MIG informs all the ad hoc nodes about the path towards multicast sources in the fixed network.
See also
List of ad hoc routing protocols
References
External links
MMARP PROTOCOL
Wireless networking
Networks
|
https://en.wikipedia.org/wiki/Tactile%20transducer
|
A tactile transducer or "bass shaker" is a device based on the principle that low bass frequencies can be felt as well as heard. It can be compared to a common loudspeaker with the diaphragm missing; instead, another object serves as the diaphragm. A shaker transmits low-frequency vibrations into various surfaces so that they can be felt by people. This is called tactile sound. Tactile transducers may augment or, in some cases, substitute for a subwoofer. One benefit of tactile transducers is that, if properly installed, they produce little or no audible noise, as compared with a subwoofer speaker enclosure.
Applications
A bass-shaker is meant to be firmly attached to some surface such as a seat, couch or floor. The shaker houses a small weight which is driven by a voice coil similar to those found in dynamic loudspeakers. The voice-coil is driven by a low-frequency audio signal from an amplifier; common shakers typically handle 25 to 50 watts of amplifier power. The voice coil exerts force on both the weight and the body of the shaker, with the latter forces being transmitted into the mounting surface. Tactile transducers may be used in a home theater, a video gaming chair or controller, a commercial movie theater, or for special effects in an arcade game, amusement park ride or other application.
Related to bass shakers are a newer type of transducer referred to as linear actuators. These piston-like electromagnetic devices transmit motion in a direct fashion by lifting home theater seating in the vertical plane rather than transferring vibrations (by mounting within a seat, platform or floor). This technology is said to transmit a high-fidelity sound-motion augmentation, whereas "Shakers" may require heavy equalization and/or multiple units to approach a realistic effect.
Virtual reality
There are other products which employ hydraulic (long-throw) linear actuators and outboard motion processors for home applications as popularized in "virtual reality" ride
|
https://en.wikipedia.org/wiki/History%20of%20the%20Amiga
|
The Amiga is a family of home computers that were designed and sold by the Amiga Corporation (and later by Commodore Computing International) from 1985 to 1994.
Amiga Corporation
The Amiga's Original Chip Set, code-named Lorraine, was designed by the Amiga Corporation during the end of the first home video game boom. Development of the Lorraine project was done using a Sage IV machine nicknamed "Agony", which had 64-kbit memory modules with a capacity of 1 Mbit and an 8 MHz Motorola 68000 CPU. Amiga Corp. funded the development of the Lorraine by manufacturing game controllers, and later with an initial bridge loan from Atari Inc. while seeking further investors. The chipset was to be used in a video game machine, but following the video game crash of 1983, the Lorraine was reconceived as a multi-tasking multi-media personal computer.
The company demonstrated a prototype at the January 1984 Consumer Electronics Show (CES) in Chicago, attempting to attract investors. The Sage acted as the CPU, and BYTE described "big steel boxes" substituting for the chipset that did not yet exist. The magazine reported in April 1984 that Amiga Corporation "is developing a 68000-based home computer with a custom graphics processor. With 128K bytes of RAM and a floppy-disk drive, the computer will reportedly sell for less than $1000 late this year."
Further presentations were made at the following CES in June 1984, to Sony, HP, Philips, Apple, Silicon Graphics, and others. Steve Jobs of Apple, who had just introduced the Macintosh in January, was shown the original prototype for the first Amiga and stated that there was too much hardware – even though the newly redesigned board consisted of just three silicon chips which had yet to be shrunk down. Investors became increasingly wary of new computer companies in an industry dominated by the IBM PC. Jay Miner, co-founder, lead engineer and architect, took out a second mortgage on his home to keep the company from going bankrupt.
In July 1984, Atari Inc
|
https://en.wikipedia.org/wiki/Refer%20%28software%29
|
refer is a program for managing bibliographic references, and citing them in troff, nroff, and groff documents. It is implemented as a preprocessor.
refer was written by Mike Lesk at Bell Laboratories in or before 1978, and is now available as part of most Unix-like operating systems. A free reimplementation exists as part of the groff package.
, refer sees little use, primarily because troff itself is not used much for longer technical writing that might need software support for reference and citation management. , some reference management software (for instance, RefWorks) will import refer data.
Example
refer works with a "reference file", a text file where the author lists works to which they might want to refer. One such reference, to an article in a journal in this case, might look like:
%A Brian W. Kernighan
%A Lorinda L. Cherry
%T A System for Typesetting Mathematics
%J J. Comm. ACM
%V 18
%N 3
%D March 1975
%P 151-157
%K eqn
The author then can refer to it in their troff, groff, or nroff document by listing keywords which uniquely match this reference:
.[
kernighan cherry eqn
.]
Database fields
A refer bibliographic database is a text file consisting of a series of records, separated by one or more blank lines. Within each record, each field starts with a '%' at the beginning of the line, with one character immediately after it naming the field. The field name should be followed by exactly one space, and then by the contents of the field. Empty fields are ignored. The conventional meaning of each field is shown in the table below. Compare this scheme with the newer EndNote scheme, which uses a similar syntax.
See also
Data schemes
BibTeX – a text-based data format used by LaTeX
EndNote – a similar, but not identical, data scheme used by the EndNote program
RIS – a text-based data scheme from Research Information Systems
Other
Comparison of reference management software
Pybliographer
References
External links
Some Applications of Inverted Ind
|
https://en.wikipedia.org/wiki/Verifiable%20secret%20sharing
|
In cryptography, a secret sharing scheme is verifiable if auxiliary information is included that allows players to verify their shares as consistent. More formally, verifiable secret sharing ensures that even if the dealer is malicious there is a well-defined secret that the players can later reconstruct. (In standard secret sharing, the dealer is assumed to be honest.)
The concept of verifiable secret sharing (VSS) was first introduced in 1985 by Benny Chor, Shafi Goldwasser, Silvio Micali and Baruch Awerbuch.
In a VSS protocol a distinguished player who wants to share the secret is referred to as the dealer. The protocol consists of two phases: a sharing phase and a reconstruction phase.
Sharing: Initially the dealer holds secret as input and each player holds an independent random input. The sharing phase may consist of several rounds. At each round each player can privately send messages to other players and can also broadcast a message. Each message sent or broadcast by a player is determined by its input, its random input and messages received from other players in previous rounds.
Reconstruction: In this phase each player provides its entire view from the sharing phase and a reconstruction function is applied and is taken as the protocol's output.
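As an illustration of the two phases, the following is a minimal Python sketch of a Feldman-style verifiable secret sharing scheme (a later, discrete-logarithm-based construction, not the original protocol of Chor, Goldwasser, Micali and Awerbuch). The group parameters are toy values chosen for readability and are not secure; the sketch needs Python 3.8+ for the modular inverse.
import random

q = 1019            # prime order of the subgroup (toy value)
p = 2 * q + 1       # 2039, also prime
g = 4               # generates the order-q subgroup of Z_p* (4 is a quadratic residue)

def share(secret, t, n):
    # Dealer: split `secret` into n shares, any t of which reconstruct it,
    # and broadcast commitments that let each player verify its own share.
    coeffs = [secret % q] + [random.randrange(q) for _ in range(t - 1)]
    commitments = [pow(g, a, p) for a in coeffs]              # the broadcast values
    shares = [(i, sum(a * pow(i, k, q) for k, a in enumerate(coeffs)) % q)
              for i in range(1, n + 1)]
    return shares, commitments

def verify(share_, commitments):
    # Player i checks g^y == prod_k C_k^(i^k) (mod p).
    i, y = share_
    rhs = 1
    for k, c in enumerate(commitments):
        rhs = rhs * pow(c, pow(i, k, q), p) % p
    return pow(g, y, p) == rhs

def reconstruct(shares):
    # Lagrange interpolation at x = 0 over Z_q.
    secret = 0
    for xi, yi in shares:
        num, den = 1, 1
        for xj, _ in shares:
            if xj != xi:
                num = num * (-xj) % q
                den = den * (xi - xj) % q
        secret = (secret + yi * num * pow(den, -1, q)) % q
    return secret

shares, comms = share(secret=123, t=3, n=5)
assert all(verify(s, comms) for s in shares)
assert reconstruct(shares[:3]) == 123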
An alternative definition given by Oded Goldreich defines VSS as a secure multi-party protocol for computing the randomized functionality corresponding to some (non-verifiable) secret sharing scheme. This definition is stronger than the other definitions and is very convenient to use in the context of general secure multi-party computation.
Verifiable secret sharing is important for secure multiparty computation. Multiparty computation is typically accomplished by making secret shares of the inputs, and manipulating the shares to compute some function. To handle "active" adversaries (that is, adversaries that corrupt nodes and then make them deviate from the protocol), the secret sharing scheme needs
|
https://en.wikipedia.org/wiki/Factory%20Interface%20Network%20Service
|
FINS, Factory Interface Network Service, is a network protocol used by Omron PLCs, over different physical networks like Ethernet, Controller Link, DeviceNet and RS-232C.
The FINS communications service was developed by Omron to provide a consistent way for PLCs and computers on various networks to communicate. Compatible network types include Ethernet, Host Link, Controller Link, SYSMAC LINK, SYSMAC WAY, and Toolbus. FINS allows communications between nodes across up to three network levels. A direct connection between a computer and a PLC via Host Link is not considered a network level.
References
Omron FINS Ethernet
Network protocols
|
https://en.wikipedia.org/wiki/CPUID
|
In the x86 architecture, the CPUID instruction (identified by a CPUID opcode) is a processor supplementary instruction (its name derived from CPU Identification) allowing software to discover details of the processor. It was introduced by Intel in 1993 with the launch of the Pentium and SL-enhanced 486 processors.
A program can use the CPUID to determine processor type and whether features such as MMX/SSE are implemented.
History
Prior to the general availability of the CPUID instruction, programmers would write esoteric machine code which exploited minor differences in CPU behavior in order to determine the processor make and model. With the introduction of the 80386 processor, EDX on reset indicated the revision but this was only readable after reset and there was no standard way for applications to read the value.
Outside the x86 family, developers are mostly still required to use esoteric processes (involving instruction timing or CPU fault triggers) to determine the variations in CPU design that are present.
In the Motorola 680x0 family (which never had a CPUID instruction of any kind), certain specific instructions required elevated privileges. These could be used to tell various CPU family members apart. In the Motorola 68010 the instruction MOVE from SR became privileged. This notable instruction (and state machine) change allowed the 68010 to meet the Popek and Goldberg virtualization requirements. Because the 68000 offered an unprivileged MOVE from SR, the two different CPUs could be told apart by whether a CPU error condition was triggered.
While the CPUID instruction is specific to the x86 architecture, other architectures (like ARM) often provide on-chip registers which can be read in prescribed ways to obtain the same sorts of information provided by the x86 CPUID instruction.
Calling CPUID
The CPUID opcode is 0F A2.
In assembly language, the CPUID instruction takes no parameters as CPUID implicitly uses the EAX register to determine the main category
|
https://en.wikipedia.org/wiki/Erd%C5%91s%20conjecture%20on%20arithmetic%20progressions
|
Erdős' conjecture on arithmetic progressions, often referred to as the Erdős–Turán conjecture, is a conjecture in arithmetic combinatorics (not to be confused with the Erdős–Turán conjecture on additive bases). It states that if the sum of the reciprocals of the members of a set A of positive integers diverges, then A contains arbitrarily long arithmetic progressions.
Formally, the conjecture states that if A is a large set in the sense that the sum of the reciprocals of its elements diverges, i.e. Σ_{n ∈ A} 1/n = ∞, then A contains arithmetic progressions of any given length, meaning that for every positive integer k there are an integer a and a non-zero integer c such that a, a + c, a + 2c, ..., a + (k − 1)c all belong to A.
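For a finite set the property can be checked by brute force; the following Python sketch (illustrative only) searches for a k-term arithmetic progression and computes a partial sum of reciprocals, the two quantities the conjecture relates.
from fractions import Fraction

def has_ap(A, k):
    # True if the finite set A contains a k-term arithmetic progression.
    A = set(A)
    for a in A:
        for c in range(1, max(A) - a + 1):
            if all(a + i * c in A for i in range(k)):
                return True
    return False

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
print(sum(Fraction(1, p) for p in primes))   # a partial sum of the (divergent) series
print(has_ap(primes, 3))                     # True: for example 3, 5, 7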
History
In 1936, Erdős and Turán made the weaker conjecture that any set of integers with positive natural density contains infinitely many 3 term arithmetic progressions. This was proven by Klaus Roth in 1952, and generalized to arbitrarily long arithmetic progressions by Szemerédi in 1975 in what is now known as Szemerédi's theorem.
In a 1976 talk titled "To the memory of my lifelong friend and collaborator Paul Turán," Paul Erdős offered a prize of US$3000 for a proof of this conjecture. As of 2008 the problem is worth US$5000.
Progress and related results
Erdős' conjecture on arithmetic progressions can be viewed as a stronger version of Szemerédi's theorem. Because the sum of the reciprocals of the primes diverges, the Green–Tao theorem on arithmetic progressions is a special case of the conjecture.
The weaker claim that A must contain infinitely many arithmetic progressions of length 3 is a consequence of an improved bound in Roth's theorem. A 2016 paper by Bloom proved that if a subset A of {1, ..., N} contains no non-trivial three-term arithmetic progressions then |A| = O(N (log log N)^4 / log N).
In 2020 a preprint by Bloom and Sisask improved the bound to |A| = O(N / (log N)^(1+c)) for some absolute constant c > 0.
In 2023 a preprint by Kelley and Meka gave a new bound of |A| ≤ N · exp(-c (log N)^(1/12)) for some constant c > 0, and four days later Bloom and Sisask simplified the result, with a small further improvement to the exponent.
See also
Problems involving arithmetic progressions
List of sums
|
https://en.wikipedia.org/wiki/Texas%20Math%20and%20Science%20Coaches%20Association
|
The Texas Math and Science Coaches Association or TMSCA is an organization for coaches of academic University Interscholastic League teams in Texas middle schools and high schools, specifically those that compete in mathematics and science-related tests.
Events
There are four events in the TMSCA at both the middle and high school level: Number Sense, General Mathematics, Calculator Applications, and General Science.
Number Sense is an 80-question exam that students are given only 10 minutes to solve. Additionally, no scratch work or paper calculations are allowed. These questions range from simple calculations such as 99+98 to more complicated operations such as 1001×1938. Each calculation is able to be done with a certain trick or shortcut that makes the calculations easier.
The high school exam includes calculus and other difficult topics in the questions also with the same rules applied as to the middle school version.
The grading for this event is known to be particularly stringent, as errors such as writing over a line or crossing out potential answers are counted as incorrect answers.
General Mathematics is a 50-question exam that students are given only 40 minutes to solve. These problems are usually more challenging than questions on the Number Sense test, and the General Mathematics word problems take more thought to figure out. Every correct answer is worth 5 points, and for every problem answered incorrectly, 2 points are deducted. Ties are broken by the position of the first problem missed and by percent accuracy.
Calculator Applications is an 80-question exam that students are given only 30 minutes to solve. This test requires practice on the calculator, knowledge of a few crucial formulas, and much speed and intensity. Memorizing formulas, tips, and tricks will not be enough. In this event, plenty of practice is necessary in order to master the locations of the keys and develop the speed necessary. All correct questions are worth 5
|
https://en.wikipedia.org/wiki/PowerHouse%20%28programming%20language%29
|
PowerHouse is a byte-compiled fourth-generation programming language (or 4GL) originally produced by Quasar Corporation (later renamed Cognos Incorporated) for the Hewlett-Packard HP3000 mini-computer, as well as Data General and DEC VAX/VMS systems. It was initially composed of five components:
QDD, or Quasar Data Dictionary: for building a central data dictionary used by all other components
QDesign: a character-based screen generator
Quick: an interactive, character-based screen processor (running screens generated by QDesign)
Quiz: a report writer
QTP: a batch transaction processor.
History
PowerHouse was introduced in 1982 and bundled together in a single product Quiz and Quick/QDesign, both of which had been previously available separately, with a new batch processor QTP. In 1983, Quasar changed its name to Cognos Corporation and began porting their application development tools to other platforms, notably Digital Equipment Corporation's VMS, Data General's AOS/VS II, and IBM's OS/400, along with the UNIX platforms from these vendors. Cognos also began extending their product line with add-ons to PowerHouse (for example, Architect) and end-user applications written in PowerHouse (for example, MultiView). Subsequent development of the product added support for platform-specific relational databases, such as HP's Allbase/SQL, DEC's Rdb, and Microsoft's SQL Server, as well as cross-platform relational databases such as Oracle, Sybase, and IBM's DB2.
The PowerHouse language represented a considerable achievement. Compared with languages like COBOL, Pascal and PL/1, PowerHouse substantially cut the amount of labour required to produce useful applications on its chosen platforms. It achieved this through the use of a central data-dictionary, a compiled file that extended the attributes of data fields natively available in the DBMS with frequently used programming idioms such as:
display masks
help and message strings
range and pattern checks
help an
|
https://en.wikipedia.org/wiki/IETF%20Administrative%20Support%20Activity
|
The Internet Engineering Task Force (IETF) is the premier Internet standards body; it develops open standards through open, collaborative processes.
The IETF Administrative Support Activity (IASA) is an activity housed within the Internet Society (ISOC).
The IASA is described in an IETF Request for Comments document released in April 2005.
See also
Computer-supported collaboration
References
External links
The Internet Society
The Internet Engineering Task Force
Internet Standards
|
https://en.wikipedia.org/wiki/WHOIS
|
WHOIS (pronounced as the phrase "who is") is a query and response protocol that is used for querying databases that store an Internet resource's registered users or assignees. These resources include domain names, IP address blocks and autonomous systems, but it is also used for a wider range of other information. The protocol stores and delivers database content in a human-readable format. The current iteration of the WHOIS protocol was drafted by the Internet Society and is documented in an IETF Request for Comments document.
Whois is also the name of the command-line utility on most UNIX systems used to make WHOIS protocol queries. In addition, WHOIS has a sister protocol called Referral Whois (RWhois).
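The wire protocol itself is deliberately simple: the client opens a TCP connection to port 43, sends the query terminated by CRLF, and reads until the server closes the connection. A minimal Python sketch of such a query follows; the choice of whois.iana.org as the default server and the helper name are illustrative assumptions, not part of the protocol specification.

import socket

def whois_query(query, server="whois.iana.org"):
    """Send a WHOIS query over TCP port 43 and return the raw text response."""
    with socket.create_connection((server, 43), timeout=10) as sock:
        sock.sendall((query + "\r\n").encode("ascii"))   # queries are CRLF-terminated
        chunks = []
        while True:                                      # the server closes the connection when done
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(whois_query("example.com"))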
History
Elizabeth Feinler and her team (who had created the Resource Directory for ARPANET) were responsible for creating the first WHOIS directory in the early 1970s. Feinler set up a server in Stanford's Network Information Center (NIC) which acted as a directory that could retrieve relevant information about people or entities. She and the team created domains, with Feinler's suggestion that domains be divided into categories based on the physical address of the computer.
The process of registration was established in an early Request for Comments document. WHOIS was standardized in the early 1980s to look up domains, people, and other resources related to domain and number registrations. As all registration was done by one organization at that time, one centralized server was used for WHOIS queries. This made looking up such information very easy.
At the time of the emergence of the internet from the ARPANET, the only organization that handled all domain registrations was the Defense Advanced Research Projects Agency (DARPA) of the United States government (created in 1958). The responsibility of domain registration remained with DARPA as the ARPANET became the Internet during the 1980s. UUNET began offering domain registration service; however, they simply handled the paperwork which they forwarded to the DARPA Network In
|
https://en.wikipedia.org/wiki/Radio%20Reconnaissance%20Platoon
|
The Radio Reconnaissance Platoon is a specially trained Marine Corps Intelligence element of a United States Marine Corps Radio Battalion. A Radio Reconnaissance Team (RRT) was assigned as the tactical signals intelligence collection element for the Marine Corps Special Operations Command, Detachment One. Regular RRTs also participate in SOC operations during Marine Expeditionary Unit (Special Operations Capable), or MEU(SOC), deployments.
Mission
The mission of the Radio Reconnaissance Platoon is to conduct tactical signals intelligence and electronic warfare operations in support of the Marine Air-Ground Task Force (MAGTF) commander during advance force, pre-assault, and deep post-assault operations, as well as maritime special purpose operations.
The RRT is used when the use of conventionally-trained radio battalion elements is inappropriate or not feasible.
While deployed with a MEU (SOC), the Radio Reconnaissance Team is also a part of the Maritime Special Purpose Force (MSPF) as a unit of the Reconnaissance & Surveillance Element (MSPF). The MSPF is a sub-element of the MEU(SOC), as a whole, and is responsible for performing specialized maritime missions. These missions include, but are not limited to:
Direct Action Missions
Maritime interdiction Operations (MIO)
Deep reconnaissance
Capabilities
Indications and warnings
Limited electronic warfare
Communications support
Reconnaissance and surveillance via NATO format
Insertion/Extraction Techniques
Patrolling
Helicopter Touchdown
Helocast
Small Boat (Hard Duck, Soft Duck, Rolled Duck)
Rappel
Fast Rope
Special Patrol Insertion/Extraction (SPIE)
Wet
Dry
Static Line
Over-the-Horizon Combat Rubber Raiding Craft (CRRC)
SIGINT
Foreign languages
Arabic
Russian
Korean
Turkish
Spanish
Persian
Croatian/Serbian/Bosnian
Morse Code intercept (>20 GPM)
Analysis and reporting
Training
RRP begins with completion of Army Airborne School, which is followed by the Basic Reconnaissance Course,
|
https://en.wikipedia.org/wiki/Lebesgue%27s%20decomposition%20theorem
|
In mathematics, more precisely in measure theory, Lebesgue's decomposition theorem states that for every two σ-finite signed measures μ and ν on a measurable space there exist two σ-finite signed measures ν0 and ν1 such that:
ν = ν0 + ν1
ν0 ≪ μ (that is, ν0 is absolutely continuous with respect to μ)
ν1 ⊥ μ (that is, ν1 and μ are singular).
These two measures are uniquely determined by μ and ν.
Refinement
Lebesgue's decomposition theorem can be refined in a number of ways.
First, the decomposition of a regular Borel measure ν on the real line can be refined:
ν = νcont + νsing + νpp
where
νcont is the absolutely continuous part
νsing is the singular continuous part
νpp is the pure point part (a discrete measure).
Second, absolutely continuous measures are classified by the Radon–Nikodym theorem, and discrete measures are easily understood. Hence (singular continuous measures aside), Lebesgue decomposition gives a very explicit description of measures. The Cantor measure (the probability measure on the real line whose cumulative distribution function is the Cantor function) is an example of a singular continuous measure.
Related concepts
Lévy–Itō decomposition
The analogous decomposition for a stochastic process is the Lévy–Itō decomposition: given a Lévy process X, it can be decomposed as a sum of three independent Lévy processes X = X(1) + X(2) + X(3), where:
X(1) is a Brownian motion with drift, corresponding to the absolutely continuous part;
X(2) is a compound Poisson process, corresponding to the pure point part;
X(3) is a square integrable pure jump martingale that almost surely has a countable number of jumps on a finite interval, corresponding to the singular continuous part.
See also
Decomposition of spectrum
Hahn decomposition theorem and the corresponding Jordan decomposition theorem
Citations
References
Integral calculus
Theorems in measure theory
|
https://en.wikipedia.org/wiki/Axostyle
|
An axostyle is a sheet of microtubules found in certain protists. It arises from the bases of the flagella, sometimes projecting beyond the end of the cell, and is often flexible or contractile, so it may be involved in movement as well as providing support for the cell. Axostyles originate in association with a flagellar microtubular root and occur in two groups, the oxymonads and parabasalids; they have different structures and are not homologous. Within trichomonads the axostyle has been theorised to participate in locomotion and cell adhesion, as well as in karyokinesis during cell division.
References
Cell biology
|
https://en.wikipedia.org/wiki/Milliken%27s%20tree%20theorem
|
In mathematics, Milliken's tree theorem in combinatorics is a partition theorem generalizing Ramsey's theorem to infinite trees, objects with more structure than sets.
Let T be a finitely splitting rooted tree of height ω, n a positive integer, and S^n(T) the collection of all strongly embedded subtrees of T of height n. In one of its simple forms, Milliken's tree theorem states that if S^n(T) = A1 ∪ ... ∪ Ar, then for some strongly embedded infinite subtree R of T, S^n(R) ⊆ Ai for some i ≤ r.
This immediately implies Ramsey's theorem; take the tree T to be a linear ordering on ω vertices.
Define S^n to be the union of the collections S^n(T), where T ranges over finitely splitting rooted trees of height ω. Milliken's tree theorem says that not only is S^n partition regular for each n < ω, but that the homogeneous subtree R guaranteed by the theorem is strongly embedded in T.
Strong embedding
Call T an α-tree if each branch of T has cardinality α. Define Succ(p, P)= , and to be the set of immediate successors of p in P. Suppose S is an α-tree and T is a β-tree, with 0 ≤ α ≤ β ≤ ω. S is strongly embedded in T if:
, and the partial order on S is induced from T,
if is nonmaximal in S and , then ,
there exists a strictly increasing function from to , such that
Intuitively, for S to be strongly embedded in T,
S must be a subset of T with the induced partial order
S must preserve the branching structure of T; i.e., if a nonmaximal node in S has n immediate successors in T, then it has n immediate successors in S
S preserves the level structure of T; all nodes on a common level of S must be on a common level in T.
References
Keith R. Milliken, A Ramsey Theorem for Trees J. Comb. Theory (Series A) 26 (1979), 215-237
Keith R. Milliken, A Partition Theorem for the Infinite Subtrees of a Tree, Trans. Amer. Math. Soc. 263 No.1 (1981), 137-148.
Ramsey theory
Theorems in discrete mathematics
Trees (set theory)
|
https://en.wikipedia.org/wiki/MicroVAX
|
The MicroVAX is a discontinued family of low-cost minicomputers developed and manufactured by Digital Equipment Corporation (DEC). The first model, the MicroVAX I, was introduced in 1983. They used processors that implemented the VAX instruction set architecture (ISA) and were succeeded by the VAX 4000. Many members of the MicroVAX family had corresponding VAXstation variants, which primarily differ by the addition of graphics hardware. The MicroVAX family supports Digital's VMS and ULTRIX operating systems. Prior to VMS V5.0, MicroVAX hardware required a dedicated version of VMS named MicroVMS.
MicroVAX I
The MicroVAX I, code-named Seahorse, introduced in October 1984, was one of DEC's first VAX computers to use very-large-scale integration (VLSI) technology. The KA610 CPU module (also known as the KD32) contained two custom chips which implemented the ALU and FPU while TTL chips were used for everything else. Two variants of the floating point chips were supported, with the chips differing by the type of floating point instructions supported, F and G, or F and D. The system was implemented on two quad-height Q-bus cards - a Data Path Module (DAP) and Memory Controller (MCT). The MicroVAX I used Q-bus memory cards, which limited the maximum memory to 4MiB. The performance of the MicroVAX I was rated at 0.3 VUPs, equivalent to the earlier VAX-11/730.
MicroVAX II
The MicroVAX II, code-named Mayflower, was a mid-range MicroVAX introduced in May 1985 and shipped shortly thereafter. It ran VAX/VMS or, alternatively, ULTRIX, the DEC native Unix operating system. At least one non-DEC operating system was available, BSD Unix from MtXinu.
It used the KA630-AA CPU module, a quad-height Q22-Bus module, which featured a MicroVAX 78032 microprocessor and a MicroVAX 78132 floating-point coprocessor operating at 5 MHz (200 ns cycle time). Two gate arrays on the module implemented the external interface for the microprocessor, Q22-bus interface and the scatter-gather map for DM
|
https://en.wikipedia.org/wiki/Baltimore%20classification
|
Baltimore classification is a system used to classify viruses based on their manner of messenger RNA (mRNA) synthesis. By organizing viruses based on their manner of mRNA production, it is possible to study viruses that behave similarly as a distinct group. Seven Baltimore groups are described that take into consideration whether the viral genome is made of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA), whether the genome is single- or double-stranded, and whether the sense of a single-stranded RNA genome is positive or negative.
Baltimore classification also closely corresponds to the manner of replicating the genome, so Baltimore classification is useful for grouping viruses together for both transcription and replication. Certain subjects pertaining to viruses are associated with multiple, specific Baltimore groups, such as specific forms of translation of mRNA and the host range of different types of viruses. Structural characteristics such as the shape of the viral capsid, which stores the viral genome, and the evolutionary history of viruses are not necessarily related to Baltimore groups.
Baltimore classification was created in 1971 by virologist David Baltimore. Since then, it has become common among virologists to use Baltimore classification alongside standard virus taxonomy, which is based on evolutionary history. In 2018 and 2019, Baltimore classification was partially integrated into virus taxonomy based on evidence that certain groups were descended from common ancestors. Various realms, kingdoms, and phyla now correspond to specific Baltimore groups.
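The defining genome characteristics of the seven groups are compact enough to encode directly in a lookup table. The following Python sketch is purely illustrative; the dictionary name and wording are editorial choices, not part of the classification itself.

# Genome type, strandedness, and sense for each Baltimore group
BALTIMORE_GROUPS = {
    "I":   "double-stranded DNA (dsDNA)",
    "II":  "single-stranded DNA (ssDNA)",
    "III": "double-stranded RNA (dsRNA)",
    "IV":  "positive-sense single-stranded RNA (+ssRNA)",
    "V":   "negative-sense single-stranded RNA (-ssRNA)",
    "VI":  "positive-sense single-stranded RNA replicating through a DNA intermediate (ssRNA-RT)",
    "VII": "double-stranded DNA replicating through an RNA intermediate (dsDNA-RT)",
}

for group, genome in BALTIMORE_GROUPS.items():
    print(f"Group {group}: {genome}")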
Overview
Baltimore classification groups viruses together based on their manner of mRNA synthesis. Characteristics directly related to this include whether the genome is made of deoxyribonucleic acid (DNA) or ribonucleic acid (RNA), the strandedness of the genome, which can be either single- or double-stranded, and the sense of a single-stranded genome, which is either positive or negative. The
|
https://en.wikipedia.org/wiki/Eddy%20%28fluid%20dynamics%29
|
In fluid dynamics, an eddy is the swirling of a fluid and the reverse current created when the fluid is in a turbulent flow regime. The moving fluid creates a space devoid of downstream-flowing fluid on the downstream side of the object. Fluid behind the obstacle flows into the void creating a swirl of fluid on each edge of the obstacle, followed by a short reverse flow of fluid behind the obstacle flowing upstream, toward the back of the obstacle. This phenomenon is naturally observed behind large emergent rocks in swift-flowing rivers.
An eddy is a movement of fluid that deviates from the general flow of the fluid. An example of an eddy is a vortex, which produces such a deviation. However, there are other types of eddies that are not simple vortices. For example, a Rossby wave is an eddy: an undulation that deviates from the mean flow but lacks the local closed streamlines of a vortex.
Swirl and eddies in engineering
The propensity of a fluid to swirl is used to promote good fuel/air mixing in internal combustion engines.
In fluid mechanics and transport phenomena, an eddy is not a property of the fluid, but a violent swirling motion caused by the position and direction of turbulent flow.
Reynolds number and turbulence
In 1883, scientist Osborne Reynolds conducted a fluid dynamics experiment involving water and dye, where he adjusted the velocities of the fluids and observed the transition from laminar to turbulent flow, characterized by the formation of eddies and vortices. Turbulent flow is defined as the flow in which the system's inertial forces are dominant over the viscous forces. This phenomenon is described by Reynolds number, a unit-less number used to determine when turbulent flow will occur. Conceptually, the Reynolds number is the ratio between inertial forces and viscous forces.
The general form of the Reynolds number for flow through a tube of radius r (or diameter D = 2r) is
Re = ρuD/μ
where u is the velocity of the fluid, ρ is its density, and μ is its dynamic viscosity.
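As a quick numerical illustration of the formula above, the following Python sketch computes Re for water-like property values; the function name and the example numbers are assumptions chosen for illustration only.

def reynolds_number(velocity_m_s, diameter_m, density_kg_m3, viscosity_pa_s):
    """Re = rho * u * D / mu for flow through a tube of diameter D."""
    return density_kg_m3 * velocity_m_s * diameter_m / viscosity_pa_s

# Water-like fluid (density ~1000 kg/m^3, viscosity ~0.001 Pa*s) at 1 m/s in a 2 cm tube:
re = reynolds_number(1.0, 0.02, 1000.0, 1.0e-3)
print(f"Re = {re:.0f}")  # prints Re = 20000, far above the usual laminar range for pipe flow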
|
https://en.wikipedia.org/wiki/Function%20point
|
The function point is a "unit of measurement" to express the amount of business functionality an information system (as a product) provides to a user. Function points are used to compute a functional size measurement (FSM) of software. The cost (in dollars or hours) of a single unit is calculated from past projects.
Standards
There are several recognized standards and/or public specifications for sizing software based on Function Point.
1. ISO Standards
FiSMA: ISO/IEC 29881:2010 Information technology – Systems and software engineering – FiSMA 1.1 functional size measurement method.
IFPUG: ISO/IEC 20926:2009 Software and systems engineering – Software measurement – IFPUG functional size measurement method.
Mark-II: ISO/IEC 20968:2002 Software engineering – Mk II Function Point Analysis – Counting Practices Manual
Nesma: ISO/IEC 24570:2018 Software engineering – Nesma functional size measurement method version 2.3 – Definitions and counting guidelines for the application of Function Point Analysis
COSMIC: ISO/IEC 19761:2011 Software engineering. A functional size measurement method.
OMG: ISO/IEC 19515:2019 Information technology — Object Management Group Automated Function Points (AFP), 1.0
The first five standards are implementations of the over-arching standard for Functional Size Measurement, ISO/IEC 14143. The OMG Automated Function Point (AFP) specification, led by the Consortium for IT Software Quality, provides a standard for automating the Function Point counting according to the guidelines of the International Function Point User Group (IFPUG). However, the current implementations of this standard have a limitation in being able to distinguish External Output (EO) from External Inquiries (EQ) out of the box, without some upfront configuration.
Introduction
Function points were defined in 1979 in Measuring Application Development Productivity by Allan Albrecht at IBM. The functional user requirements of the software are identified and each one is c
|
https://en.wikipedia.org/wiki/International%20Year%20of%20Planet%20Earth
|
The United Nations General Assembly declared 2008 as the International Year of Planet Earth to increase awareness of the importance of Earth sciences for the advancement of sustainable development. UNESCO was designated as the lead agency. The Year's activities spanned the three years 2006–2009.
Goals
The Year aimed to raise $20 million from industry and governments, of which half was to be spent on co-funding research, and half on "outreach" activities. It was intended to be the biggest ever international effort to promote the Earth sciences.
Apart from researchers, who were expected to benefit under the Year's Science Programme, the principal target groups for the Year's broader messages were:
Decision makers and politicians, to better inform them about how Earth science knowledge can be used for sustainable development
The voting public, to communicate to them how Earth scientific knowledge can contribute to a better society
Geoscientists, to help them use their knowledge of various aspects of the Earth for the benefit of the world’s population.
The research themes of the year, set out in ten science prospectuses, were chosen for their societal relevance, multidisciplinary nature, and outreach potential. The Year had twelve founding partners and 23 associate partners, and was backed politically by 97 countries representing 87% of the world's population. The Year was promoted politically at UNESCO and at the United Nations in New York by the United Republic of Tanzania.
The Year encouraged contributions from researchers within ten separate themes. The outreach programme worked in a similar way, receiving bids for support from individuals and organisations worldwide.
The Year's Project Leader was former IUGS President Professor Eduardo F J de Mulder. The Year's Science Committee was chaired by Professor Edward Derbyshire (Royal Holloway) and its Outreach Committee by Dr Ted Nield (Geological Society of London).
The International Year of Planet Eart
|
https://en.wikipedia.org/wiki/Electrostatic%20fieldmeter
|
An electrostatic fieldmeter, also called a static meter, is a tool used in the static control industry. It is used for non-contact measurement of electrostatic charge on an object. It measures the force between the induced charges in a sensor and the charge present on the surface of an object. This force is converted to volts, measuring both the initial peak voltage and the rate at which it falls away.
Operation
A charge monitoring probe is placed close (1 mm to 5 mm) to the surface to be measured, and the probe body is driven to the same potential as the measured unknown by an electronic circuit. This achieves a high accuracy measurement that is virtually insensitive to variations in probe-to-surface distance. The technique also prevents arc-over between the probe and the measured surface when measuring high voltages.
Alternative method
The operation of an electrostatic field meter is based on the charge-discharge process of an electrically floating electrode: Corona source charges a floating electrode, which discharges with a regular repetition frequency to the earth-electrode. The discharge repetition frequency is the measured variable which is a function of the background electrostatic field.
Besides static charge control in electrostatic discharge (ESD) sensitive environments, another possible application is the measurement of the atmospheric electric field, if sufficient sensitivity is available.
See also
Coulombmeter
Electrometer
Electroscope
Electrostatic voltmeter
Faraday cup
References
Further reading
– Electrostatic field meter - Texaco, Inc., 1987 (filed 1983)
– Non-contact autoranging electrostatic fieldmeter with automatic distance indicator, Simco-Ion, Hatfield, PA, 1987 (filed 1985)
Electrical meters
Electrical test equipment
Electronic test equipment
Electronics work tools
Electrostatics
Measuring instruments
|
https://en.wikipedia.org/wiki/Fill%20device
|
A fill device or key loader is a module used to load cryptographic keys into electronic encryption machines. Fill devices are usually hand held and electronic ones are battery operated.
Older mechanical encryption systems, such as rotor machines, were keyed by setting the positions of wheels and plugs from a printed keying list. Electronic systems required some way to load the necessary cryptovariable data. In the 1950s and 1960s, systems such as the U.S. National Security Agency KW-26 and the Soviet Union's Fialka used punched cards for this purpose. Later NSA encryption systems incorporated a serial port fill connector and developed several common fill devices (CFDs) that could be used with multiple systems. A CFD was plugged in when new keys were to be loaded. Newer NSA systems allow "over the air rekeying" (OTAR), but a master key often must still be loaded using a fill device.
NSA uses two serial protocols for key fill, DS-101 and DS-102. Both employ the same U-229 6-pin connector type used for U.S. military audio handsets, with the DS-101 being the newer of the two serial fill protocols. The DS-101 protocol can also be used to load cryptographic algorithms and software updates for crypto modules.
Besides encryption devices, systems that can require key fill include IFF, GPS and frequency hopping radios such as Have Quick and SINCGARS.
Common fill devices employed by NSA include:
KYK-28 pin gun used with the NESTOR (encryption) system
KYK-13 Electronic Transfer Device
KYX-15 Net Control Device
MX-10579 ECCM Fill Device (SINCGARS)
KOI-18 paper tape reader. Can read 8-level paper or PET tape, which is manually pulled through the reader slot by the operator. It is battery powered and has no internal storage, so it can load keys of different lengths, including the 128-bit keys used by more modern systems. The KOI-18 can also be used to load keys into other fill devices that do have internal storage, such as the KYK-13 and AN/CYZ-10. The KOI-18 only supp
|
https://en.wikipedia.org/wiki/Peter%20Mosses
|
Peter David Mosses (born 1948) is a British computer scientist.
Peter Mosses studied mathematics as an undergraduate at Trinity College, Oxford, and went on to undertake a DPhil supervised by Christopher Strachey in the Programming Research Group while at Wolfson College, Oxford in the early 1970s. He was the last student to submit his thesis under Strachey before Strachey's death.
In 1978, Mosses published his compiler-compiler, the Semantic Implementation System (SIS), which uses a denotational semantics description of the input language.
Mosses has spent most of his career at BRICS in Denmark. He returned to a chair at Swansea University, Wales. His main contribution has been in the area of formal program semantics. In particular, with David Watt he developed action semantics, a combination of denotational, operational and algebraic semantics.
Currently, Mosses is a visitor at TU Delft, working with the Programming Languages Group.
References
External links
Home page
Living people
Alumni of Trinity College, Oxford
Alumni of Wolfson College, Oxford
Members of the Department of Computer Science, University of Oxford
British computer scientists
Academics of Swansea University
Formal methods people
1948 births
|
https://en.wikipedia.org/wiki/David%20Watt%20%28computer%20scientist%29
|
David Anthony Watt (born 5 November 1946) is a British computer scientist.
Watt is a professor at the University of Glasgow, Scotland. With Peter Mosses he developed action semantics, a combination of denotational semantics, operational and algebraic semantics. He currently teaches a third year programming languages course, and a postgraduate course on algorithms and data structures. He is recognisable around campus for his more formal attire compared to the department's normally casual dress code.
References
External links
Home page
1946 births
Living people
British computer scientists
Academics of the University of Glasgow
Formal methods people
Place of birth missing (living people)
|
https://en.wikipedia.org/wiki/Grapefruit%20mercaptan
|
Grapefruit mercaptan is the common name for a natural organic compound found in grapefruit. It is a monoterpenoid that contains a thiol (also known as a mercaptan) functional group. Structurally, it is terpineol with the hydroxyl group replaced by a thiol, so it is also called thioterpineol. Volatile thiols typically have very strong, often unpleasant odors that can be detected by humans at very low concentrations. Grapefruit mercaptan has a very potent, but not unpleasant, odor, and it is the chemical constituent primarily responsible for the aroma of grapefruit. This characteristic aroma is a property of only the R enantiomer.
Pure grapefruit mercaptan, or citrus-derived oils rich in grapefruit mercaptan, are sometimes used in perfumery and the flavor industry to impart citrus aromas and flavors. However, both industries actively seek substitutes for grapefruit mercaptans for use as a grapefruit flavorant, since its decomposition products are often highly disagreeable to the human sense of smell.
The detection threshold for the (+)-(R) enantiomer of grapefruit mercaptan is 2×10−5 ppb, or equivalently a concentration of 2×10−14. Since one metric ton of water is 10^9 mg, this corresponds to being able to detect 2×10−5 mg in one metric ton of water, one of the lowest detection thresholds ever recorded for a naturally occurring compound.
See also
Nootkatone, another aroma compound in grapefruit
Terpineol, where a hydroxyl is in place of the thiol
References
Thiols
Flavors
Perfume ingredients
Monoterpenes
Cyclohexenes
|
https://en.wikipedia.org/wiki/Nutritional%20science
|
Nutritional science (also nutrition science, sometimes short nutrition, dated trophology) is the science that studies the physiological process of nutrition (primarily human nutrition), interpreting the nutrients and other substances in food in relation to maintenance, growth, reproduction, health and disease of an organism.
History
Before nutritional science emerged as an independent study discipline, mainly chemists worked in this area. The chemical composition of food was examined. Macronutrients, especially protein, fat and carbohydrates, have been the focus of the study of (human) nutrition since the 19th century. Until the discovery of vitamins and vital substances, the quality of nutrition was measured exclusively by the intake of nutritional energy.
The early years of the 20th century were summarized by Kenneth John Carpenter in his Short History of Nutritional Science as "the vitamin era". The first vitamin, thiamine, was isolated and chemically defined in 1926. The isolation of vitamin C followed in 1932, and its effect on health, protection against scurvy, was scientifically documented for the first time.
At the instigation of the British physiologist John Yudkin at the University of London, the degrees Bachelor of Science and Master of Science in nutritional science were established in the 1950s.
Nutritional science as a separate discipline was institutionalized in Germany in November 1956 when Hans-Diedrich Cremer was appointed to the chair for human nutrition in Giessen. The Institute for Nutritional Science was initially located at the Academy for Medical Research and Further Education, which was transferred to the Faculty of Human Medicine when the Justus Liebig University was reopened. Over time, seven other universities with similar institutions followed in Germany.
From the 1950s to 1970s, a focus of nutritional science was on dietary fat and sugar. From the 1970s to the 1990s, attention was put on diet-related chronic diseas
|
https://en.wikipedia.org/wiki/Geometry%20Center
|
The Geometry Center was a mathematics research and education center at the University of Minnesota. It was established by the National Science Foundation in the late 1980s and closed in 1998. The focus of the center's work was the use of computer graphics and visualization for research and education in pure mathematics and geometry.
The center's founding director was Al Marden. Richard McGehee directed the center during its final years. The center's governing board was chaired by David P. Dobkin.
Geomview
Much of the work done at the center was for the development of Geomview, a three-dimensional interactive geometry program. This focused on mathematical visualization, with options that allow hyperbolic space to be visualised. It was originally written for Silicon Graphics workstations and has been ported to run on Linux systems; it is available for installation in most Linux distributions through the package management system. Geomview can run under Windows using Cygwin and under Mac OS X. Geomview is maintained on its own web site.
Geomview is built on the Object Oriented Graphics Library (OOGL). The displayed scene and the attributes of the objects in it may be manipulated by the graphical command language (GCL) of Geomview. Geomview may be set as a default 3-D viewer for Mathematica.
Videos
Geomview was used in the construction of several mathematical movies including:
Not Knot, exploring hyperbolic space rendering of knot complements.
Outside In, a movie about sphere eversion.
The shape of space, exploring possible three dimensional spaces.
Other software
Other programs developed at the Center included:
WebEQ, a web browser plugin allowing mathematical equations to be viewed and edited.
Kali, to explore plane symmetry groups.
The Orrery, a Solar System visualizer.
SaVi, a satellite visualisation tool for examining the orbits and coverage of satellite constellations.
Crafter, for structural design of spacecraft.
Surface Evolver, to explore minimal surfaces.
SnapP
|
https://en.wikipedia.org/wiki/David%20Harel
|
David Harel (; born 12 April 1950) is a computer scientist, currently serving as President of the Israel Academy of Sciences and Humanities. He has been on the faculty of the Weizmann Institute of Science in Israel since 1980, and holds the William Sussman Professorial Chair of Mathematics. Born in London, England, he was Dean of the Faculty of Mathematics and Computer Science at the institute for seven years.
Biography
Harel is best known for his work on dynamic logic, computability, database theory, software engineering and modelling biological systems. In the 1980s he invented the graphical language of Statecharts for specifying and programming reactive systems, which has been adopted as part of the UML standard. Since the late 1990s he has concentrated on a scenario-based approach to programming such systems, launched by his co-invention (with W. Damm) of Live Sequence Charts. He has published expository accounts of computer science, such as his award winning 1987 book "Algorithmics: The Spirit of Computing" and his 2000 book "Computers Ltd.: What They Really Can’t do", and has presented series on computer science for Israeli radio and television. He has also worked on other diverse topics, such as graph layout, computer science education, biological modeling and the analysis and communication of odors.
Harel completed his PhD at MIT between 1976 and 1978. In 1987, he co-founded the software company I-Logix, which in 2006 became part of IBM. He has advocated building a full computer model of the Caenorhabditis elegans nematode, which was the first multicellular organism to have its genome completely sequenced. The eventual completeness of such a model depends on his updated version of the Turing test. He is a fellow of the ACM, the IEEE, the AAAS, and the EATCS, and a member of several international academies. Harel is active in a number of peace and human rights organizations in Israel.
Awards and honors
1986 Stevens Award for Software Development Methods
|
https://en.wikipedia.org/wiki/Kulkarni%E2%80%93Nomizu%20product
|
In the mathematical field of differential geometry, the Kulkarni–Nomizu product (named for Ravindra Shripad Kulkarni and Katsumi Nomizu) is defined for two (0,2)-tensors and gives as a result a (0,4)-tensor.
Definition
If h and k are symmetric (0,2)-tensors, then the product is defined via:
(h ⊙ k)(X1, X2, X3, X4) = h(X1, X3) k(X2, X4) + h(X2, X4) k(X1, X3) − h(X1, X4) k(X2, X3) − h(X2, X3) k(X1, X4),
which can equivalently be written as the sum of the two 2×2 determinants
det[ h(X1, X3), h(X1, X4) ; k(X2, X3), k(X2, X4) ] + det[ k(X1, X3), k(X1, X4) ; h(X2, X3), h(X2, X4) ],
where the Xj are tangent vectors and det is the matrix determinant. Note that h ⊙ k = k ⊙ h, as is clear from the second expression.
With respect to a basis of the tangent space, it takes the compact form
where denotes the total antisymmetrisation symbol.
The Kulkarni–Nomizu product is a special case of the product in the graded algebra
where, on simple elements,
( denotes the symmetric product).
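The defining formula is easy to check numerically. The Python sketch below builds the product with NumPy for two randomly chosen symmetric matrices in dimension 4 and verifies that the result has the algebraic symmetries of a Riemann curvature tensor; the function name and index conventions are editorial assumptions.

import numpy as np

def kulkarni_nomizu(h, k):
    """(h KN k)_ijlm = h_il k_jm + h_jm k_il - h_im k_jl - h_jl k_im."""
    return (np.einsum("il,jm->ijlm", h, k) + np.einsum("jm,il->ijlm", h, k)
            - np.einsum("im,jl->ijlm", h, k) - np.einsum("jl,im->ijlm", h, k))

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 4))
b = rng.standard_normal((4, 4))
h, k = a + a.T, b + b.T            # two symmetric (0,2)-tensors
T = kulkarni_nomizu(h, k)

assert np.allclose(T, -T.transpose(1, 0, 2, 3))   # antisymmetric in the first pair of slots
assert np.allclose(T, T.transpose(2, 3, 0, 1))    # symmetric under exchange of the two pairs
assert np.allclose(T + T.transpose(0, 3, 1, 2) + T.transpose(0, 2, 3, 1), 0)  # first Bianchi identity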
Properties
The Kulkarni–Nomizu product of a pair of symmetric tensors has the algebraic symmetries of the Riemann tensor. For instance, on space forms (i.e. spaces of constant sectional curvature) and two-dimensional smooth Riemannian manifolds, the Riemann curvature tensor has a simple expression in terms of the Kulkarni–Nomizu product of the metric with itself; namely, if we denote by
the -curvature tensor and by
the Riemann curvature tensor with , then
where is the scalar curvature and
is the Ricci tensor, which in components reads .
Expanding the Kulkarni–Nomizu product using the definition from above, one obtains
This is the same expression as stated in the article on the Riemann curvature tensor.
For this very reason, it is commonly used to express the contribution that the Ricci curvature (or rather, the Schouten tensor) and the Weyl tensor each makes to the curvature of a Riemannian manifold. This so-called Ricci decomposition is useful in differential geometry.
When there is a metric tensor g, the Kulkarni–Nomizu product of g with itself is the identity endomorphism of the space of 2-forms, Ω2(M), under the identification (using the metric) of the endomorphism ring End(Ω2(M)) with the tensor product Ω2(M) ⊗ Ω2(M).
A Riemannian manifold has constant sectional curvature k if and only
|
https://en.wikipedia.org/wiki/Ipsilon%20Networks
|
Ipsilon Networks was a computer networking company which specialised in IP switching during the 1990s.
The first product called the IP Switch ATM 1600 was announced in March 1996 for US$46,000.
Its switch used Asynchronous Transfer Mode (ATM) hardware combined with Internet Protocol routing.
The company had a role in the development of the Multiprotocol Label Switching (MPLS) network protocol. It published early proposals related to label switching, but did not achieve the market share it had hoped for and was purchased for $120 million by Nokia in December 1997. The president at the time was Brian NeSmith, and the company was located in Sunnyvale, California.
References
External links
Archive.org's image of Ipsilon's web site taken several months prior to the acquisition by Nokia.
Defunct networking companies
Companies disestablished in 1997
Companies based in Sunnyvale, California
Defunct computer companies of the United States
|
https://en.wikipedia.org/wiki/Filtered%20category
|
In category theory, filtered categories generalize the notion of directed set understood as a category (hence called a directed category; while some use directed category as a synonym for a filtered category). There is a dual notion of cofiltered category, which will be recalled below.
Filtered categories
A category J is filtered when
it is not empty,
for every two objects j and j′ in J there exists an object k and two arrows f : j → k and f′ : j′ → k in J,
for every two parallel arrows u, v : i → j in J, there exists an object k and an arrow w : j → k such that w ∘ u = w ∘ v.
A filtered colimit is a colimit of a functor F : J → C where J is a filtered category.
Cofiltered categories
A category J is cofiltered if the opposite category J^op is filtered. In detail, a category J is cofiltered when
it is not empty,
for every two objects j and j′ in J there exists an object k and two arrows f : k → j and f′ : k → j′ in J,
for every two parallel arrows u, v : j → i in J, there exists an object k and an arrow w : k → j such that u ∘ w = v ∘ w.
A cofiltered limit is a limit of a functor F : J → C where J is a cofiltered category.
Ind-objects and pro-objects
Given a small category C, a presheaf of sets C^op → Set that is a small filtered colimit of representable presheaves is called an ind-object of the category C. Ind-objects of the category C form a full subcategory ind(C) of the category of functors (presheaves) C^op → Set. The category pro(C) of pro-objects in C is the opposite of the category of ind-objects in the opposite category C^op.
κ-filtered categories
There is a variant of "filtered category" known as a "κ-filtered category", defined as follows. This begins with the following observation: the three conditions in the definition of filtered category above say respectively that there exists a cocone over any diagram in J of the shape of the empty diagram, of a pair of objects, or of a pair of parallel arrows. The existence of cocones for these three shapes of diagrams turns out to imply that cocones exist for any finite diagram; in other words, a category J is filtered (according to the above definition) if and only if there is a cocone over any finite diagram d : D → J.
Extending this, given a regular cardinal κ, a
|
https://en.wikipedia.org/wiki/Vakarel%20radio%20transmitter
|
The Vakarel Transmitter was a large broadcasting facility for long- and medium wave near Vakarel, Bulgaria. The Vakarel Transmitter was inaugurated in 1937. It had one directional antenna consisting of three guyed masts and another consisting of two masts.
The most remarkable mast of the Vakarel Transmitter was the Blaw-Knox tower, built in 1937 by the company Telefunken. Along with Lakihegy Tower, Hungary, Riga LVRTC Transmitter, Latvia and Lisnagarvey Radio Mast, Northern Ireland it was one of the few Blaw-Knox towers in Europe until its demolition on 16 September 2020.
The transmitter was shut down at 22:00 UTC on 31 December 2014.
Transmitter internal structure
The modulation method used by the transmitter in Vakarel is called a tube voltage modulation and was successfully used in all powerful AM transmitters at that time. The Vakarel transmitter is supplied with electricity from a substation in Samokov via a medium voltage transmission line. The transmitter uses six stages of amplification. The first stage contains a single radio tube, which generates alternating current at a carrier frequency of 850 kHz. The electrical oscillations of the anode circuit in the tube are coupled in series to the second and third stage. The signals in these three stages are only amplified, without any other changes.
In the special fourth modulation stage, the form of the signal is modulated with speech or music. The audio recordings are sent to the transmitter over an underground communication cable from the main radio studio in Sofia. Due to the large distance, the audio signal is amplified at both ends by separate blocks of amplifiers.
The fifth stage consists of six transmitting tubes, two of which are in reserve, and four others can be switched on, if necessary. All of them are water-cooled.
The final sixth stage consists of four high-power transmitting tubes amplifying the final output up to 100 kW. The energy is filtered by a high-power tuned circuit and sent
|
https://en.wikipedia.org/wiki/Semantic%20service-oriented%20architecture
|
A Semantic Service Oriented Architecture (SSOA) is an architecture that allows for scalable and controlled Enterprise Application Integration solutions. SSOA describes an approach to enterprise-scale IT infrastructure. It leverages rich, machine-interpretable descriptions of data, services, and processes to enable software agents to autonomously interact to perform critical mission functions. SSOA is technically founded on three notions:
The principles of Service-oriented architecture (SOA);
Standard Based Design (SBD); and
Semantics-based computing.
SSOA combines and implements these computer science concepts into a robust, extensible architecture capable of enabling complex, powerful functions.
Applications
In the health care industry, an SSOA based on HL7 has long been implemented. Other protocols include LOINC, PHIN, and HIPAA-related standards. There is a series of SSOA-related ISO standards published for financial services, which can be found on the ISO's website. Some financial sectors also adopt EMV standards to serve European consumers. Parts of SSOA relating to transport and trade are in ISO ICS sections 03.220.20 and 35.240.60. Some general guidelines on the technology and the standards in other fields are partially located at 25.040.40 and 35.240.99.
See also
Cyber security standards
ISO/IEC 7816
ISO 8583
ISO/IEC 8859
ISO 9241
ISO 9660
ISO/IEC 11179
ISO/IEC 15408
ISO/IEC 17799
ISO/IEC 27000-series
Service component architecture
Semantic web
EMML
Business Intelligence 2.0 (BI 2.0)
References
External links
OSGi Alliance
Web services
Semantic Web
Enterprise application integration
Service-oriented (business computing)
Software architecture
|
https://en.wikipedia.org/wiki/Immune%20complex
|
An immune complex, sometimes called an antigen-antibody complex or antigen-bound antibody, is a molecule formed from the binding of multiple antigens to antibodies. The bound antigen and antibody act as a unitary object, effectively an antigen of its own with a specific epitope. After an antigen-antibody reaction, the immune complexes can be subject to any of a number of responses, including complement deposition, opsonization, phagocytosis, or processing by proteases. Red blood cells carrying CR1-receptors on their surface may bind C3b-coated immune complexes and transport them to phagocytes, mostly in liver and spleen, and return to the general circulation.
The ratio of antigen to antibody determines size and shape of immune complex. This, in turn, determines the effect of the immune complex. Many innate immune cells have FcRs, which are membrane-bound receptors that bind the constant regions of antibodies. Most FcRs on innate immune cells have low affinity for a singular antibody, and instead need to bind to an immune complex containing multiple antibodies in order to begin their intracellular signaling pathway and pass along a message from outside to inside of the cell. Additionally, the grouping and binding together of multiple immune complexes allows for an increase in the avidity, or strength of binding, of the FcRs. This allows innate immune cells to get multiple inputs at once and prevents them from being activated early.
Immune complexes may themselves cause illness when they are deposited in organs, for example, in certain forms of vasculitis. This is the third form of hypersensitivity in the Gell-Coombs classification, called type III hypersensitivity. Such hypersensitivity progressing to disease states produces the immune complex diseases.
Immune complex deposition is a prominent feature of several autoimmune diseases, including rheumatoid arthritis, scleroderma and Sjögren's syndrome. An inability to degrade immune complexes in the lysosome and subs
|
https://en.wikipedia.org/wiki/Moore%20neighborhood
|
In cellular automata, the Moore neighborhood is defined on a two-dimensional square lattice and is composed of a central cell and the eight cells that surround it.
Name
The neighborhood is named after Edward F. Moore, a pioneer of cellular automata theory.
Importance
It is one of the two most commonly used neighborhood types, the other one being the von Neumann neighborhood, which excludes the corner cells. The well known Conway's Game of Life, for example, uses the Moore neighborhood. It is similar to the notion of 8-connected pixels in computer graphics.
The Moore neighbourhood of a cell is the cell itself and the cells at a Chebyshev distance of 1.
The concept can be extended to higher dimensions, for example forming a 26-cell cubic neighborhood for a cellular automaton in three dimensions, as used by 3D Life. In dimension d, where d ≥ 1, the size of the neighborhood is 3^d − 1.
In two dimensions, an extended Moore neighbourhood of range r contains (2r + 1)^2 cells counting the central cell, or (2r + 1)^2 − 1 cells excluding it.
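The neighbourhood is simple to enumerate programmatically. The following Python sketch lists the cells within Chebyshev distance r of a given cell, excluding the cell itself; the function name and coordinate convention are illustrative choices.

from itertools import product

def moore_neighborhood(x, y, r=1):
    """Cells within Chebyshev distance r of (x, y), excluding (x, y) itself."""
    return [(x + dx, y + dy)
            for dx, dy in product(range(-r, r + 1), repeat=2)
            if (dx, dy) != (0, 0)]

assert len(moore_neighborhood(0, 0)) == 8         # 3^2 - 1 neighbours at range 1
assert len(moore_neighborhood(0, 0, r=2)) == 24   # (2*2 + 1)^2 - 1 neighbours at range 2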
Algorithm
The idea behind the formulation of Moore neighborhood is to find the contour of a given graph. This idea was a great challenge for most analysts of the 18th century, and as a result an algorithm was derived from the Moore graph which was later called the Moore Neighborhood algorithm.
The pseudocode for the Moore-Neighbor tracing algorithm is
Input: A square tessellation, T, containing a connected component P of black cells.
Output: A sequence B (b1, b2, ..., bk) of boundary pixels i.e. the contour.
Define M(a) to be the Moore neighborhood of pixel a.
Let p denote the current boundary pixel.
Let c denote the current pixel under consideration i.e. c is in M(p).
Let b denote the backtrack of c (i.e. neighbor pixel of p that was previously tested)
Begin
Set B to be empty.
From bottom to top and left to right scan the cells of T until a black pixel, s, of P is found.
Insert s in B.
Set the current boundary point p to s i.e. p=s
Let b = the pixel from which s
|
https://en.wikipedia.org/wiki/OldVersion.com
|
OldVersion.com is an archive website that stores and distributes older versions of primarily Internet-related IBM PC compatible and Apple Macintosh freeware and shareware application software. Alex Levine and Igor Dolgalev founded the site in 2001.
Levine created the site because "Companies make a lot of new versions. They're not always better for the consumer." As reported in The Wall Street Journal, 'Users often try to downgrade when they find confusing changes in a new version or encounter software bugs, or just decide they want to go back to a more familiar version,' said David Smith, an analyst at research firm Gartner. 'Often, they discover that the downgrade process is complicated, if not impossible.'
When OldVersion.com was launched it offered 80 versions of 14 programs.
By 2005, over 500 versions were posted.
By 28 August 2007, this had grown to 2388 versions of 179 programs, in categories such as "graphics", "file-sharing", "security" and "enterprise". The site also carries 600+ versions of 35 Macintosh programs.
In 2007, PC World labeled the site "a treasure trove ... of older-but-better software";
In 2005, National Review called OldVersion.com a "champion" for "software conservatives".
According to Alexander Levine's own words, he has received threats from proprietary software developers for running an archive of obsolete internet browsers with known critical security flaws.
See also
Abandonware
Legacy code
Planned obsolescence
Technology acceptance model
Switching barriers
References
External links
Internet properties established in 2001
Download websites
Web archiving initiatives
|
https://en.wikipedia.org/wiki/BigDog
|
BigDog is a dynamically stable quadruped military robot that was created in 2005 by Boston Dynamics with Foster-Miller, the NASA Jet Propulsion Laboratory, and the Harvard University Concord Field Station. It was funded by DARPA, but the project was shelved after the BigDog was deemed too loud for combat.
History
BigDog was funded by the Defense Advanced Research Projects Agency (DARPA) in the hopes that it would be able to serve as a robotic pack mule to accompany soldiers in terrain too rough for conventional vehicles. Instead of wheels or treads, BigDog uses four legs for movement, allowing it to move across surfaces that would defeat wheels. The legs contain a variety of sensors, including joint position and ground contact. BigDog also features a laser gyroscope and a stereo vision system.
BigDog is long, stands tall, and weighs , making it about the size of a small mule. It is capable of traversing difficult terrain, running at , carrying , and climbing a 35 degree incline. Locomotion is controlled by an onboard computer that receives input from the robot's various sensors. Navigation and balance are also managed by the control system.
BigDog's walking pattern is controlled through four legs, each equipped with four low-friction hydraulic cylinder actuators that power the joints. BigDog's locomotion behaviors can vary greatly. It can stand up, sit down, walk with a crawling gait that lifts one leg at a time, walk with a trotting gait lifting diagonal legs, or trot with a running gait. The travel speed of BigDog varies from a crawl to a trot.
The BigDog project was headed by Dr. Martin Buehler, who received the Joseph Engelberger Award from the Robotics Industries Association in 2012 for the work. Dr. Buehler while previously a professor at McGill University, headed the robotics lab there, developing four-legged walking and running robots.
Built onto the actuators are sensors for joint position and force, and movement is ultimately controlled through
|
https://en.wikipedia.org/wiki/Test%20engineer
|
A test engineer is a professional who determines how to create a process that would best test a particular product in manufacturing and related disciplines, in order to assure that the product meets applicable specifications. Test engineers are also responsible for determining the best way a test can be performed in order to achieve adequate test coverage. Often test engineers also serve as a liaison between manufacturing, design engineering, sales engineering and marketing communities as well.
Test engineer expertises
Test engineers can have different areas of expertise, depending on which test processes they are most familiar with, although many test engineers have full familiarity with processes ranging from the PCB level (such as ICT, JTAG, and AXI) to PCBA and system-level processes such as board functional test (BFT or FT), burn-in test, and system-level test (ST). Some of the processes used in manufacturing where a test engineer is needed are:
In-circuit test (ICT)
Stand-alone JTAG test
Automated x-ray inspection (AXI) (also known as X-ray test)
Automated optical inspection (AOI) test
Center of Gravity (CG) test
Continuity or flying probe test
Electromagnetic compatibility or EMI test
(Board) functional test (BFT/FT)
Burn-in test
Environmental stress screening (ESS) test
Highly Accelerated Life Test (HALT)
Highly accelerated stress screening (HASS) test
Insulation test
Ongoing reliability test (ORT)
Regression test
System test (ST)
Vibration test
Final quality audit process (FQA) test
Early project involvement from design phase
Ideally, a test engineer's involvement with a product begins with the very early stages of the engineering design process, i.e. the requirements engineering stage and the design engineering stage. Depending on the culture of the firm, these early stages could involve a Product Requirements Document (PRD) and Marketing Requirements Document (MRD)—some of the earliest work done during a new product introduction (NPI).
By working with or as part o
|
https://en.wikipedia.org/wiki/Hann%20function
|
The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing. The function, with length L and amplitude a0, is given by:
w(x) = (a0/2)·(1 + cos(2πx/L)) = a0·cos^2(πx/L) for |x| ≤ L/2, and 0 otherwise.
For digital signal processing, the function is sampled symmetrically (with spacing L/N and amplitude a0 = 1):
w[n] = (1/2)·(1 − cos(2πn/N)) = sin^2(πn/N) for n = 0, 1, ..., N,
which is a sequence of N + 1 samples, and N can be even or odd. It is also known as the raised cosine window, Hann filter, von Hann window, etc.
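A short NumPy check of the sampled form above (the value of N is arbitrary; amplitude a0 = 1 is assumed, which matches NumPy's built-in symmetric Hann window):

import numpy as np

N = 16                                      # the sampled window has N + 1 points, n = 0 .. N
n = np.arange(N + 1)
w = 0.5 * (1 - np.cos(2 * np.pi * n / N))   # the sampled Hann window

# np.hanning(M) uses the same formula with M = N + 1 points, and the sin^2 form is identical
assert np.allclose(w, np.hanning(N + 1))
assert np.allclose(w, np.sin(np.pi * n / N) ** 2)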
Fourier transform
The Fourier transform of w(x) is given by:
W(f) = a0·sin(πLf) / (2πf·(1 − L^2 f^2))
Discrete transforms
The Discrete-time Fourier transform (DTFT) of the length, time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation:
The truncated sequence is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression:
An N-length DFT of the window function samples the DTFT at frequencies for integer values of From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution.
Name
The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. The confusion arose from the similar Hamming function, named after Richard Hamming.
See also
Window function
Apod
|
https://en.wikipedia.org/wiki/Dodgem
|
Dodgem is a simple abstract strategy game invented by Colin Vout in 1972 while he was a mathematics student at the University of Cambridge as described in the book Winning Ways. It is played on an n×n board with n-1 cars for each player—two cars each on a 3×3 board is enough for an interesting game, but larger sizes are also possible.
Play
The board is initially set up with n-1 blue cars along the left edge and n-1 red cars along the bottom edge, the bottom left square remaining empty. Turns alternate: player 1 ("Left")'s turn is to move any one of the blue cars one space forwards (right) or sideways (up or down). Player 2 ("Right")'s turn is to move any one of the red cars one space forwards (up) or sideways (left or right).
Cars may not move onto occupied spaces. They may leave the board, but only by a forward move. A car which leaves the board is out of the game. There are no captures. A player must always leave their opponent a legal move or else forfeit the game.
The winner is the player who first gets all their pieces off the board, or has all their cars blocked in by their opponent.
The game can also be played in a misère form, in which you try to force your opponent to move their pieces off the board.
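To make the movement rules concrete, here is a minimal Python sketch of legal-move generation on the 3×3 board; the coordinate system, colour names, and helper function are assumptions made for this example and not part of any published notation.

# Coordinates are (column, row) with (0, 0) at the bottom-left corner.
SIZE = 3
FORWARD = {"blue": (1, 0), "red": (0, 1)}                    # blue exits to the right, red exits at the top
SIDEWAYS = {"blue": [(0, 1), (0, -1)], "red": [(-1, 0), (1, 0)]}

def legal_moves(cars, colour):
    """Yield (car, destination) pairs; a destination of None means the car leaves the board."""
    occupied = set(cars["blue"]) | set(cars["red"])
    for car in cars[colour]:
        for dx, dy in [FORWARD[colour]] + SIDEWAYS[colour]:
            x, y = car[0] + dx, car[1] + dy
            on_board = 0 <= x < SIZE and 0 <= y < SIZE
            if (dx, dy) == FORWARD[colour] and not on_board:
                yield car, None                              # only forward moves may leave the board
            elif on_board and (x, y) not in occupied:
                yield car, (x, y)

start = {"blue": [(0, 1), (0, 2)], "red": [(1, 0), (2, 0)]}  # initial 3x3 position, bottom-left empty
print(list(legal_moves(start, "blue")))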
Theory
The 3×3 game can be completely analyzed (strongly solved) and is a win for the first player—a table showing who wins from every possible position is given in Winning Ways, and given this information it is easy to read off a winning strategy.
David des Jardins showed in 1996 that the 4×4 and 5×5 games never end with perfect play—both players get stuck shuffling their cars from side to side to prevent the other from winning. He conjectures that this is true for all larger boards.
For a 3×3 board, there are 56 reachable positions: 8 of them are winning, 4 are losing, and 44 are draws.
References
External links
"Dodgem" . . . any info? Thread from discussion group rec.games.abstract, 1996, containing David
|
https://en.wikipedia.org/wiki/Jeffrey%20Lagarias
|
Jeffrey Clark Lagarias (born November 16, 1949 in Pittsburgh, Pennsylvania, United States) is a mathematician and professor at the University of Michigan.
Education
While in high school in 1966, Lagarias studied astronomy at the Summer Science Program.
He completed an S.B. and S.M. in Mathematics at the Massachusetts Institute of Technology in 1972. The title of his thesis was "Evaluation of certain character sums". He was a Putnam Fellow at MIT in 1970. He received his Ph.D. in Mathematics from MIT for his thesis "The 4-part of the class group of a quadratic field" in 1974. His advisor for both his master's and Ph.D. was Harold Stark.
Career
In 1975, he joined AT&T Bell Laboratories and eventually became Distinguished Member of Technical Staff. Since 1995, he has been a Technology Consultant at AT&T Research Laboratories. In 2002, he moved to Michigan to work at the University and settle down with his family.
While his recent work has been in theoretical computer science, his original training was in analytic algebraic number theory. He has since worked in many areas, both pure and applied, and considers himself a mathematical generalist.
Lagarias discovered an elementary problem that is equivalent to the Riemann hypothesis, namely whether
for all n > 0, we have
σ(n) ≤ Hn + exp(Hn)·ln(Hn)
with equality only when n = 1. Here Hn is the nth harmonic number, the sum of the reciprocals of the first n positive integers, and σ(n) is the divisor function, the sum of the positive divisors of n.
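The inequality is easy to test numerically for small n (which, of course, proves nothing about the Riemann hypothesis). A brute-force Python sketch with ad hoc helper names:

from math import exp, log

def sigma(n):
    """Sum of the positive divisors of n."""
    return sum(d for d in range(1, n + 1) if n % d == 0)

def harmonic(n):
    """H_n, the n-th harmonic number."""
    return sum(1.0 / k for k in range(1, n + 1))

for n in range(1, 2000):
    H = harmonic(n)
    assert sigma(n) <= H + exp(H) * log(H) + 1e-9   # equality holds only at n = 1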
Together with Peter Shor, he disproved Keller's conjecture in dimensions of at least 10. Lagarias has also done work on the Collatz conjecture and Li's criterion and has written several highly cited papers in symbolic computation with Dave Bayer.
Awards and honors
He received a Lester R. Ford Award from the Mathematical Association of America in 1986 and again in 2007.
In 2012, he became a fellow of the American Mathematical Society.
References
External links
Jeffrey Clark Lagarias homepage, University of Michigan
1949 bir
|
https://en.wikipedia.org/wiki/ProCurve
|
HP ProCurve was the name of the networking division of Hewlett-Packard from 1998 to 2010 and was associated with the products that it sold. The name of the division was changed to HP Networking in September 2010 after HP bought 3Com Corporation.
History
The HP division that became the HP ProCurve division began in Roseville, California, in 1979. Originally it was part of HP's Data Systems Division (DSD) and known as DSD-Roseville. Later, it was called the Roseville Networks Division (RND), then the Workgroup Networks Division (WND), before becoming the ProCurve Networking Business (PNB). The trademark filing date for the ProCurve name was February 25, 1998. On August 11, 2008, HP announced the acquisition of Colubris Networks, a maker of wireless networking products. This was completed on October 1, 2008. In November 2008, HP ProCurve was moved into HP's largest business division, the Technology Services Group organization, with HP Enterprise Account Managers being compensated for sales.
In November 2009, HP announced its intent to acquire 3Com for $2.7 billion. In April 2010, HP completed its acquisition.
At Interop Las Vegas in April 2010, HP began publicly using HP Networking as the name for its networking division. Following HP's 2015 acquisition of Aruba Networks and the company's subsequent split later that year, HP Networking was combined with Aruba to form HPE's "Intelligent Edge" business unit under the Aruba Networks brand.
Products
A variety of different networking products have been made by HP. The first products were named EtherTwist, while printer connectivity products carried the JetDirect name. As the EtherTwist name faded, most of HP's networking products were given AdvanceStack names. Later, the then-ProCurve division began to offer LAN switches in core, data center, distribution, edge, web-managed and unmanaged categories. The ProCurve name was also used for network management, routing and security products.
Notable uses
The International Space Station m
|
https://en.wikipedia.org/wiki/Samsung%20Q1
|
The Samsung Q1 (known as Samsung SENS Q1 in South Korea) was a family of ultra-mobile PCs produced by Samsung Electronics starting in 2007. They had a 7" (18 cm) LCD and were made in several different versions with either Windows XP Tablet PC Edition or Windows Vista Home Premium.
Variations
Q1 series
Samsung Q1
Intel Celeron M ULV (Ultra Low Voltage) 353 running at 900 MHz
40 GB 1.8" Hard Drive (ZIF interface)
512 MB DDR2-533
Max memory 2 GB DDR2-533
Mobile Intel 915GMS Express Chipset
7-inch WVGA (800×480) resistive (single-touch) touch screen, operated with a finger or the stylus; the included "Easy Display Manager" software lets the user switch to downscaled 1024×600 and 1024×768 modes with a few button presses.
VGA port
Weighs 0.78 kg
3-cell battery (up to 3 hours) or 6-cell battery (up to 6 hours)
WLAN 802.11b/g
100 Mbit/s LAN port
CompactFlash port Type II
Stereo speakers
Array mics
AVS mode using Windows XP embedded
Bluetooth enabled
Digital Multimedia Broadcasting
2 USB ports
The Q1 was one of the first ultra-mobile PCs (UMPCs) produced under Microsoft's "Origami" project. The Q1 can boot into two different modes: typical Windows XP (the OS can be replaced), and AVS mode running Windows XP Embedded. AVS mode runs in a separate partition and boots directly to a music, photo, and video player with no Windows Explorer interface. The AVS feature is unique to the Q1.
Samsung Q1 SSD
The SSD version is identical to the Q1 except that the 40 GB hard disk drive has been replaced by Samsung's 32 GB solid-state drive. At release, the SSD version was about twice as expensive as the normal Q1.
Samsung Q1b
The Q1b was Samsung's second UMPC device, with a much improved battery life and 30% brighter screen compared to the Q1. The CF card slot and the Ethernet port were removed on this version. It also had a mono speaker and a single microphone.
VIA C7-M ULV @ 1 GHz
5 Hour Battery Life (using standard 3-cell battery)
30% Brighter Screen (LED backlight)
Wi-Fi (802.11 b/g support)
B
|
https://en.wikipedia.org/wiki/Holland%27s%20schema%20theorem
|
Holland's schema theorem, also called the fundamental theorem of genetic algorithms, is an inequality that results from coarse-graining an equation for evolutionary dynamics. The Schema Theorem says that short, low-order schemata with above-average fitness increase exponentially in frequency in successive generations. The theorem was proposed by John Holland in the 1970s. It was initially widely taken to be the foundation for explanations of the power of genetic algorithms. However, this interpretation of its implications has been criticized in several publications, in which the Schema Theorem is shown to be a special case of the Price equation with the schema indicator function as the macroscopic measurement.
A schema is a template that identifies a subset of strings with similarities at certain string positions. Schemata are a special case of cylinder sets, and hence form a topological space.
Description
Consider binary strings of length 6. The schema 1*10*1 describes the set of all strings of length 6 with 1's at positions 1, 3 and 6 and a 0 at position 4. The * is a wildcard symbol, which means that positions 2 and 5 can have a value of either 1 or 0. The order of a schema is defined as the number of fixed positions in the template, while the defining length is the distance between the first and last specific positions. The order of 1*10*1 is 4 and its defining length is 5. The fitness of a schema is the average fitness of all strings matching the schema. The fitness of a string is a measure of the value of the encoded problem solution, as computed by a problem-specific evaluation function. Using the established methods and genetic operators of genetic algorithms, the schema theorem states that short, low-order schemata with above-average fitness increase exponentially in successive generations. Expressed as an equation:
$$\operatorname{E}\bigl(m(H,t+1)\bigr) \ge \frac{m(H,t)\, f(H)}{a_t}\,[1 - p].$$

Here $m(H,t)$ is the number of strings belonging to schema $H$ at generation $t$, $f(H)$ is the observed average fitness of schema $H$ and $a_t$ is the observed average fitness at generation $t$.
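To make the terminology concrete, the following Python sketch (illustrative only; the helper names and the toy population are assumptions for this example) checks whether strings match a schema and computes a schema's order, defining length, and observed average fitness over a small population.

```python
# Illustrative helpers for reasoning about schemata (not taken from any specific GA library).

def matches(schema, s):
    """True if string s matches the schema, where '*' is a wildcard."""
    return len(schema) == len(s) and all(c == "*" or c == b for c, b in zip(schema, s))

def order(schema):
    """Number of fixed (non-wildcard) positions."""
    return sum(1 for c in schema if c != "*")

def defining_length(schema):
    """Distance between the first and last fixed positions."""
    fixed = [i for i, c in enumerate(schema) if c != "*"]
    return fixed[-1] - fixed[0] if fixed else 0

def schema_fitness(schema, population, fitness):
    """Observed average fitness f(H): mean fitness of population members matching the schema."""
    members = [s for s in population if matches(schema, s)]
    return sum(fitness(s) for s in members) / len(members) if members else 0.0

# Example with the schema from the text and a toy fitness function (count of 1s, an assumption).
H = "1*10*1"
population = ["111001", "101011", "110101", "000000"]
print(order(H), defining_length(H))                      # 4 5, as in the text
print([s for s in population if matches(H, s)])
print(schema_fitness(H, population, lambda s: s.count("1")))
```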
|
https://en.wikipedia.org/wiki/Ecological%20facilitation
|
Ecological facilitation or probiosis describes species interactions that benefit at least one of the participants and cause harm to neither. Facilitations can be categorized as mutualisms, in which both species benefit, or commensalisms, in which one species benefits and the other is unaffected. This article addresses both the mechanisms of facilitation and the increasing information available concerning the impacts of facilitation on community ecology.
Categories
There are two basic categories of facilitative interactions:
Mutualism is an interaction between species that is beneficial to both. A familiar example of a mutualism is the relationship between flowering plants and their pollinators. The plant benefits from the spread of pollen between flowers, while the pollinator receives some form of nourishment, either from nectar or the pollen itself.
Commensalism is an interaction in which one species benefits and the other species is unaffected. Epiphytes (plants growing on other plants, usually trees) have a commensal relationship with their host plant because the epiphyte benefits in some way (e.g., by escaping competition with terrestrial plants or by gaining greater access to sunlight) while the host plant is apparently unaffected.
Strict categorization, however, is not possible for some complex species interactions. For example, seed germination and survival in harsh environments is often higher under so-called nurse plants than on open ground. A nurse plant is one with an established canopy, beneath which germination and survival are more likely due to increased shade, soil moisture, and nutrients. Thus, the relationship between seedlings and their nurse plants is commensal. However, as the seedlings grow into established plants, they are likely to compete with their former benefactors for resources.
Mechanisms
The beneficial effects of species on one another are realized in various ways, including refuge from physical stress, predation, and competi
|
https://en.wikipedia.org/wiki/Atwater%20system
|
The Atwater system, named after Wilbur Olin Atwater, or derivatives of this system are used for the calculation of the available energy of foods. The system was developed largely from the experimental studies of Atwater and his colleagues in the later part of the 19th century and the early years of the 20th at Wesleyan University in Middletown, Connecticut. Its use has frequently been the cause of dispute, but few alternatives have been proposed. As with the calculation of protein from total nitrogen, the Atwater system is a convention and its limitations can be seen in its derivation.
Derivation
Available energy (as used by Atwater) is equivalent to the modern usage of the term metabolisable energy (ME):

$$\text{ME} = \text{GE} - \text{energy losses in (faeces + urine + secretions + gases)}$$

In most studies on humans, losses in secretions and gases are ignored. The gross energy (GE) of a food, as measured by bomb calorimetry, is equal to the sum of the heats of combustion of the components – protein ($GE_p$), fat ($GE_f$) and carbohydrate ($GE_{cho}$, by difference) in the proximate system:

$$GE = GE_p + GE_f + GE_{cho}$$

Atwater considered the energy value of faeces in the same way.
By measuring coefficients of availability, or in modern terminology apparent digestibility, Atwater derived a system for calculating faecal energy losses:

$$\text{Faecal energy loss} = GE_p(1 - D_p) + GE_f(1 - D_f) + GE_{cho}(1 - D_{cho})$$

where $D_p$, $D_f$, and $D_{cho}$ are respectively the digestibility coefficients of protein, fat and carbohydrate, calculated as

$$D = \frac{\text{intake} - \text{faecal loss}}{\text{intake}}$$

for the constituent in question.

Urinary losses were calculated from the energy to nitrogen ratio in urine. Experimentally this was 7.9 kcal/g (33 kJ/g) urinary nitrogen, and thus his equation for metabolisable energy became

$$\text{ME} = GE_p D_p + GE_f D_f + GE_{cho} D_{cho} - 7.9 \times \text{urinary nitrogen (g)}$$
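As a numerical illustration of the derivation above, the following Python sketch computes metabolisable energy from gross energies, digestibility coefficients and urinary nitrogen; all numeric inputs are invented for the example and are not Atwater's experimental factors.

```python
# Illustrative calculation of metabolisable energy (ME) following the structure of the
# Atwater derivation. All numeric inputs below are invented for the example and are
# NOT Atwater's experimental values.

URINARY_ENERGY_PER_G_N = 7.9  # kcal per gram of urinary nitrogen (figure quoted in the text)

def metabolisable_energy(ge, digestibility, urinary_nitrogen_g):
    """ME = sum of GE_i * D_i over constituents, minus urinary energy losses.

    `ge` and `digestibility` are dicts keyed by constituent name, with gross energies
    in kcal and digestibility coefficients between 0 and 1.
    """
    digestible = sum(ge[k] * digestibility[k] for k in ge)
    return digestible - URINARY_ENERGY_PER_G_N * urinary_nitrogen_g

# Hypothetical daily intake (kcal of gross energy) and digestibility coefficients.
ge = {"protein": 400.0, "fat": 810.0, "carbohydrate": 1200.0}
d = {"protein": 0.92, "fat": 0.95, "carbohydrate": 0.97}
print(round(metabolisable_energy(ge, d, urinary_nitrogen_g=12.0), 1))
```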
Gross energy values
Atwater collected values from the literature and also measured the heat of combustion of proteins, fats and carbohydrates. These vary slightly depending on sources and Atwater derived weighted values for the gross heat of combustion of the protein, fat and carbohydrate in the typical mixed diet of his time. It has been argued that these weighted values are invalid for individual foods and for diets who
|
https://en.wikipedia.org/wiki/InterPro
|
InterPro is a database of protein families, protein domains and functional sites in which identifiable features found in known proteins can be applied to new protein sequences in order to functionally characterise them.
The contents of InterPro consist of diagnostic signatures and the proteins that they significantly match. The signatures consist of models (simple types, such as regular expressions or more complex ones, such as Hidden Markov models) which describe protein families, domains or sites. Models are built from the amino acid sequences of known families or domains and they are subsequently used to search unknown sequences (such as those arising from novel genome sequencing) in order to classify them. Each of the member databases of InterPro contributes towards a different niche, from very high-level, structure-based classifications (SUPERFAMILY and CATH-Gene3D) through to quite specific sub-family classifications (PRINTS and PANTHER).
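As a toy illustration of the simplest kind of signature described above, the following Python sketch scans a regular-expression-style motif against protein sequences; the motif and sequences are invented and do not correspond to any real InterPro or member-database signature.

```python
# Toy illustration of a regular-expression "signature" scanned against protein sequences.
# The motif and sequences below are invented examples, not real InterPro/member-database entries.
import re

# Hypothetical motif: C, any two residues, C, then H or K (written as a Python regex).
SIGNATURE = re.compile(r"C..C[HK]")

sequences = {
    "seq1": "MKTACGMCHLLVA",
    "seq2": "MSTNPKPQRKTKR",
}

for name, seq in sequences.items():
    hits = [(m.start() + 1, m.group()) for m in SIGNATURE.finditer(seq)]
    print(name, hits if hits else "no match")
```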
InterPro's intention is to provide a one-stop-shop for protein classification, where all the signatures produced by the different member databases are placed into entries within the InterPro database. Signatures which represent equivalent domains, sites or families are put into the same entry and entries can also be related to one another. Additional information such as a description, consistent names and Gene Ontology (GO) terms are associated with each entry, where possible.
Data contained in InterPro
InterPro contains three main entities: proteins, signatures (also referred to as "methods" or "models") and entries. The proteins in UniProtKB are also the central protein entities in InterPro. Information regarding which signatures significantly match these proteins are calculated as the sequences are released by UniProtKB and these results are made available to the public (see below). The matches of signatures to proteins are what determine how signatures are integrated together into InterPro entries: comparativ
|
https://en.wikipedia.org/wiki/Ozone%20monitor
|
An ozone monitor is electronic equipment that monitors for ozone concentrations in the air. The instrument may be used to monitor ozone values for industrial applications or to determine the amount of ambient ozone at ground level and determine whether these values violate National Ambient Air Quality Standards (NAAQS).
The ozone molecule absorbs ultraviolet radiation, and most ozone monitors used in regulatory applications rely on ultraviolet absorption to accurately quantify ozone levels. An ozone monitor of this type operates by pulling an air sample from the atmosphere into the machine with an air pump. During one cycle, the ozone monitor takes an air sample through the air inlet and scrubs the ozone from the air; for the next cycle, an air sample bypasses the scrubber and the ozone value is calculated. A solenoid valve is electronically actuated on a timed sequence to route the air flow either through the scrubber or around it. The difference between the two sampled values determines the actual ozone value at that time. The monitor may also have options to correct for air pressure and air temperature when calculating the ozone value.
The concentration of ozone is determined using the Beer–Lambert law, which states that the absorbance of light is proportional to the concentration of the absorbing species. For ozone, light at a wavelength of 254 nanometers produced by a mercury lamp is shone through a tube of known length fitted with reflective mirrors. A photodiode at the other end of the tube detects the changes in brightness of the light.
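A minimal sketch of this Beer–Lambert calculation is shown below; the absorption cross-section is an approximate literature value and the intensities and path length are invented, so the snippet illustrates the relationship rather than any particular instrument's firmware.

```python
# Illustrative Beer-Lambert calculation of an ozone number density from UV absorption.
# I0 is the intensity measured with ozone scrubbed from the sample, I with ozone present.
# The cross-section (~1.15e-17 cm^2/molecule at 254 nm) is an approximate literature value;
# the intensities and path length are invented for the example.
import math

SIGMA_254NM_CM2 = 1.15e-17   # approximate ozone absorption cross-section at 254 nm
PATH_LENGTH_CM = 30.0        # hypothetical optical path length

def ozone_number_density(i0, i, sigma=SIGMA_254NM_CM2, length=PATH_LENGTH_CM):
    """Beer-Lambert: I = I0 * exp(-sigma * N * L)  =>  N = ln(I0/I) / (sigma * L)."""
    return math.log(i0 / i) / (sigma * length)

# Example: a 0.25% drop in transmitted intensity over the cell.
n = ozone_number_density(i0=1.000, i=0.9975)
print(f"{n:.3e} molecules/cm^3")
```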
The onboard electronics process the measured values, display the result on the screen, and can also output an electrical signal as a voltage or a 4–20 mA current that can be read by an electronic data logger. Other output options are an RS-232 serial port, Ethernet, or internal data storage on flash memory.
See also
Environmental science
References
Ozone
Measuring instruments
|