https://en.wikipedia.org/wiki/Egress%20filtering
|
In computer networking, egress filtering is the practice of monitoring and potentially restricting the flow of information outbound from one network to another. Typically, it is information from a private TCP/IP computer network to the Internet that is controlled.
TCP/IP packets that are being sent out of the internal network are examined via a router, firewall, or similar edge device. Packets that do not meet security policies are not allowed to leave – they are denied "egress".
Egress filtering helps ensure that unauthorized or malicious traffic never leaves the internal network.
In a corporate network, the typical recommendation is that all traffic except that emerging from a select set of servers be denied egress. Restrictions can further be made such that only select protocols such as HTTP, email, and DNS are allowed. User workstations would then need to be configured, either manually or via proxy auto-config, to use one of the allowed servers as a proxy.
Corporate networks also typically have a limited number of internal address blocks in use. An edge device at the boundary between the internal corporate network and external networks (such as the Internet) is used to perform egress checks against packets leaving the internal network, verifying that the source IP address in all outbound packets is within the range of allocated internal address blocks.
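As a rough illustration of such a source-address check (a hypothetical sketch, not from the article; the address blocks are made-up examples), Python's ipaddress module can express the verification directly:

```python
import ipaddress

# Hypothetical internal address blocks allocated to the corporate network.
INTERNAL_BLOCKS = [
    ipaddress.ip_network("10.10.0.0/16"),
    ipaddress.ip_network("192.168.50.0/24"),
]

def allow_egress(source_ip: str) -> bool:
    """Permit an outbound packet only if its source address is internal."""
    address = ipaddress.ip_address(source_ip)
    return any(address in block for block in INTERNAL_BLOCKS)

print(allow_egress("10.10.3.7"))    # True: legitimate internal source
print(allow_egress("203.0.113.5"))  # False: spoofed or external source
```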
Egress filtering may require policy changes and administrative work whenever a new application requires external network access. For this reason, egress filtering is an uncommon feature on consumer and very small business networks.
See also
Content-control software
Ingress filtering
Web Proxy Autodiscovery Protocol
References
External links
RFC 3013
Pcisecuritystandards.org
Pcisecuritystandards.org
Sans.org
Computer network security
|
https://en.wikipedia.org/wiki/Bell%20jar
|
A bell jar is a jar, similar in shape to a bell (i.e. in its best-known form it is open at the bottom, while its top and sides together form a single piece), and can be manufactured from a variety of materials, ranging from glass to different types of metal. Bell jars are often used in laboratories to form and contain a vacuum. It is a common science apparatus used in experiments. Bell jars have a limited ability to create strong vacuums; vacuum chambers are available when higher performance is needed. They have been used to demonstrate the effect of vacuum on sound propagation.
In addition to their scientific applications, bell jars may also serve as display cases or transparent dust covers. In these situations the bell jar is not usually placed under vacuum.
Vacuum
A vacuum bell jar is placed on a base which is vented to a hose fitting, that can be connected via a hose to a vacuum pump. A vacuum is formed by pumping the air out of the bell jar.
The lower edge of a vacuum bell jar forms a flange of heavy glass, ground smooth on the bottom for better contact. The base of the jar is equally heavy and flattened. A smear of vacuum grease is usually applied between them. As the vacuum forms inside, it creates a considerable compression force, so there is no need to clamp the seal. For this reason, a bell jar cannot be used to contain pressures above atmospheric, only below.
Bell jars are generally used for classroom demonstrations or by hobbyists, when only a relatively low-quality vacuum is required. Cutting-edge research done at ultra high vacuum requires a more sophisticated vacuum chamber. However, several tests may be completed in a bell jar chamber having an effective pump and low leak rate.
Some of the first scientific experiments using a bell jar to provide a vacuum were reported by Robert Boyle. In his book, New Experiments Physico-Mechanicall, Touching the Spring of the Air, and its Effects, (Made, for the Most Part, in a New Pneumatical Engine),
|
https://en.wikipedia.org/wiki/Crosstalk
|
In electronics, crosstalk is any phenomenon by which a signal transmitted on one circuit or channel of a transmission system creates an undesired effect in another circuit or channel. Crosstalk is usually caused by undesired capacitive, inductive, or conductive coupling from one circuit or channel to another.
Crosstalk is a significant issue in structured cabling, audio electronics, integrated circuit design, wireless communication and other communications systems.
Mechanisms
Every electrical signal is associated with a varying field, whether electric or magnetic. Where these fields overlap, they interfere with each other's signals. This electromagnetic interference creates crosstalk. For example, if two wires next to each other carry different signals, the current in each will create a magnetic field that induces a smaller signal in the neighboring wire.
In electrical circuits sharing a common signal return path, electrical impedance in the return path creates common-impedance coupling between the signals, resulting in crosstalk.
In cabling
In structured cabling, crosstalk refers to electromagnetic interference from one unshielded twisted pair to another twisted pair, normally running in parallel. Signals traveling through adjacent pairs of wire create magnetic fields that interact with each other, inducing interference in the neighboring pair. The pair causing the interference is called the disturbing pair, while the pair experiencing the interference is the disturbed pair.
Near-end crosstalk (NEXT)
NEXT is a measure of the ability of a cable to reject crosstalk, so the higher the NEXT value, the greater the rejection of crosstalk at the local connection. It is referred to as near end because the interference between the two signals in the cable is measured at the same end of the cable as the interfering transmitter. The NEXT value for a given cable type is generally expressed in decibels per foot or decibels per 1000 feet and varies with the frequency of transmission. General specif
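Expressed as a formula (a standard definition supplied here for illustration; it is not spelled out in the excerpt above), NEXT compares the power of the transmitted signal with the power it couples into the disturbed pair, measured at the transmitter's end:

$$\mathrm{NEXT\ (dB)} = 10 \log_{10}\!\left(\frac{P_{\mathrm{transmitted}}}{P_{\mathrm{crosstalk}}}\right)$$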
|
https://en.wikipedia.org/wiki/Sony%20HDVS
|
Sony HDVS is a range of high-definition video equipment developed in the 1980s to support an early analog high-definition television system (used in Multiple sub-Nyquist Sampling Encoding (MUSE) broadcasts) that was expected at the time to become the broadcast television system in use today. The line included professional video cameras, video monitors and linear video editing systems.
History
Sony first demonstrated a wideband analog video HDTV capable video camera, monitor and video tape recorder (VTR) in April 1981 at an international meeting of television engineers in Algiers, Algeria.
The HDVS range was launched in April 1984 with the HDC-100 camera (the world's first commercially available HDTV camera), the HDV-1000 video recorder with its companion HDT-1000 processor/TBC, and the HDS-1000 video switcher, all working in the 1125-line component video format with interlaced video and a 5:3 aspect ratio. The first system consisting of a monitor, camera and VTR was sold by Sony in 1985 for $1.5 million, and the first HDTV production studio, Captain Video, was opened in Paris.
The helical scan VTR (the HDV-1000) used magnetic tape similar to 1" type C videotape for analog recording. In 1988, Sony unveiled a new HDVS digital line, including a reel-to-reel digital recording VTR (the HDD-1000) that used digital signals between the machines for dubbing, though the primary I/O remained analog. The Sony HDVS HDC-300 camera was also introduced. The large HDD-1000 unit used a 1-inch reel-to-reel transport and, because of the high tape speed needed, was limited to 1 hour per reel. By this time, the aspect ratio of the system had been changed to 16:9. Sony, owner of Columbia Pictures/Tri-Star, would start to archive feature films on this format, requiring an average of two reels per movie. There was also a portable videocassette recorder (the HDV-10) for the HDVS system, using the "UniHi" videocassette format with 1/2" wide tape. The tape housing is similar in a
|
https://en.wikipedia.org/wiki/Zolotarev%27s%20lemma
|
In number theory, Zolotarev's lemma states that the Legendre symbol
$\left(\frac{a}{p}\right)$
for an integer a modulo an odd prime number p, where p does not divide a, can be computed as the sign of a permutation:
$\left(\frac{a}{p}\right) = \varepsilon(\pi_a)$
where $\varepsilon$ denotes the signature of a permutation and $\pi_a$ is the permutation of the nonzero residue classes mod p induced by multiplication by a.
For example, take a = 2 and p = 7. The nonzero squares mod 7 are 1, 2, and 4, so (2|7) = 1 and (6|7) = −1. Multiplication by 2 on the nonzero numbers mod 7 has the cycle decomposition (1,2,4)(3,6,5), so the sign of this permutation is 1, which is (2|7). Multiplication by 6 on the nonzero numbers mod 7 has cycle decomposition (1,6)(2,5)(3,4), whose sign is −1, which is (6|7).
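This can be verified computationally; the sketch below (illustrative, not part of the article) computes the sign of the multiplication permutation from its cycle lengths and compares it with Euler's criterion:

```python
def legendre_via_zolotarev(a, p):
    """Sign of the permutation x -> a*x mod p on {1, ..., p-1}."""
    perm = {x: (a * x) % p for x in range(1, p)}
    seen, sign = set(), 1
    for start in range(1, p):
        if start in seen:
            continue
        length, x = 0, start          # walk the cycle containing `start`
        while x not in seen:
            seen.add(x)
            x = perm[x]
            length += 1
        sign *= (-1) ** (length - 1)  # a cycle of length L is (L-1) swaps
    return sign

# Euler's criterion: a**((p-1)/2) mod p equals 1 for residues, p-1 otherwise.
for a, p in [(2, 7), (6, 7), (3, 11)]:
    euler = pow(a, (p - 1) // 2, p)
    assert legendre_via_zolotarev(a, p) == (1 if euler == 1 else -1)
```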
Proof
In general, for any finite group G of order n, it is straightforward to determine the signature of the permutation πg made by left-multiplication by the element g of G. The permutation πg will be even, unless there are an odd number of orbits of even size. Assuming n even, therefore, the condition for πg to be an odd permutation, when g has order k, is that n/k should be odd, or that the subgroup <g> generated by g should have odd index.
We will apply this to the group of nonzero numbers mod p, which is a cyclic group of order p − 1. The jth power of a primitive root modulo p will have index the greatest common divisor
i = (j, p − 1).
The condition for a nonzero number mod p to be a quadratic non-residue is to be an odd power of a primitive root.
The lemma therefore comes down to saying that i is odd when j is odd, which is true a fortiori, and j is odd when i is odd, which is true because p − 1 is even (p is odd).
Another proof
Zolotarev's lemma can be deduced easily from Gauss's lemma and vice versa. The example
$\left(\tfrac{3}{11}\right)$,
i.e. the Legendre symbol (a/p) with a = 3 and p = 11, will illustrate how the proof goes. Start with the set {1, 2, ..., p − 1} arranged as a matrix of two rows such that the sum of the two elements in any column is zero mod p, say:
$\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 10 & 9 & 8 & 7 & 6 \end{pmatrix}$
Apply the
|
https://en.wikipedia.org/wiki/Theta%20correspondence
|
In mathematics, the theta correspondence or Howe correspondence is a mathematical relation between representations of two groups of a reductive dual pair. The local theta correspondence relates irreducible admissible representations over a local field, while the global theta correspondence relates irreducible automorphic representations over a global field.
The theta correspondence was introduced by Roger Howe. Its name arose due to its origin in André Weil's representation-theoretic formulation of the theory of theta series. The Shimura correspondence as constructed by Jean-Loup Waldspurger may be viewed as an instance of the theta correspondence.
Statement
Setup
Let $F$ be a local or a global field, not of characteristic $2$. Let $W$ be a symplectic vector space over $F$, and $Sp(W)$ the symplectic group.
Fix a reductive dual pair $(G, H)$ in $Sp(W)$. There is a classification of reductive dual pairs.
Local theta correspondence
$F$ is now a local field. Fix a non-trivial additive character $\psi$ of $F$. There exists a Weil representation of the metaplectic group $Mp(W)$ associated to $\psi$, which we write as $\omega_\psi$.
Given the reductive dual pair $(G, H)$ in $Sp(W)$, one obtains a pair of commuting subgroups $(\widetilde{G}, \widetilde{H})$ in $Mp(W)$ by pulling back the projection map from $Mp(W)$ to $Sp(W)$.
The local theta correspondence is a 1-1 correspondence between certain irreducible admissible representations of $\widetilde{G}$ and certain irreducible admissible representations of $\widetilde{H}$, obtained by restricting the Weil representation $\omega_\psi$ to the subgroup $\widetilde{G} \cdot \widetilde{H}$. The correspondence was defined by Roger Howe. The assertion that this is a 1-1 correspondence is called the Howe duality conjecture.
Key properties of the local theta correspondence include its compatibility with Bernstein–Zelevinsky induction and conservation relations concerning the first occurrence indices along Witt towers.
Global theta correspondence
Stephen Rallis showed a version of the global Howe duality conjecture for cuspidal automorphic representations over a global field, assuming the validity of the Howe dua
|
https://en.wikipedia.org/wiki/Multimedia%20Broadcast%20Multicast%20Service
|
Multimedia Broadcast Multicast Services (MBMS) is a point-to-multipoint interface specification for existing 3GPP cellular networks, which is designed to provide efficient delivery of broadcast and multicast services, both within a cell as well as within the core network. For broadcast transmission across multiple cells, it defines transmission via single-frequency network configurations. The specification is referred to as Evolved Multimedia Broadcast Multicast Services (eMBMS) when transmissions are delivered through an LTE (Long Term Evolution) network. eMBMS is also known as LTE Broadcast.
Target applications include mobile TV and radio broadcasting, live streaming video services, as well as file delivery and emergency alerts.
Questions remain as to whether the technology is an optimization tool for the operator or whether an operator can generate new revenues with it. Several studies have been published in the domain, identifying both cost savings and new revenues.
Deployments
In 2013, Verizon announced that it would launch eMBMS services in 2014, over its nationwide (United States) LTE networks. AT&T subsequently announced plans to use the 700 MHz Lower D and E Block licenses it acquired in 2011 from Qualcomm for an LTE Broadcast service.
Several major operators worldwide have been lining up to deploy and test the technology, the frontrunners being Verizon in the United States, KT and Reliance in Asia, and more recently EE and Vodafone in Europe.
In January 2014, Korea's KT launched the first commercial LTE Broadcast service. The solution includes KT's internally developed eMBMS Bearer Service, and Samsung mobile devices fitted with the Expway Middleware as the eMBMS User Service.
In February 2014, Verizon demonstrated the potential of LTE Broadcast during Super Bowl XLVIII, using Samsung Galaxy Note 3s, fitted with Expway's eMBMS User Service.
In July 2014, Nokia demonstrated the use of LTE Broadcast to replace Traditional Digital TV. This use case remains controversi
|
https://en.wikipedia.org/wiki/Interstitium
|
The interstitium is a contiguous fluid-filled space existing between a structural barrier, such as a cell membrane or the skin, and internal structures, such as organs, including muscles and the circulatory system. The fluid in this space is called interstitial fluid, comprises water and solutes, and drains into the lymph system. The interstitial compartment is composed of connective and supporting tissues within the body – called the extracellular matrix – that are situated outside the blood and lymphatic vessels and the parenchyma of organs.
Structure
The non-fluid parts of the interstitium are predominantly collagen types I, III, and V, elastin, and glycosaminoglycans, such as hyaluronan and proteoglycans that are cross-linked to form a honeycomb-like reticulum. Such structural components exist both for the general interstitium of the body, and within individual organs, such as the myocardial interstitium of the heart, the renal interstitium of the kidney, and the pulmonary interstitium of the lung.
The interstitium in the submucosae of visceral organs, the dermis, superficial fascia, and perivascular adventitia are fluid-filled spaces supported by a collagen bundle lattice. The fluid spaces communicate with draining lymph nodes though they do not have lining cells or structures of lymphatic channels.
Functions
The interstitial fluid is a reservoir and transportation system for nutrients and solutes distributing among organs, cells, and capillaries, for signaling molecules communicating between cells, and for antigens and cytokines participating in immune regulation. The composition and chemical properties of the interstitial fluid vary among organs and undergo changes in chemical composition during normal function, as well as during body growth, conditions of inflammation, and development of diseases, as in heart failure and chronic kidney disease.
The total fluid volume of the interstitium during health is about 20% of body weight, but this space is dynamic
|
https://en.wikipedia.org/wiki/Potential%20isomorphism
|
In mathematical logic, and in particular in model theory, a potential isomorphism is a collection of finite partial isomorphisms between two models which satisfies certain closure conditions. The existence of a potential isomorphism entails elementary equivalence; the converse is not true in general, but it does hold for ω-saturated models.
Definition
A potential isomorphism between two models M and N is a non-empty collection F of finite partial isomorphisms between M and N which satisfies the following two properties (illustrated in the sketch after this list):
for all finite partial isomorphisms Z ∈ F and for all x ∈ M there is a y ∈ N such that Z ∪ {(x,y)} ∈ F
for all finite partial isomorphisms Z ∈ F and for all y ∈ N there is an x ∈ M such that Z ∪ {(x,y)} ∈ F
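These closure conditions can be checked by brute force for finite models. The following sketch (illustrative only, not from the article) represents each model as a finite domain with a binary edge relation and each partial isomorphism as a dict:

```python
from itertools import product

def is_partial_iso(f, edges_m, edges_n):
    """f (a dict from elements of M to elements of N) is injective
    and preserves the edge relation in both directions."""
    if len(set(f.values())) != len(f):
        return False
    return all(((a, b) in edges_m) == ((f[a], f[b]) in edges_n)
               for a, b in product(f, repeat=2))

def is_potential_isomorphism(F, M, N, edges_m, edges_n):
    """Check the two closure (back-and-forth) conditions for a
    non-empty family F of finite partial isomorphisms."""
    if not F or not all(is_partial_iso(f, edges_m, edges_n) for f in F):
        return False
    # forth: each Z in F can be extended to cover any x in M ...
    forth = all(any({**f, x: y} in F for y in N) for f in F for x in M)
    # ... and back: each Z in F can be extended to hit any y in N.
    back = all(any({**f, x: y} in F for x in M) for f in F for y in N)
    return forth and back

# Tiny example: one-element models with no edges.
print(is_potential_isomorphism([{}, {0: 0}], [0], [0], set(), set()))  # True
```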
The notion of an Ehrenfeucht–Fraïssé game gives an exact characterisation of elementary equivalence, and potential isomorphism can be seen as an approximation of it. Another notion that is similar to potential isomorphism is that of local isomorphism.
References
Model theory
|
https://en.wikipedia.org/wiki/Cerebral%20perfusion%20pressure
|
Cerebral perfusion pressure, or CPP, is the net pressure gradient causing cerebral blood flow to the brain (brain perfusion). It must be maintained within narrow limits because too little pressure could cause brain tissue to become ischemic (having inadequate blood flow), and too much could raise intracranial pressure (ICP).
Definitions
The cranium encloses a fixed-volume space that holds three components: blood, cerebrospinal fluid (CSF), and very soft tissue (the brain). While both the blood and CSF have poor compression capacity, the brain is easily compressible.
Every increase of ICP can cause a change in tissue perfusion and an increase in stroke events.
From resistance
CPP can be defined as the pressure gradient causing cerebral blood flow (CBF) such that
$CBF = \frac{CPP}{CVR}$
where:
CVR is cerebrovascular resistance
By intracranial pressure
An alternative definition of CPP is:
$CPP = MAP - ICP$
where:
MAP is mean arterial pressure
ICP is intracranial pressure
JVP is jugular venous pressure
This definition may be more appropriate if considering the circulatory system in the brain as a Starling resistor, where an external pressure (in this case, the intracranial pressure) causes decreased blood flow through the vessels. In this sense, more specifically, the cerebral perfusion pressure can be defined as either:
$CPP = MAP - ICP$ (if ICP is higher than JVP)
or
$CPP = MAP - JVP$ (if JVP is higher than ICP).
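As a minimal worked example (illustrative values only, a sketch rather than clinical guidance), the Starling-resistor definition subtracts whichever downstream pressure is higher:

```python
def cerebral_perfusion_pressure(map_mmhg, icp_mmhg, jvp_mmhg):
    """CPP = MAP minus the higher of ICP and JVP (all values in mmHg)."""
    return map_mmhg - max(icp_mmhg, jvp_mmhg)

# Illustrative values: MAP 90 mmHg, ICP 10 mmHg, JVP 5 mmHg -> CPP 80 mmHg.
print(cerebral_perfusion_pressure(90, 10, 5))  # 80
```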
Physiologically, increased intracranial pressure (ICP) causes decreased blood perfusion of brain cells by mainly two mechanisms:
Increased ICP constitutes an increased interstitial hydrostatic pressure that, in turn, causes a decreased driving force for capillary filtration from intracerebral blood vessels.
Increased ICP compresses cerebral arteries, causing increased cerebrovascular resistance (CVR).
Flow
Cerebral blood flow ranges from about 20 mL per 100 g per minute in white matter to about 70 mL per 100 g per minute in grey matter.
Autoregulation
Under normal circumstances, with a MAP between 60 and 160 mmHg and an ICP of about 10 mmHg (a CPP of 50–150 mmHg), sufficient blood flow can be maintained with a
|
https://en.wikipedia.org/wiki/Hiroshi%20Ishii%20%28computer%20scientist%29
|
Hiroshi Ishii is a Japanese computer scientist. He is a professor at the Massachusetts Institute of Technology. Ishii pioneered the tangible user interface in the field of human–computer interaction with the paper "Tangible Bits: Towards Seamless Interfaces between People, Bits and Atoms", co-authored with his then-PhD student Brygg Ullmer.
Biography
Ishii was born in Tokyo and raised in Sapporo.
He received a B.E. in electronic engineering, and an M.E. and a Ph.D. in computer engineering, from Hokkaido University in Sapporo, Japan.
Hiroshi Ishii founded the Tangible Media Group and started their ongoing Tangible Bits project in 1995, when he joined the MIT Media Laboratory as a professor of Media Arts and Sciences. Ishii relocated from Japan's NTT Human Interface Laboratories in Yokosuka, where he had made his mark in human-computer interaction (HCI) and computer-supported cooperative work (CSCW) in the early 1990s. Ishii was elected to the CHI Academy in 2006. He was named to the 2022 class of ACM Fellows, "for contributions to tangible user interfaces and to human-computer interaction".
He currently teaches the class MAS.834 Tangible Interfaces at the Media Lab.
External links
Computer programmers
Japanese computer scientists
Human–computer interaction researchers
Ubiquitous computing researchers
MIT School of Architecture and Planning faculty
Hokkaido University alumni
MIT Media Lab people
1956 births
Living people
People from Tokyo
Fellows of the Association for Computing Machinery
|
https://en.wikipedia.org/wiki/Dutch%20Waterline
|
The Dutch Waterline (Hollandsche Waterlinie; modern spelling: Hollandse Waterlinie) was a series of water-based defences conceived by Maurice of Nassau in the early 17th century, and realised by his half-brother Frederick Henry. Combined with natural bodies of water, the Waterline could be used to transform Holland, the westernmost region of the Netherlands and adjacent to the North Sea, almost into an island. In the 19th century, the Line was extended to include Utrecht.
On July 26, 2021, the line was added to the Defence Line of Amsterdam to become the Dutch Water Defence Lines UNESCO World Heritage Site.
History
Early in the Eighty Years' War of Independence against Spain, the Dutch realized that flooding low-lying areas formed an excellent defence against enemy troops. This was demonstrated, for example, during the Siege of Leiden in 1574. In the latter half of the war, when the province of Holland had been freed of Spanish troops, Maurice of Nassau planned to defend it with a line of flooded land protected by fortresses that ran from the Zuiderzee (present IJsselmeer) down to the river Waal.
Old Dutch Waterline
In 1629, Prince Frederick Henry started the execution of the plan. Sluices were constructed in dikes and forts and fortified towns were created at strategic points along the line with guns covering the dikes that traversed the water line. The water level in the flooded areas was carefully maintained at a level deep enough to make an advance on foot precarious and shallow enough to rule out effective use of boats (other than the flat bottomed gun barges used by the Dutch defenders). Under the water level additional obstacles like ditches and trous de loup (and much later, barbed wire and land mines) were hidden. The trees lining the dikes that formed the only roads through the line could be turned into abatis in time of war. In wintertime the water level could be manipulated to weaken ice covering, while the ice itself could be used when broken up to form further obstacles
|
https://en.wikipedia.org/wiki/Code%20page%20932%20%28IBM%29
|
IBM code page 932 (abbreviated as IBM-932 or ambiguously as CP932) is one of IBM's extensions of Shift JIS. The coded character sets are JIS X 0201:1976, JIS X 0208:1983, IBM extensions and IBM extensions for IBM 1880 UDC. It is the combination of the single-byte Code page 897 and the double-byte Code page 301. Code page 301 is designed to encode the same repertoire as IBM Japanese DBCS-Host.
IBM-932 resembles IBM-943. One difference is that IBM-932 encodes the JIS X 0208:1983 characters but preserves the 1978 ordering, whereas IBM-943 uses the 1983 ordering (i.e. the character variant swaps made in JIS X 0208:1983). Another difference is that IBM-932 does not incorporate the NEC selected extensions, which IBM-943 includes for Microsoft compatibility.
IBM-942 includes the same double-byte codes as IBM-932 (those from Code page 301) but includes additional single-byte extensions. International Components for Unicode treats "ibm-932" and "ibm-942" as aliases for the same decoder.
IBM-932 contains 7-bit ISO 646 codes, and Japanese characters are indicated by the high bit of the first byte being set to 1. Some code points in this page require a second byte, so characters use either 8 or 16 bits for encoding.
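The byte-level structure can be sketched as follows (a simplified illustration using the standard Shift JIS lead-byte ranges; the exact layout of the IBM extension areas differs):

```python
def split_shift_jis_like(data: bytes):
    """Split a byte string into 8-bit and 16-bit code units, using the
    standard Shift JIS lead-byte ranges as an approximation."""
    units, i = [], 0
    while i < len(data):
        lead = data[i]
        # Double-byte lead bytes have the high bit set (0xA0-0xDF excluded,
        # since that range holds single-byte katakana in JIS X 0201).
        if 0x81 <= lead <= 0x9F or 0xE0 <= lead <= 0xFC:
            units.append(data[i:i + 2])   # 16-bit character
            i += 2
        else:
            units.append(data[i:i + 1])   # 8-bit character
            i += 1
    return units

# "ABC" followed by one double-byte character (0x8140, the ideographic space).
print(split_shift_jis_like(b"ABC\x81\x40"))  # [b'A', b'B', b'C', b'\x81@']
```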
Layout
See also
LMBCS-16
Code page 942
Code page 943
References
External links
IBM Code Page 932
932
Encodings of Japanese
|
https://en.wikipedia.org/wiki/Bauer%E2%80%93Fike%20theorem
|
In mathematics, the Bauer–Fike theorem is a standard result in the perturbation theory of the eigenvalues of a complex-valued diagonalizable matrix. In essence, it states an absolute upper bound for the deviation of one perturbed matrix eigenvalue from a properly chosen eigenvalue of the exact matrix. Informally speaking, it says that the sensitivity of the eigenvalues is estimated by the condition number of the matrix of eigenvectors.
The theorem was proved by Friedrich L. Bauer and C. T. Fike in 1960.
The setup
In what follows we assume that:
$A \in \mathbb{C}^{n \times n}$ is a diagonalizable matrix;
$V \in \mathbb{C}^{n \times n}$ is the non-singular eigenvector matrix such that $A = V \Lambda V^{-1}$, where $\Lambda$ is a diagonal matrix.
If $X \in \mathbb{C}^{n \times n}$ is invertible, its condition number in the $p$-norm is denoted by $\kappa_p(X)$ and defined by:
$\kappa_p(X) = \|X\|_p \left\|X^{-1}\right\|_p.$
The Bauer–Fike Theorem
Bauer–Fike Theorem. Let $\mu$ be an eigenvalue of $A + \delta A$. Then there exists $\lambda \in \Lambda(A)$ such that:
$|\lambda - \mu| \leq \kappa_p(V) \|\delta A\|_p$
Proof. We can suppose $\mu \notin \Lambda(A)$, otherwise take $\lambda = \mu$ and the result is trivially true since $\kappa_p(V) \geq 1$. Since $\mu$ is an eigenvalue of $A + \delta A$, we have $\det(A + \delta A - \mu I) = 0$ and so
$0 = \det(V^{-1}) \det(A + \delta A - \mu I) \det(V) = \det(\Lambda + V^{-1} \delta A V - \mu I) = \det(\Lambda - \mu I) \det\!\left[(\Lambda - \mu I)^{-1} V^{-1} \delta A V + I\right]$
However, our assumption $\mu \notin \Lambda(A)$ implies that $\det(\Lambda - \mu I) \neq 0$, and therefore we can write:
$\det\!\left[(\Lambda - \mu I)^{-1} V^{-1} \delta A V + I\right] = 0.$
This reveals $-1$ to be an eigenvalue of $(\Lambda - \mu I)^{-1} V^{-1} \delta A V.$
Since all $p$-norms are consistent matrix norms we have $|\lambda| \leq \|A\|_p$ where $\lambda$ is an eigenvalue of $A$. In this instance this gives us:
$1 = |-1| \leq \left\|(\Lambda - \mu I)^{-1} V^{-1} \delta A V\right\|_p \leq \left\|(\Lambda - \mu I)^{-1}\right\|_p \kappa_p(V) \|\delta A\|_p$
But $(\Lambda - \mu I)^{-1}$ is a diagonal matrix, the $p$-norm of which is easily computed:
$\left\|(\Lambda - \mu I)^{-1}\right\|_p = \max_{\lambda \in \Lambda(A)} \frac{1}{|\lambda - \mu|} = \frac{1}{\min_{\lambda \in \Lambda(A)} |\lambda - \mu|}$
whence:
$\min_{\lambda \in \Lambda(A)} |\lambda - \mu| \leq \kappa_p(V) \|\delta A\|_p.$
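The bound can be checked numerically (a minimal sketch using NumPy; the matrix and perturbation are arbitrary examples, not from the article):

```python
import numpy as np

# Arbitrary diagonalizable matrix A and a small perturbation dA.
A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
dA = 1e-3 * np.random.default_rng(0).standard_normal((2, 2))

evals, V = np.linalg.eig(A)            # columns of V are eigenvectors of A
kappa = np.linalg.cond(V, 2)           # condition number of V in the 2-norm
bound = kappa * np.linalg.norm(dA, 2)  # Bauer-Fike bound

# Every eigenvalue of A + dA must lie within `bound` of some eigenvalue of A.
for mu in np.linalg.eigvals(A + dA):
    assert np.min(np.abs(evals - mu)) <= bound
```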
An Alternate Formulation
The theorem can also be reformulated to better suit numerical methods. In fact, dealing with real eigensystem problems, one often has an exact matrix $A$, but knows only an approximate eigenvalue-eigenvector couple $(\lambda^a, v^a)$, and needs to bound the error. The following version comes in help.
Bauer–Fike Theorem (Alternate Formulation). Let $(\lambda^a, v^a)$ be an approximate eigenvalue-eigenvector couple, and $r = A v^a - \lambda^a v^a$. Then there exists $\lambda \in \Lambda(A)$ such that:
$|\lambda - \lambda^a| \leq \kappa_p(V) \frac{\|r\|_p}{\|v^a\|_p}$
Proof. We can suppose $\lambda^a \notin \Lambda(A)$, otherwise take $\lambda = \lambda^a$ and the result is trivially true since $\kappa_p(V) \geq 1$. So $(A - \lambda^a I)^{-1}$ exists, and we can write:
$v^a = (A - \lambda^a I)^{-1} r = V (\Lambda - \lambda^a I)^{-1} V^{-1} r$
since $A$ is diagonalizable; taking the $p$-norm of both sides, we obtain:
$\|v^a\|_p \leq \|V\|_p \left\|(\Lambda - \lambda^a I)^{-1}\right\|_p \left\|V^{-1}\right\|_p \|r\|_p = \kappa_p(V) \left\|(\Lambda - \lambda^a I)^{-1}\right\|_p \|r\|_p.$
However, $(\Lambda - \lambda^a I)^{-1}$ is a diagonal matrix and its $p$-norm is easily computed:
$\left\|(\Lambda - \lambda^a I)^{-1}\right\|_p = \max_{\lambda \in \Lambda(A)} \frac{1}{|\lambda - \lambda^a|} = \frac{1}{\min_{\lambda \in \Lambda(A)} |\lambda - \lambda^a|}$
whence:
$\min_{\lambda \in \Lambda(A)} |\lambda - \lambda^a| \leq \kappa_p(V) \frac{\|r\|_p}{\|v^a\|_p}.$
A Relativ
|
https://en.wikipedia.org/wiki/Logical%20Methods%20in%20Computer%20Science
|
Logical Methods in Computer Science (LMCS) is a peer-reviewed open access scientific journal covering theoretical computer science and applied logic. It opened to submissions on September 1, 2004. The editor-in-chief is Stefan Milius (Friedrich-Alexander Universität Erlangen-Nürnberg).
History
The journal was initially published by the International Federation for Computational Logic, and then by a dedicated non-profit organisation. It moved to the episciences.org platform in 2017. The first editor-in-chief was Dana Scott. In its first year, the journal received 75 submissions.
Abstracting and indexing
The journal is abstracted and indexed in Current Contents/Engineering, Computing & Technology, Mathematical Reviews, Science Citation Index Expanded, Scopus, and Zentralblatt MATH. According to the Journal Citation Reports, the journal has a 2016 impact factor of 0.661.
References
External links
Academic journals established in 2005
Computer science journals
Open access journals
Logic journals
Logic in computer science
Formal methods publications
Quarterly journals
English-language journals
|
https://en.wikipedia.org/wiki/Isobutyl%20acetate
|
The chemical compound isobutyl acetate, also known as 2-methylpropyl ethanoate (IUPAC name) or β-methylpropyl acetate, is a common solvent. It is produced from the esterification of isobutanol with acetic acid. It is used as a solvent for lacquer and nitrocellulose. Like many esters it has a fruity or floral smell at low concentrations and occurs naturally in raspberries, pears and other plants. At higher concentrations the odor can be unpleasant and may cause symptoms of central nervous system depression such as nausea, dizziness and headache.
A common method for preparing isobutyl acetate is Fischer esterification, where precursors isobutyl alcohol and acetic acid are heated in the presence of a strong acid.
Isobutyl acetate has three isomers: n-butyl acetate, tert-butyl acetate, and sec-butyl acetate, which are also common solvents.
References
Flavors
Ester solvents
Acetate esters
|
https://en.wikipedia.org/wiki/Grass%20mountain
|
A grass mountain (German: Grasberg) in topography is a mountain covered with low vegetation, typically in the Alps and often steep-sided. The nature of such cover, which often grows particularly well on sedimentary rock, will reflect local conditions.
Distribution
The following mountain ranges of the Eastern Alps in Europe are often referred to as grass mountains (Grasberge):
the Allgäu Alps in Bavaria, Germany and Tyrol in Austria,
the Kitzbühel Alps in the Austrian states of Salzburg and Tyrol, and
the Dienten Mountains in Salzburg.
Other areas where grass mountains occur include: the gorges of the Himalayas, Scotland, Poland's Tatra Mountains, and Lofoten.
Individual examples
Geißstein (2,366 m), Kitzbühel Alps.
Höfats (2,259 m), Allgäu Alps
Schneck (2,268 m), Allgäu Alps
Latschur (2,236 m), Gailtal Alps
Ascent techniques
Negotiating the steep grass-covered sides of grass mountains requires a special type of climbing known as grass climbing (Grasklettern).
References
Biogeomorphology
Mountain geomorphology
|
https://en.wikipedia.org/wiki/Chai%20%28symbol%29
|
Chai or Hai (Hebrew: חַי, "living") is a symbol that figures prominently in modern Jewish culture; the Hebrew letters of the word are often used as a visual symbol.
History
According to The Jewish Daily Forward, its use as an amulet originates in 18th century Eastern Europe. Chai as a symbol goes back to medieval Spain. Letters as symbols in Jewish culture go back to the earliest Jewish roots; the Talmud states that the world was created from Hebrew letters which form verses of the Torah. In medieval Kabbalah, Chai is the lowest (closest to the physical plane) emanation of God. According to the 16th century Greek rabbi Shlomo Hacohen Soloniki, in his commentary on the Zohar, Chai as a symbol has its linkage in the Kabbalah texts to God's attribute of 'Ratzon', or motivation, will, muse. The Jewish commentaries give an especially long treatment to certain verses in the Torah with the word as their central theme. Three examples are Leviticus 18:5, 'Chai Bahem', 'and you shall live by [this faith]' (as opposed to just doing it), which is part of the section dealing with the legacy of Moses after his death, and Deuteronomy 30:15, "Verily, I have set before thee this day life and good, and death and evil, in that I command thee this day to love the LORD thy God, to walk in His ways, and to keep His commandments and His statutes and His ordinances; then thou shalt live." There is hardly an ancient Jewish commentator who does not comment on that verse. The Shema prayer as well speaks of the importance of Chai, to live and walk in the Jewish cultural lifestyle.
Two common Jewish names used since Talmudic times, are based on this symbol, Chaya feminine, Chayim masculine. The Jewish toast (on alcoholic beverages such as wine) is l'chaim, 'to life'.
Linguistics
The word is made up of two letters of the Hebrew alphabet – Chet (ח) and Yod (י), forming the word "chai", meaning "alive" or "living". The most common spelling in Latin script is "Chai", but the word is occasionally also spelled "Hai". The u
|
https://en.wikipedia.org/wiki/ICFP%20Programming%20Contest
|
The ICFP Programming Contest is an international programming competition held annually around June or July since 1998, with results announced at the International Conference on Functional Programming.
Teams may be of any size and any programming language(s) may be used. There is also no entry fee. Participants have 72 hours to complete and submit their entry over the Internet. There is often also a 24-hour lightning division.
The winners reserve "bragging rights" to claim that their language is "the programming tool of choice for discriminating hackers". As such, one of the competition's goals is to showcase the capabilities of the contestants' favorite programming languages and tools. Previous first prize winners have used Haskell, OCaml, C++, Cilk, Java, F#, and Rust.
The contests usually have around 300 submitted entries.
Past tasks
Prizes
Prizes have a modest cash value, primarily aimed at helping the winners to attend the conference, where the prizes are awarded and the judges make the following declarations:
First prize [Language 1] is the programming tool of choice for discriminating hackers.
Second prize [Language 2] is a fine programming tool for many applications.
Third prize [Language 3] is also not too shabby.
Winner of the lightning division [Language L] is very suitable for rapid prototyping.
Judges' prize [Team X] are an extremely cool bunch of hackers.
Where a winning entry involves several languages, the winners are asked to nominate one or two.
The languages named in the judges' declarations have been:
See also
International Collegiate Programming Contest (ICPC)
Online judge
References and notes
External links
Contest at ICFP site
Programming contests
|
https://en.wikipedia.org/wiki/Architecture%20Analysis%20%26%20Design%20Language
|
The Architecture Analysis & Design Language (AADL) is an architecture description language standardized by SAE. AADL was first developed in the field of avionics, and was known formerly as the Avionics Architecture Description Language.
The Architecture Analysis & Design Language is derived from MetaH, an architecture description language made by the Advanced Technology Center of Honeywell. AADL is used to model the software and hardware architecture of an embedded, real-time system. Due to its emphasis on the embedded domain, AADL contains constructs for modeling both software and hardware components (with the hardware components named "execution platform" components within the standard). This architecture model can then be used as design documentation, for analyses (such as schedulability and flow control), or for code generation (of the software portion), much as UML models are used.
AADL ecosystem
AADL is defined by a core language that defines a single notation for both system and software aspects. Having a single model eases the analysis tools by having only one single representation of the system. The language specifies system-specific characteristics using properties.
The language can be extended with the following methods:
user-defined properties: user can extend the set of applicable properties and add their own to specify their own requirements
language annexes: the core language is enhanced by annex languages that enrich the architecture description. For now, the following annexes have been defined:
Behavior annex: adds component behavior with state machines
Error-model annex: specifies fault and propagation concerns
ARINC653 annex: defines modelling patterns for avionics systems
Data-Model annex: describes the modelling of specific data constraints with AADL
AADL tools
AADL is supported by a wide range of tools:
MASIW - an open source Eclipse-based IDE for development and analysis of AADL models, developed by ISP RAS
OSATE includes
|
https://en.wikipedia.org/wiki/Google%20Base
|
Google Base was a database provided by Google into which any user could add almost any type of content, such as text, images, and structured information in formats such as XML, PDF, Excel, RTF, or WordPerfect. In September 2010, the product was downgraded to Google Merchant Center.
If Google found user-added content relevant, submitted content appeared on its shopping search engine, Google Maps or even the web search. The piece of content could then be labeled with attributes like the ingredients for a recipe or the camera model for stock photography. Because information about the service was leaked before public release, it generated much interest in the information technology community prior to release. Google subsequently responded on their blog with an official statement:
"You may have seen stories today reporting on a new product that we're testing, and speculating about our plans. Here's what's really going on. We are testing a new way for content owners to submit their content to Google, which we hope will complement existing methods such as our web crawl and Google Sitemaps. We think it's an exciting product, and we'll let you know when there's more news."
Files could be uploaded to the Google Base servers by browsing the user's computer or the web, by various FTP methods, or by API coding. Online tools were provided to view the number of downloads of the user's files, and other performance measures.
On December 17, 2010, it was announced that Google Base's API is deprecated in favor of a set of new APIs known as Google Shopping APIs.
See also
List of Google services and tools
Resources of a Resource – ROR
Base Feeder – Software to create bulk submission Google Base Feeds
External links
Google Base
About Google Base
Official Google Base Blog
Official Google Blog Press Release
Google Base API Mashups
References
Beta software
Defunct websites
Base
Internet properties disestablished in 2009
Online databases
|
https://en.wikipedia.org/wiki/Propyl%20acetate
|
Propyl acetate, also known as propyl ethanoate, is an organic compound. Nearly 20,000 tons are produced annually for use as a solvent. This colorless liquid is known by its characteristic odor of pears. Due to this fact, it is commonly used in fragrances and as a flavor additive. It is formed by the esterification of acetic acid and propan-1-ol, often via Fischer–Speier esterification, with sulfuric acid as a catalyst and water produced as a byproduct.
References
External links
NIOSH Pocket Guide to Chemical Hazards
Acetic acid, propyl ester - Toxicity Data
N-Propyl Acetate MSDS
Ester solvents
Flavors
Acetate esters
Sweet-smelling chemicals
Propyl esters
|
https://en.wikipedia.org/wiki/Facilitated%20variation
|
The theory of facilitated variation demonstrates how seemingly complex biological systems can arise through a limited number of regulatory genetic changes, through the differential re-use of pre-existing developmental components. The theory was presented in 2005 by Marc W. Kirschner (a professor and chair at the Department of Systems Biology, Harvard Medical School) and John C. Gerhart (a professor at the Graduate School, University of California, Berkeley).
The theory of facilitated variation addresses the nature and function of phenotypic variation in evolution. Recent advances in cellular and evolutionary developmental biology shed light on a number of mechanisms for generating novelty. Most anatomical and physiological traits that have evolved since the Cambrian are, according to Kirschner and Gerhart, the result of regulatory changes in the usage of various conserved core components that function in development and physiology. Novel traits arise as novel packages of modular core components, which requires modest genetic change in regulatory elements. The modularity and adaptability of developmental systems reduces the number of regulatory changes needed to generate adaptive phenotypic variation, increases the probability that genetic mutation will be viable, and allows organisms to respond flexibly to novel environments. In this manner, the conserved core processes facilitate the generation of adaptive phenotypic variation, which natural selection subsequently propagates.
Description of the theory
The theory of facilitated variation consists of several elements. Organisms are built from a set of highly conserved modules called "core processes" that function in development and physiology, and have remained largely unchanged for millions (in some instances billions) of years. Genetic mutation leads to regulatory changes in the package of core components (i.e. new combinations, amounts, and functional states of those components) exhibited by an organism. Finall
|
https://en.wikipedia.org/wiki/Direct%20numerical%20control
|
Direct numerical control (DNC), also known as distributed numerical control (also DNC), is a common manufacturing term for networking CNC machine tools. On some CNC machine controllers, the available memory is too small to contain the machining program (for example machining complex surfaces), so in this case the program is stored in a separate computer and sent directly to the machine, one block at a time. If the computer is connected to a number of machines it can distribute programs to different machines as required. Usually, the manufacturer of the control provides suitable DNC software. However, if this provision is not possible, some software companies provide DNC applications that fulfill the purpose. DNC networking or DNC communication is always required when CAM programs are to run on some CNC machine control.
Wireless DNC is also used in place of hard-wired versions. Controls of this type are very widely used in industries with significant sheet metal fabrication, such as the automotive, appliance, and aerospace industries.
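A minimal sketch of such block-by-block transfer (hypothetical port name and program file; assumes the pyserial package and a control that uses software flow control, as many serial DNC links do):

```python
import serial  # pyserial

def drip_feed(program_path: str, port: str = "/dev/ttyUSB0") -> None:
    """Send an NC program to a machine control one block (line) at a time."""
    # xonxoff=True lets the control pause the stream via software flow
    # control whenever its small input buffer fills up.
    with serial.Serial(port, baudrate=9600, xonxoff=True, timeout=1) as link:
        with open(program_path, "r") as program:
            for block in program:
                link.write(block.strip().encode("ascii") + b"\r\n")

drip_feed("part123.nc")  # hypothetical program file
```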
History
1950s-1970s
Programs had to be walked to NC controls, generally on paper tape. NC controls had paper tape readers precisely for this purpose. Many companies were still punching programs on paper tape well into the 1980s, more than twenty-five years after its elimination in the computer industry.
1980s
The focus in the 1980s was mainly on reliably transferring NC programs between a host computer and the control. The Host computers would frequently be Sun Microsystems, HP, Prime, DEC or IBM type computers running a variety of CAD/CAM software. DNC companies offered machine tool links using rugged proprietary terminals and networks. For example, DLog offered an x86 based terminal, and NCPC had one based on the 6809. The host software would be responsible for tracking and authorising NC program modifications. Depending on program size, for the first time operators had the opportunity to modify programs at the DNC terminal. No
|
https://en.wikipedia.org/wiki/Psychology%2C%20philosophy%20and%20physiology
|
Psychology, philosophy and physiology (PPP) was a degree at the University of Oxford. It was Oxford's first psychology degree, beginning in 1947, but admitted its last students in October 2010. It has been, in part, replaced by psychology, philosophy, and linguistics (PPL), in which students usually study two of the three subjects.
PPP covered the study of thought and behaviour from the differing points of view of psychology, physiology and philosophy. Psychology includes social interaction, learning, child development, mental illness and information processing. Physiology considers the organization of the brain and body of mammals and humans, from the molecular level to the organism as a whole. Philosophy is concerned with ethics, knowledge, the mind, etc.
External links
Academic courses at the University of Oxford
Philosophy education
Physiology
Psychology education
|
https://en.wikipedia.org/wiki/Truel
|
Truel and triel are neologisms for a duel among three opponents, in which players can fire at one another in an attempt to eliminate their rivals while surviving themselves.
Game theory overview
A variety of forms of truels have been studied in game theory. Features that determine the nature of a truel include
the probability of each player hitting their chosen targets (often not assumed to be the same for each player);
whether the players shoot simultaneously or sequentially, and, if sequentially, whether the shooting order is predetermined or determined at random from among the survivors;
the number of bullets each player has (in particular, whether this is finite or infinite);
whether or not intentionally missing is allowed;
whether or not self-targeting or random selection of targets is allowed.
There is usually a general assumption that each player in the truel wants to be the only survivor, and will behave logically in a manner that maximizes the probability of this. (If each player only wishes to survive and does not mind if the others also survive, then the rational strategy for all three players can be to miss every time.)
In the widely studied form, the three have different probabilities of hitting their target.
If a single bullet is used, the probabilities of hitting the target are equal, and deliberate missing is allowed, then the best strategy for the first shooter is to deliberately miss. Since he is now disarmed, the next shooter will have no reason to shoot the first one and so will shoot at the third shooter. While the second shooter might miss deliberately, there would then be the risk that the third one would shoot him. If the first shooter does not deliberately miss, he will presumably be shot by whichever shooter remains.
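Survival probabilities in such games are easy to estimate by simulation. The sketch below (illustrative only: unlimited bullets, a fixed sequential order, and a naive "shoot the strongest" strategy without deliberate missing) shows how a weak shooter can outlast a perfect one:

```python
import random

def simulate_truel(accuracies, trials=100_000, seed=1):
    """Estimate survival probabilities in a sequential truel where each
    player fires at the most accurate surviving opponent."""
    rng = random.Random(seed)
    wins = [0, 0, 0]
    for _ in range(trials):
        alive = [True, True, True]
        while sum(alive) > 1:
            for shooter in range(3):
                if not alive[shooter] or sum(alive) == 1:
                    continue
                targets = [p for p in range(3) if alive[p] and p != shooter]
                target = max(targets, key=lambda p: accuracies[p])
                if rng.random() < accuracies[shooter]:
                    alive[target] = False
        wins[alive.index(True)] += 1
    return [w / trials for w in wins]

# With accuracies of 30%, 80%, and 100%, the weakest shooter typically
# outlives the most accurate one, since the others target each other first.
print(simulate_truel([0.3, 0.8, 1.0]))
```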
If an unlimited number of bullets are used, then deliberate missing may be the best strategy for a duelist with lower accuracy than both opponents.
If both have better than 50% success rate, he should continue to miss until one
|
https://en.wikipedia.org/wiki/Certance
|
Certance, LLC, was a privately held company engaged in the design and manufacture of computer tape drives.
Based in Costa Mesa, California, Certance designed and manufactured drives using a variety of tape formats, including Travan, DDS, and Linear Tape-Open. Certance was one of the three original technology partners (Certance, IBM, and Hewlett-Packard) that created the Linear Tape-Open technology.
In 2005, Certance was acquired by Quantum Corporation.
History
The company began as the removable storage systems division of Seagate Technology. The division was formed in 1996 from storage companies Archive Corporation, Irwin Magnetic Systems, Cipher Data Products, and Maynard Electronics. In a restructuring involving Seagate Technology and Veritas Software, the division was spun off in 2000 into the independent company Seagate Removable Storage Systems. The company was the worldwide unit volume shipment leader in 2001, 2002, and 2003.
The company name was changed to "Certance" in 2003. In 2004, Quantum Corporation announced plans to acquire Certance. The acquisition was completed in 2005, whereupon Certance ceased to exist as an independent company.
References
Computer storage companies
Companies based in Costa Mesa, California
Defunct technology companies of the United States
Defunct computer hardware companies
1996 establishments in California
2005 establishments in California
2005 mergers and acquisitions
Technology companies established in 1996
Technology companies disestablished in 2005
Defunct computer companies of the United States
|
https://en.wikipedia.org/wiki/Proportional-fair%20scheduling
|
Proportional-fair scheduling is a compromise-based scheduling algorithm. It is based upon maintaining a balance between two competing interests: Trying to maximize the total throughput of the network (wired or not) while at the same time allowing all users at least a minimal level of service. This is done by assigning each data flow a data rate or a scheduling priority (depending on the implementation) that is inversely proportional to its anticipated resource consumption.
Weighted fair queuing
Proportionally fair scheduling can be achieved by means of weighted fair queuing (WFQ), by setting the scheduling weight for data flow $i$ to $w_i = 1/c_i$, where the cost $c_i$ is the amount of consumed resources per data bit. For instance:
In CDMA spread spectrum cellular networks, the cost may be the required energy per bit in the transmit power control (the increased interference level).
In wireless communication with link adaptation, the cost may be the required time to transmit a certain number of bits using the modulation and error coding scheme that this requires. An example of this is EVDO networks, where reported SNR is used as the primary costing factor.
In wireless networks with fast Dynamic Channel Allocation, the cost may be the number of nearby base station sites that can not use the same frequency channel simultaneously, in view to avoid co-channel interference.
User prioritization
Another way to schedule data transfer that leads to similar results is through the use of prioritization coefficients. Here we schedule the channel for the station that has the maximum of the priority function:
$P = \frac{T^{\alpha}}{R^{\beta}}$
where:
$T$ denotes the data rate potentially achievable for the station in the present time slot;
$R$ is the historical average data rate of this station;
$\alpha$ and $\beta$ tune the "fairness" of the scheduler.
By adjusting $\alpha$ and $\beta$ in the formula above, we are able to adjust the balance between serving the best mobiles (the ones in the best channel conditions) more often and serving the costly mobiles often enou
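A minimal sketch of this prioritization rule follows (illustrative only; the rates are made-up, and the historical average is tracked with a standard exponential moving average):

```python
def pick_station(instant_rates, avg_rates, alpha=1.0, beta=1.0):
    """Index of the station maximizing T**alpha / R**beta."""
    priorities = [t ** alpha / r ** beta
                  for t, r in zip(instant_rates, avg_rates)]
    return max(range(len(priorities)), key=priorities.__getitem__)

def schedule(rate_matrix, alpha=1.0, beta=1.0, forgetting=0.05):
    """Run the scheduler; rate_matrix[t][i] is the rate achievable by
    station i in time slot t. Returns the chosen station per slot."""
    n = len(rate_matrix[0])
    avg = [1e-6] * n  # small positive start avoids division by zero
    chosen = []
    for rates in rate_matrix:
        i = pick_station(rates, avg, alpha, beta)
        chosen.append(i)
        for j in range(n):  # exponential moving average of served rate
            served = rates[j] if j == i else 0.0
            avg[j] = (1 - forgetting) * avg[j] + forgetting * served
    return chosen

# Station 0 has a steady channel; station 1 varies. The scheduler shares
# slots instead of always serving the momentarily best station.
print(schedule([[2.0, 0.5], [2.0, 3.0], [2.0, 0.5], [2.0, 3.0]]))  # [0, 1, 0, 1]
```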
|
https://en.wikipedia.org/wiki/CVSNT
|
CVSNT is a version control system compatible with, and originally based on, Concurrent Versions System (CVS). Whereas CVS was popular in the open-source world, CVSNT included features designed for developers working on commercial software, including support for Windows, Active Directory authentication, reserved branches/locking, per-file access control lists, and Unicode filenames. Also included in CVSNT were various RCS tools updated to work with more recent compilers and to be compatible with CVSNT.
CVSNT was initially developed by users unhappy with the limitations of CVS 1.10.8, addressing limitations related to running CVS server on Windows and handling filenames for case-insensitive platforms. March Hare Software began sponsorship of the project in July 2004 to guarantee the project's future and to employ the original project manager on CVSNT development and commercial support.
CVSNT was commercially popular, with a number of commercial IDEs directly including support for it including Oracle JDeveloper, IBM Rational Application Developer, and IBM WebSphere Business Modeler. The CVSNT variation of RCS tools were also widely used, including by Apple, Inc. CVSNT was so ubiquitous in commercial programming that it was often referred to simply as CVS, even though the open-source CVS developers had publicly stated that CVSNT was significantly different and should be kept as a separate project.
Several books were written about CVSNT including CVSNT (CVS for NT) and All About CVS.
Features
CVSNT keeps track of the version history of a project (or set of files).
CVSNT is based on the same client–server architecture as the Concurrent Versions System: a server stores the current version(s) of the project and its history, and clients connect to the server in order to check-out a complete copy of the project, work on this copy and then later check-in their changes. A server may be a caching or proxy server (a read only server that passes on write requests to another se
|
https://en.wikipedia.org/wiki/Halochromism
|
A halochromic material, or pH indicator, is a material which changes colour when pH changes occur. The term 'chromic' is applied to materials that can change colour reversibly in the presence of an external factor; in this case, the factor is pH.
Halochromic substances are suited for use in environments where pH changes occur frequently, or places where changes in pH are extreme. Halochromic substances detect alterations in the acidity of substances, like detection of corrosion in metals.
Halochromic substances may be used as indicators to determine the pH of solutions of unknown pH. The colour obtained is compared with the colour obtained when the indicator is mixed with solutions of known pH. The pH of the unknown solution can then be estimated. Obvious disadvantages of this method include its dependency on the colour sensitivity of the human eye, and that unknown solutions that are already coloured cannot be used.
The colour change of halochromic substances occurs when the chemical binds to hydrogen or hydroxide ions present in solution. Such binding results in changes in the conjugated system of the molecule, or the range over which electrons can flow. This alters the wavelengths of light absorbed, which in turn results in a visible change of colour. Halochromic substances do not display a full range of colour for a full range of pH because, beyond certain acidities, the conjugated system will not change. The various shades result from different concentrations of halochromic molecules with different conjugated systems.
Chromism
|
https://en.wikipedia.org/wiki/Crash%20reporter
|
A crash reporter is usually system software whose function is to identify and report crash details, and to alert when crashes occur in production or in development and testing environments. Crash reports often include data such as stack traces, the type of crash, trends, and the version of the software. These reports help the developers of web, SaaS, and mobile applications to diagnose and fix the underlying problems causing the crashes. Crash reports may contain sensitive information such as passwords, email addresses, and contact information, and so have become objects of interest for researchers in the field of computer security.
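For illustration (a minimal hypothetical sketch, not any particular product), a process-level crash reporter can be as simple as an exception hook that captures exactly these fields:

```python
import sys
import traceback

APP_VERSION = "1.2.3"  # hypothetical version string

def report_crash(exc_type, exc_value, exc_tb):
    """Collect the details that crash reporters typically send."""
    report = {
        "type": exc_type.__name__,
        "message": str(exc_value),
        "stack_trace": "".join(traceback.format_tb(exc_tb)),
        "app_version": APP_VERSION,
    }
    # A real reporter would upload this; here it is only logged locally.
    print("CRASH REPORT:", report, file=sys.stderr)

sys.excepthook = report_crash  # invoked on any uncaught exception
```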
Implementing crash reporting tools as part of the development cycle has become standard practice, and crash reporting tools have become a commodity; many, such as Crashlytics, are offered for free.
Many large companies in the software development ecosystem have entered this market. Companies such as Twitter and Google put considerable effort into encouraging software developers to use their APIs, knowing this will increase their revenues down the road (through advertisements and other mechanisms). Realizing that they must offer solutions for as many development issues as possible, lest their competitors do so first, they keep adding advanced features. Crash reporting is thus an important development function that such companies include in their portfolios of solutions.
Many crash reporting tools specialize in mobile apps, and many are offered as SDKs.
macOS
In macOS there is a standard crash reporter, Crash Reporter.app, which sends the Unix crash logs to Apple for their engineers to review. The top text field of the window shows the crash log, while the bottom field is for user comments. Users may also copy and paste the log into their email client to send it to the application vendor. Crash Reporter.app has 3 main modes: display nothing on crash, display "Application has cras
|
https://en.wikipedia.org/wiki/Language%20binding
|
In programming and software design, binding is an application programming interface (API) that provides glue code specifically made to allow a programming language to use a foreign library or operating system service (one that is not native to that language).
Characteristics
Binding generally refers to a mapping of one thing to another. In the context of software libraries, bindings are wrapper libraries that bridge two programming languages, so that a library written for one language can be used in another language. Many software libraries are written in system programming languages such as C or C++. To use such libraries from another, usually higher-level, language such as Java, Common Lisp, Scheme, Python, or Lua, a binding to the library must be created in that language, possibly requiring recompiling the language's code, depending on the amount of modification needed. However, most languages offer a foreign function interface, such as Python's and OCaml's ctypes, and Embeddable Common Lisp's cffi and uffi.
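For example, a tiny binding from Python to the C math library via the ctypes foreign function interface looks like this (library lookup varies by platform; this sketch assumes a typical Unix-like system):

```python
import ctypes
import ctypes.util

# Locate and load the C math library.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Declare the foreign function's signature so arguments convert correctly.
libm.cos.argtypes = [ctypes.c_double]
libm.cos.restype = ctypes.c_double

print(libm.cos(0.0))  # 1.0 -- calling C's cos() from Python
```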
For example, Python bindings are used when an extant C library, written for some purpose, is to be used from Python. Another example is libsvn which is written in C to provide an API to access the Subversion software repository. To access Subversion from within Java code, libsvnjavahl can be used, which depends on libsvn being installed and acts as a bridge between the language Java and libsvn, thus providing an API that invokes functions from libsvn to do the work.
Major motives to create library bindings include software reuse, to reduce reimplementing a library in several languages, and the difficulty of implementing some algorithms efficiently in some high-level languages.
Runtime environment
Object models
Common Object Request Broker Architecture (CORBA) – cross-platform-language model
Component Object Model (COM) – Microsoft Windows only cross-language model
Distributed Component Object Model (DCOM) – extension enabling COM to work over net
|
https://en.wikipedia.org/wiki/Cant%20%28architecture%29
|
A cant in architecture is an angled (oblique-angled) line or surface that cuts off a corner.
Something with a cant is canted.
Canted facades are typical of, but not exclusive to, Baroque architecture. The angle breaking the facade is less than a right angle, thus enabling a canted facade to be viewed as, and remain, one composition. Bay windows frequently have canted sides.
A cant is sometimes synonymous with chamfer and bevel.
References
Architectural elements
Building engineering
|
https://en.wikipedia.org/wiki/Empirical%20modelling
|
Empirical modelling refers to any kind of (computer) modelling based on empirical observations rather than on mathematically describable relationships of the system modelled.
Empirical Modelling
Empirical Modelling as a variety of empirical modelling
Empirical modelling is a generic term for activities that create models by observation and experiment. Empirical Modelling (with the initial letters capitalised, and often abbreviated to EM) refers to a specific variety of empirical modelling in which models are constructed following particular principles. Though the extent to which these principles can be applied to model-building without computers is an interesting issue (to be revisited below), there are at least two good reasons to consider Empirical Modelling in the first instance as computer-based. Without doubt, computer technologies have had a transformative impact where the full exploitation of Empirical Modelling principles is concerned. What is more, the conception of Empirical Modelling has been closely associated with thinking about the role of the computer in model-building.
An empirical model operates on a simple semantic principle: the maker observes a close correspondence between the behaviour of the model and that of its referent. The crafting of this correspondence can be 'empirical' in a wide variety of senses: it may entail a trial-and-error process, it may be based on computational approximation to analytic formulae, or it may be derived as a black-box relation that affords no insight into 'why it works'.
Empirical Modelling is rooted in the key principle of William James's radical empiricism, which postulates that all knowing is rooted in connections that are given-in-experience. Empirical Modelling aspires to craft the correspondence between the model and its referent in such a way that its derivation can be traced to connections given-in-experience. Making connections in experience is an essentially individual human activity that requires skill a
|
https://en.wikipedia.org/wiki/Methyl%20benzoate
|
Methyl benzoate is an organic compound. It is an ester with the chemical formula C6H5CO2CH3. It is a colorless liquid that is poorly soluble in water, but miscible with organic solvents. Methyl benzoate has a pleasant smell, strongly reminiscent of the fruit of the feijoa tree, and it is used in perfumery. It also finds use as a solvent and as a pesticide used to attract insects such as orchid bees.
Synthesis and reactions
Methyl benzoate is formed by the condensation of methanol and benzoic acid, in the presence of a strong acid.
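Written out (a standard Fischer esterification, shown here for illustration), the acid-catalysed equilibrium is:

C6H5COOH + CH3OH ⇌ C6H5CO2CH3 + H2O

with excess methanol or removal of water used to drive the equilibrium toward the ester.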
Methyl benzoate reacts at both the ring and the ester, depending on the substrate. Electrophiles attack the ring, illustrated by acid-catalysed nitration with nitric acid to give methyl 3-nitrobenzoate. Nucleophiles attack the carbonyl center, illustrated by hydrolysis with addition of aqueous NaOH to give methanol and sodium benzoate.
Occurrence
Methyl benzoate can be isolated from the freshwater fern Salvinia molesta. It is one of many compounds that is attractive to males of various species of orchid bees, which apparently gather the chemical to synthesize pheromones; it is commonly used as bait to attract and collect these bees for study.
Cocaine hydrochloride hydrolyzes in moist air to give methyl benzoate; drug-sniffing dogs are thus trained to detect the smell of methyl benzoate.
Uses
Non-electric heat cost allocators; see DIN EN 835.
References
Flavors
Methyl esters
Benzoate esters
Perfume ingredients
|
https://en.wikipedia.org/wiki/Angular%20diameter%20distance
|
In astronomy, angular diameter distance is a distance defined in terms of an object's physical size, x, and its angular size, θ, as viewed from Earth: d_A = x / θ.
Cosmology dependence
The angular diameter distance depends on the assumed cosmology of the universe. The angular diameter distance to an object at redshift z is expressed in terms of the comoving distance χ as:
d_A = r(χ) / (1 + z)
where r(χ) is the FLRW coordinate defined as:
r(χ) = sin(√(−Ω_k) H_0 χ) / (H_0 √(−Ω_k)) for Ω_k < 0, r(χ) = χ for Ω_k = 0, and r(χ) = sinh(√(Ω_k) H_0 χ) / (H_0 √(Ω_k)) for Ω_k > 0,
where Ω_k is the curvature density and H_0 is the value of the Hubble parameter today.
In the currently favoured geometric model of our Universe, the "angular diameter distance" of an object is a good approximation to the "real distance", i.e. the proper distance when the light left the object.
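As a numerical illustration (a minimal sketch, not from the article; the parameter values H_0 = 70 km/s/Mpc and Ω_m = 0.3 are assumptions chosen for the example), the comoving distance in a flat ΛCDM model can be integrated directly and divided by 1 + z:

```python
import numpy as np

c = 299792.458   # speed of light, km/s
H0 = 70.0        # Hubble parameter today, km/s/Mpc (assumed value)
Om = 0.3         # matter density parameter (assumed); flatness gives OL = 1 - Om

def hubble(z):
    # Expansion rate H(z) in a flat Lambda-CDM model
    return H0 * np.sqrt(Om * (1.0 + z) ** 3 + (1.0 - Om))

def angular_diameter_distance(z, n=10001):
    # Comoving distance chi = c * integral of dz' / H(z'), trapezoid rule
    zs = np.linspace(0.0, z, n)
    f = c / hubble(zs)
    chi = np.sum((f[:-1] + f[1:]) / 2.0) * (zs[1] - zs[0])
    # In the flat case r(chi) = chi, so d_A = chi / (1 + z)
    return chi / (1.0 + z)

print(angular_diameter_distance(1.0))  # roughly 1660 Mpc for these parameters
```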
Angular size redshift relation
The angular size redshift relation describes the relation between the angular size observed on the sky of an object of given physical size, and the object's redshift from Earth (which is related to its distance, d, from Earth). In a Euclidean geometry the relation between size on the sky and distance from Earth would simply be given by the equation:
tan θ = x / d
where θ is the angular size of the object on the sky, x is the size of the object and d is the distance to the object. Where θ is small this approximates to:
θ ≈ x / d.
However, in the ΛCDM model, the relation is more complicated. In this model, objects at redshifts greater than about 1.5 appear larger on the sky with increasing redshift.
This is related to the angular diameter distance, which is the distance an object is calculated to be at from θ and x, assuming the Universe is Euclidean.
The Mattig relation yields the angular-diameter distance, d_A, as a function of redshift z for a universe with Ω_Λ = 0. q_0 is the present-day value of the deceleration parameter, which measures the deceleration of the expansion rate of the Universe; in the simplest models, q_0 < 1/2 corresponds to the case where the Universe will expand forever, q_0 > 1/2 to closed models which will ultimately stop expanding and contract, and q_0 = 1/2 corresponds to the critical case
|
https://en.wikipedia.org/wiki/Noether%20normalization%20lemma
|
In mathematics, the Noether normalization lemma is a result of commutative algebra, introduced by Emmy Noether in 1926. It states that for any field k, and any finitely generated commutative k-algebra A, there exist algebraically independent elements y1, y2, ..., yd in A such that A is a finitely generated module over the polynomial ring S = k[y1, y2, ..., yd]. The integer d is equal to the Krull dimension of the ring A; and if A is an integral domain, d is also the transcendence degree of the field of fractions of A over k.
The theorem has a geometric interpretation. Suppose A is the coordinate ring of an affine variety X, and consider S as the coordinate ring of a d-dimensional affine space A^d. Then the inclusion map S → A induces a surjective finite morphism of affine varieties X → A^d: that is, any affine variety is a branched covering of affine space.
When k is infinite, such a branched covering map can be constructed by taking a general projection from an affine space containing X to a d-dimensional subspace.
More generally, in the language of schemes, the theorem can equivalently be stated as: every affine k-scheme (of finite type) X is finite over an affine n-dimensional space. The theorem can be refined to include a chain of ideals of R (equivalently, closed subsets of X) that are finite over the affine coordinate subspaces of the corresponding dimensions.
The Noether normalization lemma can be used as an important step in proving Hilbert's Nullstellensatz, one of the most fundamental results of classical algebraic geometry. The normalization theorem is also an important tool in establishing the notions of Krull dimension for k-algebras.
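A standard worked example (not specific to this article): let A = k[x, y]/(xy − 1), the coordinate ring of a hyperbola. A is not a finite module over k[x], since y = 1/x is not integral over k[x]; but after the change of variables t = x + y, the relation x(t − x) = xy = 1 gives x² − tx + 1 = 0, so x (and therefore y = t − x) is integral over S = k[t]. Hence A is a finitely generated S-module, with d = 1 = dim A.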
Proof
The following proof is due to Nagata, following Mumford's red book. A more geometric proof is given on page 127 of the red book.
The ring A in the lemma is generated as a k-algebra by some elements, y1, ..., ym. We shall induct on m. If m = 0, then the assertion is trivial. Assume now m > 0. It is enough to show that there is a subring S of A th
|
https://en.wikipedia.org/wiki/Mobile%20daughter%20card
|
The mobile daughter card, also known as an MDC or CDC (communications daughter card), is a notebook version of the AMR slot on the motherboard of a desktop computer. It is designed to interface with special Ethernet (EDC), modem (MDC) or Bluetooth (BDC) cards.
Intel MDC specification 1.0
In 1999, Intel published a specification for mobile audio/modem daughter cards. The document defines a standard connector (AMP* 3-179397-0), mechanical elements including several form factors, and the electrical interface. The 30-pin connector carries power, several audio channels and AC-Link serial data. Up to two AC'97 codecs are supported on such a card.
Several form factors are specified:
45 × 27 mm
45 × 37 mm
55 × 27 mm with RJ11 jack
55 × 37 mm with RJ11 jack
45 × 55 mm
45 × 70 mm
30-pin AMP* 3-179397-0 pinout
See also
Daughter board
External links
intel.com – MDC specification.pdf
Mobile computers
Motherboard expansion slot
|
https://en.wikipedia.org/wiki/Virokine
|
Virokines are proteins encoded by some large DNA viruses that are secreted by the host cell and serve to evade the host's immune system. Such proteins are referred to as virokines if they resemble cytokines, growth factors, or complement regulators; the term viroceptor is sometimes used if the proteins resemble cellular receptors. A third class of virally encoded immunomodulatory proteins consists of proteins that bind directly to cytokines. Due to the immunomodulatory properties of these proteins, they have been proposed as potentially therapeutically relevant to autoimmune diseases.
Mechanism
The primary mechanism of virokine interference with immune signaling is thought to be competitive inhibition of the binding of host signaling molecules to their target receptors. Virokines occupy binding sites on host receptors, thereby inhibiting access by signaling molecules. Viroceptors mimic host receptors and thus divert signaling molecules from finding their targets. Cytokine-binding proteins bind to and sequester cytokines, occluding the binding surface through which they interact with receptors. The effect is to attenuate and subvert host immune response.
Discovery
The term "virokine" was coined by National Institutes of Health virologist Bernard Moss. The early 1990s saw several reports of virally encoded proteins with sequence homology to immune proteins, followed by reports of the cowpox and vaccinia viruses directly interfering with key immune regulator IL1B. The first identified virokine was an epidermal growth factor-like protein found in myxoma viruses.
Much of the early work on virokines involved vaccinia virus, which was discovered to secrete proteins that promote proliferation of neighboring cells and block complement immune activity leading to inflammation.
Evolutionary origins
The immunomodulatory proteins, including virokines, in the poxvirus family have been extensively studied in the context of the evolution of the family. Virokines in this family
|
https://en.wikipedia.org/wiki/Infinite%20regress
|
An infinite regress is an infinite series of entities governed by a recursive principle that determines how each entity in the series depends on or is produced by its predecessor. In the epistemic regress, for example, a belief is justified because it is based on another belief that is justified. But this other belief is itself in need of one more justified belief for itself to be justified and so on. An infinite regress argument is an argument against a theory based on the fact that this theory leads to an infinite regress. For such an argument to be successful, it has to demonstrate not just that the theory in question entails an infinite regress but also that this regress is vicious. There are different ways in which a regress can be vicious. The most serious form of viciousness involves a contradiction in the form of metaphysical impossibility. Other forms occur when the infinite regress is responsible for the theory in question being implausible or for its failure to solve the problem it was formulated to solve. Traditionally, it was often assumed without much argument that each infinite regress is vicious but this assumption has been put into question in contemporary philosophy. While some philosophers have explicitly defended theories with infinite regresses, the more common strategy has been to reformulate the theory in question in a way that avoids the regress. One such strategy is foundationalism, which posits that there is a first element in the series from which all the other elements arise but which is not itself explained this way. Another way is coherentism, which is based on a holistic explanation that usually sees the entities in question not as a linear series but as an interconnected network. Infinite regress arguments have been made in various areas of philosophy. Famous examples include the cosmological argument, Bradley's regress and regress arguments in epistemology.
Definition
An infinite regress is an infinite series of entities governed b
|
https://en.wikipedia.org/wiki/Bell%27s%20law%20of%20computer%20classes
|
Bell's law of computer classes formulated by Gordon Bell in 1972 describes how types of computing systems (referred to as computer classes) form, evolve and may eventually die out. New classes of computers create new applications resulting in new markets and new industries.
Description
Bell considers the law to be partially a corollary to Moore's law, which states that "the number of transistors per chip doubles every 18 months". Unlike Moore's law, a new computer class is usually based on lower cost components that have fewer transistors or fewer bits on a magnetic surface, etc. A new class forms about every decade. It also takes up to a decade to understand how the class formed, evolved, and is likely to continue. Once formed, a lower priced class may evolve in performance to take over and disrupt an existing class. This evolution has caused clusters of scalable personal computers, ranging from one to thousands of machines, to span a price and performance range from the PC, through mainframes, to the largest supercomputers of the day. Scalable clusters became a universal class beginning in the mid-1990s; it was projected that by 2010 clusters of at least one million independent computers would constitute the world's largest cluster.
Definition: Roughly every decade a new, lower priced computer class forms based on a new programming platform, network, and interface resulting in new usage and the establishment of a new industry.
Established market class computers aka platforms are introduced and continue to evolve at roughly a constant price (subject to learning curve cost reduction) with increasing functionality (or performance) based on Moore's law that gives more transistors per chip, more bits per unit area, or increased functionality per system. Roughly every decade, technology advances in semiconductors, storage, networks, and interfaces enable the emergence of a new, lower-cost computer class (aka "platform") to serve a new need that is enabled by smaller devices (e.g. more transisto
|
https://en.wikipedia.org/wiki/Atomic%20model%20%28mathematical%20logic%29
|
In model theory, a subfield of mathematical logic, an atomic model is a model such that the complete type of every tuple is axiomatized by a single formula. Such types are called principal types, and the formulas that axiomatize them are called complete formulas.
Definitions
Let T be a theory. A complete type p(x1, ..., xn) is called principal or atomic (relative to T) if it is axiomatized relative to T by a single formula φ(x1, ..., xn) ∈ p(x1, ..., xn).
A formula φ is called complete in T if for every formula ψ(x1, ..., xn), the theory T ∪ {φ} entails exactly one of ψ and ¬ψ.
It follows that a complete type is principal if and only if it contains a complete formula.
A model M is called atomic if every n-tuple of elements of M satisfies a formula that is complete in Th(M)—the theory of M.
Examples
The ordered field of real algebraic numbers is the unique atomic model of the theory of real closed fields.
Any finite model is atomic.
A dense linear ordering without endpoints is atomic; a worked instance follows this list.
Any prime model of a countable theory is atomic by the omitting types theorem.
Any countable atomic model is prime, but there are plenty of atomic models that are not prime, such as an uncountable dense linear order without endpoints.
The theory of a countable number of independent unary relations is complete but has no completable formulas and no atomic models.
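To make the dense-linear-order example concrete (a standard argument, assuming quantifier elimination for the theory of dense linear orders without endpoints): every formula in x1, x2 is equivalent, modulo the theory, to a Boolean combination of x1 < x2, x1 = x2 and x2 < x1. For a pair a1 < a2 in (Q, <), the single formula x1 < x2 therefore decides every formula, i.e. it is complete, so the type of (a1, a2) is principal; the same reasoning applies to tuples of any length.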
Properties
The back-and-forth method can be used to show that any two countable atomic models of a theory that are elementarily equivalent are isomorphic.
Notes
References
Model theory
|
https://en.wikipedia.org/wiki/Disjunctive%20sum
|
In the mathematics of combinatorial games, the sum or disjunctive sum of two games is a game in which the two games are played in parallel, with each player being allowed to move in just one of the games per turn. The sum game finishes when there are no moves left in either of the two parallel games, at which point (in normal play) the last player to move wins.
This operation may be extended to disjunctive sums of any number of games, again by playing the games in parallel and moving in exactly one of the games per turn. It is the fundamental operation that is used in the Sprague–Grundy theorem for impartial games and which led to the field of combinatorial game theory for partisan games.
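For impartial games, the Sprague–Grundy theorem reduces a disjunctive sum to arithmetic: each component receives a Grundy number, and the Grundy number of the sum is the bitwise XOR of the components' numbers. The following minimal sketch (using Nim heaps, where the Grundy number of a heap works out to its size) illustrates the reduction:

```python
from functools import lru_cache

def mex(values):
    """Minimum excludant: the smallest non-negative integer not in values."""
    n = 0
    while n in values:
        n += 1
    return n

@lru_cache(maxsize=None)
def grundy_nim(heap):
    # A move in Nim takes 1..heap stones, leaving heaps 0..heap-1
    return mex({grundy_nim(h) for h in range(heap)})

def first_player_wins(heaps):
    g = 0
    for h in heaps:
        g ^= grundy_nim(h)  # the sum's Grundy number is the XOR of components
    return g != 0

print(first_player_wins((1, 2, 3)))  # False: 1 ^ 2 ^ 3 == 0, a second-player win
```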
Application to common games
Disjunctive sums arise in games that naturally break up into components or regions that do not interact except in that each player in turn must choose just one component to play in. Examples of such games are Go, Nim, Sprouts, Domineering, the Game of the Amazons, and the map-coloring games.
In such games, each component may be analyzed separately for simplifications that do not affect its outcome or the outcome of its disjunctive sum with other games. Once this analysis has been performed, the components can be combined by taking the disjunctive sum of two games at a time, combining them into a single game with the same outcome as the original game.
Mathematics
The sum operation was formalized by Conway (1976). It is a commutative and associative operation: if two games are combined, the outcome is the same regardless of what order they are combined, and if more than two games are combined, the outcome is the same regardless of how they are grouped.
The negation −G of a game G (the game formed by trading the roles of the two players) forms an additive inverse under disjunctive sums: the game G + −G is a zero game (won by whoever goes second) using a simple echoing strategy in which the second player repeatedly copies the first player's move in the other game
|
https://en.wikipedia.org/wiki/Foxconn
|
Hon Hai Precision Industry Co., Ltd., trading as Hon Hai Technology Group in China and Taiwan and Foxconn internationally, is a Taiwanese multinational electronics contract manufacturer established in 1974 with headquarters in Tucheng, New Taipei City, Taiwan. The company was ranked 20th in the 2023 Fortune Global 500 and is the world's largest contract manufacturer of electronics. While headquartered in Taiwan, it earns the majority of its revenue from assets in China and is one of the largest employers worldwide. Terry Gou is the company founder and former chairman.
Foxconn manufactures electronic products for major American, Canadian, Chinese, Finnish, and Japanese companies. Notable products manufactured by Foxconn include the BlackBerry, iPad, iPhone, iPod, Kindle, all Nintendo gaming systems since the GameCube, Nintendo DS models, Sega models, Nokia devices, Cisco products, Sony devices (mostly PlayStation gaming consoles), Google Pixel devices, Xiaomi devices, every successor to Microsoft's Xbox console, and several CPU sockets, including the TR4 CPU socket on some motherboards. As of 2012, Foxconn factories manufactured an estimated 40% of all consumer electronics sold worldwide.
Foxconn named Young Liu its new chairman after the retirement of founder Terry Gou, effective on 1 July 2019. Young Liu was the special assistant to former chairman Terry Gou and the head of business group S (semiconductor). Analysts said the handover signals the company's future direction, underscoring the importance of semiconductors, together with technologies like artificial intelligence, robotics, and autonomous driving, after Foxconn's traditional major business of smartphone assembly has matured.
History
Terry Gou established Hon Hai Precision Industry Co., Ltd. as an electrical components manufacturer in 1974 in Taipei, Taiwan. Foxconn's first manufacturing plant in Mainland China opened in Longhua Town, Shenzh
|
https://en.wikipedia.org/wiki/Arming%20plug
|
An arming plug is a small plug that is fitted into flight hardware to enable functions that, for instrument or personnel safety, should not be activated before flight. In the case of a missile or bomb, the (lack of the) arming plug prevents explosion before flight; in the case of a spacecraft or scientific sounding rocket, it might prevent premature firing of a hydrazine thruster system (hydrazine is extremely toxic) or block cryogenic or photographic film systems from operating before launch.
References
Aerospace engineering
Aircraft components
Rocketry
Spacecraft components
Safety equipment
|
https://en.wikipedia.org/wiki/Meagher%20Electronics
|
Meagher Electronics was a Monterey, California, company which was founded in 1947 by Jim Meagher. It included a recording studio which recorded early demos for Joan Baez, her sister, Mimi Farina and her sister's husband, Richard Farina. The company also repaired all sorts of home entertainment equipment, focusing on professional and semi professional sound equipment and high end home systems. It had a huge, high warehouse space in which literally hundreds of old wooden console radios and phonographs dating back to the 1920s were stacked to the rafters. Meagher used to explain that these had been left by customers who chose not to pick them up instead of paying the repair estimate charges.
The firm was also a commercial sound installation company and one of the first Altec Lansing dealers in the country, with catalogues and equipment going back to 1947. It provided the sound system for most of the concerts and live events in the Monterey area in the 1950s through the 1970s, from folk to jazz to Roger Williams. It also recorded the first gold jazz album, Erroll Garner's Concert by the Sea, in the mid-1950s on a portable mono Ampex 601 tape recorder which remained a prized possession for many years.
The Monterey Jazz Festival contracted Meagher to provide their sound reinforcement system from the beginning of its existence in 1958 and the firm was extremely conscientious about providing the best quality sound possible, often using recording quality condenser microphones and custom designed loudspeaker arrays. The company supplied sound reinforcement systems for the Big Sur Folk Festivals and assisted Harry McCune Sound from San Francisco and their sound designer Abe Jacob, who was contracted to provide the sound system for the Monterey Pop Festival in 1967.
References
Audio engineering
Companies based in Monterey County, California
Electronics companies established in 1947
1947 establishments in California
|
https://en.wikipedia.org/wiki/Adaptive%20beamformer
|
An adaptive beamformer is a system that performs adaptive spatial signal processing with an array of transmitters or receivers. The signals are combined in a manner which increases the signal strength to/from a chosen direction. Signals to/from other directions are combined in a benign or destructive manner, resulting in degradation of the signal to/from the undesired direction. This technique is used in both radio frequency and acoustic arrays, and provides for directional sensitivity without physically moving an array of receivers or transmitters.
Motivation/Applications
Adaptive beamforming was initially developed in the 1960s for the military applications of sonar and radar. There are several modern applications of beamforming, one of the most visible being commercial wireless networks such as LTE. Initial applications of adaptive beamforming were largely focused on radar and electronic countermeasures to mitigate the effect of signal jamming in the military domain.
For radar uses, see Phased array radar. Although not strictly adaptive, these radar applications make use of either static or dynamic (scanning) beamforming.
Commercial wireless standards such as 3GPP Long Term Evolution (LTE (telecommunication)) and IEEE 802.16 WiMax rely on adaptive beamforming to enable essential services within each standard.
Basic Concepts
An adaptive beamforming system relies on principles of wave propagation and phase relationships; see Constructive interference and Beamforming. Using the principle of superposition of waves, a higher or lower amplitude wave is created (e.g. by delaying and weighting the signal received). The adaptive beamforming system dynamically adapts in order to maximize or minimize a desired parameter, such as the signal-to-interference-plus-noise ratio.
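The following minimal sketch (parameter values assumed for illustration) shows the weight-and-sum idea for a uniform linear array; the weights here are fixed, conventional ones, whereas an adaptive beamformer would derive them from the received data (e.g. to maximize SINR):

```python
import numpy as np

c = 343.0   # propagation speed, m/s (an acoustic example; value assumed)
f = 1000.0  # narrowband frequency, Hz (assumed)
d = 0.1     # element spacing, m (assumed)
N = 8       # number of array elements

def steering_vector(theta_deg):
    # Relative phases seen across the array for a plane wave from theta
    theta = np.deg2rad(theta_deg)
    k = 2.0 * np.pi * f / c
    return np.exp(-1j * k * d * np.arange(N) * np.sin(theta))

def beamform(snapshot, theta_deg):
    # Weight-and-sum toward theta; fixed (non-adaptive) weights for clarity
    w = steering_vector(theta_deg) / N
    return np.vdot(w, snapshot)

x = steering_vector(20.0)       # unit plane wave arriving from 20 degrees
print(abs(beamform(x, 20.0)))   # ~1.0: steered at the source
print(abs(beamform(x, -40.0)))  # much smaller: steered away from it
```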
Adaptive Beamforming Schemes
There are several ways to approach the beamforming design, the first approach was implemented by maximizing the signal to noise ratio (SNR) by Appleb
|
https://en.wikipedia.org/wiki/Information%20privacy%20law
|
Information privacy, data privacy or data protection laws provide a legal framework on how to obtain, use and store data of natural persons. The various laws around the world describe the rights of natural persons to control who is using their data. This usually includes the right to learn which data is stored about oneself and for what purpose, and the right to request deletion when that purpose no longer applies.
Over 80 countries and independent territories, including nearly every country in Europe and many in Latin America and the Caribbean, Asia, and Africa, have now adopted comprehensive data protection laws. The European Union has the General Data Protection Regulation (GDPR), in force since May 25, 2018. The United States is notable for not having adopted a comprehensive information privacy law, but rather having adopted limited sectoral laws in some areas like the California Consumer Privacy Act (CCPA).
By Jurisdiction
The German state of Hesse enacted the world's first data privacy law on 30 September 1970. In Germany the term informational self-determination was first used in the context of a German constitutional ruling relating to personal information collected during the 1983 census.
Asia
India
India passed its Digital Personal Data Protection Act, 2023 in August 2023.
China
China passed its Personal Information Protection Law (PIPL) in mid-2021, and it took effect on November 1, 2021. It focuses heavily on consent, rights of the individual, and transparency of data processing. PIPL has been compared to the EU GDPR as it has similar scope and many similar provisions.
Philippines
In the Philippines, The Data Privacy Act of 2012 mandated the creation of the National Privacy Commission that would monitor and maintain policies that involve information privacy and personal data protection in the country. Modeled after the EU Data Protection Directive and the Asia-Pacific Economic Cooperation (APEC) Privacy Framework, the independent body would ensure compliance of t
|
https://en.wikipedia.org/wiki/Map-coloring%20games
|
Several map-coloring games are studied in combinatorial game theory. The general idea is that we are given a map with regions drawn in but not all of the regions colored. Two players, Left and Right, take turns coloring in one uncolored region per turn, subject to various constraints, as in the map-coloring problem. The move constraints and the winning condition are features of the particular game.
Some players find it easier to color vertices of the dual graph, as in the Four color theorem. In this method of play, the regions are represented by small circles, and the circles for neighboring regions are linked by line segments or curves. The advantages of this method are that only a small area need be marked on a turn, and that the representation usually takes up less space on the paper or screen. The first advantage is less important when playing with a computer interface instead of pencil and paper. It is also possible to play with Go stones or Checkers.
Move constraints
An inherent constraint in each game is the set of colors available to the players in coloring regions. If Left and Right have the same colors available to them, the game is impartial; otherwise the game is partisan. The set of colors could also depend on the state of the game; for instance it could be required that the color used be different from the color used on the previous move.
The map-based constraints on a move are usually based on the region to be colored and its neighbors, whereas in the map-coloring problem, regions are considered to be neighbors when they meet along a boundary longer than a single point. The classical map-coloring problem requires that no two neighboring regions be given the same color. The classical move constraint enforces this by prohibiting coloring a region with the same color as one of its neighbors. The anticlassical constraint prohibits coloring a region with a color that differs from the color of one of its neighbors.
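For illustration, a minimal sketch of the classical constraint, played on the dual graph (the data structures and names are hypothetical):

```python
def legal_moves(adjacency, coloring, colors):
    """Yield (region, color) moves allowed by the classical constraint:
    a region may not receive the color of any already-colored neighbor."""
    for region, neighbors in adjacency.items():
        if region in coloring:
            continue  # already colored
        taken = {coloring[n] for n in neighbors if n in coloring}
        for color in colors:
            if color not in taken:
                yield region, color

# Dual-graph representation of three mutually adjacent regions
adjacency = {"A": ["B", "C"], "B": ["A", "C"], "C": ["A", "B"]}
print(list(legal_moves(adjacency, {"A": "red"}, ["red", "blue"])))
# [('B', 'blue'), ('C', 'blue')]
```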
Another kind of constrain
|
https://en.wikipedia.org/wiki/Working%20set
|
Working set is a concept in computer science which defines the amount of memory that a process requires in a given time interval.
Definition
Peter Denning (1968) defines "the working set of information W(t, τ) of a process at time t to be the collection of information referenced by the process during the process time interval (t − τ, t)".
Typically the units of information in question are considered to be memory pages. This is suggested to be an approximation of the set of pages that the process will access in the future (say during the next τ time units), and more specifically is suggested to be an indication of what pages ought to be kept in main memory to allow most progress to be made in the execution of that process.
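On a recorded page-reference string this definition is directly computable; the following minimal sketch (with a made-up reference string) returns the set of distinct pages touched in the last τ references:

```python
def working_set(reference_string, t, tau):
    """Distinct pages referenced during the interval (t - tau, t]."""
    start = max(0, t - tau)
    return set(reference_string[start:t])

refs = [1, 2, 1, 3, 4, 2, 1, 5, 1, 2]   # page-reference string (made up)
print(working_set(refs, t=8, tau=4))    # {1, 2, 4, 5}
```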
Rationale
The effect of the choice of what pages to be kept in main memory (as distinct from being paged out to auxiliary storage) is important: if too many pages of a process are kept in main memory, then fewer other processes can be ready at any one time. If too few pages of a process are kept in main memory, then its page fault frequency is greatly increased and the number of active (non-suspended) processes currently executing in the system approaches zero.
The working set model states that a process can be in RAM if and only if all of the pages that it is currently using (often approximated by the most recently used pages) can be in RAM. The model is an all-or-nothing model: if the number of pages the process needs grows and there is no room in RAM, the process is swapped out of memory to free the memory for other processes to use.
Often a heavily loaded computer has so many processes queued up that, if all the processes were allowed to run for one scheduling time slice, they would refer to more pages than there is RAM, causing the computer to "thrash".
By swapping some processes from memory, the result is that processes—even processes that were temporarily removed from memory—finish much sooner than they would if the computer attempted to run them all at
|
https://en.wikipedia.org/wiki/Mathematics%20Subject%20Classification
|
The Mathematics Subject Classification (MSC) is an alphanumerical classification scheme that has collaboratively been produced by staff of, and based on the coverage of, the two major mathematical reviewing databases, Mathematical Reviews and Zentralblatt MATH. The MSC is used by many mathematics journals, which ask authors of research papers and expository articles to list subject codes from the Mathematics Subject Classification in their papers. The current version is MSC2020.
Structure
The MSC is a hierarchical scheme, with three levels of structure. A classification can be two, three or five digits long, depending on how many levels of the classification scheme are used.
The first level is represented by a two-digit number, the second by a letter, and the third by another two-digit number. For example:
53 is the classification for differential geometry
53A is the classification for classical differential geometry
53A45 is the classification for vector and tensor analysis
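The three-level structure can be checked mechanically; the following simplified sketch (it validates only the basic shape, ignoring finer rules such as which letters are legal in each discipline) splits a code into its levels:

```python
import re

# Two digits, then optionally one capital letter, then optionally two
# more digits (or "xx" as a wildcard, as in 53Axx).
MSC_RE = re.compile(r"^(\d{2})([A-Z])?(\d{2}|xx)?$")

def parse_msc(code):
    m = MSC_RE.match(code)
    if not m:
        raise ValueError(f"not an MSC code: {code}")
    first, second, third = m.groups()
    return {"discipline": first, "area": second, "topic": third}

print(parse_msc("53A45"))
# {'discipline': '53', 'area': 'A', 'topic': '45'}
```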
First level
At the top level, 64 mathematical disciplines are labeled with a unique two-digit number. In addition to the typical areas of mathematical research, there are top-level categories for "History and Biography", "Mathematics Education", and for the overlap with different sciences. Physics (i.e. mathematical physics) is particularly well represented in the classification scheme with a number of different categories including:
Fluid mechanics
Quantum mechanics
Geophysics
Optics and electromagnetic theory
All valid MSC classification codes must have at least the first-level identifier.
Second level
The second-level codes are a single letter from the Latin alphabet. These represent specific areas covered by the first-level discipline. The second-level codes vary from discipline to discipline.
For example, for differential geometry, the top-level code is 53, and the second-level codes are:
A for classical differential geometry
B for local differential geometry
C for glo
|
https://en.wikipedia.org/wiki/Frangipane
|
Frangipane is a sweet almond-flavored custard, typical in French pastry, used in a variety of ways, including cakes and such pastries as the Bakewell tart, conversation tart, Jésuite and pithivier. A French spelling from a 1674 cookbook is franchipane, with the earliest modern spelling coming from a 1732 confectioners' dictionary. Originally designating a custard tart flavored by almonds or pistachios, it came later to designate a filling that could be used in a variety of confections and baked goods.
It is traditionally made by combining two parts of almond cream (crème d’amande) with one part pastry cream (crème pâtissière). Almond cream is made from butter, sugar, eggs, almond meal, bread flour, and rum; and pastry cream is made from whole milk, vanilla bean, cornstarch, sugar, egg yolks or whole eggs, and butter. There are many variations on both of these creams as well as on the proportion of almond cream to pastry cream in frangipane.
On Epiphany, the French cut the king cake, a round cake made of frangipane layers, into slices to be distributed by a child known as le petit roi (the little king), who is usually hiding under the dining table. The cake is decorated with stars, a crown, flowers and a special bean hidden inside. Whoever gets the piece of the frangipane cake with the bean is crowned "king" or "queen" for the following year.
Etymology
The word frangipane is a French term used to name products with an almond flavour. The word comes ultimately from the last name of Marquis Muzio Frangipani or Cesare Frangipani. The word first denoted the frangipani plant, from which was produced the perfume originally said to flavor frangipane. Other sources say that the name as applied to the almond custard was an homage by 16th-century Parisian chefs in name only to Frangipani, who created a jasmine-based perfume with a smell like the flowers to perfume leather gloves.
See also
List of almond dishes
List of custard desserts
List of pastries
|
https://en.wikipedia.org/wiki/Environmental%20change
|
Environmental change is a change or disturbance of the environment most often caused by human influences and natural ecological processes. Environmental changes include various factors, such as natural disasters, human interferences, or animal interaction. Environmental change encompasses not only physical changes, but also factors like an infestation of invasive species.
See also
Climate change (general concept)
Environmental degradation
Global warming
Human impact on the environment
Acclimatization
Atlas of Our Changing Environment
Phenotypic plasticity
Socioeconomics
References
Ecology
|
https://en.wikipedia.org/wiki/Null%20%28mathematics%29
|
In mathematics, the word null (from German null, meaning "zero", which is from Latin nullus, meaning "none") is often associated with the concept of zero or the concept of nothing. It is used in varying contexts from "having zero members in a set" (e.g., null set) to "having a value of zero" (e.g., null vector).
In a vector space, the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector, a linear mapping given as matrix product or dot product, a seminorm in a Minkowski space, etc.). In set theory, the empty set, that is, the set with zero elements, denoted "{}" or "∅", may also be called null set. In measure theory, a null set is a (possibly nonempty) set with zero measure.
A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel, is the set of vectors which map to the null vector under that mapping.
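For a concrete instance (a minimal sketch using SciPy; the matrix is chosen arbitrarily):

```python
import numpy as np
from scipy.linalg import null_space

A = np.array([[1.0, 2.0],
              [2.0, 4.0]])     # rank 1, so the kernel is one-dimensional

ns = null_space(A)             # columns form an orthonormal basis of the kernel
print(np.allclose(A @ ns, 0))  # True: kernel vectors map to the null vector
```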
In statistics, a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise.
See also
0
Null sign
References
Mathematical terminology
0 (number)
|
https://en.wikipedia.org/wiki/Rent%27s%20rule
|
Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of "pins") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates).
E. F. Rent's discovery and first publications
In the 1960s, E. F. Rent, an IBM employee, found a remarkable trend between the number of pins (terminals, T) at the boundaries of integrated circuit designs at IBM and the number of internal components (g), such as logic gates or standard cells. On a log–log plot, these datapoints were on a straight line, implying a power-law relation T = t g^p, where t and p are constants (p < 1.0, and generally 0.5 < p < 0.8).
Rent's findings in IBM-internal memoranda were published in the IBM Journal of Research and Development in 2005, but the relation was described in 1971 by Landman and Russo. They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. They found the power-law rule applied to the resulting T versus g plot and named it "Rent's rule".
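Estimating the Rent exponent from partitioning data amounts to fitting a straight line on the log–log plot; the following minimal sketch uses made-up (g, T) pairs:

```python
import numpy as np

# Illustrative (made-up) partitioning data: components g and terminals T
g = np.array([10, 100, 1000, 10000])
T = np.array([12, 48, 190, 760])

# A straight line on a log-log plot: log T = p * log g + log t
p, log_t = np.polyfit(np.log(g), np.log(T), 1)
print(f"T ~ {np.exp(log_t):.2f} * g^{p:.2f}")
```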
Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures.
Theoretical basis
Christie and Stroobandt later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved in placement is reflected by the paramete
|
https://en.wikipedia.org/wiki/Charismatic%20megafauna
|
Charismatic megafauna are animal species that are large—in the relevant category that they represent—with symbolic value or widespread popular appeal, and are often used by environmental activists to gain public support for environmentalist goals. Examples include tigers, lions, jaguars, hippopotamuses, elephants, gorillas, chimpanzees, giant pandas, brown and polar bears, rhinoceroses, kangaroos, koalas, blue whales, humpback whales, orcas, walruses, elephant seals, bald eagles, white-tailed and eastern imperial eagles, penguins, crocodiles and great white sharks among countless others. In this definition, animals such as penguins or bald eagles can be considered megafauna because they are among the largest animals within the local animal community, and they disproportionately affect their environment. The vast majority of charismatic megafauna species are threatened and endangered by overhunting, poaching, black market trade, climate change, habitat destruction, invasive species, and many more causes.
Use in conservation
Charismatic species are often used as flagship species in conservation programs, as they are supposed to affect people's feelings more. However, being charismatic does not protect species against extinction; all of the 10 most charismatic species are currently endangered, and only the giant panda shows a demographic growth from an extremely small population.
Beginning early in the 20th century, efforts to reintroduce extirpated charismatic megafauna to ecosystems have been an interest of a number of private and non-government conservation organizations. Species have been reintroduced from captive breeding programs in zoos, such as the wisent (the European bison) to Poland's Białowieża Forest.
These and other reintroductions of charismatic megafauna, such as Przewalski's horse to Mongolia, have been to areas of limited, and often patchy, range compared to the historic ranges of the respective species.
Environmental activists and proponents of
|
https://en.wikipedia.org/wiki/Preferential%20entailment
|
Preferential entailment is a non-monotonic logic based on selecting only models that are considered the most plausible. The plausibility of models is expressed by an ordering among models called a preference relation, hence the name preference entailment.
Formally, given a propositional formula A and an ordering ≤ over propositional models, preferential entailment selects only the models of A that are minimal according to ≤. This selection leads to a non-monotonic inference relation: A ⊨_≤ B holds if and only if all minimal models of A according to ≤ are also models of B.
Circumscription can be seen as the particular case of preferential entailment when the ordering is based on containment of the sets of variables assigned to true (in the propositional case) or containment of the extensions of predicates (in the first-order logic case).
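A brute-force sketch over propositional models (illustrative only; formulas are represented as Python predicates, and the ordering is the circumscription-style one just described, preferring models with fewer variables true) makes the definition concrete:

```python
from itertools import product

def models(formula, variables):
    """All truth assignments (as dicts) satisfying the given formula."""
    for bits in product([False, True], repeat=len(variables)):
        m = dict(zip(variables, bits))
        if formula(m):
            yield m

def preferentially_entails(premise, conclusion, variables):
    ms = list(models(premise, variables))
    # a <= b iff the variables true in a are a subset of those true in b
    def leq(a, b):
        return all(b[v] for v in variables if a[v])
    minimal = [m for m in ms if not any(leq(o, m) and o != m for o in ms)]
    return all(conclusion(m) for m in minimal)

V = ["p", "q"]
premise = lambda m: m["p"] or m["q"]
conclusion = lambda m: not (m["p"] and m["q"])
print(preferentially_entails(premise, conclusion, V))  # True: the minimal
# models are {p} and {q}, although the entailment fails classically
```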
See also
References
Logic in computer science
Knowledge representation
Non-classical logic
|
https://en.wikipedia.org/wiki/Software%20token
|
A software token (a.k.a. soft token) is a piece of a two-factor authentication security device that may be used to authorize the use of computer services. Software tokens are stored on a general-purpose electronic device such as a desktop computer, laptop, PDA, or mobile phone and can be duplicated. (Contrast hardware tokens, where the credentials are stored on a dedicated hardware device and therefore cannot be duplicated, absent physical invasion of the device.)
Because software tokens are something one does not physically possess, they are exposed to unique threats based on duplication of the underlying cryptographic material - for example, computer viruses and software attacks. Both hardware and software tokens are vulnerable to bot-based man-in-the-middle attacks, or to simple phishing attacks in which the one-time password provided by the token is solicited, and then supplied to the genuine website in a timely manner. Software tokens do have benefits: there is no physical token to carry, they do not contain batteries that will run out, and they are cheaper than hardware tokens.
Security architecture
There are two primary architectures for software tokens: shared secret and public-key cryptography.
For a shared secret, an administrator will typically generate a configuration file for each end-user. The file will contain a username, a personal identification number, and the secret. This configuration file is given to the user.
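A time-based one-time password in the style of RFC 6238 is a common realization of the shared-secret architecture; the following minimal sketch derives a six-digit code from the secret and the current time (the secret shown is a placeholder):

```python
import hmac, hashlib, struct, time

def totp(secret: bytes, period: int = 30, digits: int = 6) -> str:
    counter = int(time.time()) // period            # time step number
    msg = struct.pack(">Q", counter)                # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp(b"secret-from-the-configuration-file"))
```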
The shared secret architecture is potentially vulnerable in a number of areas. The configuration file can be compromised if it is stolen and the token is copied. With time-based software tokens, it is possible to borrow an individual's PDA or laptop, set the clock forward, and generate codes that will be valid in the future. Any software token that uses shared secrets and stores the PIN alongside the shared secret in a software client can be stolen and subjected to offline attacks. Shared secret tokens can be difficult to distribu
|
https://en.wikipedia.org/wiki/Combined%20sewer
|
A combined sewer is a type of gravity sewer with a system of pipes, tunnels, pump stations etc. to transport sewage and urban runoff together to a sewage treatment plant or disposal site. This means that during rain events, the sewage gets diluted, resulting in higher flowrates at the treatment site. Uncontaminated stormwater simply dilutes sewage, but runoff may dissolve or suspend virtually anything it contacts on roofs, streets, and storage yards. As rainfall travels over roofs and the ground, it may pick up various contaminants including soil particles and other sediment, heavy metals, organic compounds, animal waste, and oil and grease. Combined sewers may also receive dry weather drainage from landscape irrigation, construction dewatering, and washing buildings and sidewalks.
Combined sewers can cause serious water pollution problems during combined sewer overflow (CSO) events when combined sewage and surface runoff flows exceed the capacity of the sewage treatment plant, or of the maximum flow rate of the system which transmits the combined sources. In instances where exceptionally high surface runoff occurs (such as large rainstorms), the load on individual tributary branches of the sewer system may cause a back-up to a point where raw sewage flows out of input sources such as toilets, causing inhabited buildings to be flooded with a toxic sewage-runoff mixture, incurring massive financial burdens for cleanup and repair. When combined sewer systems experience these higher than normal throughputs, relief systems cause discharges containing human and industrial waste to flow into rivers, streams, or other bodies of water. Such events frequently cause both negative environmental and lifestyle consequences, including beach closures, contaminated shellfish unsafe for consumption, and contamination of drinking water sources, rendering them temporarily unsafe for drinking and requiring boiling before uses such as bathing or washing dishes.
Mitigation of combined
|
https://en.wikipedia.org/wiki/Delay-locked%20loop
|
In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line.
A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock and data recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit.
The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal.
Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block.
A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements.
The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required.
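A behavioral sketch of this loop (arbitrary units, not a circuit-level model): the phase error is accumulated by the integrator until the delay settles where the error is zero.

```python
def dll_lock(target_delay, gain=0.1, steps=60):
    """Behavioral sketch: integrate the phase error to steer the delay line."""
    delay = 0.0                       # initial delay-line control value
    for _ in range(steps):
        error = target_delay - delay  # phase detector output
        delay += gain * error         # integrator accumulates the correction
    return delay

print(dll_lock(2.5))  # converges toward 2.5, where the error is zero
```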
A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself.
In the Control Systems jargon, th
|
https://en.wikipedia.org/wiki/Teleradiology
|
Teleradiology is the transmission of radiological patient images from procedures such as x-ray photographs, computed tomography (CT), and magnetic resonance imaging (MRI), from one location to another for the purposes of sharing studies with other radiologists and physicians. Teleradiology allows radiologists to provide services without actually having to be at the location of the patient. This is particularly important when a sub-specialist such as an MRI radiologist, neuroradiologist, pediatric radiologist, or musculoskeletal radiologist is needed, since these professionals are generally only located in large metropolitan areas working during daytime hours. Teleradiology allows for specialists to be available at all times.
Teleradiology utilizes standard network technologies such as the Internet, telephone lines, wide area networks, local area networks (LAN) and the latest advanced technologies such as medical cloud computing. Specialized software is used to transmit the images and enable the radiologist to effectively analyze potentially hundreds of images of a given study. Technologies such as advanced graphics processing, voice recognition, artificial intelligence, and image compression are often used in teleradiology. Through teleradiology and mobile DICOM viewers, images can be sent to another part of the hospital or to other locations around the world with equal effort.
Teleradiology is a growth technology given that imaging procedures are growing approximately 15% annually against an increase of only 2% in the radiologist population.
Reports
Teleradiologists can provide a preliminary read for emergency room cases and other emergency cases or a final read for the official patient record and for use in billing.
Preliminary reports include all pertinent findings and a telephone call for any critical findings. For some teleradiology services, the turnaround time is rapid with a 30-minute standard turnaround and expedited for critical and stroke studies.
Teleradiology final
|
https://en.wikipedia.org/wiki/Q%20meter
|
A Q meter is a piece of equipment used in the testing of radio frequency circuits. It has been largely replaced in professional laboratories by other types of impedance measuring devices, though it is still in use among radio amateurs. It was developed at Boonton Radio Corporation in Boonton, New Jersey in 1934 by William D. Loughlin.
Description
A Q meter measures the quality factor of a circuit, Q, which expresses how much energy is dissipated per cycle in a non-ideal reactive circuit:
Q = 2π × (peak energy stored) / (energy dissipated per cycle)
This expression applies to an RF and microwave filter, bandpass LC filter, or any resonator. It also can be applied to an inductor or capacitor at a chosen frequency. For inductors:
Q = X_L / R = ωL / R
where X_L is the reactance of the inductor, L is the inductance, ω is the angular frequency and R is the resistance of the inductor. The resistance represents the loss in the inductor, mainly due to the resistance of the wire. A Q meter works on the principle of series resonance.
For LC band pass circuits and filters:
Q = f_0 / Δf
where f_0 is the resonant frequency (center frequency) and Δf is the filter bandwidth. In a band pass filter using an LC resonant circuit, when the loss (resistance) of the inductor increases, its Q factor is reduced, and so the bandwidth of the filter is increased. In a coaxial cavity filter, there are no inductors and capacitors, but the cavity has an equivalent LC model with losses (resistance) and the Q factor can be applied as well.
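As a worked numerical instance (component values assumed for illustration): an inductor with L = 10 μH and wire resistance R = 5 Ω, measured at f = 10 MHz, has X_L = 2πfL ≈ 628 Ω, giving Q = X_L / R ≈ 126.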
Operation
Internally, a minimal Q meter consists of a tuneable RF generator with a very low impedance output and a detector with a very high impedance input. There is usually provision to add a calibrated amount of high Q capacitance across the component under test to allow inductors to be measured in isolation. The generator is effectively placed in series with the tuned circuit formed by the components under test, and having negligible output resistance, does not materially affect the Q factor, while the detector measures the voltage developed across o
|
https://en.wikipedia.org/wiki/Big%20Numbers%20%28comics%29
|
Big Numbers is an unfinished graphic novel by writer Alan Moore and artist Bill Sienkiewicz. In 1990 Moore's short-lived imprint Mad Love published two of the planned twelve issues. The series was picked up by Kevin Eastman's Tundra Publishing, but the completed third issue did not print, and the remaining issues, whose artwork was to be handled by Sienkiewicz's assistant Al Columbia, were never finished.
The work marks a move, on Moore's part, away from genre fiction, in the wake of the success of Watchmen. Moore weaves mathematics (in particular the work of mathematician Benoit Mandelbrot on fractal geometry and chaos theory) into a narrative of socioeconomic changes wrought by an American corporation's building of a shopping mall in a small, traditional English town, and the effects of the economic policies of the Margaret Thatcher administration in the 1980s.
Publication history
The planned 500-page graphic novel was to be serialised one chapter at a time over twelve issues. The series was printed on high-quality paper in an unusual square format.
The first two issues were produced by Alan Moore's self-publishing company Mad Love, with writing by Moore and artwork by Bill Sienkiewicz. However, the workload for the comic was intense, and Sienkiewicz stalled. By the time he backed out of the series, the third issue was still incomplete and rising overhead had crippled the production. Kevin Eastman, creator of Teenage Mutant Ninja Turtles, stepped in and attempted to have his company Tundra Publishing publish Big Numbers. Moore and Eastman asked Sienkiewicz's assistant, Al Columbia, to become the series' sole artist and Roxanne Starr to be its letterer. Columbia worked on the fourth issue but, for reasons which remain unclear, destroyed his own artwork and abandoned the project as well. Big Numbers #3 and #4 were never published, and the series remains unfinished.
In 1999, ten pages of Sienkiewicz's art for Big Numbers #3 were published in the first (and only) issu
|
https://en.wikipedia.org/wiki/Folk%20theorem%20%28game%20theory%29
|
In game theory, folk theorems are a class of theorems describing an abundance of Nash equilibrium payoff profiles in repeated games. The original Folk Theorem concerned the payoffs of all the Nash equilibria of an infinitely repeated game. This result was called the Folk Theorem because it was widely known among game theorists in the 1950s, even though no one had published it. Friedman's (1971) Theorem concerns the payoffs of certain subgame-perfect Nash equilibria (SPE) of an infinitely repeated game, and so strengthens the original Folk Theorem by using a stronger equilibrium concept: subgame-perfect Nash equilibria rather than Nash equilibria.
The Folk Theorem suggests that if the players are patient enough and far-sighted (i.e. if the discount factor δ → 1), then repeated interaction can result in virtually any average payoff in an SPE. "Virtually any" is here technically defined as "feasible" and "individually rational".
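As a standard illustration (not part of the article's setup): in the infinitely repeated prisoner's dilemma with stage payoffs T > R > P > S, mutual cooperation supported by the grim-trigger strategy is sustainable whenever
R / (1 − δ) ≥ T + δP / (1 − δ), i.e. δ ≥ (T − R) / (T − P),
so the cooperative payoff profile, which is feasible and individually rational, becomes an equilibrium outcome once the discount factor δ is close enough to 1.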
Setup and definitions
We start with a basic game, also known as the stage game, which is an n-player game. In this game, each player has finitely many actions to choose from, and they make their choices simultaneously and without knowledge of the other player's choices. The collective choices of the players leads to a payoff profile, i.e. to a payoff for each of the players. The mapping from collective choices to payoff profiles is known to the players, and each player aims to maximize their payoff. If the collective choice is denoted by x, the payoff that player i receives, also known as player i's utility, will be denoted by u_i(x).
We then consider a repetition of this stage game, finitely or infinitely many times. In each repetition, each player chooses one of their stage game options, and when making that choice, they may take into account the choices of the other players in the prior iterations. In this repeated game, a strategy for one of the players is a deterministic rule that specifies the player's choice in each iteration of th
|
https://en.wikipedia.org/wiki/Helge%20Tverberg
|
Helge Arnulf Tverberg (March 6, 1935 – December 28, 2020) was a Norwegian mathematician. He was a professor in the Mathematics Department at the University of Bergen, his speciality being combinatorics; he retired at the mandatory age of seventy.
He was born in Bergen. He took the cand.real. degree at the University of Bergen in 1958, and the dr.philos. degree in 1968. He was a lecturer from 1958 to 1971 and professor from 1971 to his retirement in 2005. He was a visiting scholar at the University of Reading in 1966 and at the Australian National University, in Canberra, from 1980 to 1981, 1987 to 1988 and in 2004. He was a member of the Norwegian Academy of Science and Letters.
Tverberg, in 1965, proved a result on intersection patterns of partitions of point configurations that has come to be known as Tverberg's partition theorem. It inaugurated a new branch of combinatorial geometry, with many variations and applications. An account by Günter M. Ziegler of Tverberg's work in this direction appeared in the issue of the Notices of the American Mathematical Society for April, 2011.
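For reference, the theorem itself admits the following standard formulation (stated here for the reader; the source does not spell it out): any set of (d + 1)(r − 1) + 1 points in R^d can be partitioned into r pairwise disjoint subsets A1, ..., Ar whose convex hulls share a common point, i.e.

\[
\operatorname{conv}(A_1) \cap \cdots \cap \operatorname{conv}(A_r) \neq \emptyset.
\]

For r = 2 this reduces to Radon's theorem on d + 2 points.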
See also
Geometric separator
References
1935 births
20th-century Norwegian mathematicians
Combinatorialists
Academic staff of the University of Bergen
University of Bergen alumni
Members of the Norwegian Academy of Science and Letters
|
https://en.wikipedia.org/wiki/4C%20Entity
|
The 4C Entity is a digital rights management (DRM) consortium formed by IBM, Intel, Panasonic and Toshiba that has established and licensed interoperable cryptographic protection mechanisms for removable media technologies. 4C Entity was founded in 1999 when Warner Music approached the companies to develop stronger DRM technologies for the then-novel DVD-Audio format after the Content Scramble System (CSS) used on DVD-Video was hacked.
The group developed and currently licenses the Content Protection for Recordable Media (CPRM) and the Content Protection for Prerecorded Media (CPPM) schemes, which use Media Key Block technology and the Cryptomeria cipher along with audio watermarks. 4C Entity has also written the Content Protection System Architecture (CPSA), which describes how content protection solutions work together and the role of each current technology.
CPPM and CPRM are implemented in SD Cards, DVD-Audio, Flash media, and other digital media formats. Like many DRM technologies, 4C Entity and its products have been criticized, with the Associated Press writing that CPRM “spark[ed] privacy concerns.”
References
External links
The 4C Entity
Consortia in the United States
Digital rights management
|
https://en.wikipedia.org/wiki/Zeta%20function%20regularization
|
In mathematics and theoretical physics, zeta function regularization is a type of regularization or summability method that assigns finite values to divergent sums or products, and in particular can be used to define determinants and traces of some self-adjoint operators. The technique is now commonly applied to problems in physics, but has its origins in attempts to give precise meanings to ill-conditioned sums appearing in number theory.
Definition
There are several different summation methods called zeta function regularization for defining the sum of a possibly divergent series a1 + a2 + ⋯.
One method is to define its zeta regularized sum to be ζA(−1) if this is defined, where the zeta function is defined for large Re(s) by
ζA(s) = 1/a1^s + 1/a2^s + ⋯
if this sum converges, and by analytic continuation elsewhere.
In the case when an = n, the zeta function is the ordinary Riemann zeta function. This method was used by Euler to "sum" the series 1 + 2 + 3 + 4 + ... to ζ(−1) = −1/12.
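The same value can be checked numerically through the analytic continuation built into a library such as mpmath (a minimal sketch, assuming mpmath is installed):

# The analytically continued Riemann zeta function gives the
# zeta-regularized value of 1 + 2 + 3 + ...
from mpmath import zeta

print(zeta(-1))  # -0.0833333333333333, i.e. -1/12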
Hawking (1977) showed that in flat space, in which the eigenvalues of Laplacians are known, the zeta function corresponding to the partition function can be computed explicitly. Consider a scalar field φ contained in a large box of volume V in flat spacetime at the temperature T = β−1. The partition function is defined by a path integral over all fields φ on the Euclidean space obtained by putting τ = it which are zero on the walls of the box and which are periodic in τ with period β. In this situation, from the partition function he computes the energy, entropy and pressure of the radiation of the field φ. In the case of flat spaces the eigenvalues appearing in the physical quantities are generally known, while in the case of curved space they are not: in this case asymptotic methods are needed.
Another method defines the possibly divergent infinite product a1a2⋯ to be exp(−ζ′A(0)). Ray and Singer used this to define the determinant of a positive self-adjoint operator A (the Laplacian of a Riemannian manifold in their application) with eigenvalues
|
https://en.wikipedia.org/wiki/Motion%20detector
|
A motion detector is an electrical device that utilizes a sensor to detect nearby motion. Such a device is often integrated as a component of a system that automatically performs a task or alerts a user of motion in an area. They form a vital component of security, automated lighting control, home control, energy efficiency, and other useful systems.
Overview
An active electronic motion detector contains an optical, microwave, or acoustic sensor, as well as a transmitter. A passive one, however, contains only a sensor and senses a signature from the moving object via emission or reflection. Changes in the optical, microwave or acoustic field in the device's proximity are interpreted by the electronics based on one of several technologies. Most low-cost motion detectors can detect motion at distances of about . Specialized systems are more expensive but have either increased sensitivity or much longer ranges. Tomographic motion detection systems can cover much larger areas because the radio waves they sense are at frequencies which penetrate most walls and obstructions, and are detected in multiple locations.
Motion detectors have found wide use in commercial applications. One common application is activating automatic door openers in businesses and public buildings. Motion sensors are also widely used in lieu of a true occupancy sensor in activating street lights or indoor lights in walkways, such as lobbies and staircases. In such smart lighting systems, energy is conserved by only powering the lights for the duration of a timer, after which the person has presumably left the area. A motion detector may be among the sensors of a burglar alarm that is used to alert the home owner or security service when it detects the motion of a possible intruder. Such a detector may also trigger a security camera to record the possible intrusion.
Sensor technology
Several types of motion detection are in wide use:
Passive infrared (PIR)
Passive infrared (PIR) sensors are sen
|
https://en.wikipedia.org/wiki/Glass%20break%20detector
|
A glass break detector is a sensor that detects if a pane of glass has been shattered or broken. These sensors are commonly used near glass doors or glass storefront windows. They are widely used in electronic burglar-alarm systems.
The detection process begins with a microphone that picks up noises and vibrations coming from the glass. If the vibrations exceed a certain threshold (which is sometimes user-selectable), they are analyzed by detector circuitry. Simpler detectors use narrowband microphones tuned to frequencies typical of glass shattering and react only to sound magnitudes above a certain threshold, whereas more complex designs compare the sound analytically to one or more glass-break profiles using signal transforms similar to DCT and FFT. These digitally sophisticated detectors react only if both the amplitude threshold and a statistically expressed similarity threshold are breached. Advances in technology have also led to the use of wireless glass-break detectors.
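A minimal sketch of the two-stage check described above, in Python; the frame length, threshold values, and the stored unit-normalised spectral profile are illustrative assumptions rather than any particular product's design:

import numpy as np

AMPLITUDE_THRESHOLD = 0.2   # amplitude gate (illustrative)
SIMILARITY_THRESHOLD = 0.8  # spectral-similarity gate (illustrative)
# Hypothetical stored glass-break profile, unit-normalised, with
# len(profile) == len(frame) // 2 + 1 to match the rFFT below.
profile = np.load("glass_break_profile.npy")

def is_glass_break(frame):
    if np.max(np.abs(frame)) < AMPLITUDE_THRESHOLD:
        return False  # amplitude gate not breached
    spectrum = np.abs(np.fft.rfft(frame))
    norm = np.linalg.norm(spectrum)
    if norm == 0.0:
        return False
    similarity = float(spectrum / norm @ profile)  # cosine similarity
    return similarity > SIMILARITY_THRESHOLD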
See also
Chubb Locks
Yale (company)
Burglar alarm
References
Perimeter security
Glass engineering and science
Detectors
|
https://en.wikipedia.org/wiki/Minification%20%28programming%29
|
Minification (also minimisation or minimization) is the process of removing all unnecessary characters from the source code of interpreted programming languages or markup languages without changing its functionality. These unnecessary characters usually include white space characters, new line characters, comments, and sometimes block delimiters, which are used to add readability to the code but are not required for it to execute. Minification reduces the size of the source code, making its transmission over a network (e.g. the Internet) more efficient. In programmer culture, aiming at extremely minified source code is the purpose of recreational code golf competitions.
Minification can be distinguished from the more general concept of data compression in that the minified source can be interpreted immediately without the need for an uncompression step: the same interpreter can work with both the original as well as with the minified source.
The goals of minification are not the same as the goals of obfuscation; the former is often intended to be reversed using a pretty-printer or unminifier. However, to achieve its goals, minification sometimes uses techniques also used by obfuscation; for example, shortening variable names and refactoring the source code. When minification uses such techniques, the pretty-printer or unminifier can only fully reverse the minification process if it is supplied details of the transformations done by such techniques. If not supplied those details, the reversed source code will contain different variable names and control flow, even though it will have the same functionality as the original source code.
Example
For example, the JavaScript code
// This is a comment that will be removed by the minifier
var array = [];
for (var i = 0; i < 20; i++) {
array[i] = i;
}
is equivalent to but longer than
for(var a=[],i=0;i<20;a[i]=i++);
History
In 2001 Douglas Crockford introduced JSMin, which removed comments and whitespace from JavaScrip
|
https://en.wikipedia.org/wiki/Tribometer
|
A tribometer is an instrument that measures tribological quantities, such as coefficient of friction, friction force, and wear volume, between two surfaces in contact. It was invented by the 18th-century Dutch scientist Pieter van Musschenbroek.
A tribotester is the general name given to a machine or device used to perform tests and simulations of wear, friction and lubrication which are the subject of the study of tribology. Often tribotesters are extremely specific in their function and are fabricated by manufacturers who desire to test and analyze the long-term performance of their products. An example is that of orthopedic implant manufacturers who have spent considerable sums of money to develop tribotesters that accurately reproduce the motions and forces that occur in human hip joints so that they can perform accelerated wear tests of their products.
Theory
A simple tribometer is described by a hanging mass and a mass resting on a horizontal surface, connected to each other via a string and pulley. The coefficient of friction, µ, when the system is stationary, is determined by increasing the hanging mass until the moment that the resting mass begins to slide. Then, using the general equation for friction force:
F = µN
Where N, the normal force, is equal to the weight (mass x gravity) of the sitting mass (mT) and F, the loading force, is equal to the weight (mass x gravity) of the hanging mass (mH).
To determine the kinetic coefficient of friction the hanging mass is increased or decreased until the mass system moves at a constant speed.
In both cases, the coefficient of friction is simplified to the ratio of the two masses:
µ = mH / mT
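A worked numerical check of this relation (the masses are illustrative assumptions):

# Static coefficient of friction from the hanging-mass tribometer:
# at the onset of sliding, mu = mH / mT, since gravity cancels.
mT = 1.00  # resting mass, kg (illustrative)
mH = 0.45  # hanging mass at onset of sliding, kg (illustrative)

mu = mH / mT
print(f"coefficient of friction: {mu:.2f}")  # 0.45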
In most test applications using tribometers, wear is measured by comparing the mass or surfaces of test specimens before and after testing. Equipment and methods used to examine the worn surfaces include optical microscopes, scanning electron microscopes, optical interferometry and mechanical roughness testers.
Types
Tribometers are often referred to
|
https://en.wikipedia.org/wiki/Newton%E2%80%93Euler%20equations
|
In classical mechanics, the Newton–Euler equations describe the combined translational and rotational dynamics of a rigid body.
Traditionally the Newton–Euler equations are the grouping together of Euler's two laws of motion for a rigid body into a single equation with 6 components, using column vectors and matrices. These laws relate the motion of the center of gravity of a rigid body with the sum of forces and torques (or synonymously moments) acting on the rigid body.
Center of mass frame
With respect to a coordinate frame whose origin coincides with the body's center of mass for τ (torque) and an inertial frame of reference for F (force), they can be expressed in matrix form as:

\[
\begin{pmatrix} \mathbf{F} \\ \boldsymbol{\tau} \end{pmatrix}
=
\begin{pmatrix} m\,\mathbf{I}_3 & 0 \\ 0 & \mathbf{I}_{\mathrm{cm}} \end{pmatrix}
\begin{pmatrix} \mathbf{a}_{\mathrm{cm}} \\ \boldsymbol{\alpha} \end{pmatrix}
+
\begin{pmatrix} 0 \\ \boldsymbol{\omega} \times \mathbf{I}_{\mathrm{cm}}\,\boldsymbol{\omega} \end{pmatrix}
\]
where
F = total force acting on the center of mass
m = mass of the body
I3 = the 3×3 identity matrix
acm = acceleration of the center of mass
vcm = velocity of the center of mass
τ = total torque acting about the center of mass
Icm = moment of inertia about the center of mass
ω = angular velocity of the body
α = angular acceleration of the body
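A minimal numerical evaluation of the center-of-mass form above (a sketch; the mass, inertia tensor, and motion state are illustrative assumptions):

import numpy as np

m = 2.0                          # mass, kg (illustrative)
I_cm = np.diag([0.1, 0.2, 0.3])  # inertia tensor about the cm (illustrative)

a_cm = np.array([1.0, 0.0, 0.0])   # linear acceleration of the cm
omega = np.array([0.0, 0.0, 2.0])  # angular velocity
alpha = np.array([0.0, 1.0, 0.0])  # angular acceleration

F = m * a_cm                                        # Newton's second law
tau = I_cm @ alpha + np.cross(omega, I_cm @ omega)  # Euler's equation
print(F, tau)  # [2. 0. 0.] [0.  0.2 0. ]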
Any reference frame
With respect to a coordinate frame located at point P that is fixed in the body and not coincident with the center of mass, the equations assume the more complex form:

\[
\begin{pmatrix} \mathbf{F} \\ \boldsymbol{\tau}_{\mathrm{p}} \end{pmatrix}
=
\begin{pmatrix} m\,\mathbf{I}_3 & -m\,[\mathbf{c}]^{\times} \\ m\,[\mathbf{c}]^{\times} & \mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times} \end{pmatrix}
\begin{pmatrix} \mathbf{a}_{\mathrm{p}} \\ \boldsymbol{\alpha} \end{pmatrix}
+
\begin{pmatrix} m\,[\boldsymbol{\omega}]^{\times}[\boldsymbol{\omega}]^{\times}\,\mathbf{c} \\ [\boldsymbol{\omega}]^{\times}\,(\mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times})\,\boldsymbol{\omega} \end{pmatrix}
\]

where c is the location of the center of mass expressed in the body-fixed frame, and [c]× and [ω]× denote skew-symmetric cross product matrices.
The left hand side of the equation—which includes the sum of external forces, and the sum of external moments about P—describes a spatial wrench, see screw theory.
The inertial terms are contained in the spatial inertia matrix

\[
\begin{pmatrix} m\,\mathbf{I}_3 & -m\,[\mathbf{c}]^{\times} \\ m\,[\mathbf{c}]^{\times} & \mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times} \end{pmatrix},
\]

while the fictitious forces are contained in the term:

\[
\begin{pmatrix} m\,[\boldsymbol{\omega}]^{\times}[\boldsymbol{\omega}]^{\times}\,\mathbf{c} \\ [\boldsymbol{\omega}]^{\times}\,(\mathbf{I}_{\mathrm{cm}} - m\,[\mathbf{c}]^{\times}[\mathbf{c}]^{\times})\,\boldsymbol{\omega} \end{pmatrix}.
\]
When the center of mass is not coincident with the coordinate frame (that is, when c is nonzero), the translational and angular accelerations (a and α) are coupled, so that each is associated with force and torque components.
Applications
The Newton–Euler equations are used as the basis for more complicated "multi-body" formulations (screw
|
https://en.wikipedia.org/wiki/Business%20Process%20Model%20and%20Notation
|
Business Process Model and Notation (BPMN) is a graphical representation for specifying business processes in a business process model.
Originally developed by the Business Process Management Initiative (BPMI), BPMN has been maintained by the Object Management Group (OMG) since the two organizations merged in 2005. Version 2.0 of BPMN was released in January 2011, at which point the name was amended to Business Process Model and Notation to reflect the introduction of execution semantics, which were introduced alongside the existing notational and diagramming elements. Though it is an OMG specification, BPMN is also ratified as ISO 19510. The latest version is BPMN 2.0.2, published in January 2014.
Overview
Business Process Model and Notation (BPMN) is a standard for business process modeling that provides a graphical notation for specifying business processes in a Business Process Diagram (BPD), based on a flowcharting technique very similar to activity diagrams from Unified Modeling Language (UML). The objective of BPMN is to support business process management, for both technical users and business users, by providing a notation that is intuitive to business users, yet able to represent complex process semantics. The BPMN specification also provides a mapping between the graphics of the notation and the underlying constructs of execution languages, particularly Business Process Execution Language (BPEL).
BPMN has been designed to provide a standard notation readily understandable by all business stakeholders, typically including business analysts, technical developers and business managers. BPMN can therefore be used to support the generally desirable aim of all stakeholders on a project adopting a common language to describe processes, helping to avoid communication gaps that can arise between business process design and implementation.
BPMN is one of a number of business process modeling language standards used by modeling tools and processes. While the cu
|
https://en.wikipedia.org/wiki/Maximum%20entropy%20thermodynamics
|
In physics, maximum entropy thermodynamics (colloquially, MaxEnt thermodynamics) views equilibrium thermodynamics and statistical mechanics as inference processes. More specifically, MaxEnt applies inference techniques rooted in Shannon information theory, Bayesian probability, and the principle of maximum entropy. These techniques are relevant to any situation requiring prediction from incomplete or insufficient data (e.g., image reconstruction, signal processing, spectral analysis, and inverse problems). MaxEnt thermodynamics began with two papers by Edwin T. Jaynes published in the 1957 Physical Review.
Maximum Shannon entropy
Central to the MaxEnt thesis is the principle of maximum entropy. It demands as given some partly specified model and some specified data related to the model. It selects a preferred probability distribution to represent the model. The given data state "testable information" about the probability distribution, for example particular expectation values, but are not in themselves sufficient to uniquely determine it. The principle states that one should prefer the distribution which maximizes the Shannon information entropy,
S_I = − Σ_i p_i ln p_i.
This is known as the Gibbs algorithm, having been introduced by J. Willard Gibbs in 1878, to set up statistical ensembles to predict the properties of thermodynamic systems at equilibrium. It is the cornerstone of the statistical mechanical analysis of the thermodynamic properties of equilibrium systems (see partition function).
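For a finite set of energy levels, maximizing S_I subject to a fixed expected energy yields the Gibbs form p_i ∝ exp(−λE_i); a minimal sketch that solves for λ numerically (the energy levels and target mean are illustrative assumptions):

import numpy as np
from scipy.optimize import brentq

E = np.array([0.0, 1.0, 2.0, 3.0])  # energy levels (illustrative)
target_mean = 1.2                   # constrained expectation (illustrative)

def mean_energy(lam):
    w = np.exp(-lam * E)
    p = w / w.sum()
    return p @ E

lam = brentq(lambda l: mean_energy(l) - target_mean, -50.0, 50.0)
p = np.exp(-lam * E)
p /= p.sum()
print(p, -(p * np.log(p)).sum())  # maximum-entropy distribution and S_I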
A direct connection is thus made between the equilibrium thermodynamic entropy STh, a state function of pressure, volume, temperature, etc., and the information entropy for the predicted distribution with maximum uncertainty conditioned only on the expectation values of those variables:
S_Th(P, V, T, ...) = k_B · S_I(P, V, T, ...).
kB, the Boltzmann constant, has no fundamental physical significance here, but is necessary to retain consistency with the previous historical definition of entropy by Clausius (1865) (see Boltzmann constan
|
https://en.wikipedia.org/wiki/List%20of%20recombinant%20proteins
|
The following is a list of notable proteins that are produced from recombinant DNA, using biomolecular engineering. In many cases, recombinant human proteins have replaced the original animal-derived version used in medicine. The prefix "rh" for "recombinant human" appears less and less in the literature. A much larger number of recombinant proteins is used in the research laboratory. These include both commercially available proteins (for example most of the enzymes used in the molecular biology laboratory), and those that are generated in the course of specific research projects.
Human recombinant proteins that largely replaced animal-derived or human-harvested versions
Medicinal applications
Human growth hormone (rHGH): Humatrope from Lilly and Serostim from Serono replaced cadaver harvested human growth hormone
human insulin (BHI): Humulin from Lilly and Novolin from Novo Nordisk among others largely replaced bovine and porcine insulin for human therapy. Some prefer to continue using the animal-sourced preparations, as there is some evidence that synthetic insulin varieties are more likely to induce hypoglycemia unawareness. Remaining manufacturers of highly purified animal-sourced insulin include the U.K.'s Wockhardt Ltd. (headquartered in India), Argentina's Laboratorios Beta S.A., and China's Wanbang Biopharma Co.
Follicle-stimulating hormone (FSH) as a recombinant gonadotropin preparation replaced Serono's Pergonal which was previously isolated from post-menopausal female urine
Factor VIII: Kogenate from Bayer replaced blood harvested factor VIII
Research applications
Ribosomal proteins: For the studies of individual ribosomal proteins, the use of proteins that are produced and purified from recombinant sources has largely replaced those that are obtained through isolation. However, isolation is still required for the studies of the whole ribosome.
Lysosomal proteins: Lysosomal proteins are difficult to produce recombinantly due to the number and type of post-trans
|
https://en.wikipedia.org/wiki/Projective%20object
|
In category theory, the notion of a projective object generalizes the notion of a projective module. Projective objects in abelian categories are used in homological algebra. The dual notion of a projective object is that of an injective object.
Definition
An object P in a category C is projective if for any epimorphism e: E → X and any morphism f: P → X, there is a morphism f̄: P → E such that e ∘ f̄ = f, i.e. the evident lifting triangle commutes.
That is, every morphism P → X factors through every epimorphism E → X.
If C is locally small, i.e., in particular Hom_C(P, X) is a set for any object X in C, this definition is equivalent to the condition that the hom functor (also known as corepresentable functor)
Hom_C(P, −): C → Set
preserves epimorphisms.
Projective objects in abelian categories
If the category C is an abelian category such as, for example, the category of abelian groups, then P is projective if and only if
Hom_C(P, −): C → Ab
is an exact functor, where Ab is the category of abelian groups.
An abelian category C is said to have enough projectives if, for every object A of C, there is a projective object P of C and an epimorphism from P to A or, equivalently, a short exact sequence
0 → K → P → A → 0.
The purpose of this definition is to ensure that any object A admits a projective resolution, i.e., a (long) exact sequence
⋯ → P2 → P1 → P0 → A → 0
where the objects P0, P1, P2, … are projective.
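A standard concrete instance, not spelled out in the source: in the category of abelian groups every free abelian group is projective, so

\[
0 \longrightarrow \mathbb{Z} \xrightarrow{\;\cdot n\;} \mathbb{Z} \longrightarrow \mathbb{Z}/n\mathbb{Z} \longrightarrow 0
\]

is a projective resolution of Z/nZ, with multiplication by n as the first map.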
Projectivity with respect to restricted classes
discusses the notion of projective (and dually injective) objects relative to a so-called bicategory, which consists of a pair of subcategories of "injections" and "surjections" in the given category C. These subcategories are subject to certain formal properties including the requirement that any surjection is an epimorphism. A projective object (relative to the fixed class of surjections) is then an object P so that Hom(P, −) turns the fixed class of surjections (as opposed to all epimorphisms) into surjections of sets (in the usual sense).
Properties
The coproduct of two projective objects is projective.
The retract of a projective object is projective.
Examples
|
https://en.wikipedia.org/wiki/De%20Rham%E2%80%93Weil%20theorem
|
In algebraic topology, the De Rham–Weil theorem allows computation of sheaf cohomology using an acyclic resolution of the sheaf in question.
Let F be a sheaf on a topological space X and F• a resolution of F by acyclic sheaves. Then
H^q(X, F) ≅ H^q(F•(X)),
where H^q(X, F) denotes the q-th sheaf cohomology group of X with coefficients in F.
The De Rham–Weil theorem follows from the more general fact that derived functors may be computed using acyclic resolutions instead of simply injective resolutions.
References
Homological algebra
Sheaf theory
|
https://en.wikipedia.org/wiki/Hi%20no%20Tori%20Hououhen%20%28MSX%29
|
Hi no Tori Hououhen is a 1987 video game for the MSX2 developed by Konami, produced alongside a similarly named game for the Famicom. Both games are based on the manga series and story arc of the same names.
It is in essence a Knightmare-like vertically scrolling shooter, with the player viewing the character from behind while enemies and obstacles enter from the top of the screen. In addition, the game's six stages are laid out in a labyrinthine way, adding puzzle elements to the mix. In order to find, reach and defeat the game's final boss, the player must travel back and forth between the various stages to obtain a large assortment of keys. These keys then allow access to parts of other stages, even earlier ones. This traveling between stages is highly unusual for a shoot 'em up.
References
External links
1987 video games
Japan-exclusive video games
Konami games
MSX2 games
MSX2-only games
Phoenixes in popular culture
Video games about birds
Video games based on anime and manga
Single-player video games
Video games developed in Japan
|
https://en.wikipedia.org/wiki/Formal%20moduli
|
In mathematics, formal moduli are an aspect of the theory of moduli spaces (of algebraic varieties or vector bundles, for example), closely linked to deformation theory and formal geometry. Roughly speaking, deformation theory can provide the Taylor polynomial level of information about deformations, while formal moduli theory can assemble consistent Taylor polynomials to make a formal power series theory. The step to moduli spaces, properly speaking, is an algebraization question, and has been largely put on a firm basis by Artin's approximation theorem.
A formal universal deformation is by definition a formal scheme over a complete local ring, with special fiber the scheme over a field being studied, and with a universal property amongst such set-ups. The local ring in question is then the carrier of the formal moduli.
References
Moduli theory
Algebraic geometry
Geometric algebra
|
https://en.wikipedia.org/wiki/Goal%20programming
|
Goal programming is a branch of multiobjective optimization, which in turn is a branch of multi-criteria decision analysis (MCDA). It can be thought of as an extension or generalisation of linear programming to handle multiple, normally conflicting objective measures. Each of these measures is given a goal or target value to be achieved. Deviations are measured from these goals both above and below the target. Unwanted deviations from this set of target values are then minimised in an achievement function. This can be a vector or a weighted sum dependent on the goal programming variant used. As satisfaction of the target is deemed to satisfy the decision maker(s), an underlying satisficing philosophy is assumed. Goal programming is used to perform three types of analysis:
Determine the required resources to achieve a desired set of objectives.
Determine the degree of attainment of the goals with the available resources.
Provide the best satisfying solution under a varying amount of resources and priorities of the goals.
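As a minimal illustration of the weighted variant (a sketch; the two goals, their targets, and the equal weights are illustrative assumptions), each goal receives deviation variables d− and d+, and the achievement function minimises their weighted sum:

from scipy.optimize import linprog

# Variables: x = [x1, x2, d1m, d1p, d2m, d2p]
# Goal 1: 2*x1 +   x2 + d1m - d1p = 10   (illustrative target)
# Goal 2:   x1 + 3*x2 + d2m - d2p = 12   (illustrative target)
c = [0, 0, 1, 1, 1, 1]  # minimise total unwanted deviation
A_eq = [[2, 1, 1, -1, 0, 0],
        [1, 3, 0, 0, 1, -1]]
b_eq = [10, 12]

res = linprog(c, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 6)
print(res.x[:2], "deviations:", res.x[2:])  # both goals met exactly here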
History
Goal programming was first used by Charnes, Cooper and Ferguson in 1955, although the actual name first appeared in a 1961 text by Charnes and Cooper. Seminal works by Lee, Ignizio, Ignizio and Cavalier, and Romero followed. Schniederjans gives a bibliography of a large number of pre-1995 articles relating to goal programming, and Jones and Tamiz give an annotated bibliography of the period 1990–2000. A recent textbook by Jones and Tamiz gives a comprehensive overview of the state of the art in goal programming.
The first engineering application of goal programming, due to Ignizio in 1962, was the design and placement of the antennas employed on the second stage of the Saturn V. This was used to launch the Apollo space capsule that landed the first men on the moon.
Variants
The initial goal programming formulations ordered the unwanted deviations into a number of priority levels, with the minimisation of a deviation in a high
|
https://en.wikipedia.org/wiki/Asian%20Pacific%20Mathematics%20Olympiad
|
The Asian Pacific Mathematics Olympiad (APMO), held since 1989, is a regional mathematics competition involving countries from the Asian-Pacific region. The United States also takes part in the APMO. Every year, the APMO is held in the afternoon of the second Monday of March for participating countries in North and South America, and in the morning of the second Tuesday of March for participating countries on the Western Pacific and in Asia.
APMO's Aims
the discovering, encouraging and challenging of mathematically gifted school students in all Pacific-Rim countries
the fostering of friendly international relations and cooperation between students and teachers in the Pacific-Rim Region
the creating of an opportunity for the exchange of information on school syllabi and practice throughout the Pacific Region
the encouragement and support of mathematical involvement with Olympiad type activities, not only in the APMO participating countries, but also in other Pacific-Rim countries.
Scoring and Format
The APMO contest consists of one four-hour paper of five questions of varying difficulty, each with a maximum score of 7 points. Contestants must not have been formally enrolled at a university (or equivalent post-secondary institution) and must be younger than 20 years of age on 1 July of the year of the contest.
APMO Member Nations/Regions
Observer Nations
Honduras and South Africa
Results
https://cms.math.ca/Competitions/APMO/
https://www.apmo-official.org/
https://www.apmo-official.org/2017/ResultsByName.html
http://imomath.com/index.php?options=Ap&mod=23&ttn=Asian-Pacific
See also
International Mathematical Olympiad
External links
APMO Official Website
Mathematics competitions
|
https://en.wikipedia.org/wiki/Ingress%20filtering
|
In computer networking, ingress filtering is a technique used to ensure that incoming packets are actually from the networks from which they claim to originate. This can be used as a countermeasure against various spoofing attacks where the attacker's packets contain fake IP addresses. Spoofing is often used in denial-of-service attacks, and mitigating these is a primary application of ingress filtering.
Problem
Networks receive packets from other networks. Normally a packet will contain the IP address of the computer that originally sent it. This allows devices in the receiving network to know where it came from, allowing a reply to be routed back, amongst other things. An exception arises when the source address is spoofed or hidden behind a proxy, in which case it does not pinpoint a specific user within a pool of users.
A sender IP address can be faked (spoofed), characterizing a spoofing attack. This disguises the origin of packets sent, for example in a denial-of-service attack. The same holds true for proxies, although in a different manner than IP spoofing.
Potential solutions
One potential solution involves implementing the use of intermediate Internet gateways (i.e., those servers connecting disparate networks along the path followed by any given packet) filtering or denying any packet deemed to be illegitimate. The gateway processing the packet might simply ignore the packet completely, or where possible, it might send a packet back to the sender relaying a message that the illegitimate packet has been denied. Host intrusion prevention systems (HIPS) are one example of technical engineering applications that help to identify, prevent and/or deter unwanted, unsuspected or suspicious events and intrusions.
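A minimal sketch of the per-interface source-address check described in the next paragraph, using Python's standard ipaddress module (the allocated prefix is an illustrative assumption):

import ipaddress

# Prefix allocated to the network attached to this interface (illustrative).
INTERFACE_PREFIX = ipaddress.ip_network("203.0.113.0/24")

def permit_ingress(source_ip):
    # Accept only packets whose source lies inside the attached prefix.
    return ipaddress.ip_address(source_ip) in INTERFACE_PREFIX

print(permit_ingress("203.0.113.7"))   # True  - plausible source
print(permit_ingress("198.51.100.9"))  # False - spoofed/foreign source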
Any router that implements ingress filtering checks the source IP field of IP packets it receives and drops packets if the packets don't have an IP address in the IP address block to which the interface is connected. This may not be possible if the end host i
|
https://en.wikipedia.org/wiki/Length%20function
|
In the mathematical field of geometric group theory, a length function is a function that assigns a number to each element of a group.
Definition
A length function L : G → R+ on a group G is a function satisfying:
L(e) = 0, where e is the identity element of G;
L(g) = L(g⁻¹) for all g in G;
L(gh) ≤ L(g) + L(h) for all g, h in G.
Compare with the axioms for a metric and a filtered algebra.
Word metric
An important example of a length is the word metric: given a presentation of a group by generators and relations, the length of an element is the length of the shortest word expressing it.
Coxeter groups (including the symmetric group) have combinatorially important length functions, using the simple reflections as generators (thus each simple reflection has length 1). See also: length of a Weyl group element.
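In the symmetric group, for example, the word-metric length of a permutation with respect to the simple reflections equals its number of inversions; a minimal sketch (the sample permutation is an illustrative assumption):

# Coxeter length of a permutation of {0, ..., n-1}, written in one-line
# notation, equals its number of inversions.
def coxeter_length(w):
    n = len(w)
    return sum(1 for i in range(n) for j in range(i + 1, n) if w[i] > w[j])

print(coxeter_length([2, 0, 1]))  # 2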
A longest element of a Coxeter group is both important and unique up to conjugation (up to different choice of simple reflections).
Properties
A group with a length function does not form a filtered group, meaning that the sublevel sets do not form subgroups in general.
However, the group algebra of a group with a length function forms a filtered algebra: the subadditivity axiom L(gh) ≤ L(g) + L(h) corresponds to the filtration axiom.
Group theory
Geometric group theory
|
https://en.wikipedia.org/wiki/Primary%20ideal
|
In mathematics, specifically commutative algebra, a proper ideal Q of a commutative ring A is said to be primary if whenever xy is an element of Q then x or y^n is also an element of Q, for some n > 0. For example, in the ring of integers Z, (p^n) is a primary ideal if p is a prime number.
The notion of primary ideals is important in commutative ring theory because every ideal of a Noetherian ring has a primary decomposition, that is, can be written as an intersection of finitely many primary ideals. This result is known as the Lasker–Noether theorem. Consequently, an irreducible ideal of a Noetherian ring is primary.
Various methods of generalizing primary ideals to noncommutative rings exist, but the topic is most often studied for commutative rings. Therefore, the rings in this article are assumed to be commutative rings with identity.
Examples and properties
The definition can be rephrased in a more symmetric manner: an ideal Q is primary if, whenever xy ∈ Q, we have x ∈ Q, or y ∈ Q, or both x and y are in √Q. (Here √Q denotes the radical of Q.)
An ideal Q of R is primary if and only if every zero divisor in R/Q is nilpotent. (Compare this to the case of prime ideals, where P is prime if and only if every zero divisor in R/P is actually zero.)
Any prime ideal is primary, and moreover an ideal is prime if and only if it is primary and semiprime (also called radical ideal in the commutative case).
Every primary ideal is primal.
If Q is a primary ideal, then the radical of Q is necessarily a prime ideal P, and this ideal is called the associated prime ideal of Q. In this situation, Q is said to be P-primary.
On the other hand, an ideal whose radical is prime is not necessarily primary: for example, if R = k[x, y, z]/(xy − z²), p = (x, z), and q = p², then p is prime and √q = p, but we have xy = z² ∈ q, x ∉ q, and y^n ∉ q for all n > 0, so q is not primary. The primary decomposition of q is (x) ∩ m², where m = (x, y, z); here (x) is p-primary and m² is m-primary.
An ideal whose radical is maximal, however, is primary.
Every ideal with radical P is contained in a smallest P-primary ideal: all elements
|
https://en.wikipedia.org/wiki/Icemaker
|
An icemaker, ice generator, or ice machine may refer to either a consumer device for making ice, found inside a home freezer; a stand-alone appliance for making ice, or an industrial machine for making ice on a large scale. The term "ice machine" usually refers to the stand-alone appliance.
The ice generator is the part of the ice machine that actually produces the ice. This would include the evaporator and any associated drives/controls/subframe that are directly involved with making and ejecting the ice into storage. When most people refer to an ice generator, they mean this ice-making subsystem alone, minus refrigeration.
An ice machine, however, particularly if described as 'packaged', would typically be a complete machine including refrigeration, controls, and dispenser, requiring only connection to power and water supplies.
The term icemaker is more ambiguous, with some manufacturers describing their packaged ice machine as an icemaker, while others describe their generators in this way.
History
In 1748, the first known artificial refrigeration was demonstrated by William Cullen at the University of Glasgow. Cullen, however, never used his discovery for any practical purpose, which may be why the history of icemakers instead begins with Oliver Evans, an American inventor who designed the first refrigeration machine in 1805. In 1834, Jacob Perkins built the first practical refrigerating machine, using ether in a vapor-compression cycle. The American inventor, mechanical engineer and physicist received 21 American and 19 English patents (for innovations in steam engines, the printing industry and gun manufacturing, among others) and is today considered the father of the refrigerator.
In 1844, an American physician, John Gorrie, built a refrigerator based on Oliver Evans' design to make ice to cool the air for his yellow fever patients. His plans date back to 1842, making him one of the founding fathers of the refrigerator. Unfortunately for John Gorrie, his
|
https://en.wikipedia.org/wiki/MOO
|
A MOO ("MUD, object-oriented") is a text-based online virtual reality system to which multiple users (players) are connected at the same time.
The term MOO is used in two distinct, but related, senses. One is to refer to those programs descended from the original MOO server, and the other is to refer to any MUD that uses object-oriented techniques to organize its database of objects, particularly if it does so in a similar fashion to the original MOO or its derivatives. Most of this article refers to the original MOO and its direct descendants, but see non-descendant MOOs for a list of MOO-like systems.
The original MOO server was authored by Stephen White, based on his experience from creating the programmable TinyMUCK system. Additional later development and maintenance came from LambdaMOO founder and former Xerox PARC employee Pavel Curtis.
One of the most distinguishing features of a MOO is that its users can perform object-oriented programming within the server, ultimately expanding and changing how it behaves to everyone. Examples of such changes include authoring new rooms and objects, creating new generic objects for others to use, and changing the way the MOO interface operates. The programming language used for extension is the MOO programming language, and many MOOs feature convenient libraries of verbs that can be used by programmers in their coding known as Utilities. The MOO programming language is a domain-specific language.
Background
MOOs are network accessible, multi-user, programmable, interactive systems well-suited to the construction of text-based adventure games, conferencing systems, and other collaborative software. Their most common use, however, is as multi-participant, low-bandwidth virtual realities. They have been used in academic environments for distance education, collaboration (such as Diversity University), group decision systems, and teaching object-oriented concepts; but others are primarily social in nature, o
|
https://en.wikipedia.org/wiki/Credential%20service%20provider
|
A credential service provider (CSP) is a trusted entity that issues security tokens or electronic credentials to subscribers. A CSP forms part of an authentication system, most typically identified as a separate entity in a Federated authentication system. A CSP may be an independent third party, or may issue credentials for its own use. The term CSP is used frequently in the context of the US government's eGov and e-authentication initiatives. An example of a CSP would be an online site whose primary purpose may be, for example, internet banking - but whose users may be subsequently authenticated to other sites, applications or services without further action on their part.
History
In any authentication system, some entity is required to authenticate the user on behalf of the target application or service. For many years there was poor understanding of the security impact of the growing multiplicity of services and applications that would ultimately require authentication. The result is that users are burdened with many credentials that they must remember or carry around with them, and applications and services must each perform some level of registration and then some level of authentication of those users. Credential Service Providers were created in response. A CSP separates those functions from the application or service and typically provides trust to that application or service over a network (such as the Internet).
CSP Process
The CSP establishes a mechanism to uniquely identify each subscriber and the associated tokens and credentials issued to that subscriber. The CSP registers or gives the subscriber a token to be used in an authentication protocol and issues credentials as needed to bind that token to the identity, or to bind the identity to some other useful verified attribute. The subscriber may be given electronic credentials to go with the token at the time of registration, or credentials may be generated later as needed. Subscribers
|
https://en.wikipedia.org/wiki/Evidence%20Eliminator
|
Evidence Eliminator is a computer software program that runs on Microsoft Windows operating systems at least through Windows 7. The program deletes hidden information from the user's hard drive that normal procedures may fail to delete. Such "cleaner" or "eraser" programs typically overwrite previously allocated disk space, in order to make it more difficult to salvage deleted information. In the absence of such overwrite procedures, information that a user thinks has been deleted may actually remain on the hard drive until that physical space is claimed for another use (i.e. to store another file). While it was offered for sale, the program's price ranged from about $20 early on to $150 later.
History
Evidence Eliminator was produced by Robin Hood Software, based in Nottingham, England, up to version 6.04.
Controversy
There has been controversy surrounding Evidence Eliminator's marketing tactics. The company has used popup ads to market the program, including claims that the user's system was being compromised. In response, Robin Hood Software produced a "dis-information page" addressing these concerns. Radsoft, a competitor to Robin Hood, criticised its operation.
Legal
On June 1, 2005, Peter Beale, one of the "Phoenix Four", used Evidence Eliminator to remove all trace of certain files from his PC the day after the appointment of DTI inspectors to investigate the collapse of MG Rover.
In a 2011 case, MGA v. Mattel, a federal court found that a former employee used the program to delete information that he was accused of giving to MGA while employed at Mattel.
References
Windows security software
Anti-forensic software
|
https://en.wikipedia.org/wiki/Zemmix
|
Zemmix was a trade mark and brand name of the South Korean electronics company Daewoo Electronics Co., Ltd. It was an MSX-based video game console brand whose name is no longer in use.
Under the name Zemmix, Daewoo released a series of gaming consoles compatible with the MSX home computer standards. The consoles were in production between 1985 and 1995. The consoles were not sold outside South Korea.
Hardware
Console Models
All consoles were designed for standard NTSC output, had low and high outputs for connecting to a TV, and had a universal adapter for connection to 120/230-volt mains.
Each console also had a letter after the model number indicating the console's color combination. The key is as follows.
W - white and silver colors
R - red and black colors
B - yellow, blue and black colors
For example, CPC 51W would be a white or silver Zemmix V (see below).
Consoles compatible with the MSX standard
CPC-50 (Zemmix)
CPC-51 (Zemmix V)
Consoles compatible with the MSX2 standard
CPC-61 (Zemmix Super V)
Consoles compatible with the MSX2+ standard
CPG-120 (Zemmix Turbo)
FPGA based MSX2+ compatible console
Zemmix Neo (by Retroteam Neo)
Zemmix Neo Lite (by Retroteam Neo)
Raspberry PI based MSX2+ (Turbo R) compatible console
CPC-Mini (Licensed)- Zemmix Mini (by Retroteam Neo)
Peripherals
Other Zemmix products:
By Daewoo
CPJ-905: MSX joystick for Zemmix CPC-51 console
CPJ-600: MSX joypad for Zemmix CPC-61 console
CPK-30: keyboard for Zemmix CPC-61
CPJ-102K: joystick for CPC-330
CPK-31K: input device for CPC-330
By Zemina
A Keyboard & Cartridge port divider
The Zemina Music Box
An MSX2 Upgrade Kit
A Zemmix PC card
MSX RAM expansion cards
A 'Family Card' that allows the user to play Famicom games on the Zemmix
Software
Korean software companies that produced software for the Zemmix gaming console:
Aproman
Boram
Clover
Daou Infosys
FA Soft
Mirinae
Prosoft
Screen
Topia
Uttum
Zemina
Most Zemmix software wor
|
https://en.wikipedia.org/wiki/Hierarchical%20task%20network
|
In artificial intelligence, hierarchical task network (HTN) planning is an approach to automated planning in which the dependency among actions can be given in the form of hierarchically structured networks.
Planning problems are specified in the hierarchical task network approach by
providing a set of tasks, which can be:
primitive (initial state) tasks, which roughly correspond to the actions of STRIPS;
compound tasks (intermediate state), which can be seen as composed of a set of simpler tasks;
goal tasks (goal state), which roughly correspond to the goals of STRIPS, but are more general.
A solution to an HTN problem is then an executable sequence of primitive tasks that can be obtained from the initial task network by decomposing compound tasks into their set of simpler tasks, and by inserting ordering constraints.
A primitive task is an action that can be executed directly, provided the state in which it is executed supports its precondition. A compound task is a complex task composed of a partially ordered set of further tasks, which can either be primitive or abstract. A goal task is a task of satisfying a condition. The difference between primitive and other tasks is that primitive actions can be executed directly. Compound and goal tasks both require a sequence of primitive actions to be performed; however, goal tasks are specified in terms of conditions that have to be made true, while compound tasks can only be specified in terms of other tasks via the task network outlined below.
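A minimal sketch of the decomposition idea in Python (the task names and methods are illustrative assumptions; a real HTN planner would also track state, preconditions, and ordering constraints):

PRIMITIVE = {"walk", "pay", "ride_bus"}
METHODS = {
    # Two alternative decompositions of the compound task "travel".
    "travel": [["walk"], ["pay", "ride_bus"]],
}

def decompose(task):
    # Return one executable sequence of primitive tasks, or None.
    if task in PRIMITIVE:
        return [task]
    for method in METHODS.get(task, []):
        plan = []
        for subtask in method:
            subplan = decompose(subtask)
            if subplan is None:
                break
            plan.extend(subplan)
        else:
            return plan  # every subtask decomposed successfully
    return None

print(decompose("travel"))  # ['walk']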
Constraints among tasks are expressed in the form of networks, called (hierarchical) task networks. A task network is a set of tasks and constraints among them. Such a network can be used as the precondition for another compound or goal task to be feasible. This way, one can express that a given task is feasible only if a set of other actions (those mentioned in the network) are done, and they are done in such a way that the constraints among them (specified by the net
|
https://en.wikipedia.org/wiki/Glossy%20display
|
A glossy display is an electronic display with a glossy surface. In certain light environments, glossy displays provide better color intensity and contrast ratios than matte displays. The primary disadvantage of these displays is their tendency to reflect any external light, often resulting in an undesirable glare.
Technology
Some LCDs use an antireflective coating, or a nanotextured glass surface, to reduce the amount of external light reflecting from the surface without affecting light emanating from the screen, as an alternative to a matte display.
Disadvantages
Because of the reflective nature of the display, in most lighting conditions that include direct light sources facing the screen, glossy displays create reflections, which can be distracting to the user of the computer. This can be especially distracting to users working in an environment where the position of lights and windows are fixed, such as in an office, as these create unavoidable reflections on glossy displays.
Adverse health effects
Ergonomic studies show that prolonged work in the office environment with the presence of discomforting glares and disturbances from light reflections on the screen can cause mild to severe health effects, ranging from eye strain and headaches to photosensitive epileptic episodes. These effects are usually explained by the physiology of the human eye and the human visual system. The image of light sources reflected in the screen can cause the human visual system to focus on that image, which is usually at a much farther distance than the information shown on the screen. This competition between two images that can be focused is considered to be the primary source of such effects.
Advantages
In controlled environments, such as darkened rooms, or rooms where all light sources are diffused, glossy displays create more saturated colors, deeper blacks, brighter whites, and are sharper than matte displays. This is why supporters of glossy screens consider these typ
|
https://en.wikipedia.org/wiki/FrostWire
|
FrostWire is a free and open-source BitTorrent client first released in September 2004, as a fork of LimeWire. It was initially very similar to LimeWire in appearance and functionality, but over time developers added more features, including support for the BitTorrent protocol. In version 5, support for the Gnutella network was dropped entirely, and FrostWire became a BitTorrent-only client.
History
FrostWire, a BitTorrent client (formerly a Gnutella client), is a collaborative, open-source project licensed under the GPL-3.0-or-later license. In late 2005, concerned developers of LimeWire's open source community announced the start of a new project fork "FrostWire" that would protect the developmental source code of the LimeWire client. FrostWire has evolved to replace LimeWire's BitTorrent core with that of Vuze, the Azureus BitTorrent engine, and ultimately to remove LimeWire's Gnutella core and become a 100% BitTorrent client, powered since August 2014 by the libtorrent library through FrostWire's jLibtorrent Java wrapper library.
Gnutella client
The project was started in September 2004 after LimeWire's distributor considered adding "blocking" code in response to RIAA pressure. The RIAA threatened legal action against several peer-to-peer developers including LimeWire as a result of the U.S. Supreme Court's decision in MGM Studios, Inc. v. Grokster, Ltd..
The second beta release of FrostWire was available in the last quarter of 2005.
Multiprotocol P2P client
Since version 4.20.x, FrostWire was able to handle torrent files and featured a new junk filter. Also, in version 4.21.x support was added for most Android devices.
BitTorrent client
Since version 5.0 (2011), FrostWire relaunched itself as a BitTorrent application, so those using the Gnutella network either have to use version 4, or switch to another client altogether.
Preview before download
Since version 6.0, FrostWire has allowed files to be previewed before download.
Adware and malware
Since around 2008 som
|
https://en.wikipedia.org/wiki/Brewer%20and%20Nash%20model
|
The Brewer and Nash model was constructed to provide information security access controls that can change dynamically. This security model, also known as the Chinese wall model, was designed to provide controls that mitigate conflict of interest in commercial organizations and is built upon an information flow model.
In the Brewer and Nash model, no information can flow between the subjects and objects in a way that would create a conflict of interest.
This model is commonly used by consulting and accounting firms. For example, once a consultant accesses data belonging to Acme Ltd, a consulting client, they may no longer access data belonging to any of Acme's competitors. In this model, the same consulting firm can have clients that compete with Acme Ltd while still advising Acme Ltd. The model uses the principle of data isolation within each conflict class of data to keep users out of potential conflict-of-interest situations. Because company relationships change all the time, dynamic updates to the members and definitions of conflict classes are important.
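A minimal sketch of the conflict-class check (the company names and classes are illustrative assumptions):

# Map each dataset to its conflict-of-interest class.
CONFLICT_CLASS = {"acme": "consulting", "rival": "consulting", "bank1": "banking"}
history = {}  # subject -> {conflict class: company already accessed}

def may_access(subject, company):
    cls = CONFLICT_CLASS[company]
    seen = history.setdefault(subject, {})
    if cls in seen and seen[cls] != company:
        return False  # would cross the wall to a competitor
    seen[cls] = company
    return True

print(may_access("consultant1", "acme"))   # True
print(may_access("consultant1", "rival"))  # False - same conflict class
print(may_access("consultant1", "bank1"))  # True  - different class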
See also
Bell–LaPadula model
Biba model
Clark–Wilson model
Graham–Denning model
References
Harris, Shon, All-in-one CISSP Exam Guide, Third Edition, McGraw Hill Osborne, Emeryville, California, 2005.
Chapple, Mike, et al, Certified Information System Security Professional - Official Study Guide, Eighth Edition, Sybex, John Wiley & Sons, Indiana, 2018.
External links
Computer security models
|
https://en.wikipedia.org/wiki/Graham%E2%80%93Denning%20model
|
The Graham–Denning model is a computer security model that shows how subjects and objects should be securely created and deleted.
It also addresses how to assign specific access rights. It is mainly used in access control mechanisms for distributed systems. There are three main parts to the model: A set of subjects, a set of objects, and a set of eight rules. A subject may be a process or a user that makes a request to access a resource. An object is the resource that a user or process wants to access.
Features
This model addresses the security issues associated with how to define a set of basic rights on how specific subjects can execute security functions on an object.
The model has eight basic protection rules (actions) that outline:
How to securely create an object.
How to securely create a subject.
How to securely delete an object.
How to securely delete a subject.
How to securely provide the read access right.
How to securely provide the grant access right.
How to securely provide the delete access right.
How to securely provide the transfer access right.
Moreover, each object has an owner that has special rights on it, and each subject has another subject (controller) that has special rights on it.
The model is based on the Access Control Matrix model, where rows correspond to subjects and columns correspond to objects and subjects; each element contains a set of rights between subject i and object j or between subject i and subject k.
For example, an entry A[s,o] contains the rights that subject s has on object o (for example: {own, execute}).
When executing one of the 8 rules, for example creating an object, the matrix is changed: a new column is added for that object, and the subject that created it becomes its owner.
Each rule is associated with a precondition, for example if subject x wants to delete object o, it must be its owner (A[x,o] contains the 'owner' right).
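A minimal sketch of two of the eight rules operating on an access matrix (the names and rights are illustrative assumptions):

# Access matrix keyed by (subject, object); values are sets of rights.
matrix = {}

def create_object(subject, obj):
    matrix[(subject, obj)] = {"owner"}  # the creator becomes the owner

def delete_object(subject, obj):
    # Precondition: the requesting subject must own the object.
    if "owner" not in matrix.get((subject, obj), set()):
        raise PermissionError("subject does not own the object")
    for key in [k for k in matrix if k[1] == obj]:
        del matrix[key]  # remove the object's column from the matrix

create_object("alice", "report")
delete_object("alice", "report")  # succeeds; alice owns "report"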
Limitations
Harrison-Ruzzo-Ullman extended this model by defining
|
https://en.wikipedia.org/wiki/Security%20modes
|
Generally, security modes refer to information systems security modes of operation used in mandatory access control (MAC) systems. Often, these systems contain information at various levels of security classification. The mode of operation is determined by:
The type of users who will be directly or indirectly accessing the system.
The type of data, including classification levels, compartments, and categories, that are processed on the system.
The type of levels of users, their need to know, and formal access approvals that the users will have.
Dedicated security mode
In this mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for ALL information on the system.
A valid need to know for ALL information on the system.
All users can access ALL data.
System high security mode
In system high mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for ALL information on the system.
A valid need to know for SOME information on the system.
All users can access SOME data, based on their need to know.
Compartmented security mode
In this mode of operation, all users must have:
Signed NDA for ALL information on the system.
Proper clearance for ALL information on the system.
Formal access approval for SOME information they will access on the system.
A valid need to know for SOME information on the system.
All users can access SOME data, based on their need to know and formal access approval.
Multilevel security mode
In multilevel security mode of operation (also called Controlled Security Mode), all users must have:
Signed NDA for ALL information on the system.
Proper clearance for SOME information on the system.
Formal access approval for SOME information on the system.
A valid need to know for SOME information on the system.
All users can
|
https://en.wikipedia.org/wiki/Mohammad%20Fahim%20Dashty
|
Mohammad Fahim Dashty (c. 1973 – 4 or 5 September 2021) was an Afghan journalist, politician and military official. In 2021, he served as spokesman of the National Resistance Front of Afghanistan during the Republican insurgency in Afghanistan.
Biography
Born around 1973, Dashty was the nephew of Afghan politician, Abdullah Abdullah, and a close associate of the family of the Northern Alliance leader, Ahmad Shah Massoud. He was with Massoud when the latter was killed by a suicide bombing on 9 September 2001. Dashty was badly wounded in the bombing.
After the United States invasion of Afghanistan, Dashty founded a newspaper based at Kabul and became known for supporting journalists and advocating freedom of speech in Afghanistan. He was a leader of the Afghanistan National Journalist Union (ANJU) as well as a key figure in the Federation for "Afghan Journalists and Media Entities", founded in 2012. In addition, he contributed to the South Asia Press Freedom Report.
In 2021, following the takeover of Afghanistan by Taliban, Dashty joined the National Resistance Front of Afghanistan as a spokesman. Beforehand, he had reportedly refused offers of a government post by the Taliban. Dashty was one of the main sources of information in the Panjshir Valley as the Taliban pressed in, issuing statements on Twitter. Shortly before his death, he stated "If we die, history will write about us, as people who stood for their country till the end of the line". On 4 or 5 September 2021, Dashty was killed in combat during the Taliban offensive into Panjshir. His death was confirmed by his friend Noor Rahman Akhlaqi on Facebook as well as other sources. The Taliban claimed that he had died as they had advanced into Bazarak, capital of the Panjshir Province. In contrast, the International Federation of Journalists stated that he had died alongside General Abdul Wodo Zara at Dashtak, Anaba District. According to unspecified sources and defense analyst Babak Taghvaee, Dashty was killed b
|
https://en.wikipedia.org/wiki/Computer%20security%20model
|
A computer security model is a scheme for specifying and enforcing security policies. A security model may be founded upon a formal model of access rights, a model of computation, a model of distributed computing, or no particular theoretical grounding at all. A computer security model is implemented through a computer security policy.
For a more complete list of available articles on specific security models, see :Category:Computer security models.
Selected topics
Access control list (ACL)
Attribute-based access control (ABAC)
Bell–LaPadula model
Biba model
Brewer and Nash model
Capability-based security
Clark–Wilson model
Context-based access control (CBAC)
Graham–Denning model
Harrison–Ruzzo–Ullman (HRU)
High-water mark (computer security)
Lattice-based access control (LBAC)
Mandatory access control (MAC)
Multi-level security (MLS)
Non-interference (security)
Object-capability model
Protection ring
Role-based access control (RBAC)
Take-grant protection model
Discretionary access control (DAC)
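As a concrete illustration, the Bell–LaPadula model listed above reduces to two checks over ordered sensitivity levels: a subject may not read objects above its own level ("no read up"), and may not write to objects below it ("no write down"). The Python sketch below is an illustrative assumption, not a standard API; the level names are arbitrary:

LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple security property: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # Star (*) property: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]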
References
Krutz, Ronald L. and Vines, Russell Dean, The CISSP Prep Guide; Gold Edition, Wiley Publishing, Inc., Indianapolis, Indiana, 2003.
CISSP Boot Camp Student Guide, Book 1 (v.082807), Vigilar, Inc.
Computer security models
|
https://en.wikipedia.org/wiki/Operations%20security
|
Operations security (OPSEC) or operational security is a process that identifies critical information to determine whether friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information.
The term "operations security" was coined by the United States military during the Vietnam War.
History
Vietnam
In 1966, United States Admiral Ulysses Sharp established a multidisciplinary security team to investigate the failure of certain combat operations during the Vietnam War. This operation was dubbed Operation Purple Dragon, and included personnel from the National Security Agency and the Department of Defense.
When the operation concluded, the Purple Dragon team codified their recommendations. They called the process "Operations Security" in order to distinguish the process from existing processes and ensure continued inter-agency support.
NSDD 298
In 1988, President Ronald Reagan signed National Security Decision Directive (NSDD) 298. This document established the National Operations Security Program and named the Director of the National Security Agency as the executive agent for inter-agency OPSEC support. This document also established the Interagency OPSEC Support Staff (IOSS).
Private-sector application
The private sector has also adopted OPSEC as a defensive measure against competitive intelligence collection efforts.
See also
For Official Use Only – FOUO
Information security
Intelligence cycle security
Security
Security Culture
Sensitive but unclassified – SBU
Controlled Unclassified Information - CUI
Social engineering
References
Further reading
National Security Decision Directive 298
Purple Dragon: The Origin & Development of the United States OPSEC Program, NSA, 1993.
Operations Security (JP 3-13.3) PDF U.S. DoD Operations Security Doctrine.
External links
|
https://en.wikipedia.org/wiki/Ternary%20computer
|
A ternary computer, also called trinary computer, is one that uses ternary logic (i.e., base 3) instead of the more common binary system (i.e., base 2) in its calculations. This means it uses trits (instead of bits, as most computers do).
Types of states
Ternary computing deals with three discrete states, but the ternary digits themselves can be defined differently: unbalanced ternary uses the digits 0, 1 and 2, while balanced ternary uses −1, 0 and +1.
Ternary quantum computers use qutrits rather than trits. A qutrit is a quantum state that is a complex unit vector in three dimensions, which can be written as |ψ⟩ = α|0⟩ + β|1⟩ + γ|2⟩ in the bra-ket notation, where the amplitudes satisfy |α|² + |β|² + |γ|² = 1. The labels given to the basis vectors (|0⟩, |1⟩, |2⟩) can be replaced with other labels, for example those given above.
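A minimal numerical sketch of such a state, assuming NumPy and arbitrary illustrative amplitudes:

import numpy as np

amplitudes = np.array([1 + 1j, 0.5, -0.25j])   # arbitrary complex coefficients
psi = amplitudes / np.linalg.norm(amplitudes)  # normalise to a unit vector
# |alpha|^2 + |beta|^2 + |gamma|^2 == 1 for a valid qutrit state
assert np.isclose(np.vdot(psi, psi).real, 1.0)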
History
One early calculating machine, built entirely from wood by Thomas Fowler in 1840, operated in balanced ternary. The first modern, electronic ternary computer, Setun, was built in 1958 in the Soviet Union at Moscow State University by Nikolay Brusentsov, and it had notable advantages over the binary computers that eventually replaced it, such as lower electricity consumption and lower production cost. In 1970 Brusentsov built an enhanced version of the computer, which he called Setun-70. In the United States, Ternac, a ternary computing emulator running on a binary machine, was developed in 1973.
The ternary computer QTC-1 was developed in Canada.
Balanced ternary
Ternary computing is commonly implemented in terms of balanced ternary, which uses the three digits −1, 0, and +1. The negative value of any balanced ternary digit can be obtained by replacing every + with a − and vice versa. It is easy to subtract a number by inverting the + and − digits and then using normal addition. Balanced ternary can express negative values as easily as positive ones, without the need for a leading negative sign as with unbalanced numbers. These advantages make some calculations more efficient in ternary than binary. Considering that digit signs are mandatory, and nonzero digits are magnitude 1 only, notation th
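A short Python sketch of these properties (an illustrative example, not from the article): integers are converted to balanced ternary, and negation is performed by digit-wise sign inversion as described above.

def to_balanced_ternary(n: int) -> list:
    """Convert an integer to balanced ternary digits (-1, 0, +1), most significant first."""
    digits = []
    while n != 0:
        r = n % 3
        if r == 2:            # represent 2 as -1 with a carry into the next trit
            r = -1
        n = (n - r) // 3
        digits.append(r)
    return digits[::-1] or [0]

def negate(digits: list) -> list:
    # Negation is digit-wise sign inversion, as described above.
    return [-d for d in digits]

def from_balanced_ternary(digits: list) -> int:
    value = 0
    for d in digits:
        value = 3 * value + d
    return value

assert to_balanced_ternary(11) == [1, 1, -1]                      # 9 + 3 - 1
assert from_balanced_ternary(negate(to_balanced_ternary(11))) == -11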
|
https://en.wikipedia.org/wiki/Web%20content%20development
|
Web content development is the process of researching, writing, gathering, organizing, and editing information for publication on websites. Website content may consist of prose, graphics, pictures, recordings, movies, or other digital assets that could be distributed by a hypertext transfer protocol server, and viewed by a web browser.
Content developers and web developers
When the World Wide Web began, web developers either developed online content themselves or modified existing documents and coded them into hypertext markup language (HTML). In time, the field of website development came to encompass many technologies, so it became difficult for website developers to maintain so many different skills. Content developers are specialized website developers who have content generation skills such as graphic design, multimedia development, professional writing, and documentation. They can integrate content into new or existing websites without using information technology skills such as script language programming and database programming.
Content developers or technical content developers can also be technical writers who produce technical documentation that helps people understand and use a product or service. This documentation includes online help, manuals, white papers, design specifications, developer guides, deployment guides, release notes, etc.
Search engine optimization
Content developers may also be search engine optimization specialists or internet marketing professionals. High-quality, unique content is what search engines are looking for, so content development specialists have an important role to play in the search engine optimization process. One issue currently plaguing the world of web content development is keyword-stuffed content, prepared solely to manipulate search engine rankings. The effect is content written to appeal to search engine algorithms rather than to human readers. Search engine
|