https://en.wikipedia.org/wiki/Metabidiminished%20icosahedron
|
In geometry, the metabidiminished icosahedron is one of the Johnson solids (J62). The name refers to one way of constructing it: by removing two pentagonal pyramids (J2) from a regular icosahedron, replacing two sets of five triangular faces of the icosahedron with two adjacent pentagonal faces. If two pentagonal pyramids are instead removed to form nonadjacent pentagonal faces, the result is the pentagonal antiprism.
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/End-to-end%20delay
|
End-to-end delay or one-way delay (OWD) refers to the time taken for a packet to be transmitted across a network from source to destination. It is a common term in IP network monitoring, and differs from round-trip time (RTT) in that only the path in one direction, from source to destination, is measured.
Measurement
The ping utility measures the RTT, that is, the time to go and come back to a host. Half the RTT is often used as an approximation of OWD but this assumes that the forward and back paths are the same in terms of congestion, number of hops, or quality of service (QoS). This is not always a good assumption. To avoid such problems, the OWD may be measured directly.
Direct
OWDs may be measured between two points A and B of an IP network through the use of synchronized clocks: A records a timestamp on the packet and sends it to B, which notes the receiving time and calculates the OWD as their difference. The transmitted packets need to be identified at source and destination in order to detect packet loss or packet reordering. However, this method suffers from several limitations, such as requiring intensive cooperation between both parties, and the accuracy of the measured delay being subject to the synchronization precision.
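A minimal sketch of this direct method in Python (our own illustration; the names are hypothetical). The sender stamps each probe with a sequence number and send time, and the receiver computes the OWD as the difference, which is only meaningful if the two clocks are synchronized:

```python
import time

def make_probe(seq: int) -> dict:
    """Sender A: a probe carries a sequence number and a send timestamp."""
    return {"seq": seq, "sent_at": time.time()}

def one_way_delay(probe: dict) -> float:
    """Receiver B: OWD = receive time - send time.

    Any offset between A's and B's clocks appears directly as measurement
    error; the sequence number lets B detect loss and reordering.
    """
    return time.time() - probe["sent_at"]
```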
The Minimum-Pairs Protocol is an example by which several cooperating entities, A, B, and C, could measure OWDs between one of them and a fourth less cooperative one (e.g., between B and X).
Estimate
Transmission between two network nodes may be asymmetric, and the forward and reverse delays are then not equal. Half the RTT value is the average of the forward and reverse delays and so may sometimes be used as an approximation of the end-to-end delay. The accuracy of such an estimate depends on the nature of the delay distribution in both directions: as the delays in the two directions become more symmetric, the accuracy increases.
The probability mass function (PMF) of absolute error, E, between the smaller of the forward and reverse OWDs and their average
|
https://en.wikipedia.org/wiki/LAN%20Manager
|
LAN Manager is a discontinued network operating system (NOS) available from multiple vendors and developed by Microsoft in cooperation with 3Com Corporation. It was designed to succeed 3Com's 3+Share network server software which ran atop a heavily modified version of MS-DOS.
History
The LAN Manager OS/2 operating system was co-developed by IBM and Microsoft, using the Server Message Block (SMB) protocol. It originally used SMB atop either the NetBIOS Frames (NBF) protocol or a specialized version of the Xerox Network Systems (XNS) protocol. These legacy protocols had been inherited from previous products such as MS-Net for MS-DOS, Xenix-NET for MS-Xenix, and the afore-mentioned 3+Share. A version of LAN Manager for Unix-based systems called LAN Manager/X was also available. LAN Manager/X was the basis for Digital Equipment Corporation's Pathworks product for OpenVMS, Ultrix and Tru64.
In 1990, Microsoft announced LAN Manager 2.0 with a host of improvements, including support for TCP/IP as a transport protocol for SMB, using NetBIOS over TCP/IP (NBT). The last version of LAN Manager, 2.2, which included an MS-OS/2 1.31 base operating system, remained Microsoft's strategic server system until the release of Windows NT Advanced Server in 1993.
Versions
1987 – MS LAN Manager 1.0 (Basic/Enhanced)
1989 – MS LAN Manager 1.1
1991 – MS LAN Manager 2.0
1992 – MS LAN Manager 2.1
1992 – MS LAN Manager 2.1a
1993 – MS LAN Manager 2.2
1994 – MS LAN Manager 2.2a
Many vendors shipped licensed versions, including:
3Com Corporation 3+Open
HP LAN Manager/X
IBM LAN Server
Tapestry Torus
The Santa Cruz Operation
Password hashing algorithm
The LM hash is computed as follows:
The user's password is restricted to a maximum of fourteen characters.
The user's password is converted to uppercase.
The user's password is encoded in the System OEM code page.
This password is NULL-padded to 14 bytes.
The “fixed-length” password is split into two 7-byte halves.
These values
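The list above is cut off before the final steps; in the well-documented remainder of the algorithm, each 7-byte half is used as a DES key to encrypt the constant "KGS!@#$%", and the two ciphertexts are concatenated. A minimal sketch in Python (assuming the pycryptodome package; the helper names are ours, and the OEM code page step is simplified to ASCII):

```python
from Crypto.Cipher import DES  # pycryptodome

def _expand_des_key(half: bytes) -> bytes:
    """Spread 56 key bits across 8 bytes, leaving each byte's low (parity) bit clear."""
    k = bytearray(8)
    k[0] = half[0] & 0xFE
    k[1] = ((half[0] << 7) | (half[1] >> 1)) & 0xFE
    k[2] = ((half[1] << 6) | (half[2] >> 2)) & 0xFE
    k[3] = ((half[2] << 5) | (half[3] >> 3)) & 0xFE
    k[4] = ((half[3] << 4) | (half[4] >> 4)) & 0xFE
    k[5] = ((half[4] << 3) | (half[5] >> 5)) & 0xFE
    k[6] = ((half[5] << 2) | (half[6] >> 6)) & 0xFE
    k[7] = (half[6] << 1) & 0xFE
    return bytes(k)

def lm_hash(password: str) -> bytes:
    # Steps from the list above: uppercase, encode, truncate to 14, NULL-pad, split.
    pwd = password.upper().encode("ascii")[:14].ljust(14, b"\x00")
    magic = b"KGS!@#$%"  # fixed plaintext encrypted under each half
    return b"".join(
        DES.new(_expand_des_key(h), DES.MODE_ECB).encrypt(magic)
        for h in (pwd[:7], pwd[7:])
    )

# Well-known test vector:
# lm_hash("password").hex() == "e52cac67419a9a224a3b108f3fa6cb6d"
```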
|
https://en.wikipedia.org/wiki/Limit%20load%20%28physics%29
|
Limit load is the maximum load that a structure can safely carry. It is the load at which the structure is in a state of incipient plastic collapse. As the load on the structure increases, the displacement increases linearly in the elastic range until the load attains the yield value. Beyond this, the load-displacement response becomes non-linear and the plastic, or irreversible, part of the displacement increases steadily with the applied load. Plasticity spreads throughout the solid; at the limit load, the plastic zone becomes very large, the displacements become unbounded, and the component is said to have collapsed.
Any load above the limit load will lead to the formation of a plastic hinge in the structure. Engineers use limit states to define and check a structure's performance.
Bounding Theorems of Plastic-Limit Load Analysis:
Plastic limit theorems provide a way to calculate limit loads without having to solve the boundary value problem in continuum mechanics; finite element analysis provides an alternative way to estimate limit loads. The four bounding theorems are:
The Upper Bound Plastic Collapse Theorem
The Lower Bound Plastic Collapse Theorem
The Lower Bound Shakedown Theorem
The Upper Bound Shakedown Theorem
The Upper Bound Plastic Collapse Theorem states that an upper bound to the collapse loads can be obtained by postulating a collapse mechanism and computing the ratio of its plastic dissipation to the work done by the applied loads.
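In symbols (our notation, for a rigid-perfectly-plastic solid), the theorem can be sketched as follows: for any kinematically admissible collapse mechanism with velocity field $\dot{u}_i^*$, associated plastic strain rates $\dot{\varepsilon}_{ij}^*$, stresses $\sigma_{ij}^*$ at yield, and applied surface tractions $T_i$, the load multiplier

$$\lambda_{\mathrm{UB}} = \frac{\int_V \sigma_{ij}^{*}\,\dot{\varepsilon}_{ij}^{*}\,\mathrm{d}V}{\int_S T_i\,\dot{u}_i^{*}\,\mathrm{d}S}$$

satisfies $\lambda_{\mathrm{UB}} \ge \lambda_{\text{collapse}}$, so every trial mechanism yields an estimate on the unsafe side.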
References
Notes
Sources
Structural engineering
Continuum mechanics
|
https://en.wikipedia.org/wiki/IBM%20LAN%20Server
|
IBM LAN Server is a discontinued network operating system introduced by International Business Machines (IBM) in 1988. LAN Server started as a close cousin of Microsoft's LAN Manager and first shipped in early 1988. It was originally designed to run on top of Operating System/2 (OS/2) Extended Edition. The network client was called IBM LAN Requester and was included with OS/2 EE 1.1 by default. (Eventually IBM shipped other clients and supported yet more. Examples include the IBM OS/2 File/Print Client, IBM OS/2 Peer, and client software for Microsoft Windows.) Here the short term LAN Server refers to the IBM OS/2 LAN Server product. There were also LAN Server products for other operating systems, notably AIX—now called Fast Connect—and OS/400.
Version history
Predecessors included IBM PC LAN Program (PCLP). Variants included LAN Server Ultimedia (optimized for network delivery of multimedia files) and LAN On-Demand. Add-ons included Directory and Security Server, Print Services Facility/2 (later known as Advanced Printing), Novell NetWare for OS/2, and LAN Server for Macintosh.
Innovations
LAN Server pioneered certain file and print sharing concepts such as domains (and domain controllers), networked COM ports, domain aliases, and automatic printer driver selection and installation.
See also
LAN messenger
Server Message Block (SMB)
References
Further reading
Computer-related introductions in 1988
LAN Server
Network operating systems
OS/2
Servers (computing)
|
https://en.wikipedia.org/wiki/Foster%27s%20rule
|
Foster's rule, also known as the island rule or the island effect, is an ecogeographical rule in evolutionary biology stating that members of a species get smaller or bigger depending on the resources available in the environment. For example, it is known that pygmy mammoths evolved from normal mammoths on small islands. Similar evolutionary paths have been observed in elephants, hippopotamuses, boas, sloths, deer (such as Key deer) and humans. It is part of the more general phenomenon of island syndrome which describes the differences in morphology, ecology, physiology and behaviour of insular species compared to their continental counterparts.
The rule was first formulated by Leigh Van Valen in 1973, based on a 1964 study by mammalogist J. Bristol Foster in which Foster compared 116 island species to their mainland varieties. Foster proposed that certain island creatures evolved larger body size (insular gigantism) while others became smaller (insular dwarfism). Foster proposed the simple explanation that smaller creatures get larger when predation pressure is relaxed because of the absence of some of the predators of the mainland, and larger creatures become smaller when food resources are limited because of land area constraints.
The idea was expanded upon in The Theory of Island Biogeography, by Robert MacArthur and Edward O. Wilson. In 1978, Ted J. Case published a longer paper on the topic in the journal Ecology.
Recent literature has also applied the island rule to plants.
There are some cases that do not neatly fit the rule; for example, artiodactyls have on several islands evolved into both dwarf and giant forms.
The Island Rule is a contested topic in evolutionary biology. Some argue that, since body size is a trait that is affected by multiple factors, and not just by organisms moving to an island, genetic variations across all populations could also cause the body mass differences between mainland and island populations.
References
External links
|
https://en.wikipedia.org/wiki/Bathtub%20curve
|
The bathtub curve is a particular shape of a failure rate graph. This graph is used in reliability engineering and deterioration modeling. The 'bathtub' refers to the shape of a line that curves up at both ends, similar in shape to a bathtub. The bathtub curve has 3 regions:
The first region has a decreasing failure rate due to early failures.
The middle region is a constant failure rate due to random failures.
The last region is an increasing failure rate due to wear-out failures.
Not all products exhibit a bathtub curve failure rate. A product is said to follow the bathtub curve if in the early life of a product, the failure rate decreases as defective products are identified and discarded, and early sources
of potential failure such as manufacturing defects or damage during transit are detected. In the mid-life of a product the failure rate is constant. In the later life of the product, the failure rate increases due to wearout. Many electronic consumer product life cycles follow the bathtub curve. It is difficult to know where a product is along the bathtub curve, or even if the bathtub curve is applicable to a certain product without large amounts of products in use and associated failure rate data.
If products are retired early or have decreased usage near their end of life, the product may show fewer failures per unit calendar time (but not per unit use time) than the bathtub curve predicts.
In reliability engineering, the cumulative distribution function corresponding to a bathtub curve may be analysed using a Weibull chart or in a reliability contour map.
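As a minimal sketch (our own illustration), the Weibull hazard function $h(t) = (k/\lambda)(t/\lambda)^{k-1}$ reproduces each of the three bathtub regions depending on the shape parameter k:

```python
def weibull_hazard(t: float, k: float, lam: float) -> float:
    """Weibull failure (hazard) rate: decreasing for k < 1, constant for k = 1, increasing for k > 1."""
    return (k / lam) * (t / lam) ** (k - 1)

for k, region in [(0.5, "early failures (decreasing)"),
                  (1.0, "useful life (constant)"),
                  (3.0, "wear-out (increasing)")]:
    rates = [round(weibull_hazard(t, k, lam=10.0), 3) for t in (1.0, 5.0, 9.0)]
    print(f"k={k}: h(1), h(5), h(9) = {rates}  <- {region}")
```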
See also
Gompertz–Makeham law of mortality
References
Further reading
Reliability engineering
Engineering failures
Curves
|
https://en.wikipedia.org/wiki/Certificate%20signing%20request
|
In public key infrastructure (PKI) systems, a certificate signing request (CSR or certification request) is a message sent from an applicant to a certificate authority in order to apply for a digital identity certificate. The CSR usually contains the public key for which the certificate should be issued, identifying information (such as a domain name) and a proof of authenticity including integrity protection (e.g., a digital signature). The most common format for CSRs is the PKCS #10 specification; others include the more capable Certificate Request Message Format (CRMF) and the SPKAC (Signed Public Key and Challenge) format generated by some web browsers.
Procedure
Before creating a CSR for an X.509 certificate, the applicant first generates a key pair, keeping the private key of that pair secret. The CSR contains information identifying the applicant (such as a distinguished name), the public key chosen by the applicant, and possibly further information. When using the PKCS #10 format, the request must be self-signed using the applicant's private key, which provides proof-of-possession of the private key but limits the use of this format to keys that can be used for signing. The CSR should be accompanied by a proof of origin (i.e., proof of identity of the applicant) that is required by the certificate authority, and the certificate authority may contact the applicant for further information.
The information typically required in a CSR mirrors the subject fields of an X.509 certificate; there are often alternative forms for the Distinguished Name (DN) attributes.
If the request is successful, the certificate authority will send back an identity certificate that has been digitally signed using the private key of the certificate authority.
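A minimal sketch of this procedure (our own example, using recent versions of the pyca/cryptography package; the subject values are hypothetical):

```python
from cryptography import x509
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import rsa
from cryptography.x509.oid import NameOID

# 1. Generate a key pair; the private key stays with the applicant.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# 2. Build a PKCS #10 CSR carrying the subject DN and the public key,
#    self-signed with the private key as proof of possession.
csr = (
    x509.CertificateSigningRequestBuilder()
    .subject_name(x509.Name([
        x509.NameAttribute(NameOID.COUNTRY_NAME, "US"),
        x509.NameAttribute(NameOID.ORGANIZATION_NAME, "Example Corp"),
        x509.NameAttribute(NameOID.COMMON_NAME, "www.example.com"),
    ]))
    .sign(key, hashes.SHA256())
)

# 3. Serialize to PEM for submission to the certificate authority.
print(csr.public_bytes(serialization.Encoding.PEM).decode())
```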
Structure of a PKCS #10 CSR
A certification request in PKCS #10 format consists of three main parts: the certification request information, a signature algorithm identif
|
https://en.wikipedia.org/wiki/Long%20hundred
|
The long hundred, also known as the great hundred or twelfty, is the number 120 (in base-10 Hindu-Arabic numerals) that was referred to as hund, hund-teontig, hundrað, hundrath, or hundred in Germanic languages prior to the 15th century, and is now known as one hundred twenty, or six score. The number was translated into Latin in Germanic-speaking countries as centum (Roman numeral C), but the qualifier long is now added because English now uses hundred exclusively to refer to the number of five score (100) instead.
The long hundred was 120, but the long thousand was reckoned decimally as 10 long hundreds (1200).
English unit
The hundred () was an English unit of measurement used in the production, sale and taxation of various items in the medieval kingdom of England. The value was often different from 100 units, mostly because of the continued medieval use of the Germanic long hundred of 120. The unit's use as a measure of weight is now described as a hundredweight.
The Latin edition of the Assize of Weights and Measures, one of the statutes of uncertain date from around 1300, describes hundreds of (red) herring (a long hundred of 120 fish), beeswax, sugar, pepper, cumin, and alum ("13½ stone, each stone containing 8 pounds" or 108 Tower lbs.), coarse and woven linen, hemp canvas (a long hundred of 120 ells), and iron or horseshoes and shillings (a short hundred of 100 pieces). Later versions used the Troy or the avoirdupois pound in their reckonings instead and included hundreds of fresh herrings (a short hundred of 100 fish), cinnamon, nutmegs (13½ stone of 8 lb), and garlic ("15 ropes of 15 heads" or 225 heads).
History
The existence of a non-decimal base in the earliest traces of the Germanic languages is attested by the presence of glosses such as "tenty-wise" or "ten-count" to denote that certain numbers are to be understood as decimal. Such glosses would not be expected where decimal counting was usual. In the Gothic Bible, some marginalia glosses a five hundred ()
|
https://en.wikipedia.org/wiki/ADC%20Telecommunications
|
ADC Telecommunications was a communications company located in Eden Prairie, Minnesota, a southwest suburb of Minneapolis. It was acquired by TE Connectivity (Tyco Electronics) in December 2010 and ceased to exist as a separate entity. It vacated its Eden Prairie location in May 2011 and moved staff and resources to other locations. ADC products were sold by CommScope after it acquired the Broadband Network Solutions business unit (including ADC) from TE Connectivity in August 2015.
History
In 1935, fellow engineers Ralph Allison and Walter Lehnert were each operating business efforts out of their respective basements; Ralph Allison was building audio amplifiers and Walter Lehnert was building transformers. In the fall of 1936, the two combined their efforts to form the Audio Development Company (ADC). The company was later renamed to ADC Telecommunications, Inc.
During their first year in business, ADC built hearing aids and audiometers—a machine used for evaluating hearing acuity. Initially the audiometers were built for Maico, but in 1945 ADC began building audiometers under its own name. Additionally, by 1942, the company had designed a sophisticated audio system for the University of Minnesota, and the resulting jacks, plugs, patch cords and jackfields became the cornerstones for ADC's later entry into telecommunications.
In 1949, ADC sold its audiometer product line and Ralph Allison left the company to form a new business in California. With Walter Lehnert remaining as president of the company, ADC diversified and focused its efforts in the area of transformers and filters for power lines, military electronics, telephone jacks and plugs.
In 1961, ADC merged with Magnetic Controls Company, a manufacturer of power supplies and magnetic amplifiers with strong ties to the U.S. space program. The resulting company, ADC Magnetic Controls, had a decade of mixed success. Although transformer sales boomed during the 1960s, other new product initiatives failed to
|
https://en.wikipedia.org/wiki/Optimization%20problem
|
In mathematics, engineering, computer science and economics, an optimization problem is the problem of finding the best solution from all feasible solutions.
Optimization problems can be divided into two categories, depending on whether the variables are continuous or discrete:
An optimization problem with discrete variables is known as a discrete optimization problem, in which an object such as an integer, permutation or graph must be found from a countable set.
A problem with continuous variables is known as a continuous optimization problem, in which an optimal value from a continuous function must be found. Continuous problems can include constrained problems and multimodal problems.
Continuous optimization problem
The standard form of a continuous optimization problem is
$$\begin{aligned} &\underset{x}{\operatorname{minimize}} && f(x) \\ &\text{subject to} && g_i(x) \leq 0, \quad i = 1, \dots, m \\ &&& h_j(x) = 0, \quad j = 1, \dots, p \end{aligned}$$
where
$f : \mathbb{R}^n \to \mathbb{R}$ is the objective function to be minimized over the $n$-variable vector $x$,
$g_i(x) \leq 0$ are called inequality constraints,
$h_j(x) = 0$ are called equality constraints, and
$m \geq 0$ and $p \geq 0$.
If $m = p = 0$, the problem is an unconstrained optimization problem. By convention, the standard form defines a minimization problem. A maximization problem can be treated by negating the objective function.
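A minimal sketch (our own example, using scipy.optimize) of a problem in this standard form, with one inequality and one equality constraint:

```python
from scipy.optimize import minimize

# Objective f(x) and constraints; note SciPy's "ineq" convention is fun(x) >= 0,
# so g(x) <= 0 is passed as -g(x) >= 0.
f = lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2
cons = [
    {"type": "ineq", "fun": lambda x: 4 - (x[0] + x[1])},  # x0 + x1 <= 4
    {"type": "eq",   "fun": lambda x: x[0] - x[1]},        # x0 = x1
]

res = minimize(f, x0=[0.0, 0.0], constraints=cons)
print(res.x)  # approximately [1.5, 1.5]
```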
Combinatorial optimization problem
Formally, a combinatorial optimization problem is a quadruple $(I, f, m, g)$, where
$I$ is a set of instances;
given an instance $x \in I$, $f(x)$ is the set of feasible solutions;
given an instance $x$ and a feasible solution $y$ of $x$, $m(x, y)$ denotes the measure of $y$, which is usually a positive real;
$g$ is the goal function, and is either $\min$ or $\max$.
The goal is then to find for some instance $x$ an optimal solution, that is, a feasible solution $y$ with
$$m(x, y) = g\{\, m(x, y') \mid y' \in f(x) \,\}.$$
For each combinatorial optimization problem, there is a corresponding decision problem that asks whether there is a feasible solution for some particular measure $m_0$. For example, if there is a graph $G$ which contains vertices $u$ and $v$, an optimization problem might be "find a path from $u$ to $v$ that uses the fewest edges". This problem might have an answer of, say, 4. A corresponding decision problem would be "is there a path from $u$ to $v$ th
|
https://en.wikipedia.org/wiki/Temporally%20ordered%20routing%20algorithm
|
The Temporally Ordered Routing Algorithm (TORA) is an algorithm for routing data across Wireless Mesh Networks or Mobile ad hoc networks.
It was developed by Vincent Park and Scott Corson at the University of Maryland and the Naval Research Laboratory. Park has patented his work, and it was licensed by Nova Engineering, who are marketing a wireless router product based on Park's algorithm.
Operation
The TORA attempts to achieve a high degree of scalability using a "flat", non-hierarchical routing algorithm. In its operation the algorithm attempts to suppress, to the greatest extent possible, the generation of far-reaching control message propagation. In order to achieve this, the TORA does not use a shortest path solution, an approach which is unusual for routing algorithms of this type.
TORA builds and maintains a Directed Acyclic Graph (DAG) rooted at a destination; each node is assigned a height, and no two nodes may have the same height.
Information may flow from nodes with higher heights to nodes with lower heights. Information can therefore be thought of as a fluid that may only flow downhill. By maintaining a set of totally ordered heights at all times, TORA achieves loop-free multipath routing, as information cannot 'flow uphill' and so cross back on itself.
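A toy sketch of this core idea (our own illustration, not the full protocol): with a total order on heights, valid next hops are exactly the neighbors of strictly lower height, so forwarding is loop-free and naturally multipath:

```python
# Destination-rooted DAG: heights strictly decrease toward DST.
heights = {"A": 4, "B": 3, "C": 2, "D": 1, "DST": 0}
links = {"A": ["B", "C"], "B": ["D"], "C": ["D"], "D": ["DST"]}

def downstream(node: str) -> list[str]:
    """Neighbors with strictly lower height: the only valid next hops."""
    return [n for n in links.get(node, []) if heights[n] < heights[node]]

print(downstream("A"))  # ['B', 'C'] -> two loop-free paths toward DST
```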
A key design concept of TORA is the localization of control messages to a very small set of nodes near the occurrence of a topological change. To accomplish this, nodes maintain routing information about their adjacent (one-hop) neighbors. The protocol performs three basic functions:
Route creation
Route maintenance
Route erasure
During the route creation and maintenance phases, nodes use a height metric to establish a directed acyclic graph (DAG) rooted at the destination. Thereafter, links are assigned a direction based on the relative heights of neighboring nodes. During times of mobility the DAG is broken, and route maintenance comes into play to re-establish a DAG rooted at the destination.
Timing is an important factor
|
https://en.wikipedia.org/wiki/Invariant%20%28mathematics%29
|
In mathematics, an invariant is a property of a mathematical object (or a class of mathematical objects) which remains unchanged after operations or transformations of a certain type are applied to the objects. The particular class of objects and type of transformations are usually indicated by the context in which the term is used. For example, the area of a triangle is an invariant with respect to isometries of the Euclidean plane. The phrases "invariant under" and "invariant to" a transformation are both used. More generally, an invariant with respect to an equivalence relation is a property that is constant on each equivalence class.
Invariants are used in diverse areas of mathematics such as geometry, topology, algebra and discrete mathematics. Some important classes of transformations are defined by an invariant they leave unchanged. For example, conformal maps are defined as transformations of the plane that preserve angles. The discovery of invariants is an important step in the process of classifying mathematical objects.
Examples
A simple example of invariance is expressed in our ability to count. For a finite set of objects of any kind, there is a number to which we always arrive, regardless of the order in which we count the objects in the set. The quantity—a cardinal number—is associated with the set, and is invariant under the process of counting.
An identity is an equation that remains true for all values of its variables. There are also inequalities that remain true when the values of their variables change.
The distance between two points on a number line is not changed by adding the same quantity to both numbers. On the other hand, multiplication does not have this same property, as distance is not invariant under multiplication.
Angles and ratios of distances are invariant under scalings, rotations, translations and reflections. These transformations produce similar shapes, which is the basis of trigonometry. In contrast, angles and ratios a
|
https://en.wikipedia.org/wiki/Invariant%20%28physics%29
|
In theoretical physics, an invariant is an observable of a physical system which remains unchanged under some transformation. Invariance, as a broader term, also applies to the no change of form of physical laws under a transformation, and is closer in scope to the mathematical definition. Invariants of a system are deeply tied to the symmetries imposed by its environment.
Invariance is an important concept in modern theoretical physics, and many theories are expressed in terms of their symmetries and invariants.
Examples
In classical and quantum mechanics, invariance of space under translation results in momentum being an invariant and the conservation of momentum, whereas invariance of the origin of time, i.e. translation in time, results in energy being an invariant and the conservation of energy. In general, by Noether's theorem, any invariance of a physical system under a continuous symmetry leads to a fundamental conservation law.
In crystals, the electron density is periodic and invariant with respect to discrete translations by unit cell vectors. In very few materials, this symmetry can be broken due to enhanced electron correlations.
Other examples of physical invariants are the speed of light, and the charge and mass of a particle observed from two reference frames moving with respect to one another (invariance under a spacetime Lorentz transformation), and the invariance of time and acceleration under a Galilean transformation between two such frames moving at low velocities.
Quantities can be invariant under some common transformations but not under others. For example, the velocity of a particle is invariant when switching coordinate representations from rectangular to curvilinear coordinates, but is not invariant when transforming between frames of reference that are moving with respect to each other. Other quantities, like the speed of light, are always invariant.
Physical laws are said to be invariant under transformations when their predictions r
|
https://en.wikipedia.org/wiki/Daily%20build
|
A daily build or nightly build is the practice of completing a software build of the latest version of a program on a daily basis. This is done so that the program can first be compiled to ensure that all required dependencies are present, and possibly tested to show that no bugs have been introduced. The daily build is also often made publicly available, allowing access to the latest features for feedback.
In this context, a build is the result of compiling and linking all the files that make up a program. The use of such disciplined procedures as daily builds is particularly necessary in large organizations where many programmers are working on a single piece of software. Performing daily builds helps ensure that developers can work knowing with reasonable certainty that any new bugs that show up are a result of their own work done within the last day.
Daily builds typically include a set of tests, sometimes called a "smoke test." These tests are included to assist in determining what may have been broken by the changes included in the latest build. The critical piece of this process is to include new and revised tests as the project progresses.
Continuous integration builds
Although daily builds were considered a best practice of software development in the 1990s, they have now been superseded. Continuous integration builds are now run on an almost continual basis, with a typical cycle time of around 20–30 minutes since the last change to the source code. Continuous integration servers continually monitor the source code control system. When these servers detect new changes, they use a build tool to rebuild the software. Good practice today is also to use continuous integration as part of continuous testing, so that unit tests are re-run for each build, and more extensive functional testing (which takes longer to perform than the build) is performed as frequently as its duration permits.
See also
Neutral build
Smoke testing in software development
External links
IEEE Best software practices
|
https://en.wikipedia.org/wiki/Distributed%20Interactive%20Simulation
|
Distributed Interactive Simulation (DIS) is an IEEE standard for conducting real-time platform-level wargaming across multiple host computers and is used worldwide, especially by military organizations but also by other agencies such as those involved in space exploration and medicine.
History
The standard was developed over a series of "DIS Workshops" at the Interactive Networked Simulation for Training symposium, held by the University of Central Florida's Institute for Simulation and Training (IST). The standard itself is very closely patterned after the original SIMNET distributed interactive simulation protocol, developed by Bolt, Beranek and Newman (BBN) for the Defense Advanced Research Projects Agency (DARPA) in the early through late 1980s. BBN introduced the concept of dead reckoning to efficiently transmit the state of battlefield entities.
In the early 1990s, IST was contracted by DARPA to undertake research in support of the US Army Simulator Network (SIMNET) program. Funding and research interest for DIS standards development decreased following the proposal and promulgation of its successor, the High Level Architecture (HLA), in 1996. HLA was produced by the merger of the DIS protocol with the Aggregate Level Simulation Protocol (ALSP) designed by MITRE.
There was a NATO standardisation agreement (STANAG 4482, Standardised Information Technology Protocols for Distributed Interactive Simulation (DIS), adopted in 1995) on DIS for modelling and simulation interoperability. This was retired in favour of HLA in 1998 and officially cancelled in 2010 by the NATO Standardization Agency (NSA).
The DIS family of standards
DIS is defined under IEEE Standard 1278:
IEEE 1278-1993 - Standard for Distributed Interactive Simulation - Application protocols
IEEE 1278.1-1995 - Standard for Distributed Interactive Simulation - Application protocols
|
https://en.wikipedia.org/wiki/Ivar%20Jacobson
|
Ivar Hjalmar Jacobson (born 1939) is a Swedish computer scientist and software engineer, known as a major contributor to UML, Objectory, the Rational Unified Process (RUP), aspect-oriented software development, and Essence.
Biography
Ivar Jacobson was born in Ystad, Sweden, on September 2, 1939. He received his Master of Electrical Engineering degree at Chalmers University of Technology in Gothenburg in 1962. After his work at Ericsson, he formalized the language and method he had been working on in his PhD at the Royal Institute of Technology in Stockholm in 1985, with the thesis Language Constructs for Large Real Time Systems.
After his master's degree, Jacobson joined Ericsson and worked in R&D on the computerized switching systems AKE and AXE, including PLEX. After his PhD thesis, in April 1987 he started Objective Systems with Ericsson as a major customer. A majority stake of the company was acquired by Ericsson in 1991, and the company was renamed Objectory AB. Jacobson developed the software method Object-Oriented Software Engineering (OOSE), published in 1992, which was a simplified version of the commercial software process Objectory (short for Object Factory).
In October 1995, Ericsson divested Objectory to Rational Software, and Jacobson started working with Grady Booch and James Rumbaugh, known collectively as the Three Amigos.
When IBM bought Rational in 2003, Jacobson decided to leave, though he stayed on until May 2004 as an executive technical consultant.
In mid-2003 Jacobson formed Ivar Jacobson International (IJI) which operates across three continents with offices in the UK, the US, Sweden, Switzerland, China, and Singapore.
Work
Ericsson
In 1967 at Ericsson, Jacobson proposed the use of software components in the new generation of software controlled telephone switches Ericsson was developing. In doing this he invented sequence diagrams, and developed collaboration diagrams. He also used state transition diagrams to describe the message flows between comp
|
https://en.wikipedia.org/wiki/Hot%20and%20high
|
In aviation, hot and high is a condition of low air density due to high ambient temperature and high airport elevation. Air density decreases with increasing temperature and altitude. The lower air density reduces the power output from the aircraft's engine and also requires a higher true airspeed before the aircraft can become airborne. Aviators gauge air density by calculating the density altitude.
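A common rule-of-thumb sketch (our own illustration, not from this article): density altitude rises by roughly 120 ft for each degree Celsius that the outside air temperature exceeds the ISA standard temperature for the field's pressure altitude:

```python
def density_altitude_ft(pressure_alt_ft: float, oat_c: float) -> float:
    """Approximate density altitude: +120 ft per degree C above ISA."""
    isa_temp_c = 15.0 - 2.0 * (pressure_alt_ft / 1000.0)  # ISA lapse ~2 C / 1000 ft
    return pressure_alt_ft + 120.0 * (oat_c - isa_temp_c)

# A 5,000 ft airport at 35 C "performs" like one at roughly 8,600 ft:
print(round(density_altitude_ft(5000.0, 35.0)))  # 8600
```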
An airport may be especially hot or high, without the other condition being present. Temperature and pressure altitude can change from one hour to the next. The fact that temperature decreases as altitude increases mitigates the "hot and high" effect to a small extent.
Negative effects of reduced engine power due to hot and high conditions
Airplanes require a longer takeoff run, potentially exceeding the amount of available runway.
Reduced take-off power hampers an aircraft's ability to climb. In some cases, an aircraft may be unable to climb rapidly enough to clear terrain surrounding a mountain airport.
Helicopters may be forced to operate in the shaded portion of the height-velocity diagram in order to become airborne at all. This creates the potential for an uncontrollable descent in the event of an engine failure.
In some cases, aircraft have landed at high-altitude airports by taking advantage of cold temperatures only to become stranded as temperatures warmed and air density decreased.
While unsafe at any altitude, an overloaded aircraft is much more dangerous under hot and high conditions.
Improving hot and high performance
Some ways to increase aircraft performance in hot and high conditions include:
Reduce aircraft weight. Weight can be reduced by carrying only enough fuel to reach the (lower-altitude) destination rather than filling the tanks completely. In some cases, unnecessary equipment can be removed from the aircraft. In many cases, however, the only practical way to adequately reduce aircraft weight is to depart with a smaller passenger, cargo
|
https://en.wikipedia.org/wiki/Pisano%20period
|
In number theory, the nth Pisano period, written as π(n), is the period with which the sequence of Fibonacci numbers taken modulo n repeats. Pisano periods are named after Leonardo Pisano, better known as Fibonacci. The existence of periodic functions in Fibonacci numbers was noted by Joseph Louis Lagrange in 1774.
Definition
The Fibonacci numbers are the numbers in the integer sequence:
0, 1, 1, 2, 3, 5, 8, 13, 21, 34, 55, 89, 144, 233, 377, 610, 987, 1597, 2584, 4181, 6765, 10946, 17711, 28657, 46368, ...
defined by the recurrence relation
$$F_0 = 0, \quad F_1 = 1, \quad F_n = F_{n-1} + F_{n-2} \text{ for } n \geq 2.$$
For any integer n, the sequence of Fibonacci numbers $F_i$ taken modulo n is periodic.
The Pisano period, denoted π(n), is the length of the period of this sequence. For example, the sequence of Fibonacci numbers modulo 3 begins:
0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, 2, 2, 1, 0, 1, 1, 2, 0, 2, 2, 1, 0, ...
This sequence has period 8, so π(3) = 8.
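A minimal sketch (our own implementation) that computes π(n) by stepping the recurrence modulo n until the initial pair (0, 1) recurs:

```python
def pisano(n: int) -> int:
    """Length of the period of the Fibonacci sequence modulo n."""
    if n == 1:
        return 1
    a, b = 0, 1
    for period in range(1, n * n + 1):  # at most n^2 distinct pairs, so this terminates
        a, b = b, (a + b) % n
        if (a, b) == (0, 1):
            return period
    raise AssertionError("unreachable: the pair sequence must cycle")

print(pisano(3))   # 8
print(pisano(10))  # 60
```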
Properties
With the exception of π(2) = 3, the Pisano period π(n) is always even.
A simple proof of this can be given by observing that π(n) is equal to the order of the Fibonacci matrix
$$Q = \begin{pmatrix} 1 & 1 \\ 1 & 0 \end{pmatrix}$$
in the general linear group GL2(ℤn) of invertible 2 by 2 matrices in the finite ring ℤn of integers modulo n. Since Q has determinant −1, the determinant of Q^π(n) is (−1)^π(n), and since this must equal 1 in ℤn, either n ≤ 2 or π(n) is even.
If m and n are coprime, then π(mn) is the least common multiple of π(m) and π(n), by the Chinese remainder theorem. For example, π(3) = 8 and π(4) = 6 imply π(12) = 24. Thus the study of Pisano periods may be reduced to that of Pisano periods of prime powers q = p^k, for k ≥ 1.
If p is prime, π(p^k) divides p^(k−1) π(p). It is unknown if
$$\pi(p^k) = p^{k-1}\,\pi(p)$$
for every prime p and integer k > 1. Any prime p providing a counterexample would necessarily be a Wall–Sun–Sun prime, and conversely every Wall–Sun–Sun prime p gives a counterexample (set k = 2).
So the study of Pisano periods may be further reduced to that of Pisano periods of primes. In this regard, two primes are anomalous. The prime 2
|
https://en.wikipedia.org/wiki/Tuple-versioning
|
Tuple-versioning (also called point-in-time) is a mechanism used in a relational database management system to store past states of a relation. Normally, only the current state is captured.
Using tuple-versioning techniques, typically two values for time are stored along with each tuple: a start time and an end time. These two values indicate the validity of the rest of the values in the tuple.
Typically when tuple-versioning techniques are used, the current tuple has a valid start time, but a null value for end time. Therefore, it is easy and efficient to obtain the current values for all tuples by querying for the null end time.
A single query that searches for tuples with start time less than, and end time greater than, a given time (where null end time is treated as a value greater than the given time) will give as a result the valid tuples at the given time.
For example, if a person's job changes from Engineer to Manager, there would be two tuples in an Employee table, one with the value Engineer for job and the other with the value Manager for job. The end time for the Engineer tuple would be equal to the start time for the Manager tuple.
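A toy sketch of this scheme in Python (our own illustration), mirroring the Engineer/Manager example; a null (None) end time marks the current tuple, and the validity test implements the query described above:

```python
from datetime import date

# Each tuple: (name, job, start, end); end=None means "current version".
employee = [
    ("Alice", "Engineer", date(2015, 1, 1), date(2019, 7, 1)),
    ("Alice", "Manager",  date(2019, 7, 1), None),
]

def valid_at(rows, t):
    """Tuples valid at time t: start <= t and (end is null or t < end)."""
    return [r for r in rows if r[2] <= t and (r[3] is None or t < r[3])]

print(valid_at(employee, date(2018, 1, 1)))  # the Engineer tuple
print(valid_at(employee, date(2024, 1, 1)))  # the current Manager tuple
```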
The pattern known as log trigger uses this technique to automatically store historical information of a table in a database.
See also
Temporal database
Bitemporal data
Log trigger
References
Data modeling
|
https://en.wikipedia.org/wiki/Hamming%20weight
|
The Hamming weight of a string is the number of symbols that are different from the zero-symbol of the alphabet used. It is thus equivalent to the Hamming distance from the all-zero string of the same length. For the most typical case, a string of bits, this is the number of 1's in the string, or the digit sum of the binary representation of a given number and the ℓ₁ norm of a bit vector. In this binary case, it is also called the population count, popcount, sideways sum, or bit summation.
History and usage
The Hamming weight is named after Richard Hamming although he did not originate the notion. The Hamming weight of binary numbers was already used in 1899 by James W. L. Glaisher to give a formula for the number of odd binomial coefficients in a single row of Pascal's triangle. Irving S. Reed introduced a concept, equivalent to Hamming weight in the binary case, in 1954.
Hamming weight is used in several disciplines including information theory, coding theory, and cryptography. Examples of applications of the Hamming weight include:
In modular exponentiation by squaring, the number of modular multiplications required for an exponent e is log2 e + weight(e). This is the reason that the public key value e used in RSA is typically chosen to be a number of low Hamming weight.
The Hamming weight determines path lengths between nodes in Chord distributed hash tables.
IrisCode lookups in biometric databases are typically implemented by calculating the Hamming distance to each stored record.
In computer chess programs using a bitboard representation, the Hamming weight of a bitboard gives the number of pieces of a given type remaining in the game, or the number of squares of the board controlled by one player's pieces, and is therefore an important contributing term to the value of a position.
Hamming weight can be used to efficiently compute find first set using the identity ffs(x) = pop(x ^ (x - 1)). This is useful on platforms such as SPARC that have hardware H
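A minimal sketch (our own implementation) of a classic branch-free population count for 32-bit words, plus the find-first-set identity quoted above:

```python
def popcount32(x: int) -> int:
    """Population count via parallel bit sums (the classic divide-and-conquer trick)."""
    x &= 0xFFFFFFFF
    x -= (x >> 1) & 0x55555555                      # sums of bit pairs
    x = (x & 0x33333333) + ((x >> 2) & 0x33333333)  # sums of nibbles
    x = (x + (x >> 4)) & 0x0F0F0F0F                 # sums of bytes
    return ((x * 0x01010101) & 0xFFFFFFFF) >> 24    # add the four byte sums

def ffs(x: int) -> int:
    """Find first set, 1-indexed: ffs(x) = pop(x ^ (x - 1))."""
    return popcount32(x ^ (x - 1)) if x else 0

assert popcount32(0b1011101) == 5
assert ffs(0b1011000) == 4
# Python 3.10+ exposes the same operation as int.bit_count().
```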
|
https://en.wikipedia.org/wiki/Isoschizomer
|
Isoschizomers are pairs of restriction enzymes specific to the same recognition sequence. For example, SphI (GCATG/C) and BbuI (GCATG/C) are isoschizomers of each other. The first enzyme discovered which recognizes a given sequence is known as the prototype; all subsequently identified enzymes that recognize that sequence are isoschizomers. Isoschizomers are isolated from different strains of bacteria and therefore may require different reaction conditions.
In some cases, only one out of a pair of isoschizomers can recognize both the methylated as well as unmethylated forms of restriction sites. In contrast, the other restriction enzyme can recognize only the unmethylated form of the restriction site.
This property of some isoschizomers allows the methylation state of the restriction site to be identified when DNA is isolated from a bacterial strain.
For example, the restriction enzymes HpaII and MspI are isoschizomers, as they both recognize the sequence 5'-CCGG-3' when it is unmethylated. But when the second C of the sequence is methylated, only MspI can recognize it while HpaII cannot.
An enzyme that recognizes the same sequence but cuts it differently is a neoschizomer. Neoschizomers are a specific type (subset) of isoschizomer. For example, SmaI (CCC/GGG) and XmaI (C/CCGGG) are neoschizomers of each other. Similarly KpnI (GGTAC/C) and Acc65I (G/GTACC) are neoschizomers of each other.
An enzyme that recognizes a slightly different sequence, but produces the same ends is an isocaudomer.
References
See also
Molecular biology
Biotechnology
Restriction enzymes
|
https://en.wikipedia.org/wiki/Pityriasis%20rosea
|
Pityriasis rosea is a type of skin rash. Classically, it begins with a single red and slightly scaly area known as a "herald patch". This is then followed, days to weeks later, by an eruption of many smaller scaly spots; pinkish with a red edge in people with light skin and greyish in darker skin. About 20% of cases show atypical deviations from this pattern. It usually lasts less than three months and goes away without treatment. Sometimes malaise or a fever may occur before the start of the rash or itchiness, but often there are few other symptoms.
While the cause is not entirely clear, it is believed to be related to human herpesvirus 6 (HHV6) or human herpesvirus 7 (HHV7). It does not appear to be contagious. Certain medications may result in a similar rash. Diagnosis is based on the symptoms.
Evidence for specific treatment is limited. About 1.3% of people are affected at some point in time. It most often occurs in those between the ages of 10 and 35. The condition was described at least as early as 1798.
Signs and symptoms
The symptoms of this condition include:
Recent upper respiratory tract infections in 8–69% of patients have been reported by some studies.
Occasionally, prodromal flu-like symptoms, including headache, joint pain, mild fever, and fatigue, as well as gastrointestinal symptoms such as nausea, diarrhea, or vomiting, and feeling generally unwell, precede other symptoms.
In most cases, a single, 2 to 10 cm (1" to 4") oval red "herald" patch appears, classically on the trunk or neck, having an appearance similar to ringworm. Occasionally, the herald patch may occur in a hidden position (in the armpit, for example) and not be noticed immediately. The herald patch may also appear as a cluster of smaller oval spots, and be mistaken for acne. Rarely, it does not become present at all.
After the herald patch appears, usually some days or weeks later, a rash of many small (5–10 mm) pink or red, flaky, oval or round spots appears. They are
|
https://en.wikipedia.org/wiki/Chromosome%20jumping
|
Chromosome jumping is a tool of molecular biology that is used in the physical mapping of genomes. It is related to several other tools used for the same purpose, including chromosome walking.
Chromosome jumping is used to bypass regions that are difficult to clone, such as those containing repetitive DNA, which cannot be easily mapped by chromosome walking, and is useful for moving along a chromosome rapidly in search of a particular gene. Unlike chromosome walking, chromosome jumping can start at one point of a chromosome and traverse to a potentially distant point of the same chromosome without cloning the intervening sequences. The ends of a large DNA fragment are the target of cloning in chromosome jumping, while the middle section is removed by a series of chemical manipulations prior to the cloning step.
Process
Chromosome jumping enables two ends of a DNA sequence to be cloned without the middle section. Genomic DNA may be partially digested using a restriction endonuclease and, with the aid of DNA ligase, the fragments are circularized at low concentration. From a known sequence, a primer is designed to sequence across the circularized junction. This primer is used to jump in 100 kb-300 kb intervals: a sequence 100 kb away would have come near the known sequence on circularization, permitting jumping and sequencing in an alternating manner. Thus, sequences not reachable by chromosome walking can be sequenced. Chromosome walking can also be used from the new jump position (in either direction) to look for gene-like sequences, or additional jumps can be used to progress further along the chromosome. Combining chromosome jumping with chromosome walking along the chromosome allows repetitive DNA to be bypassed in the search for the target gene.
Library
Chromosome jumping library is different from chromosome walking due to the manipulations executed before the cloning step. In order to construct the library of chromosome jumping, individual clones originate
|
https://en.wikipedia.org/wiki/Square%20pyramid
|
In geometry, a square pyramid is a pyramid with a square base, having a total of five faces. If the pyramid's apex lies on a line erected perpendicularly from the center of the square, it is a right square pyramid with four isosceles triangles; otherwise, it is an oblique square pyramid. When all of the pyramid's edges are equal in length, its triangles are all equilateral, and it is called an equilateral square pyramid.
Square pyramids have arisen throughout the history of architecture, with an example being the Egyptian pyramid. They also crop up in chemistry in square pyramidal molecular structures. Square pyramids are often used in the construction of other polyhedra.
Properties
Right and equilateral square pyramid
A square pyramid has eight edges, five faces that include four triangles, and one square as the base. It also has five vertices, one of which is the apex, a vertex where all lateral edges meet. A lateral edge is a line segment between the apex and another vertex at the square base. When the apex is perpendicularly above the center of the base, all the lateral edges have the same length, and the faces other than the base are congruent isosceles triangles; such a pyramid is a right square pyramid. A square pyramid whose apex is not perpendicularly above the center of the base, and whose lateral faces are therefore not congruent isosceles triangles, is an oblique square pyramid.
When all edges have the same length, the four isosceles triangles become equilateral, and therefore all the faces are regular polygons. This pyramid is known as an equilateral square pyramid. Its dihedral angles between two adjacent triangular faces and between the base and a triangular face are
$$\arccos\left(-\tfrac{1}{3}\right) \approx 109.47^\circ \quad \text{and} \quad \arctan\sqrt{2} \approx 54.74^\circ,$$
respectively. Its three-dimensional symmetry group is the pyramidal symmetry group C₄ᵥ, of order eight; this means that it is symmetrical as one rotates it for every quarter-turn of a full angle around the axis of symmetry, two vertical planes passing through the diagonals of the square base, and two other vertical
|
https://en.wikipedia.org/wiki/Pentagonal%20pyramid
|
In geometry, a pentagonal pyramid is a pyramid with a pentagonal base upon which are erected five triangular faces that meet at a point (the apex). Like any pyramid, it is self-dual.
The regular pentagonal pyramid has a base that is a regular pentagon and lateral faces that are equilateral triangles. It is one of the Johnson solids (J2).
It can be seen as the "lid" of an icosahedron; the rest of the icosahedron forms a gyroelongated pentagonal pyramid (J11).
More generally an order-2 vertex-uniform pentagonal pyramid can be defined with a regular pentagonal base and 5 isosceles triangle sides of any height.
Cartesian coordinates
The pentagonal pyramid can be seen as the "lid" of a regular icosahedron; the rest of the icosahedron forms a gyroelongated pentagonal pyramid, J11. From the Cartesian coordinates of the icosahedron, Cartesian coordinates for a pentagonal pyramid with edge length 2 may be inferred as
where τ (sometimes written as φ) is the golden ratio.
The height H, from the midpoint of the pentagonal face to the apex, of a pentagonal pyramid with edge length a may therefore be computed as:
$$H = \sqrt{\frac{5-\sqrt{5}}{10}}\,a \approx 0.5257a$$
Its surface area A can be computed as the area of the pentagonal base plus five times the area of one triangle:
$$A = \frac{a^2}{2}\sqrt{\frac{5}{2}\left(10+\sqrt{5}+\sqrt{75+30\sqrt{5}}\right)} \approx 3.8855a^2$$
Its volume can be calculated as:
$$V = \frac{5+\sqrt{5}}{24}\,a^3 \approx 0.3015a^3$$
Related polyhedra
The pentagrammic star pyramid has the same vertex arrangement, but connected onto a pentagram base:
Example
References
External links
Virtual Reality Polyhedra www.georgehart.com: The Encyclopedia of Polyhedra (VRML model)
Pyramids and bipyramids
Self-dual polyhedra
Prismatoid polyhedra
Johnson solids
|
https://en.wikipedia.org/wiki/Triangular%20cupola
|
In geometry, the triangular cupola is one of the Johnson solids (J3). It can be seen as half a cuboctahedron.
Formulae
The following formulae for the volume (V), the surface area (A) and the height (h) can be used if all faces are regular, with edge length a:
$$V = \frac{5}{3\sqrt{2}}\,a^3 \approx 1.1785a^3$$
$$A = \left(3+\frac{5\sqrt{3}}{2}\right)a^2 \approx 7.3301a^2$$
$$h = \frac{\sqrt{6}}{3}\,a \approx 0.8165a$$
Dual polyhedron
The dual of the triangular cupola has 6 triangular and 3 kite faces:
Related polyhedra and honeycombs
The triangular cupola can be augmented by 3 square pyramids, leaving adjacent coplanar faces. The result is not a Johnson solid because of its coplanar faces. Merging those coplanar triangles into larger ones, topologically this is another triangular cupola with isosceles trapezoidal side faces. If all the triangles are retained and the base hexagon is replaced by 6 triangles, it generates a coplanar deltahedron with 22 faces.
The triangular cupola can form a tessellation of space with square pyramids and/or octahedra, the same way octahedra and cuboctahedra can fill space.
The family of cupolae with regular polygons exists up to n=5 (pentagons), and higher if isosceles triangles are used in the cupolae.
References
External links
Prismatoid polyhedra
Johnson solids
|
https://en.wikipedia.org/wiki/Square%20cupola
|
In geometry, the square cupola, sometimes called lesser dome, is one of the Johnson solids (J4). It can be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagon.
Formulae
The following formulae for the circumradius (C), surface area (A), volume (V), and height (h) can be used if all faces are regular, with edge length a:
$$C = \frac{a}{2}\sqrt{5+2\sqrt{2}} \approx 1.3989a$$
$$A = \left(7+2\sqrt{2}+\sqrt{3}\right)a^2 \approx 11.5605a^2$$
$$V = \left(1+\frac{2\sqrt{2}}{3}\right)a^3 \approx 1.9428a^3$$
$$h = \frac{a}{\sqrt{2}} \approx 0.7071a$$
Related polyhedra and honeycombs
Other convex cupolae
Dual polyhedron
The dual of the square cupola has 8 triangular and 4 kite faces:
Crossed square cupola
The crossed square cupola is one of the nonconvex Johnson solid isomorphs, being topologically identical to the convex square cupola. It can be obtained as a slice of the nonconvex great rhombicuboctahedron or quasirhombicuboctahedron, analogously to how the square cupola may be obtained as a slice of the rhombicuboctahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is an octagram.
It may be seen as a cupola with a retrograde square base, so that the squares and triangles connect across the bases in the opposite way to the square cupola, hence intersecting each other.
Honeycombs
The square cupola is a component of several nonuniform space-filling lattices:
with tetrahedra;
with cubes and cuboctahedra; and
with tetrahedra, square pyramids and various combinations of cubes, elongated square pyramids and elongated square bipyramids.
References
External links
Prismatoid polyhedra
Johnson solids
|
https://en.wikipedia.org/wiki/Pentagonal%20rotunda
|
In geometry, the pentagonal rotunda is one of the Johnson solids (J6). It can be seen as half of an icosidodecahedron, or as half of a pentagonal orthobirotunda. It has a total of 17 faces.
Formulae
The following formulae for volume (V), surface area (A), circumradius (C), and height (H) are valid if all faces are regular, with edge length a:
$$V = \frac{1}{12}\left(45+17\sqrt{5}\right)a^3 \approx 6.9178a^3$$
$$A = \frac{a^2}{2}\left(5\sqrt{3}+\sqrt{10\left(65+29\sqrt{5}\right)}\right) \approx 22.3472a^2$$
$$C = \frac{1+\sqrt{5}}{2}\,a \approx 1.6180a$$
$$H = \sqrt{\frac{5+2\sqrt{5}}{5}}\,a \approx 1.3764a$$
Dual polyhedron
The dual of the pentagonal rotunda has 20 faces: 10 triangular, 5 rhombic, and 5 kites.
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20square%20cupola
|
In geometry, the elongated square cupola is one of the Johnson solids (J19). As the name suggests, it can be constructed by elongating a square cupola (J4) by attaching an octagonal prism to its base. The solid can be seen as a rhombicuboctahedron with its "lid" (another square cupola) removed.
Formulae
The following formulae for the volume (V), surface area (A) and circumradius (C) can be used if all faces are regular, with edge length a:
$$V = \left(3+\frac{8\sqrt{2}}{3}\right)a^3 \approx 6.7712a^3$$
$$A = \left(15+2\sqrt{2}+\sqrt{3}\right)a^2 \approx 19.5605a^2$$
$$C = \frac{a}{2}\sqrt{5+2\sqrt{2}} \approx 1.3989a$$
Dual polyhedron
The dual of the elongated square cupola has 20 faces: 8 isosceles triangles, 4 kites, 8 quadrilaterals.
Related polyhedra and honeycombs
The elongated square cupola forms space-filling honeycombs with tetrahedra and cubes; with cubes and cuboctahedra; and with tetrahedra, elongated square pyramids, and elongated square bipyramids. (The latter two units can be decomposed into cubes and square pyramids.)
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20square%20gyrobicupola
|
In geometry, the elongated square gyrobicupola or pseudo-rhombicuboctahedron is one of the Johnson solids (J37). It is not usually considered to be an Archimedean solid, even though its faces consist of regular polygons that meet in the same pattern at each of its vertices, because unlike the 13 Archimedean solids, it lacks a set of global symmetries that map every vertex to every other vertex (though Grünbaum has suggested it should be added to the traditional list of Archimedean solids as a 14th example). It strongly resembles, but should not be mistaken for, the rhombicuboctahedron, which is an Archimedean solid. It is also a canonical polyhedron.
This shape may have been discovered by Johannes Kepler in his enumeration of the Archimedean solids, but its first clear appearance in print appears to be the work of Duncan Sommerville in 1905. It was independently rediscovered by J. C. P. Miller by 1930 (by mistake while attempting to construct a model of the rhombicuboctahedron) and again by V. G. Ashkinuse in 1957.
Construction and relation to the rhombicuboctahedron
As the name suggests, it can be constructed by elongating a square gyrobicupola (J29) and inserting an octagonal prism between its two halves.
The solid can also be seen as the result of twisting one of the square cupolae (J4) on a rhombicuboctahedron (one of the Archimedean solids; a.k.a. the elongated square orthobicupola) by 45 degrees. It is therefore a gyrate rhombicuboctahedron. Its similarity to the rhombicuboctahedron gives it the alternative name pseudo-rhombicuboctahedron. It has occasionally been referred to as "the fourteenth Archimedean solid".
This property does not carry over to its pentagonal-faced counterpart, the gyrate rhombicosidodecahedron.
Symmetry and classification
The pseudo-rhombicuboctahedron possesses D4d symmetry. It is locally vertex-regular – the arrangement of the four faces incident on any vertex is the same for all vertices; this is unique among the Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20rotunda
|
In geometry, the elongated pentagonal rotunda is one of the Johnson solids (J21). As the name suggests, it can be constructed by elongating a pentagonal rotunda (J6) by attaching a decagonal prism to its base. It can also be seen as an elongated pentagonal orthobirotunda (J42) with one pentagonal rotunda removed.
Formulae
The following formulae for volume (V) and surface area (A) can be used if all faces are regular, with edge length a:
$$V = \left(\frac{45+17\sqrt{5}}{12}+\frac{5}{2}\sqrt{5+2\sqrt{5}}\right)a^3 \approx 14.612a^3$$
$$A = \left(10+\frac{5\sqrt{3}}{2}+\frac{1}{2}\sqrt{10\left(65+29\sqrt{5}\right)}\right)a^2 \approx 32.347a^2$$
Dual polyhedron
The dual of the elongated pentagonal rotunda has 30 faces: 10 isosceles triangles, 10 rhombi, and 10 quadrilaterals.
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Gyroelongated%20square%20cupola
|
In geometry, the gyroelongated square cupola is one of the Johnson solids (J23). As the name suggests, it can be constructed by gyroelongating a square cupola (J4) by attaching an octagonal antiprism to its base. It can also be seen as a gyroelongated square bicupola (J45) with one square cupola removed.
Area and Volume
The surface area is
$$A = \left(7+2\sqrt{2}+5\sqrt{3}\right)a^2 \approx 18.4887a^2.$$
The volume is the sum of the volume of a square cupola and the volume of an octagonal antiprism,
$$V = \left(1+\frac{2\sqrt{2}}{3}\right)a^3 + V_{\text{octagonal antiprism}} \approx 1.9428a^3 + 4.2680a^3 \approx 6.2108a^3.$$
Dual polyhedron
The dual of the gyroelongated square cupola has 20 faces: 8 kites, 4 rhombi, and 8 pentagons.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20rotunda
|
In geometry, the gyroelongated pentagonal rotunda is one of the Johnson solids (J25). As the name suggests, it can be constructed by gyroelongating a pentagonal rotunda (J6) by attaching a decagonal antiprism to its base. It can also be seen as a gyroelongated pentagonal birotunda (J48) with one pentagonal rotunda removed.
Area and Volume
With edge length a, the surface area is
and the volume is
Dual polyhedron
The dual of the gyroelongated pentagonal rotunda has 30 faces: 10 pentagons, 10 rhombi, and 10 quadrilaterals.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Square%20orthobicupola
|
In geometry, the square orthobicupola is one of the Johnson solids (J28). As the name suggests, it can be constructed by joining two square cupolae (J4) along their octagonal bases, matching like faces. A 45-degree rotation of one cupola before the joining yields a square gyrobicupola (J29).
The square orthobicupola is the second in an infinite set of orthobicupolae.
The square orthobicupola can be elongated by the insertion of an octagonal prism between its two cupolae to yield a rhombicuboctahedron, or collapsed by the removal of an irregular hexagonal prism to yield an elongated square dipyramid (J15), which itself is merely an elongated octahedron.
It can be constructed from the disphenocingulum (J90) by replacing the band of up-and-down triangles by a band of rectangles, while fixing two opposite sphenos.
Related polyhedra and honeycombs
The square orthobicupola forms space-filling honeycombs with tetrahedra; with cubes and cuboctahedra; with tetrahedra and cubes; with square pyramids, tetrahedra and various combinations of cubes, elongated square pyramids and/or elongated square bipyramids.
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Square%20gyrobicupola
|
In geometry, the square gyrobicupola is one of the Johnson solids (J29). Like the square orthobicupola (J28), it can be obtained by joining two square cupolae (J4) along their bases. The difference is that in this solid, the two halves are rotated 45 degrees with respect to one another.
The square gyrobicupola is the second in an infinite set of gyrobicupolae.
Related to the square gyrobicupola is the elongated square gyrobicupola, created when an octagonal prism is inserted between the two halves of the square gyrobicupola. Whether the elongated square gyrobicupola is an Archimedean solid is disputed: although its regular faces meet in the same pattern at every vertex, it lacks the global symmetry that maps every vertex to every other vertex.
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
Related polyhedra and honeycombs
The square gyrobicupola forms space-filling honeycombs with tetrahedra, cubes and cuboctahedra; and with tetrahedra, square pyramids, and elongated square bipyramids. (The latter unit can be decomposed into elongated square pyramids, cubes, and/or square pyramids).
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Pentagonal%20orthobirotunda
|
In geometry, the pentagonal orthobirotunda is one of the Johnson solids (J34). It can be constructed by joining two pentagonal rotundae (J6) along their decagonal faces, matching like faces.
Related polyhedra
The pentagonal orthobirotunda is also related to an Archimedean solid, the icosidodecahedron, which can be seen as a pentagonal gyrobirotunda: it is similarly created from two pentagonal rotundae, but with a 36-degree rotation.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Augmented%20tridiminished%20icosahedron
|
In geometry, the augmented tridiminished icosahedron is one of the Johnson solids (J64). It can be obtained by joining a tetrahedron to another Johnson solid, the tridiminished icosahedron (J63).
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20gyrobirotunda
|
In geometry, the elongated pentagonal gyrobirotunda is one of the Johnson solids (J43). As the name suggests, it can be constructed by elongating a "pentagonal gyrobirotunda," or icosidodecahedron (one of the Archimedean solids), by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal rotundae (J6) through 36 degrees before inserting the prism yields an elongated pentagonal orthobirotunda (J42).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Elongated%20pentagonal%20orthobirotunda
|
In geometry, the elongated pentagonal orthobirotunda is one of the Johnson solids (J42). Its Conway polyhedron notation is at5jP5. As the name suggests, it can be constructed by elongating a pentagonal orthobirotunda (J34) by inserting a decagonal prism between its congruent halves. Rotating one of the pentagonal rotundae (J6) through 36 degrees before inserting the prism yields the elongated pentagonal gyrobirotunda (J43).
Formulae
The following formulae for volume and surface area can be used if all faces are regular, with edge length a:
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Gyroelongated%20pentagonal%20birotunda
|
In geometry, the gyroelongated pentagonal birotunda is one of the Johnson solids (J48). As the name suggests, it can be constructed by gyroelongating a pentagonal birotunda (either the pentagonal orthobirotunda, J34, or the icosidodecahedron) by inserting a decagonal antiprism between its two halves.
The gyroelongated pentagonal birotunda is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each pentagonal face on the bottom half of the figure is connected by a path of two triangular faces to a pentagonal face above it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each bottom pentagon would be connected to a pentagonal face above it and to the right. The two chiral forms of J48 are not considered different Johnson solids.
Area and Volume
With edge length a, the surface area is
and the volume is
See also
Birotunda
External links
Johnson solids
Chiral polyhedra
|
https://en.wikipedia.org/wiki/Gyroelongated%20square%20bicupola
|
In geometry, the gyroelongated square bicupola is one of the Johnson solids (J45). As the name suggests, it can be constructed by gyroelongating a square bicupola (J28 or J29) by inserting an octagonal antiprism between its congruent halves.
The gyroelongated square bicupola is one of five Johnson solids which are chiral, meaning that they have a "left-handed" and a "right-handed" form. In the illustration to the right, each square face on the left half of the figure is connected by a path of two triangular faces to a square face below it and to the left. In the figure of opposite chirality (the mirror image of the illustrated figure), each square on the left would be connected to a square face above it and to the right. The two chiral forms of J45 are not considered different Johnson solids.
Area and Volume
With edge length a, the surface area is
and the volume is
References
External links
Johnson solids
Chiral polyhedra
|
https://en.wikipedia.org/wiki/Temperature%20coefficient
|
A temperature coefficient describes the relative change of a physical property that is associated with a given change in temperature. For a property R that changes when the temperature changes by dT, the temperature coefficient α is defined by the following equation:
dR/R = α dT
Here α has the dimension of an inverse temperature and can be expressed e.g. in 1/K or K−1.
If the temperature coefficient itself does not vary too much with temperature and αΔT ≪ 1, a linear approximation will be useful in estimating the value R of a property at a temperature T, given its value R0 at a reference temperature T0:
R = R0(1 + αΔT)
where ΔT is the difference between T and T0.
For strongly temperature-dependent α, this approximation is only useful for small temperature differences ΔT.
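For illustration, a short Python sketch of this linear approximation; the copper coefficient used here (α ≈ 3.93·10−3 K−1) is a typical handbook value assumed for the example, not a figure from this article:

def linear_tc(r0, alpha, t0, t):
    # Estimate property R at temperature t from its value r0 at reference t0,
    # using R = R0 * (1 + alpha * (T - T0)).
    return r0 * (1 + alpha * (t - t0))

# A copper wire measuring 100 ohms at 20 C, estimated at 60 C:
print(linear_tc(100.0, 3.93e-3, 20.0, 60.0))  # about 115.7 ohms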
Temperature coefficients are specified for various applications, including electric and magnetic properties of materials as well as reactivity. The temperature coefficient of most chemical reactions lies between 2 and 3.
Negative temperature coefficient
Most ceramics exhibit negative temperature dependence of resistance behaviour. This effect is governed by an Arrhenius equation over a wide range of temperatures:
R = A e^(B/T)
where R is resistance, A and B are constants, and T is absolute temperature (K).
The constant B is related to the energies required to form and move the charge carriers responsible for electrical conduction; hence, as the value of B increases, the material becomes insulating. Practical and commercial NTC resistors aim to combine modest resistance with a value of B that provides good sensitivity to temperature. Such is the importance of the B constant that it is possible to characterize NTC thermistors using the B parameter equation:
R = R0 e^(B(1/T − 1/T0))
where R0 is the resistance at temperature T0.
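As an illustrative sketch, the B parameter equation can be evaluated numerically; the component values below (B = 3950 K, 10 kΩ at 25 °C) are common catalogue figures assumed for the example, not taken from this article:

import math

def ntc_resistance(r0, b, t0, t):
    # B parameter model: R(T) = R(T0) * exp(B * (1/T - 1/T0)), temperatures in kelvin.
    return r0 * math.exp(b * (1.0 / t - 1.0 / t0))

r25 = 10000.0  # ohms at T0 = 298.15 K (25 C)
print(ntc_resistance(r25, 3950.0, 298.15, 323.15))  # about 3590 ohms at 50 C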
Therefore, many materials that produce acceptable values of include materials that have been alloyed or possess variable negative temperature coefficient (NTC), which occurs when a physical property (such as thermal conductivity or electrical
|
https://en.wikipedia.org/wiki/Substantial%20equivalence
|
In food safety, the concept of substantial equivalence holds that the safety of a new food, particularly one that has been genetically modified (GM), may be assessed by comparing it with a similar traditional food that has proven safe in normal use over time. It was first formulated as a food safety policy in 1993, by the Organisation for Economic Co-operation and Development (OECD).
As part of a food safety testing process, substantial equivalence is the initial step, establishing toxicological and nutritional differences in the new food compared to a conventional counterpart—differences are analyzed and evaluated, and further testing may be conducted, leading to a final safety assessment.
Substantial equivalence is the underlying principle in GM food safety assessment for a number of national and international agencies, including the Canadian Food Inspection Agency (CFIA), Japan's Ministry of Health, Labour and Welfare (MHLW), the US Food and Drug Administration (FDA), and the United Nations' Food and Agriculture Organization (FAO) and World Health Organization.
Origin
The concept of comparing genetically modified foods to traditional foods as a basis for safety assessment was first introduced as a recommendation during the 1990 Joint FAO/WHO Expert Consultation on biotechnology and food safety (a scientific conference of officials and industry), although the term substantial equivalence was not used. Adopting the term, substantial equivalence was formulated as a food safety policy by the OECD, first described in their 1993 report, "Safety Evaluation of Foods Derived by Modern Biotechnology: Concepts and Principles". The term was borrowed from the FDA's 1976 substantial equivalence definition for new medical devices—under Premarket Notification 510(k), a new Class II device that is essentially similar to an existing device can be cleared for release without further testing. The underlying approach of comparing a new product or technique to an existing one has l
|
https://en.wikipedia.org/wiki/Twelfth%20root%20of%20two
|
The twelfth root of two, ¹²√2 (or equivalently 2^(1/12)), is an algebraic irrational number, approximately equal to 1.0594631. It is most important in Western music theory, where it represents the frequency ratio (musical interval) of a semitone in twelve-tone equal temperament. This number was proposed for the first time in relationship to musical tuning in the sixteenth and seventeenth centuries. It allows measurement and comparison of different intervals (frequency ratios) as consisting of different numbers of a single interval, the equal tempered semitone (for example, a minor third is 3 semitones, a major third is 4 semitones, and a perfect fifth is 7 semitones). A semitone itself is divided into 100 cents (1 cent = 2^(1/1200)).
Numerical value
The twelfth root of two to 20 significant figures is 1.0594630943592952646. Fraction approximations in increasing order of accuracy include 18/17, 89/84, 196/185, 1657/1564, and 18904/17843.
Its numerical value has been computed to at least twenty billion decimal digits.
The equal-tempered chromatic scale
A musical interval is a ratio of frequencies, and the equal-tempered chromatic scale divides the octave (which has a ratio of 2:1) into twelve equal parts. Each note has a frequency that is 2^(1/12) times that of the one below it.
Applying this value successively to the tones of a chromatic scale, starting from A above middle C (known as A4) with a frequency of 440 Hz, produces the following sequence of pitches:
The final A (A5: 880 Hz) is exactly twice the frequency of the lower A (A4: 440 Hz), that is, one octave higher.
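The sequence is straightforward to reproduce; a minimal Python sketch (note names are spelled with sharps for brevity):

SEMITONE = 2 ** (1 / 12)  # about 1.0594631

NAMES = ["A4", "A#4", "B4", "C5", "C#5", "D5", "D#5", "E5",
         "F5", "F#5", "G5", "G#5", "A5"]
for n, name in enumerate(NAMES):
    # Each semitone step multiplies the frequency by 2^(1/12).
    print(f"{name}: {440.0 * SEMITONE ** n:.2f} Hz")
# The final line prints "A5: 880.00 Hz", exactly one octave above A4.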
Other tuning scales
Other tuning scales use slightly different interval ratios:
The just or Pythagorean perfect fifth is 3/2, and the difference between the equal tempered perfect fifth and the just perfect fifth is a grad, the twelfth root of the Pythagorean comma.
The equal tempered Bohlen–Pierce scale uses the interval of the thirteenth root of three.
Stockhausen's Studie II (1954) makes use of the twenty-fifth root of five, a compound major third divided into 5×5 parts.
The
|
https://en.wikipedia.org/wiki/Orders%20of%20magnitude%20%28speed%29
|
To help compare different orders of magnitude, the following list describes various speed levels between approximately 2.2×10⁻¹⁸ m/s and 3.0×10⁸ m/s (the speed of light). Values in bold are exact.
List of orders of magnitude for speed
See also
Typical projectile speeds - also showing the corresponding kinetic energy per unit mass
Neutron temperature
References
Units of velocity
Physical quantities
Speed
|
https://en.wikipedia.org/wiki/Altix
|
Altix is a line of server computers and supercomputers produced by Silicon Graphics (and successor company Silicon Graphics International), based on Intel processors. It succeeded the MIPS/IRIX-based Origin 3000 servers.
History
The line was first announced on January 7, 2003, with the Altix 3000 series, based on Intel Itanium 2 processors and SGI's NUMAlink processor interconnect.
At product introduction, the system supported up to 64 processors running Linux as a single system image and shipped with a Linux distribution called SGI Advanced Linux Environment, which was compatible with Red Hat Advanced Server.
By August 2003, many SGI Altix customers were running Linux on 128- and 256-processor SGI Altix systems. SGI officially announced 256-processor support within a single system image of Linux on March 10, 2004, using a 2.4-based Linux kernel. The SGI Advanced Linux Environment was eventually dropped after support using a standard, unmodified SUSE Linux Enterprise Server (SLES) distribution for SGI Altix was provided with SLES 8 and SLES 9.
Later, SGI Altix 512-processor systems were officially supported using an unmodified, standard Linux distribution with the launch of SLES 9 SP1. Besides full support of SGI Altix on SUSE Linux Enterprise Server, a standard and unmodified Red Hat Enterprise Linux was also fully supported starting with SGI Altix 3700 Bx2 with RHEL 4 and RHEL 5 with system processor limits defined by Red Hat for those releases.
On November 14, 2005, SGI introduced the Altix 4000 series based on the Itanium 2. The Altix 3000 and 4000 are distributed shared memory multiprocessors. SGI later officially supported 1024-processor systems on an unmodified, standard Linux distribution with the launch of SLES 10 in July 2006. SGI Altix 4700 was also officially supported by Red Hat with RHEL 4 and RHEL 5 — maximum processor limits were as defined by Red Hat for its RHEL releases.
The Altix brand was used for systems based on multi-core Intel Xeon proc
|
https://en.wikipedia.org/wiki/Electronic%20health%20record
|
An electronic health record (EHR) is the systematized collection of patient and population electronically stored health information in a digital format. These records can be shared across different health care settings. Records are shared through network-connected, enterprise-wide information systems or other information networks and exchanges. EHRs may include a range of data, including demographics, medical history, medication and allergies, immunization status, laboratory test results, radiology images, vital signs, personal statistics like age and weight, and billing information.
For several decades, electronic health records (EHRs) have been touted as key to increasing the quality of care. Electronic health records are used for more than patient charting; today, providers are using data from patient records to improve quality outcomes through their care management programs. EHRs combine patient demographics into a large pool, and use this information to assist with the creation of "new treatments or innovation in healthcare delivery", which overall advances the goals of healthcare. Combining multiple types of clinical data from the system's health records has helped clinicians identify and stratify chronically ill patients. EHRs can improve quality of care by using the data and analytics to prevent hospitalizations among high-risk patients.
EHR systems are designed to store data accurately and to capture the state of a patient across time. They eliminate the need to track down a patient's previous paper medical records and assist in ensuring that data are up-to-date, accurate and legible. They also allow open communication between the patient and the provider, while providing "privacy and security." They can reduce the risk of data replication, as there is only one modifiable file, which means the file is more likely to be up to date, lowers the risk of lost paperwork, and is cost efficient. Due to the digital information being searchable and in a single file, EMRs
|
https://en.wikipedia.org/wiki/Olf%20%28unit%29
|
The olf is a unit used to measure the strength of a pollution source. It was introduced by Danish professor P. Ole Fanger; the name "olf" is derived from the Latin word , meaning "smelled".
One olf is the sensory pollution strength from a standard person defined as an average adult working in an office or similar non-industrial workplace, sedentary and in thermal comfort, with a hygienic standard equivalent of 0.7 baths per day and whose skin has a total area of 1.8 square metres. It was defined to quantify the strength of pollution sources that can be perceived by humans.
The perceived air quality is measured in decipol.
Examples of typical scent emissions
See also
Sick building syndrome
References
Professor Ole Fanger's page at the Technical University of Denmark, includes curriculum vitae mentioning him proposing the unit called olf.
Units of measurement
Olfaction
|
https://en.wikipedia.org/wiki/Terminal%20node%20controller
|
A terminal node controller (TNC) is a device used by amateur radio operators to participate in AX.25 packet radio networks. It is similar in function to the Packet Assembler/Disassemblers used on X.25 networks, with the addition of a modem to convert baseband digital signals to audio tones.
The first TNC, the VADCG board, was originally developed by Doug Lockhart, VE7APU, of Vancouver, British Columbia.
Amateur Radio TNCs were first developed in 1978 in Canada by the Montreal Amateur Radio Club and the Vancouver Area Digital Communications group. These never gained much popularity because only a bare printed circuit board was made available and builders had to gather up a large number of components.
In 1983, the Tucson Amateur Packet Radio (TAPR) association produced complete kits for their TNC-1 design. This was later available as the Heathkit HD-4040. A few years later, the improved TNC-2 became available, and it was licensed to commercial manufacturers such as MFJ.
In 1986, the improved "TNC+" was designed to run programs and protocols developed for the original TNC board.
TNC+ also included an assembler and a version of Forth (STOIC), which runs on the TNC+ itself, to support developing new programs and protocols.
Description
A typical model consists of a microprocessor, a modem, and software (in EPROM) that implements the AX.25 protocol and provides a command line interface to the user. (Commonly, this software provides other functionality as well, such as a basic bulletin board system to receive messages while the operator is away.) Because the TNC contains all the intelligence needed to communicate over an AX.25 network, no external computer is required. All of the network's resources can be accessed using a dumb terminal.
The TNC connects to the terminal and a radio transceiver. Data from the terminal is formatted into AX.25 packets and modulated into audio signals (in traditional applications) for transmission by the radio. Received signals
|
https://en.wikipedia.org/wiki/Red%20states%20and%20blue%20states
|
Starting with the 2000 United States presidential election, the terms "red state" and "blue state" have referred to U.S. states whose voters vote predominantly for one party — the Republican Party in red states and the Democratic Party in blue states — in presidential and other statewide elections. By contrast, states where the vote fluctuates between the Democratic and Republican candidates are known as "swing states" or "purple states". Examining patterns within states reveals that the reversal of the two parties' geographic bases has happened at the state level, but it is more complicated locally, with urban-rural divides associated with many of the largest changes.
All states contain considerable amounts of both liberal and conservative voters (i.e., they are "purple") and only appear blue or red on the electoral map because of the winner-take-all system used by most states in the Electoral College. However, the perception of some states as "blue" and some as "red" was reinforced by a degree of partisan stability from election to election — from the 2016 election to the 2020 presidential election, only five states changed "color"; and as of 2020, 35 out of 50 states have voted for the same party in every presidential election since the red-blue terminology was popularized in 2000, with only 15 having swung between the 2000 presidential election and the 2020 election. Although many red states and blue states stay in the same category for long periods, they may also switch from blue to red or from red to blue over time.
Origins of the color scheme
The colors red and blue are also featured on the United States flag. Traditional political mapmakers, at least throughout the 20th century, had used blue to represent the modern-day Republicans, as well as the earlier Federalist Party. This may have been a holdover from the Civil War, during which the predominantly Republican north was considered "blue". However, at that time, a maker of widely-sold maps accompanied them with
|
https://en.wikipedia.org/wiki/Density%20%28computer%20storage%29
|
Density is a measure of the quantity of information bits that can be stored on a given length (linear density) of track, area of the surface (areal density), or in a given volume (volumetric density) of a computer storage medium. Generally, higher density is more desirable, for it allows more data to be stored in the same physical space. Density therefore has a direct relationship to storage capacity of a given medium. Density also generally affects the performance within a particular medium, as well as price.
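As a rough illustration of the relationship between areal density and capacity, the following Python sketch converts a demonstration density into bytes per platter surface; the usable surface area assumed here (about 12 in² for a 3.5-inch platter) is an assumption of the example, not a figure from this article:

areal_density_bits_per_in2 = 1.34e12  # Seagate's 2015 demonstration density
usable_area_in2 = 12.0                # assumed usable area of one platter surface
capacity_bytes = areal_density_bits_per_in2 * usable_area_in2 / 8
print(f"{capacity_bytes / 1e12:.1f} TB per platter surface")  # about 2.0 TB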
Storage device classes
Solid state media
Solid state drives use flash memory to store non-volatile data. They are the latest form of mass-produced storage and rival magnetic disk media. Solid-state data is saved to a pool of NAND flash. NAND itself is made up of what are called floating-gate transistors. Unlike the transistor designs used in DRAM, which must be refreshed multiple times per second, NAND flash is designed to retain its charge state even when not powered up. The highest-capacity drives commercially available are the Nimbus Data ExaDrive DC series, which come in capacities ranging from 16 TB to 100 TB. Nimbus states that for its size the 100 TB SSD has a 6:1 space-saving ratio over a nearline HDD.
Magnetic disk media
Hard disk drives store data in the magnetic polarization of small patches of the surface coating on a disk. The maximum areal density is defined by the size of the magnetic particles in the surface, as well as the size of the "head" used to read and write the data. In 1956 the first hard drive, the IBM 350, had an areal density of 2,000 bit/in². Since then, the increase in density has matched Moore's Law, reaching 1 Tbit/in² in 2014. In 2015, Seagate introduced a hard drive with a density of 1.34 Tbit/in², more than 600 million times that of the IBM 350. It is expected that current recording technology can "feasibly" scale to at least 5 Tbit/in² in the near future. New technologies like heat-assisted magnetic record
|
https://en.wikipedia.org/wiki/Bioinorganic%20chemistry
|
Bioinorganic chemistry is a field that examines the role of metals in biology. Bioinorganic chemistry includes the study of both natural phenomena such as the behavior of metalloproteins as well as artificially introduced metals, including those that are non-essential, in medicine and toxicology. Many biological processes such as respiration depend upon molecules that fall within the realm of inorganic chemistry. The discipline also includes the study of inorganic models or mimics that imitate the behaviour of metalloproteins.
As a mix of biochemistry and inorganic chemistry, bioinorganic chemistry is important in elucidating the implications of electron-transfer proteins, substrate bindings and activation, atom and group transfer chemistry as well as metal properties in biological chemistry. The successful development of truly interdisciplinary work is necessary to advance bioinorganic chemistry.
Composition of living organisms
About 99% of mammals' mass are the elements carbon, nitrogen, calcium, sodium, chlorine, potassium, hydrogen, phosphorus, oxygen and sulfur. The organic compounds (proteins, lipids and carbohydrates) contain the majority of the carbon and nitrogen and most of the oxygen and hydrogen is present as water. The entire collection of metal-containing biomolecules in a cell is called the metallome.
History
Paul Ehrlich used organoarsenic compounds ("arsenicals") for the treatment of syphilis, demonstrating the relevance of metals, or at least metalloids, to medicine; this area blossomed with Rosenberg's discovery of the anti-cancer activity of cisplatin (cis-PtCl2(NH3)2). The first protein ever crystallized (see James B. Sumner) was urease, later shown to contain nickel at its active site. Vitamin B12, the cure for pernicious anemia, was shown crystallographically by Dorothy Crowfoot Hodgkin to consist of a cobalt centre in a corrin macrocycle. The Watson-Crick structure for DNA demonstrated the key structural role played by phosphate-containing polymers.
Them
|
https://en.wikipedia.org/wiki/Pentagonal%20cupola
|
In geometry, the pentagonal cupola is one of the Johnson solids (J5). It can be obtained as a slice of the rhombicosidodecahedron. The pentagonal cupola consists of 5 equilateral triangles, 5 squares, 1 pentagon, and 1 decagon.
Formulae
The following formulae for volume, surface area and circumradius can be used if all faces are regular, with edge length a:
The height of the pentagonal cupola is √((5 − √5)/10) a ≈ 0.5257a.
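This height can be checked numerically from the circumradii of the top pentagon and base decagon; a short Python sketch, using unit edge length:

import math

# A unit-edge regular n-gon has circumradius 1 / (2 sin(pi/n)); the top pentagon
# is offset from the base decagon by half a decagon step (pi/10), and each
# slant edge also has unit length.
r10 = 1 / (2 * math.sin(math.pi / 10))  # decagon circumradius, ~1.618
r5 = 1 / (2 * math.sin(math.pi / 5))    # pentagon circumradius, ~0.851
d2 = r10**2 + r5**2 - 2 * r10 * r5 * math.cos(math.pi / 10)
print(math.sqrt(1 - d2))                   # ~0.525731
print(math.sqrt((5 - math.sqrt(5)) / 10))  # the closed form agrees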
Related polyhedra
Dual polyhedron
The dual of the pentagonal cupola has 10 triangular faces and 5 kite faces:
Other convex cupolae
Crossed pentagrammic cupola
In geometry, the crossed pentagrammic cupola is one of the nonconvex Johnson solid isomorphs, being topologically identical to the convex pentagonal cupola. It can be obtained as a slice of the nonconvex great rhombicosidodecahedron or quasirhombicosidodecahedron, analogously to how the pentagonal cupola may be obtained as a slice of the rhombicosidodecahedron. As in all cupolae, the base polygon has twice as many edges and vertices as the top; in this case the base polygon is a decagram.
It may be seen as a cupola with a retrograde pentagrammic base, so that the squares and triangles connect across the bases in the opposite way to the pentagrammic cuploid, hence intersecting each other more deeply.
References
External links
Prismatoid polyhedra
Johnson solids
|
https://en.wikipedia.org/wiki/Diminished%20rhombicosidodecahedron
|
In geometry, the diminished rhombicosidodecahedron is one of the Johnson solids (J76). It can be constructed as a rhombicosidodecahedron with one pentagonal cupola removed.
Related Johnson solids are:
J80: parabidiminished rhombicosidodecahedron with two opposing cupolae removed,
J81: metabidiminished rhombicosidodecahedron with two non-opposing cupolae removed, and
J83: tridiminished rhombicosidodecahedron with three cupolae removed.
External links
Editable printable net of a diminished rhombicosidodecahedron with interactive 3D view
Johnson solids
|
https://en.wikipedia.org/wiki/Gyrate%20rhombicosidodecahedron
|
In geometry, the gyrate rhombicosidodecahedron is one of the Johnson solids (J72). It is also a canonical polyhedron.
Related polyhedron
It can be constructed as a rhombicosidodecahedron with one pentagonal cupola rotated through 36 degrees. It has the same faces around each vertex as the rhombicosidodecahedron, but the vertex configurations of the vertices along the rotation occur in a different order.
Alternative Johnson solids, constructed by rotating different cupolae of a rhombicosidodecahedron, are:
The parabigyrate rhombicosidodecahedron (J73) where two opposing cupolae are rotated;
The metabigyrate rhombicosidodecahedron (J74) where two non-opposing cupolae are rotated;
And the trigyrate rhombicosidodecahedron (J75) where three cupolae are rotated.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/List%20of%20Java%20keywords
|
In the Java programming language, a keyword is any one of 68 reserved words that have a predefined meaning in the language. Because of this, programmers cannot use keywords in some contexts, such as names for variables, methods, classes, or as any other identifier. Of these 68 keywords, 17 of them are only contextually reserved, and can sometimes be used as an identifier, unlike standard reserved words. Due to their special functions in the language, most integrated development environments for Java use syntax highlighting to display keywords in a different colour for easy identification.
List of Java keywords
_
Added in Java 9, the underscore has become a keyword and cannot be used as a variable name anymore.
abstract
A method with no definition must be declared as abstract and the class containing it must be declared as abstract. Abstract classes cannot be instantiated. Abstract methods must be implemented in the sub classes. The abstract keyword cannot be used with variables or constructors. Note that an abstract class isn't required to have an abstract method at all.
assert (added in J2SE 1.4)
Assert describes a predicate (a true–false statement) placed in a Java program to indicate that the developer thinks that the predicate is always true at that place. If an assertion evaluates to false at run-time, an assertion failure results, which typically causes execution to abort. Assertions are disabled at runtime by default, but can be enabled through a command-line option or programmatically through a method on the class loader.
boolean
Defines a boolean variable for the values "true" or "false" only. By default, the value of boolean primitive type is false. This keyword is also used to declare that a method returns a value of the primitive type boolean.
break
Used to end the execution in the current loop body.
Used to break out of a switch block.
byte
The byte keyword is used to declare a field that can hold an 8-bit signed two's complement integer. This k
|
https://en.wikipedia.org/wiki/Bencode
|
Bencode (pronounced like Bee-encode) is the encoding used by the peer-to-peer file sharing system BitTorrent for storing and transmitting loosely structured data.
It supports four different types of values:
byte strings,
integers,
lists, and
dictionaries (associative arrays).
Bencoding is most commonly used in torrent files, and as such is part of the BitTorrent specification. These metadata files are simply bencoded dictionaries.
Bencoding is simple and (because numbers are encoded as text in decimal notation) is unaffected by endianness, which is important for a cross-platform application like BitTorrent. It is also fairly flexible, as long as applications ignore unexpected dictionary keys, so that new ones can be added without creating incompatibilities.
Encoding algorithm
Bencode uses ASCII characters as delimiters and digits.
An integer is encoded as i<integer encoded in base ten ASCII>e. Leading zeros are not allowed (although the number zero is still represented as "0"). Negative values are encoded by prefixing the number with a hyphen-minus. The number 42 would thus be encoded as i42e, 0 as i0e, and -42 as i-42e. Negative zero is not permitted.
A byte string (a sequence of bytes, not necessarily characters) is encoded as <length>:<contents>. The length is encoded in base 10, like integers, but must be non-negative (zero is allowed); the contents are just the bytes that make up the string. The string "spam" would be encoded as 4:spam. The specification does not deal with encoding of characters outside the ASCII set; to mitigate this, some BitTorrent applications explicitly communicate the encoding (most commonly UTF-8) in various non-standard ways. This is identical to how netstrings work, except that netstrings additionally append a comma suffix after the byte sequence.
A list of values is encoded as l<contents>e. The contents consist of the bencoded elements of the list, in order, concatenated. A list consisting of the string "spam" and the number 42 would be encoded as l4:spami42ee.
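These rules translate directly into a small recursive encoder. The following Python sketch is illustrative, not the BitTorrent reference implementation; it emits dictionary keys in sorted order and assumes UTF-8 for text strings:

def bencode(value):
    if isinstance(value, bool):
        raise TypeError("booleans are not a bencode type")
    if isinstance(value, int):
        return b"i%de" % value
    if isinstance(value, str):
        value = value.encode("utf-8")  # encoding choice is application-defined
    if isinstance(value, bytes):
        return b"%d:%s" % (len(value), value)
    if isinstance(value, list):
        return b"l" + b"".join(bencode(v) for v in value) + b"e"
    if isinstance(value, dict):
        parts = [bencode(k) + bencode(value[k]) for k in sorted(value)]
        return b"d" + b"".join(parts) + b"e"
    raise TypeError("cannot bencode " + type(value).__name__)

print(bencode(42))            # b'i42e'
print(bencode("spam"))        # b'4:spam'
print(bencode(["spam", 42]))  # b'l4:spami42ee'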
|
https://en.wikipedia.org/wiki/Set%20partitioning%20in%20hierarchical%20trees
|
Set partitioning in hierarchical trees (SPIHT) is an image compression algorithm that exploits the inherent similarities across the subbands in a wavelet decomposition of an image. The algorithm was developed by Brazilian engineer Amir Said with William A. Pearlman in 1996.
General description
The algorithm codes the most important wavelet transform coefficients first, and transmits the bits so that an increasingly refined copy of the original image can be obtained progressively.
See also
Embedded Zerotrees of Wavelet transforms (EZW)
Wavelet
References
Image compression
Wavelets
Brazilian inventions
|
https://en.wikipedia.org/wiki/Neutron%20flux
|
The neutron flux, φ, is a scalar quantity used in nuclear physics and nuclear reactor physics. It is the total distance travelled by all free neutrons per unit time and volume. Equivalently, it can be defined as the number of neutrons travelling through a small sphere of radius R in a time interval, divided by a maximal cross section of the sphere (the great disk area, πR²) and by the duration of the time interval. The dimension of neutron flux is length−2 time−1 and the usual unit is cm−2s−1 (reciprocal square centimetre times reciprocal second).
The neutron fluence is defined as the neutron flux integrated over a certain time period, so its dimension is length−2 and its usual unit is cm−2 (reciprocal square centimetre). An older term used instead of cm−2 was "n.v.t." (neutrons, velocity, time).
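For a constant flux, the fluence is simply the flux multiplied by the exposure time, as in this illustrative Python sketch; the flux value is an assumed typical order of magnitude for a research reactor, not a figure from this article:

flux = 1.0e13                          # neutrons per cm^2 per second (assumed)
seconds_per_year = 365.25 * 24 * 3600
fluence = flux * seconds_per_year      # cm^-2 after one year of steady irradiation
print(f"{fluence:.2e} n/cm^2")         # about 3.16e+20 n/cm^2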
Natural neutron flux
Neutron flux in asymptotic giant branch stars and in supernovae is responsible for most of the natural nucleosynthesis producing elements heavier than iron. In stars there is a relatively low neutron flux on the order of 10⁵ to 10¹¹ cm−2 s−1, resulting in nucleosynthesis by the s-process (slow neutron-capture process). By contrast, after a core-collapse supernova, there is an extremely high neutron flux, on the order of 10³² cm−2 s−1, resulting in nucleosynthesis by the r-process (rapid neutron-capture process).
Earth atmospheric neutron flux, apparently from thunderstorms, can reach levels of 3·10⁻² to 9·10¹ cm−2 s−1. However, recent results (considered invalid by the original investigators) obtained with unshielded scintillation neutron detectors show a decrease in the neutron flux during thunderstorms. Recent research appears to support lightning generating 10¹³–10¹⁵ neutrons per discharge via photonuclear processes.
Artificial neutron flux
Artificial neutron flux refers to neutron flux which is man-made, either as byproducts from weapons or nuclear energy production or for a specific application such as from a research reactor or by spallation. A flow of neutrons is oft
|
https://en.wikipedia.org/wiki/Metal%20rectifier
|
A metal rectifier is an early type of semiconductor rectifier in which the semiconductor is copper oxide, germanium or selenium. They were used in power applications to convert alternating current to direct current in devices such as radios and battery chargers. Westinghouse Electric was a major manufacturer of these rectifiers since the late 1920s, under the trade name Westector (now used as a trade name for an overcurrent trip device by Westinghouse Nuclear).
In some countries the term "metal rectifier" is applied to all such devices; in others the term "metal rectifier" normally refers to copper-oxide types, and "selenium rectifier" to selenium-iron types.
Description
Metal rectifiers consist of washer-like discs of different metals, either copper (with an oxide layer to provide the rectification) or steel or aluminium, plated with selenium. The discs are often separated by spacer sleeves to provide cooling.
Mode of operation
The principle of operation of a metal rectifier is related to modern semiconductor rectifiers (Schottky diodes and p–n diodes), but somewhat more complex. Both selenium and copper oxide are semiconductors, in practice doped by impurities during manufacture. When they are deposited on metals, it would be expected that the result is a simple metal–semiconductor junction and that the rectification would be a result of a Schottky barrier.
However, this is not always the case: the scientist S. Poganski discovered in the 1940s that the best selenium rectifiers were in fact semiconductor-semiconductor junctions between selenium and a thin cadmium selenide layer, generated out of the cadmium-tin metal coating during processing.
In any case the result is that there is a depletion region in the semiconductor, with a built-in electric field, and this provides the rectifying action.
Performance
Compared to later silicon or germanium devices, copper-oxide rectifiers tended to have poor efficiency, and the reverse voltage rating was rarely mor
|
https://en.wikipedia.org/wiki/Estimated%20sign
|
The estimated sign, ℮, also referred to as the e-mark, can be found on most prepacked products in the European Union (EU). Its use indicates that the prepackage fulfils EU Directive 76/211/EEC, which specifies the maximum permitted tolerances in package content. The shape and dimensions of the e-mark are defined in EU Directive 2009/34/EC. The e-mark is also used on prepackages in the United Kingdom, Australia and South Africa.
The scope of the directive is limited to prepackages that have a predetermined nominal weight of between 5 g and 10 kg or a nominal volume of between 5 ml and 10 L, are filled without the purchaser present, and in which the quantity cannot be altered without opening or destroying the packing material.
The estimated sign indicates that:
the average quantity of product in a batch of prepackages shall not be less than the nominal quantity stated on the label;
the proportion of individual prepackages having a negative error greater than the tolerable negative error shall be sufficiently small for batches of prepackages to satisfy the requirements of the official reference test as specified in legislation;
none of the prepackages marked have a negative error greater than twice the tolerable negative error (since no such prepackage may bear the sign).
The tolerable negative error is related to the nominal quantity and varies from 9 per cent on prepackages of nominally 50 g or 50 ml or less, to 1.5 per cent on prepackages of nominally 1 kilogram or 1 litre or more. The percentage tolerance decreases as the nominal quantity increases, by alternating between intervals where the tolerance is a fixed percentage and intervals where it is a fixed amount (over which the percentage error therefore decreases).
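For illustration, a simplified Python check of a batch against these criteria; only the two endpoint tolerances quoted above are modelled here, and the intermediate bands of the directive and the official reference sampling test are omitted:

def tolerable_negative_error(nominal_g):
    # Endpoint bands only: 9% at or below 50 g, 1.5% at or above 1 kg.
    if nominal_g <= 50:
        return 0.09 * nominal_g
    if nominal_g >= 1000:
        return 0.015 * nominal_g
    raise ValueError("intermediate bands not modelled in this sketch")

def batch_ok(nominal_g, weights):
    tne = tolerable_negative_error(nominal_g)
    average_ok = sum(weights) / len(weights) >= nominal_g
    none_beyond_twice = all(w >= nominal_g - 2 * tne for w in weights)
    return average_ok and none_beyond_twice

print(batch_ok(50, [50.3, 49.8, 50.1, 49.9]))  # True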
The sign looks like a stylised lowercase "e" and its shape, ℮, is precisely defined by European Union Directive 2009/34/EC. It must be placed in the same field of vision as the nominal quantity. The sign has been added to the Unicode list of characters at position U+212E.
|
https://en.wikipedia.org/wiki/Newzbin
|
Newzbin was a British Usenet indexing website, intended to facilitate access to content on Usenet. The site caused controversy over its stance on copyrighted material. Access to the Newzbin.com website was blocked by BT and Sky in late 2011, following legal action in the UK by Hollywood film studios.
The site announced that it had closed down on 28 November 2012.
Features
Newzbin indexed binary files that had been posted on Usenet, and offered the results through a search engine, with categories that included "Movies", "Music", "Apps" and "Books". The site created NZB files, which allowed the files to be downloaded with a suitable newsreader. NZB files are similar to torrent files, as they do not contain the file itself, but information about the location of the file to be downloaded. The search results could be browsed free of charge after creating a user account, but access to the NZB files was restricted to premium members who paid a subscription.
2010 legal action from Hollywood studios
In February and March 2010, Twentieth Century Fox Film Corporation, Universal City Studios Productions LLLP, Warner Bros. Entertainment Inc., Paramount Pictures Corporation, Disney Enterprises, Inc. and Columbia Pictures Industries, Inc. took joint legal action against Newzbin in the High Court in London, arguing that the site was encouraging widespread copyright infringement by indexing unofficial copies of films on Usenet.
In March 2010, Mr. Justice Kitchin ruled that Newzbin was deliberately indexing copyrighted content, observing that Newzbin had a "sophisticated and substantial infrastructure and in the region of 700,000 members, though not all premium", and that "for the year ended 31 December 2009, it had a turnover in excess of £1million, a profit in excess of £360,000 and paid dividends on ordinary shares of £415,000". Chris Elsworth, the main operator of Newzbin, had said repeatedly during the case that he had no knowledge of infringement occurring on the service,
|
https://en.wikipedia.org/wiki/SpywareBlaster
|
SpywareBlaster is an antispyware and antiadware program for Microsoft Windows designed to block the installation of ActiveX malware.
Overview
SpywareBlaster is a program intended to prevent the download, installation and execution of most spyware, adware, browser hijackers, dialers and other malicious programs based on ActiveX.
SpywareBlaster works on the basis of "blacklists": it activates the "kill bit" for the CLSIDs of known malware programs, preventing them from infecting the protected computer. This approach differs from many other anti-spyware programs, which typically offer the user a chance to scan the hard drive and computer memory to remove unwanted software after it has been installed.
SpywareBlaster allows the user to prevent privacy risks such as tracking cookies. Another feature is the ability to restrict the actions of websites known as distributors of adware and spyware. SpywareBlaster supports several web browsers, including Internet Explorer, Mozilla Firefox, Google Chrome and Microsoft Edge.
SpywareBlaster is currently distributed as freeware, for non-commercial users.
See also
Ad-Aware
Spybot - Search & Destroy
References
External links
Official site
Spyware removal
Windows-only freeware
|
https://en.wikipedia.org/wiki/Animal%20locomotion
|
Animal locomotion, in ethology, is any of a variety of methods that animals use to move from one place to another. Some modes of locomotion are (initially) self-propelled, e.g., running, swimming, jumping, flying, hopping, soaring and gliding. There are also many animal species that depend on their environment for transportation, a type of mobility called passive locomotion, e.g., sailing (some jellyfish), kiting (spiders), rolling (some beetles and spiders) or riding other animals (phoresis).
Animals move for a variety of reasons, such as to find food, a mate, a suitable microhabitat, or to escape predators. For many animals, the ability to move is essential for survival and, as a result, natural selection has shaped the locomotion methods and mechanisms used by moving organisms. For example, migratory animals that travel vast distances (such as the Arctic tern) typically have a locomotion mechanism that costs very little energy per unit distance, whereas non-migratory animals that must frequently move quickly to escape predators are likely to have energetically costly, but very fast, locomotion.
The anatomical structures that animals use for movement, including cilia, legs, wings, arms, fins, or tails are sometimes referred to as locomotory organs or locomotory structures.
Etymology
The term "locomotion" is formed in English from Latin loco "from a place" (ablative of locus "place") + motio "motion, a moving".
Locomotion in different media
Animals move through, or on, four types of environment: aquatic (in or on water), terrestrial (on ground or other surface, including arboreal, or tree-dwelling), fossorial (underground), and aerial (in the air). Many animals—for example semi-aquatic animals, and diving birds—regularly move through more than one type of medium. In some cases, the surface they move on facilitates their method of locomotion.
Aquatic
Swimming
In water, staying afloat is possible using buoyancy. If an animal's body is less dense than water, i
|
https://en.wikipedia.org/wiki/ClamWin%20Free%20Antivirus
|
ClamWin Free Antivirus is a free and open-source antivirus tool for Windows. It provides a graphical user interface to the Clam AntiVirus engine.
Features
Scanning scheduler (only effective with user logged in).
Automatic virus database updates on a regular basis.
Standalone virus-scanner.
Context menu integration for Windows Explorer.
Add-in for Microsoft Outlook.
A portable version that can be used from a USB flash drive.
There are Firefox extensions that allow the users to process downloaded files with ClamWin.
No real-time scanning
ClamWin Free Antivirus scans on demand; it does not automatically scan files as they are read and written.
The non-affiliated projects Clam Sentinel and Winpooch are add-ons that provide a real-time scanning capability to ClamWin.
Updates
ClamWin Free Antivirus has a virus database which is updated automatically when it detects connection to the Internet. A small balloon tip appears on the taskbar icon indicating completion status of the update process. It retries to establish connection with the server if it fails to download the updates first time.
Effectiveness
Historically ClamWin Free Antivirus has suffered from poor detection rates and its scans have been slow and less effective than some other antivirus programs. For example, in 2009 ClamWin Free Antivirus failed to detect almost half of the trojan horses, password stealers, and other malware in AV-TEST's "zoo" of malware samples.
In the 1–21 June 2008 test performed by Virus.gr, ClamWin Free Antivirus version 0.93 detected 54.68% of all threats and ranked 37th out of 49 products tested; the best scored over 99%.
In the 10 August-05 September 2009 test performed by Virus.gr, ClamWin Free Antivirus version 0.95.2 detected 52.48% of all threats and ranked 43 out of 55 products tested; the best scored 98.89%.
On 6 September 2011, CNET gave ClamWin Free Antivirus a rating of excellent, 4 of 5 stars.
See also
List of antivirus software
List of free and open-so
|
https://en.wikipedia.org/wiki/Parabidiminished%20rhombicosidodecahedron
|
In geometry, the parabidiminished rhombicosidodecahedron is one of the Johnson solids (J80). It is also a canonical polyhedron.
It can be constructed as a rhombicosidodecahedron with two opposing pentagonal cupolae removed. Related Johnson solids are the diminished rhombicosidodecahedron (J76) where one cupola is removed, the metabidiminished rhombicosidodecahedron (J81) where two non-opposing cupolae are removed, and the tridiminished rhombicosidodecahedron (J83) where three cupolae are removed.
Example
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Metabidiminished%20rhombicosidodecahedron
|
In geometry, the metabidiminished rhombicosidodecahedron is one of the Johnson solids (J81).
It can be constructed as a rhombicosidodecahedron with two non-opposing pentagonal cupolae (J5) removed.
Related Johnson solids are:
The diminished rhombicosidodecahedron (J76) where one cupola is removed,
The parabidiminished rhombicosidodecahedron (J80) where two opposing cupolae are removed,
The gyrate bidiminished rhombicosidodecahedron (J82) where two non-opposing cupolae are removed and a third is rotated 36 degrees,
And the tridiminished rhombicosidodecahedron (J83) where three cupolae are removed.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Tridiminished%20rhombicosidodecahedron
|
In geometry, the tridiminished rhombicosidodecahedron is one of the Johnson solids (J83). It can be constructed as a rhombicosidodecahedron with three pentagonal cupolae removed.
Related Johnson solids are:
J76: diminished rhombicosidodecahedron with one cupola removed,
J80: parabidiminished rhombicosidodecahedron with two opposing cupolae removed, and
J81: metabidiminished rhombicosidodecahedron with two non-opposing cupolae removed.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Trigyrate%20rhombicosidodecahedron
|
In geometry, the trigyrate rhombicosidodecahedron is one of the Johnson solids (J75). It contains 20 triangles, 30 squares and 12 pentagons. It is also a canonical polyhedron.
It can be constructed as a rhombicosidodecahedron with three pentagonal cupolae rotated through 36 degrees. Related Johnson solids are:
The gyrate rhombicosidodecahedron (J72) where one cupola is rotated;
The parabigyrate rhombicosidodecahedron (J73) where two opposing cupolae are rotated;
And the metabigyrate rhombicosidodecahedron (J74) where two non-opposing cupolae are rotated.
References
Norman W. Johnson, "Convex Solids with Regular Faces", Canadian Journal of Mathematics, 18, 1966, pages 169–200. Contains the original enumeration of the 92 solids and the conjecture that there are no others.
The first proof that there are only 92 Johnson solids.
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Snub%20disphenoid
|
In geometry, the snub disphenoid, Siamese dodecahedron, triangular dodecahedron, trigonal dodecahedron, or dodecadeltahedron is a convex polyhedron with twelve equilateral triangles as its faces. It is not a regular polyhedron because some vertices have four faces and others have five. It is a dodecahedron, one of the eight deltahedra (convex polyhedra with equilateral triangle faces), and is the 84th Johnson solid (non-uniform convex polyhedra with regular faces). It can be thought of as a square antiprism where both squares are replaced with two equilateral triangles.
The snub disphenoid is also the vertex figure of the isogonal 13-5 step prism, a polychoron constructed from a 13-13 duoprism by selecting a vertex on a tridecagon, then selecting the 5th vertex on the next tridecagon, doing so until reaching the original tridecagon. It cannot be made uniform, however, because the snub disphenoid has no circumscribed sphere.
History and naming
This shape was called a Siamese dodecahedron in the paper by Hans Freudenthal and B. L. van der Waerden (1947) which first described the set of eight convex deltahedra. The dodecadeltahedron name was given to the same shape by Bernal, referring to the fact that it is a 12-sided deltahedron. There are other simplicial dodecahedra, such as the hexagonal bipyramid, but this is the only one that can be realized with equilateral faces. Bernal was interested in the shapes of holes left in irregular close-packed arrangements of spheres, so he used a restrictive definition of deltahedra, in which a deltahedron is a convex polyhedron with triangular faces that can be formed by the centers of a collection of congruent spheres, whose tangencies represent polyhedron edges, and such that there is no room to pack another sphere inside the cage created by this system of spheres. This restrictive definition disallows the triangular bipyramid (as forming two tetrahedral holes rather than a single hole), pentagonal bipyramid (because the spheres for
|
https://en.wikipedia.org/wiki/Snub%20square%20antiprism
|
In geometry, the snub square antiprism is one of the Johnson solids (J85).
It is one of the elementary Johnson solids that do not arise from "cut and paste" manipulations of the Platonic and Archimedean solids, although it is a relative of the icosahedron that has fourfold symmetry instead of threefold.
Construction
The snub square antiprism is constructed as its name suggests, a square antiprism which is snubbed, and represented as ss{2,8}, with s{2,8} as a square antiprism. It can be constructed in Conway polyhedron notation as sY4 (snub square pyramid).
It can also be constructed as a square gyrobianticupola, connecting two anticupolae with gyrated orientations.
Cartesian coordinates
Let k ≈ 0.82354 be the positive root of the cubic polynomial
Furthermore, let h ≈ 1.35374 be defined by
Then, Cartesian coordinates of a snub square antiprism with edge length 2 are given by the union of the orbits of the points
under the action of the group generated by a rotation around the z-axis by 90° and by a rotation by 180° around a straight line perpendicular to the z-axis and making an angle of 22.5° with the x-axis.
We may then calculate the surface area of a snub square antiprism of edge length a as
and its volume as
where ξ ≈ 3.60122 is the greatest real root of the polynomial
Snub antiprisms
Similarly constructed, ss{2,6} is a snub triangular antiprism (a lower-symmetry octahedron), and results in a regular icosahedron. A snub pentagonal antiprism, ss{2,10}, or higher n-antiprisms can be similarly constructed, but not as convex polyhedra with equilateral triangles. The preceding Johnson solid, the snub disphenoid, also fits constructionally as ss{2,4}, but one has to retain two degenerate digonal faces (drawn in red) in the digonal antiprism.
References
External links
Johnson solids
|
https://en.wikipedia.org/wiki/Block%20code
|
In coding theory, block codes are a large and important family of error-correcting codes that encode data in blocks.
There is a vast number of examples for block codes, many of which have a wide range of practical applications. The abstract definition of block codes is conceptually useful because it allows coding theorists, mathematicians, and computer scientists to study the limitations of all block codes in a unified way.
Such limitations often take the form of bounds that relate different parameters of the block code to each other, such as its rate and its ability to detect and correct errors.
Examples of block codes are Reed–Solomon codes, Hamming codes, Hadamard codes, Expander codes, Golay codes, and Reed–Muller codes. These examples also belong to the class of linear codes, and hence they are called linear block codes. More particularly, these codes are known as algebraic block codes, or cyclic block codes, because they can be generated using boolean polynomials.
Algebraic block codes are typically hard-decoded using algebraic decoders.
The term block code may also refer to any error-correcting code that acts on a block of k bits of input data to produce n bits of output data. Consequently, the block coder is a memoryless device. Under this definition codes such as turbo codes, terminated convolutional codes and other iteratively decodable codes (turbo-like codes) would also be considered block codes. A non-terminated convolutional encoder would be an example of a non-block (unframed) code, which has memory and is instead classified as a tree code.
This article deals with "algebraic block codes".
The block code and its parameters
Error-correcting codes are used to reliably transmit digital data over unreliable communication channels subject to channel noise.
When a sender wants to transmit a possibly very long data stream using a block code, the sender breaks the stream up into pieces of some fixed size. Each such piece is called a message and the proced
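The simplest instance of this procedure is the (3,1) binary repetition code, sketched below in Python as an illustration: each one-bit message becomes a three-bit codeword, and a single flipped bit per block is corrected by majority vote:

def encode(bits):
    # Repeat every message bit three times: rate k/n = 1/3, minimum distance 3.
    return [b for bit in bits for b in (bit, bit, bit)]

def decode(coded):
    # Majority vote within each three-bit block.
    blocks = [coded[i:i + 3] for i in range(0, len(coded), 3)]
    return [1 if sum(block) >= 2 else 0 for block in blocks]

message = [1, 0, 1, 1]
codeword = encode(message)
codeword[4] ^= 1                      # corrupt one bit of the second block
print(decode(codeword) == message)    # True: the error is corrected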
|
https://en.wikipedia.org/wiki/List%20of%20-ectomies
|
The surgical terminology suffix -ectomy was taken from Greek εκ-τομια = "act of cutting out". It means surgical removal of something, usually from inside the body.
A
Adenectomy is the surgical removal of a gland.
Adenoidectomy is the surgical removal of the adenoids, also known as the pharyngeal tonsils.
Adrenalectomy is the removal of one or both adrenal glands.
Apicoectomy is the surgical removal of tooth's root tip.
Appendectomy is the surgical removal of the appendix; it is also known as an appendicectomy.
Arthrectomy is the removal of a joint of the body.
Auriculectomy is the removal of the ear.
B
Bullectomy is the surgical removal of bullae from the lung.
Bunionectomy is the removal of a bunion.
Bursectomy is the removal of a bursa, a small sac filled with synovial fluid.
C
Cardiectomy is the removal of the cardia of the stomach.
Cecectomy is the removal of the cecum.
Cephalectomy is the surgical removal of the head (decapitation).
Cervicectomy is the removal of the cervix.
Cholecystectomy is the surgical removal of the gallbladder.
Choroidectomy is the removal of the choroid layer of the eye.
Clitoridectomy is the partial or total removal of the external part of the clitoris.
Colectomy is the removal of the colon.
Craniectomy is the surgical removal of a portion of the cranium.
Cystectomy is the removal of the urinary bladder. It also means removal of a cyst.
Corpectomy is the removal of a vertebral body of the spine, as well as the adjacent intervertebral discs.
D
Discectomy is a surgical procedure involving the dissection of an extravasated segment of the intervertebral disc.
Diverticulectomy is a surgical procedure to remove a diverticulum.
Duodenectomy is the removal of the duodenum.
E
Embolectomy is the removal of any type of embolism.
Encephalectomy is the removal of the brain.
Endarterectomy is the removal of plaque from the lining of the artery otherwise constricted by a buildup of fatty deposits.
Endoscopic thoracic sympathectomy is the burning, sever
|
https://en.wikipedia.org/wiki/Mythical%20number
|
A mythical number is a number used and accepted as deriving from scientific investigation and/or careful selection, but whose origin is unknown and whose basis is unsubstantiated. An example is the number 48 billion, which has often been accepted as the number of dollars per year of identity theft. This number "has appeared in hundreds of news stories, including a New York Times piece" despite the fact that it has been shown repeatedly to be highly inaccurate. The term was coined in 1971 by Max Singer, one of the founders of the Hudson Institute.
The origins of such numbers are akin to those of urban legends and may include (among others):
misinterpretation of examples
extrapolation from apparently similar fields
especially successful pranks
comical results
guess-estimates by public officials
deliberate misinformation
See also
Confabulation
Factoid
For all intents and purposes
Newspeak
Noble lie
Truthiness
Verisimilitude
References
Bibliography
Online at edwardtufte.com.
Numbers
|
https://en.wikipedia.org/wiki/Creation%20and%20evolution%20in%20public%20education
|
The status of creation and evolution in public education has been the subject of substantial debate and conflict in legal, political, and religious circles. Globally, there are a wide variety of views on the topic. Most western countries have legislation that mandates that only evolutionary biology be taught in the appropriate scientific syllabuses.
Overview
While many Christian denominations do not raise theological objections to the modern evolutionary synthesis as an explanation for the present forms of life on planet Earth, various socially conservative, traditionalist, and fundamentalist religious sects and political groups within Christianity and Islam have objected vehemently to the study and teaching of biological evolution. Some adherents of these Christian and Islamic religious sects or political groups are passionately opposed to the consensus view of the scientific community. Literal interpretations of religious texts are the greatest cause of conflict with evolutionary and cosmological investigations and conclusions.
Internationally, biological evolution is taught in science courses with limited controversy, with the exception of a few areas of the United States and several Muslim-majority countries, primarily Turkey. In the United States, the Supreme Court has ruled the teaching of creationism as science in public schools to be unconstitutional, irrespective of how it may be purveyed in theological or religious instruction. In the United States, intelligent design (ID) has been represented as an alternative explanation to evolution in recent decades, but its "demonstrably religious, cultural, and legal missions" have been ruled unconstitutional by a lower court.
By country
Australia
Although creationist views are popular among religious education teachers and creationist teaching materials have been distributed by volunteers in some schools, many Australian scientists take an aggressive stance supporting the right of teachers to teach the theory
|
https://en.wikipedia.org/wiki/Matched%20filter
|
In signal processing, a matched filter is obtained by correlating a known delayed signal, or template, with an unknown signal to detect the presence of the template in the unknown signal. This is equivalent to convolving the unknown signal with a conjugated time-reversed version of the template. The matched filter is the optimal linear filter for maximizing the signal-to-noise ratio (SNR) in the presence of additive stochastic noise.
Matched filters are commonly used in radar, in which a known signal is sent out, and the reflected signal is examined for common elements of the out-going signal. Pulse compression is an example of matched filtering. It is so called because the impulse response is matched to input pulse signals. Two-dimensional matched filters are commonly used in image processing, e.g., to improve the SNR of X-ray observations.
Matched filtering is a demodulation technique that uses LTI (linear time-invariant) filters to maximize the SNR.
It was originally also known as a North filter.
Derivation
Derivation via matrix algebra
The following section derives the matched filter for a discrete-time system. The derivation for a continuous-time system is similar, with summations replaced with integrals.
The matched filter is the linear filter, h, that maximizes the output signal-to-noise ratio,

y[n] = Σ_k h[n − k] x[k],

where x[k] is the input as a function of the independent variable k, and y[n] is the filtered output. Though we most often express filters as the impulse response of convolution systems, as above (see LTI system theory), it is easiest to think of the matched filter in the context of the inner product, which we will see shortly.
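As a numerical sketch of this idea (assuming NumPy; the pulse shape, offset, and noise level are made-up illustrative values), detecting a known template in noise amounts to convolving with its conjugated time-reversed copy and looking for the peak:

```python
import numpy as np

rng = np.random.default_rng(0)

template = np.array([1.0, 1.0, 1.0, -1.0, 1.0])   # known pulse (illustrative)
signal = np.zeros(100)
signal[40:45] = template                            # pulse buried at offset 40
noisy = signal + 0.5 * rng.standard_normal(100)     # additive noise

# Matched filter: convolve with the conjugated, time-reversed template,
# which is the same as correlating the received signal with the template.
h = np.conj(template[::-1])
out = np.convolve(noisy, h, mode="valid")

print(int(np.argmax(out)))  # peaks at (or near) the true offset, 40
```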
We can derive the linear filter that maximizes output signal-to-noise ratio by invoking a geometric argument. The intuition behind the matched filter relies on correlating the received signal (a vector) with a filter (another vector) that is parallel with the signal, maximizing the inner product. This enhances the signal. When we consider the additive
|
https://en.wikipedia.org/wiki/Ambiguity%20function
|
In pulsed radar and sonar signal processing, an ambiguity function is a two-dimensional function of propagation delay τ and Doppler frequency f, χ(τ, f). It represents the distortion of a returned pulse due to the receiver matched filter (commonly, but not exclusively, used in pulse compression radar) of the return from a moving target. The ambiguity function is defined by the properties of the pulse and of the filter, and not any particular target scenario.
Many definitions of the ambiguity function exist; some are restricted to narrowband signals and others are suitable to describe the delay and Doppler relationship of wideband signals. Often the definition of the ambiguity function is given as the magnitude squared of other definitions (Weiss).
For a given complex baseband pulse s(t), the narrowband ambiguity function is given by

χ(τ, f) = ∫ s(t) s*(t − τ) e^(i 2π f t) dt,

where * denotes the complex conjugate and i is the imaginary unit. Note that for zero Doppler shift (f = 0), this reduces to the autocorrelation of s(t). A more concise way of representing the ambiguity function consists of examining the one-dimensional zero-delay and zero-Doppler "cuts"; that is, χ(0, f) and χ(τ, 0), respectively. The matched filter output as a function of time (the signal one would observe in a radar system) is a Doppler cut, with the constant frequency given by the target's Doppler shift: χ(τ, f_D).
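The definition can be evaluated numerically. The sketch below (assuming NumPy; the rectangular pulse and the delay–Doppler grid are arbitrary illustrative choices) computes χ(τ, f) directly from the integral:

```python
import numpy as np

dt = 1e-3
t = np.arange(0, 1, dt)
s = np.ones_like(t)                      # unit rectangular pulse, duration 1

delays = np.arange(-0.5, 0.5, 0.01)
dopplers = np.arange(-5, 5, 0.1)

# chi[j, k] = integral of s(t) * conj(s(t - tau_j)) * exp(i*2*pi*f_k*t) dt
chi = np.empty((len(delays), len(dopplers)), dtype=complex)
for j, tau in enumerate(delays):
    s_shift = np.interp(t - tau, t, s, left=0.0, right=0.0)
    for k, f in enumerate(dopplers):
        chi[j, k] = np.sum(s * np.conj(s_shift) * np.exp(1j * 2 * np.pi * f * t)) * dt

# At zero delay and zero Doppler, |chi(0, 0)| equals the pulse energy (1 here).
j0 = np.argmin(np.abs(delays)); k0 = np.argmin(np.abs(dopplers))
print(abs(chi[j0, k0]))                  # ~1.0
```

The zero-delay and zero-Doppler cuts described above are then simply the slices chi[j0, :] and chi[:, k0] of this grid.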
Background and motivation
Pulse-Doppler radar equipment sends out a series of radio frequency pulses. Each pulse has a certain shape (waveform): how long the pulse is, what its frequency is, whether the frequency changes during the pulse, and so on. If the waves reflect off a single object, the detector will see a signal which, in the simplest case, is a copy of the original pulse but delayed by a certain time τ, related to the object's distance, and shifted by a certain frequency f, related to the object's velocity (Doppler shift). If the original emitted pulse waveform is s(t), then the detected signal (neglecting noise, attenuation, and distortion, and wideband correctio
|
https://en.wikipedia.org/wiki/Data%20migration
|
Data migration is the process of selecting, preparing, extracting, and transforming data and permanently transferring it from one computer storage system to another. Additionally, the validation of migrated data for completeness and the decommissioning of legacy data storage are considered part of the entire data migration process. Data migration is a key consideration for any system implementation, upgrade, or consolidation, and it is typically performed in such a way as to be as automated as possible, freeing up human resources from tedious tasks. Data migration occurs for a variety of reasons, including server or storage equipment replacements, maintenance or upgrades, application migration, website consolidation, disaster recovery, and data center relocation.
The standard phases
, "nearly 40 percent of data migration projects were over time, over budget, or failed entirely." Thus, proper planning is critical for an effective data migration. While the specifics of a data migration plan may vary—sometimes significantly—from project to project, IBM suggests there are three main phases to most any data migration project: planning, migration, and post-migration. Each of those phases has its own steps. During planning, dependencies and requirements are analyzed, migration scenarios get developed and tested, and a project plan that incorporates the prior information is created. During the migration phase, the plan is enacted, and during post-migration, the completeness and thoroughness of the migration is validated, documented, and closed out, including any necessary decommissioning of legacy systems. For applications of moderate to high complexity, these data migration phases may be repeated several times before the new system is considered to be fully validated and deployed.
Planning: The data and applications to be migrated are selected based on business, project, and technical requirements and dependencies. Hardware and bandwidth requirements are analyzed. Feasib
|
https://en.wikipedia.org/wiki/Memory%20scrubbing
|
Memory scrubbing consists of reading from each computer memory location, correcting bit errors (if any) with an error-correcting code (ECC), and writing the corrected data back to the same location.
Due to the high integration density of modern computer memory chips, the individual memory cell structures became small enough to be vulnerable to cosmic rays and/or alpha particle emission. The errors caused by these phenomena are called soft errors. Over 8% of DIMM modules experience at least one correctable error per year. This can be a problem for DRAM- and SRAM-based memories. The probability of a soft error at any individual memory bit is very small. However, together with the large amount of memory modern computers, especially servers, are equipped with, and together with extended periods of uptime, the probability of soft errors in the total memory installed is significant.
The information in an ECC memory is stored redundantly enough to correct a single bit error per memory word. Hence, an ECC memory can support the scrubbing of the memory content. Namely, if the memory controller scans systematically through the memory, the single bit errors can be detected, the erroneous bit can be determined using the ECC checksum, and the corrected data can be written back to the memory.
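The scan-correct-write-back loop can be sketched as follows (illustrative only: a small Hamming(7,4)-style code over 4-bit words stands in for a real controller's ECC, and the memory is just a Python list):

```python
# Minimal sketch of a memory-scrubbing pass (not a real controller API).
# Each stored word = 4 data bits + 3 parity bits of a Hamming(7,4) code.

PARITY = [(0, 1, 3), (0, 2, 3), (1, 2, 3)]          # data bits covered by p0..p2

def encode(data):                                    # data: 4 bits
    return data + [sum(data[i] for i in grp) % 2 for grp in PARITY]

def check(word):                                     # all parity equations hold?
    return all(sum(word[i] for i in grp) % 2 == word[4 + j]
               for j, grp in enumerate(PARITY))

def scrub(memory):
    """Scan every location; correct any single-bit error and write back."""
    for addr, word in enumerate(memory):
        if check(word):
            continue
        for bit in range(len(word)):                 # single-error correction
            word[bit] ^= 1
            if check(word):
                memory[addr] = word                  # write corrected data back
                break
            word[bit] ^= 1

memory = [encode([1, 0, 1, 1]), encode([0, 0, 1, 0])]
memory[1][2] ^= 1                                    # inject a soft error
scrub(memory)
assert all(check(w) for w in memory)
```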
Overview
It is important to check each memory location periodically, and frequently enough that multiple bit errors within the same word are unlikely to accumulate, because single bit errors can be corrected but multiple bit errors are not correctable, in the case of usual (as of 2008) ECC memory modules.
In order to not disturb regular memory requests from the CPU and thus prevent decreasing performance, scrubbing is usually only done during idle periods. As the scrubbing consists of normal read and write operations, it may increase power consumption for the memory compared to non-scrubbing operation. Therefore, scrubbing is not performed continuously but periodically. For many servers,
|
https://en.wikipedia.org/wiki/Acrophobia%20%28game%29
|
Acrophobia is an online multiplayer word game. The game was originally conceived by Andrea Shubert, and programmed by Kenrick Mock and Michelle Hoyle in 1995. Originally available over Internet Relay Chat, the game has since been developed into a number of variants, as a download, playable through a browser, via Twitter or through Facebook.
Background
Andrea Shubert created the game in the mid to late 1990s and later developed a "spiritual successor" called TAG: The Acronym Game for startup gaming company play140.
Game play
Players enter a channel hosted by a bot which runs the game. In each round, the bot generates a random acronym. Players compete by racing to create the most coherent or humorous sentence that fits the acronym - in essence, a backronym. After a set amount of time expires, each player then votes anonymously via the bot for their favorite answer (aside from their own).
Points are awarded to the most popular backronym. Bonus points also may be given based on the fastest response and for voting for the winning option. Some implementations give the speed bonus to the player with the first answer that received at least one vote; this is to discourage players from quickly entering gibberish just to be the first. Bonus points for voting for the winner helps discourage players from intentionally voting for poor answers to avoid giving votes to answers that might beat their own.
Some versions of the game were criticized for the ease with which players could disrupt games with obscenities, and the anonymous nature of the site meant that there were no repercussions for this behavior. Usually, nonsense backronyms will score low and the most humorous sounding backronym which effectively makes a sentence from the initials will win. Some rounds may have a specific topic that the answers should fit, although enforcement of the topic depends solely on the other players' willingness to vote for off-topic answers.
Acrophobia was commended as an online game that showed t
|
https://en.wikipedia.org/wiki/Refrigerator%20car
|
A refrigerator car (or "reefer") is a refrigerated boxcar (U.S.), a piece of railroad rolling stock designed to carry perishable freight at specific temperatures. Refrigerator cars differ from simple insulated boxcars and ventilated boxcars (commonly used for transporting fruit), neither of which are fitted with cooling apparatus. Reefers can be ice-cooled, come equipped with any one of a variety of mechanical refrigeration systems, or utilize carbon dioxide (either as dry ice, or in liquid form) as a cooling agent. Milk cars (and other types of "express" reefers) may or may not include a cooling system, but are equipped with high-speed trucks and other modifications that allow them to travel with passenger trains.
History
Background: North America
After the end of the American Civil War, Chicago, Illinois emerged as a major railway center for the distribution of livestock raised on the Great Plains to Eastern markets. Transporting the animals to market required herds to be driven to railheads in Kansas City, Missouri or other locations in the midwest, such as Abilene and Dodge City, Kansas, where they were loaded into specialized stock cars and transported live ("on-the-hoof") to regional processing centers. Driving cattle across the plains also caused tremendous weight loss, with some animals dying in transit.
Upon arrival at the local processing facility, livestock were slaughtered by wholesalers and delivered fresh to nearby butcher shops for retail sale, smoked, or packed for shipment in barrels of salt. Costly inefficiencies were inherent in transporting live animals by rail, particularly the fact that approximately 60% of the animal's mass is inedible. The death of animals weakened by the long drive further increased the per-unit shipping cost. Meat processors sought a method to ship dressed meats from their Chicago packing plants to eastern markets.
Early attempts at refrigerated transport
During the mid-19th century, attempts were made to ship
|
https://en.wikipedia.org/wiki/Link%20contract
|
A link contract is an approach to data control in a distributed data sharing network. Link contracts are a key feature of the XDI specifications under development at OASIS.
In XDI, a link contract is a machine-readable XDI document that governs the sharing of other XDI data. Unlike a conventional Web link, which is essentially a one-dimensional "string" that "pulls" a linked document into a browser, a link contract is a graph of metadata (typically in JSON) that can actively control the flow of data from a publisher to a subscriber by either "push" or "pull". The flow is controlled by the terms of the contract, which can be as flexible and extensible as real-world contracts, i.e., link contracts can govern:
Identification: Who are the parties to the contract?
Authority: Who controls the data being shared via the contract?
Authentication: How will each party prove its identity to the other?
Authorization: Who has what access rights and privileges to the data?
Scope: What data does it cover?
Permission and Privacy: What uses can be made of the data and by whom?
Synchronization: How and when will the subscriber receive updates to the data?
Termination: What happens when the data sharing relationship is ended?
Recourse: How will any disputes over the contract be resolved?
Like real-world contracts, link contracts can also refer to other link contracts. Using this design, the vast majority of link contracts can be very simple, referring to a very small number of more complex link contracts that have been carefully designed to reflect the requirements of common data exchange scenarios (e.g., business cards, mailing lists, e-commerce transactions, website registrations, etc.)
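As a purely illustrative sketch (this is not actual XDI syntax; every field name and value below is hypothetical, simply mirroring the contract terms listed above), such a contract's metadata might be modeled like this:

```python
# Hypothetical model of a link contract's terms (illustrative only;
# real XDI link contracts use their own graph serialization).
link_contract = {
    "identification": {"publisher": "alice", "subscriber": "bob"},
    "authority":       "alice",                      # who controls the data
    "authentication":  "mutual public-key proof",    # how parties prove identity
    "authorization":   {"bob": ["read"]},            # access rights and privileges
    "scope":           ["business-card"],            # data covered by the contract
    "permissions":     {"reshare": False},           # allowed uses of the data
    "synchronization": {"push_updates": True},       # how updates are delivered
    "termination":     "subscriber access revoked; copies must be deleted",
    "recourse":        "disputes resolved per the referenced trust framework",
    "references":      ["common-business-card-contract"],  # other link contracts
}
```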
Link contracts have been proposed as a key element of digital trust frameworks such as those published by the non-profit Open Identity Exchange.
See also
XDI
Social Web
Creative Commons
External links
OASIS XDI Technical Committee
XDI.org
XML-based standards
Web services
|
https://en.wikipedia.org/wiki/Newbridge%20Networks
|
Newbridge Networks was an Ottawa, Ontario, Canada company founded by Welsh-Canadian entrepreneur Sir Terry Matthews. It was founded in 1986 to create data and voice networking products after Matthews was forced out of his original company Mitel. According to Matthews, he saw that data networking would grow far faster than voice networking, and he had wanted to take Mitel in much the same direction, but the 'risk-averse' British Telecom-dominated Mitel board refused and effectively ousted him. The name Newbridge Networks comes from Sir Terry Matthews' home town of Newbridge in south Wales.
Newbridge quickly became a major market player in this area using the voice switching and software engineering expertise that was prevalent in the Ottawa area.
The company initially had innovative channelbank products, which allowed telcos with existing wiring to offer a wide variety of new services. Newbridge also offered (for the time) the industry's most innovative network management (46020 NMS) and ISDN TAP 3500, RS-232C 3600 Mainstreet PC 4601A and ACC "river" routers (Congo/Amazon/Tigris etc.), including distributed star-topology routing through both proprietary software over the unused facilities' data links in optical hardware and telco switches.
Starting in 1992 the company became increasingly focused upon and well known for its family of ATM products such as the MainStreet 36150 and MainStreet Xpress 36170 (later renamed Alcatel 7470).
Newbridge later absorbed some routing technology that had been abandoned by Tandem Computers, with the purchase of Ungermann-Bass and entered the pure data networking market with a traditional routing and switching product. This was in addition to its internally developed ViViD product line, which was a network-wide distributed routing product (ridge or routing bridge—bridges with a centralized routing server).
Newbridge had 30% ownership of affiliate West End Systems Corp., which was headquartered in Kanata, Ontario, with R&D facilit
|
https://en.wikipedia.org/wiki/Gandalf%20Technologies
|
Gandalf Technologies, Inc., or simply Gandalf, was a Canadian data communications company based in Ottawa. It was best known for modems and terminal adapters that allowed computer terminals to connect to host computers through a single interface. Gandalf also pioneered a radio-based mobile data terminal that was popular for many years in taxi dispatch systems. The rapid rise of TCP/IP relegated many of Gandalf's products to niche status, and the company went bankrupt in 1997; its assets were acquired by Mitel.
History
Gandalf was founded by Desmond Cunningham and Colin Patterson in 1971, and started business from the lobby of the Skyline Hotel, which is now the Crowne Plaza Hotel, on Albert Street in Ottawa.
The company's first products were industrial-looking half-bridges for remote terminals which were supported by large terminal multiplexers on the "computer end". Gandalf referred to these systems as a "PACX", in analogy to the telephony PABX which provided similar services in the voice field. These systems allowed the user to "dial up" the Gandalf box and then instruct it what computer they wanted to connect to. In this fashion, large computer networks could be built in a single location using shared resources, as opposed to having to dedicate terminals to different machines. These systems were particularly popular in large companies and universities.
Gandalf supplanted these systems with "true" modems, both for host-to-host use and for remote workers. Unlike most modems, Gandalf's devices were custom systems intended to connect only to another Gandalf modem, and were designed to extract the maximum performance possible. Gandalf sold a number of different designs intended to be used with different line lengths and qualities, from 4-wire modems running at 9600 bit/s over "short" distances (bumped to 19,200 bit/s in later models), to 2400 bit/s models for 2-wire runs over longer distances. On the host-end, modem blocks could be attached to the same PACX multip
|
https://en.wikipedia.org/wiki/BS%207799
|
BS 7799 was a standard originally published by BSI Group (BSI) in 1995. It was written by the United Kingdom Government's Department of Trade and Industry (DTI), and consisted of several parts.
The first part, containing the best practices for Information Security Management, was revised in 1998; after a lengthy discussion in the worldwide standards bodies, it was eventually adopted by ISO/IEC as ISO/IEC 17799, "Information Technology - Code of practice for information security management." in 2000. ISO/IEC 17799 was then revised in June 2005 and finally incorporated in the ISO 27000 series of standards as ISO/IEC 27002 in July 2007.
The second part to BS 7799 was first published by BSI in 1999, known as BS 7799 Part 2, titled "Information Security Management Systems - Specification with guidance for use." BS 7799-2 focused on how to implement an information security management system (ISMS), referring to the information security management structure and controls identified in BS 7799-2, which later became ISO/IEC 27001. The 2002 version of BS 7799-2 introduced the Plan-Do-Check-Act (PDCA) (Deming quality assurance model), aligning it with quality standards such as ISO 9000. BS 7799 Part 2 was adopted by ISO as ISO/IEC 27001 in November 2005.
BS 7799 Part 3 was published in 2005, covering risk analysis and management. It aligns with ISO/IEC 27001. It was revised in 2017.
See also
Cyber security standards
ISO/IEC 27000-series
ISO/IEC 27001
ISO/IEC 27002 (formerly ISO/IEC 17799)
References
External links
British Standards Institution -> BSI Shop
Certificate register
BS 7799 Part 2 PDCA Methodology
07799
Computer security in the United Kingdom
Computer security standards
|
https://en.wikipedia.org/wiki/1%2C000%2C000%2C000
|
1,000,000,000 (one billion, short scale; one thousand million or one milliard, one yard, long scale) is the natural number following 999,999,999 and preceding 1,000,000,001. With a number, "billion" can be abbreviated as b, bil or bn.
In standard form, it is written as 1 × 10^9. The metric prefix giga indicates 1,000,000,000 times the base unit. Its symbol is G.
One billion years may be called an eon in astronomy or geology.
Previously in British English (but not in American English), the word "billion" referred exclusively to a million millions (1,000,000,000,000). However, this is not common anymore, and the word has been used to mean one thousand million (1,000,000,000) for several decades.
The term milliard could also be used to refer to 1,000,000,000; whereas "milliard" is rarely used in English, variations on this name often appear in other languages.
In the South Asian numbering system, it is known as 100 crore or 1 arab.
1,000,000,000 is also the cube of 1000.
Sense of scale
The facts below give a sense of how large 1,000,000,000 (10^9) is in the context of time according to current scientific evidence:
Time
10^9 seconds (1 gigasecond) equal 11,574 days, 1 hour, 46 minutes and 40 seconds (approximately 31.7 years, or 31 years, 8 months, 8 days); a quick check of this conversion follows the list below.
About 10^9 minutes ago, the Roman Empire was flourishing and Christianity was emerging. (10^9 minutes is roughly 1,901 years.)
About 10^9 hours ago, modern human beings and their ancestors were living in the Stone Age (more precisely, the Middle Paleolithic). (10^9 hours is roughly 114,080 years.)
About 10^9 days ago, Australopithecus, an ape-like creature related to an ancestor of modern humans, roamed the African savannas. (10^9 days is roughly years.)
About 10^9 months ago, dinosaurs walked the Earth during the late Cretaceous. (10^9 months is roughly years.)
About 10^9 years—a gigaannus—ago, the first multicellular eukaryotes appeared on Earth.
About 10^9 decades ago, the thin disk of the Milky Way started to fo
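A quick check of the gigasecond conversion in the first item above (standard calendar arithmetic, using the 365.2425-day Gregorian mean year):

```python
# Quick check of the gigasecond conversion quoted above.
seconds = 10**9
days = seconds / 86_400          # 86,400 seconds per day
years = days / 365.2425          # Gregorian mean year
print(f"{days:,.0f} days = {years:.1f} years")   # 11,574 days = 31.7 years
```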
|
https://en.wikipedia.org/wiki/Broadcast%20delay
|
In radio and television, broadcast delay is an intentional delay when broadcasting live material, technically referred to as a deferred live. Such a delay may be to prevent mistakes or unacceptable content from being broadcast. Longer delays lasting several hours can also be introduced so that the material is aired at a later scheduled time (such as the prime time hours) to maximize viewership. Tape delays lasting several hours can also be edited down to remove filler material or to trim a broadcast to the network's desired run time for a broadcast slot, but this is not always the case.
Usage
A short delay is often used to prevent profanity, bloopers, nudity, or other undesirable material from making it to air. In this instance, it is often referred to as a "seven-second delay" or "profanity delay". Longer delays, however, may also be introduced, often to allow a show to air at the same time for the local market as is sometimes done with nationally broadcast programs in countries with multiple time zones. Considered as time shifting, that is often achieved by a "tape delay", using a video tape recorder, modern digital video recorders, or other similar technology.
Tape delay may also refer to the process of broadcasting an event at a later scheduled time because a scheduling conflict prevents a live telecast, or a broadcaster seeks to maximize ratings by airing an event in a certain timeslot. That can also be done because of time constraints of certain portions, usually those that do not affect the outcome of the show, are edited out, or the availability of hosts or other key production staff only at certain times of the day, and it is generally applicable for cable television programs.
In countries that span multiple time zones and have influential domestic eastern regions, such as Australia, Canada, Mexico and the United States, television networks usually delay the entirety of their schedule for stations in the west, so prime time programming can be time shifte
|
https://en.wikipedia.org/wiki/Internet%20Draft
|
An Internet Draft (I-D) is a document published by the Internet Engineering Task Force (IETF) containing preliminary technical specifications, results of networking-related research, or other technical information. Often, Internet Drafts are intended to be work-in-progress documents for work that is eventually to be published as a Request for Comments (RFC) and potentially leading to an Internet Standard.
It is considered inappropriate to rely on Internet Drafts for reference purposes. I-D citations should indicate the I-D is a work in progress.
An Internet Draft is expected to adhere to the basic requirements imposed on any RFC.
An Internet Draft is only valid for six months unless it is replaced by an updated version. An otherwise expired draft remains valid while it is under official review by the Internet Engineering Steering Group (IESG) when a request to publish it as an RFC has been submitted. Expired drafts are replaced with a "tombstone" version and remain available for reference.
Naming conventions
Internet Drafts produced by the IETF working groups follow the naming convention: draft-ietf-<wg>-<name>-<version number>.txt.
Internet Drafts produced by IRTF research groups follow the naming convention: draft-irtf-<rg>-<name>-<version number>.txt.
Drafts produced by individuals follow the naming convention: draft-<individual>-<name>-<version number>.txt
The IAB, RFC Editor, and other organizations associated with the IETF may also produce Internet Drafts. They follow the naming convention: draft-<org>-<name>-<version number>.txt.
The initial version number is represented as 00. The second version, i.e. the first revision is represented as 01, and incremented for all following revisions.
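A small sketch (the regular expression and the sample filename are illustrative, not an official IETF tool) that splits a draft filename into the fields described by these conventions:

```python
import re

# Pattern for the conventions above: draft-<source>-<name>-<version>[.txt],
# where <source> is the stream token (ietf, irtf, an individual, or an org).
DRAFT = re.compile(
    r"^draft-(?P<source>[a-z0-9]+)-(?P<name>[a-z0-9-]+)-(?P<version>\d{2})(?:\.txt)?$"
)

m = DRAFT.match("draft-ietf-httpbis-cache-19.txt")   # hypothetical example name
if m:
    print(m.group("source"), m.group("name"), m.group("version"))
    # -> ietf httpbis-cache 19
```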
References
External links
Internet-Drafts
Status of IETF Internet Drafts (IANA)
Internet Draft search
An archive of expired IDs
Internet Standards
|
https://en.wikipedia.org/wiki/167%20%28number%29
|
167 (one hundred [and] sixty-seven) is the natural number following 166 and preceding 168.
In mathematics
167 is an emirp, an isolated prime, a Chen prime, a Gaussian prime, a safe prime, and an Eisenstein prime with no imaginary part and a real part of the form 3n − 1.
167 is the smallest number which requires six terms when expressed using the greedy algorithm as a sum of squares, 167 = 144 + 16 + 4 + 1 + 1 + 1,
although by Lagrange's four-square theorem its non-greedy expression as a sum of squares can be shorter, e.g. 167 = 121 + 36 + 9 + 1.
167 is a full reptend prime in base 10, since the decimal expansion of 1/167 repeats the following 166 digits: 0.00598802395209580838323353293413173652694610778443113772455089820359281437125748502994 0119760479041916167664670658682634730538922155688622754491017964071856287425149700...
167 is a highly cototient number, as it is the smallest number k with exactly 15 solutions to the equation x - φ(x) = k. It is also a strictly non-palindromic number.
167 is the smallest multi-digit prime such that the product of digits is equal to the number of digits times the sum of the digits, i. e., 1×6×7 = 3×(1+6+7)
167 is the smallest positive integer d such that the imaginary quadratic field Q(√−d) has class number 11.
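Two of the properties above are easy to verify programmatically; the full reptend property is equivalent to 10 having multiplicative order 166 modulo 167:

```python
# Verify two properties of 167 quoted above.

# Full reptend prime in base 10: the decimal period of 1/167, i.e. the
# multiplicative order of 10 modulo 167, must be 166.
order = next(k for k in range(1, 167) if pow(10, k, 167) == 1)
assert order == 166

# Product of digits equals number of digits times sum of digits.
assert 1 * 6 * 7 == 3 * (1 + 6 + 7)
print("period of 1/167:", order)
```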
In astronomy
167 Urda is a main belt asteroid
167P/CINEOS is a periodic comet in the Solar System
IC 167 is an interacting galaxy
In the military
Marine Light Attack Helicopter Squadron 167 is a United States Marine Corps helicopter squadron
Martin Model 167 was a U.S.-designed light bomber during World War II
was a U.S. Navy Diver-class rescue and salvage ship during World War II
was a U.S. Navy during World War II
was a U.S. Navy during World War II
was a U.S. Navy during World War I
was a U.S. Navy during World War II
was a transport ship during World War II
was a U.S. Navy during World War II
In sports
Martina Navratilova has 167 tennis titles, an all-time record for men or women
In transpor
|
https://en.wikipedia.org/wiki/Noam%20Elkies
|
Noam David Elkies (born August 25, 1966) is a professor of mathematics at Harvard University. At the age of 26, he became the youngest professor to receive tenure at Harvard. He is also a pianist, chess national master and a chess composer.
Early life
Elkies was born to an engineer father and a piano teacher mother. He attended Stuyvesant High School in New York City for three years before graduating in 1982 at age 15. A child prodigy, in 1981, at age 14, Elkies was awarded a gold medal at the 22nd International Mathematical Olympiad, receiving a perfect score of 42, one of the youngest to ever do so. He went on to Columbia University, where he won the Putnam competition at the age of sixteen years and four months, making him one of the youngest Putnam Fellows in history. Elkies was a Putnam Fellow twice more during his undergraduate years. He graduated valedictorian of his class in 1985. He then earned his PhD in 1987 under the supervision of Benedict Gross and Barry Mazur at Harvard University.
From 1987 to 1990, Elkies was a junior fellow of the Harvard Society of Fellows.
Work in mathematics
In 1987, Elkies proved that an elliptic curve over the rational numbers is supersingular at infinitely many primes. In 1988, he found a counterexample to Euler's sum of powers conjecture for fourth powers. His work on these and other problems won him recognition and a position as an associate professor at Harvard in 1990. In 1993, Elkies was made a full, tenured professor at the age of 26. This made him the youngest full professor in the history of Harvard. Along with A. O. L. Atkin he extended Schoof's algorithm to create the Schoof–Elkies–Atkin algorithm.
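The fourth-power counterexample is easy to verify directly; the specific integers below are the widely reproduced values from Elkies' 1988 construction, and the assertion recomputes the identity from scratch:

```python
# Elkies' counterexample to Euler's sum of powers conjecture for fourth
# powers: three fourth powers summing to a fourth power.
a, b, c, d = 2682440, 15365639, 18796760, 20615673
assert a**4 + b**4 + c**4 == d**4
print(f"{a}^4 + {b}^4 + {c}^4 = {d}^4")
```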
Elkies also studies the connections between music and mathematics; he is on the advisory board of the Journal of Mathematics and Music. He has discovered many new patterns in Conway's Game of Life and has studied the mathematics of still life patterns in that cellular automaton rule. Elkies is an associate of Harvard
|
https://en.wikipedia.org/wiki/Abstract%20semantic%20graph
|
In computer science, an abstract semantic graph (ASG) or term graph is a form of abstract syntax in which an expression of a formal or programming language is represented by a graph whose vertices are the expression's subterms. An ASG is at a higher level of abstraction than an abstract syntax tree (or AST), which is used to express the syntactic structure of an expression or program.
ASGs are more complex and concise than ASTs because they may contain shared subterms (also known as "common subexpressions"). Abstract semantic graphs are often used as an intermediate representation by compilers to store the results of performing common subexpression elimination upon abstract syntax trees. ASTs are trees and are thus incapable of representing shared terms. ASGs are usually directed acyclic graphs (DAG), although in some applications graphs containing cycles may be permitted. For example, a graph containing a cycle might be used to represent the recursive expressions that are commonly used in functional programming languages as non-looping iteration constructs. The mutability of these types of graphs is studied in the field of graph rewriting.
The nomenclature term graph is associated with the field of term graph rewriting, which involves the transformation and processing of expressions by the specification of rewriting rules, whereas abstract semantic graph is used when discussing linguistics, programming languages, type systems and compilation.
Abstract syntax trees are not capable of sharing subexpression nodes because it is not possible for a node in a proper tree to have more than one parent. Although this conceptual simplicity is appealing, it may come at the cost of redundant representation and, in turn, possibly inefficiently duplicating the computation of identical terms. For this reason ASGs are often used as an intermediate language at a subsequent compilation stage to abstract syntax tree construction via parsing.
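A minimal sketch of how shared subterms arise (via hash-consing; the node representation and API are illustrative design choices, not a standard library): building (a*b) + (a*b) stores the common subexpression only once:

```python
# Sketch of an abstract semantic graph via hash-consing: structurally
# identical subterms are created once and shared (illustrative design).
class Graph:
    def __init__(self):
        self._nodes = {}                     # (op, child ids) -> node

    def node(self, op, *children):
        key = (op, tuple(id(c) for c in children))
        if key not in self._nodes:
            self._nodes[key] = (op, children)
        return self._nodes[key]

g = Graph()
a, b = g.node("a"), g.node("b")
t1 = g.node("*", a, b)
t2 = g.node("*", a, b)
assert t1 is t2                              # shared subterm, not duplicated
expr = g.node("+", t1, t2)                   # a DAG: '+' points at one node twice
```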
An abstract semantic graph is typicall
|
https://en.wikipedia.org/wiki/Antisense%20therapy
|
Antisense therapy is a form of treatment that uses antisense oligonucleotides (ASOs) to target messenger RNA (mRNA). ASOs are capable of altering mRNA expression through a variety of mechanisms, including ribonuclease H mediated decay of the pre-mRNA, direct steric blockage, and exon content modulation through splicing site binding on pre-mRNA. Several ASOs have been approved in the United States, the European Union, and elsewhere.
Nomenclature
The common stem for antisense oligonucleotide drugs is -rsen. The substem -virsen designates antiviral antisense oligonucleotides.
Pharmacokinetics and pharmacodynamics
Half-life and stability
ASO-based drugs employ highly modified, single-stranded chains of synthetic nucleic acids that achieve wide tissue distribution with very long half-lives. For instance, many ASO-based drugs contain phosphorothioate substitutions and 2' sugar modifications to inhibit nuclease degradation enabling vehicle-free delivery to cells.
In vivo delivery
Phosphorothioate ASOs can be delivered to cells without the need of a delivery vehicle. ASOs do not penetrate the blood-brain barrier when delivered systemically, but they can distribute across the neuraxis if injected into the cerebrospinal fluid, typically by intrathecal administration. Newer formulations using conjugated ligands greatly enhance delivery efficiency and cell-type specific targeting.
Approved therapies
Amyotrophic lateral sclerosis
Tofersen (marketed as Qalsody) was approved by the FDA for the treatment of SOD1-associated amyotrophic lateral sclerosis (ALS) in 2023. It was developed by Biogen under a licensing agreement with Ionis Pharmaceuticals. In trials the drug was found to lower levels of an ALS biomarker, neurofilament light chain, and in long-term trial extensions to slow disease. Under the terms of the FDA's accelerated approval program, a confirmatory study will be conducted in presymptomatic gene carriers to provide additional evidence.
Batten disease
Milase
|
https://en.wikipedia.org/wiki/Staff%20%28building%20material%29
|
Staff is a kind of artificial stone used for covering and ornamenting temporary buildings.
It is chiefly made of plaster of Paris (powdered Gypsum), with a little cement, glycerin, and dextrin, mixed with water until it is about as thick as molasses. When staff is cast in molds, it can form any shape. To strengthen it, coarse cloth or bagging, or fibers of hemp or jute, are put into the molds before casting. It becomes hard enough in about a half-hour to be removed and fastened on the building in construction. Staff may easily be bent, sawed, bored, or nailed. Its natural color is murky white, but it may be made to resemble any kind of stone.
Staff was invented in France in about 1876 and was used in the construction and ornamentation of the buildings of the Paris Expositions of 1878 and of 1889. It was also largely used in the construction of the buildings of the World's Columbian Exposition at Chicago in 1893, at the Omaha and Buffalo Expositions in 1898 and 1901, at the Louisiana Purchase Exposition in 1904, and at later expositions, and on temporary buildings of other kinds.
See also
Dewey Arch
Building material
Glazed architectural terra-cotta a material also made into many decorative forms, more permanent than Staff
List of building materials
Construction
Building materials
Artificial stone
|
https://en.wikipedia.org/wiki/Generalized%20eigenvector
|
In linear algebra, a generalized eigenvector of an n × n matrix A is a vector which satisfies certain criteria which are more relaxed than those for an (ordinary) eigenvector.
Let V be an n-dimensional vector space and let A be the matrix representation of a linear map from V to V with respect to some ordered basis.
There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue λ is greater than its geometric multiplicity (the nullity of the matrix (A − λI), or the dimension of its nullspace). In this case, λ is called a defective eigenvalue and A is called a defective matrix.
A generalized eigenvector corresponding to λ, together with the matrix (A − λI), generates a Jordan chain of linearly independent generalized eigenvectors which form a basis for an invariant subspace of V.
Using generalized eigenvectors, a set of linearly independent eigenvectors of A can be extended, if necessary, to a complete basis for V. This basis can be used to determine an "almost diagonal matrix" J in Jordan normal form, similar to A, which is useful in computing certain matrix functions of A. The matrix J is also useful in solving the system of linear differential equations x′ = Ax, where A need not be diagonalizable.
The dimension of the generalized eigenspace corresponding to a given eigenvalue λ is the algebraic multiplicity of λ.
Overview and definition
There are several equivalent ways to define an ordinary eigenvector. For our purposes, an eigenvector v associated with an eigenvalue λ of an n × n matrix A is a nonzero vector for which (A − λI)v = 0, where I is the n × n identity matrix and 0 is the zero vector of length n. That is, v is in the kernel of the transformation (A − λI). If A has n linearly independent eigenvectors, then A is similar to a diagonal matrix D. That is, there exists an invertible matrix M such that A is diagonalizable through the similarity transformation D = M⁻¹AM.
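A small worked example (a sketch assuming SymPy; the defective matrix is a made-up illustration): [[2, 1], [0, 2]] has eigenvalue 2 with algebraic multiplicity 2 but geometric multiplicity 1, so its generalized eigenspace, the kernel of (A − 2I)², supplies the missing basis vector:

```python
import sympy as sp

A = sp.Matrix([[2, 1], [0, 2]])       # defective: lambda = 2 twice, one eigenvector

# Only one independent ordinary eigenvector ...
E1 = (A - 2 * sp.eye(2)).nullspace()
assert len(E1) == 1                   # geometric multiplicity 1 < algebraic 2

# ... but (A - 2I)^2 = 0, so the generalized eigenspace is all of C^2.
E2 = ((A - 2 * sp.eye(2)) ** 2).nullspace()
assert len(E2) == 2                   # dimension = algebraic multiplicity

# Jordan normal form: P^-1 * A * P = J, with P's columns forming a Jordan chain.
P, J = A.jordan_form()
print(J)                              # Matrix([[2, 1], [0, 2]])
```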
|
https://en.wikipedia.org/wiki/Statistical%20arbitrage
|
In finance, statistical arbitrage (often abbreviated as Stat Arb or StatArb) is a class of short-term financial trading strategies that employ mean reversion models involving broadly diversified portfolios of securities (hundreds to thousands) held for short periods of time (generally seconds to days). These strategies are supported by substantial mathematical, computational, and trading platforms.
Trading strategy
Broadly speaking, StatArb is actually any strategy that is bottom-up, beta-neutral in approach and uses statistical/econometric techniques in order to provide signals for execution. Signals are often generated through a contrarian mean reversion principle but can also be designed using such factors as lead/lag effects, corporate activity, short-term momentum, etc. This is usually referred to as a multi-factor approach to StatArb.
Because of the large number of stocks involved, the high portfolio turnover and the fairly small size of the effects one is trying to capture, the strategy is often implemented in an automated fashion and great attention is placed on reducing trading costs.
Statistical arbitrage has become a major force at both hedge funds and investment banks. Many bank proprietary operations now center to varying degrees around statistical arbitrage trading.
As a trading strategy, statistical arbitrage is a heavily quantitative and computational approach to securities trading. It involves data mining and statistical methods, as well as the use of automated trading systems.
Historically, StatArb evolved out of the simpler pairs trade strategy, in which stocks are put into pairs by fundamental or market-based similarities. When one stock in a pair outperforms the other, the underperforming stock is bought long and the outperforming stock is sold short, with the expectation that the underperforming stock will climb towards its outperforming partner.
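A minimal sketch of the pairs-trade signal (assuming NumPy; the synthetic prices and the 2-standard-deviation entry threshold are arbitrary illustrative choices):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic correlated price series for a hypothetical pair.
common = np.cumsum(rng.standard_normal(500))
x = 100 + common + 0.3 * rng.standard_normal(500)
y = 100 + common + 0.3 * rng.standard_normal(500)

spread = x - y
z = (spread - spread.mean()) / spread.std()   # mean-reversion signal

# Long the underperformer / short the outperformer when the spread
# stretches beyond 2 standard deviations; flat otherwise.
position = np.where(z > 2, -1, np.where(z < -2, 1, 0))  # -1: short x, long y
print(int((position != 0).sum()), "bars with an open pair position")
```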
Mathematically speaking, the strategy is to find a pair of stocks with high correlation, coin
|
https://en.wikipedia.org/wiki/Superkey
|
In the relational data model a superkey is a set of attributes that uniquely identifies each tuple of a relation. Because superkey values are unique, tuples with the same superkey value must also have the same non-key attribute values. That is, non-key attributes are functionally dependent on the superkey.
The set of all attributes is always a superkey (the trivial superkey). Tuples in a relation are by definition unique, with duplicates removed after each operation, so the set of all attributes is always uniquely valued for every tuple. A candidate key (or minimal superkey) is a superkey that can't be reduced to a simpler superkey by removing an attribute.
For example, in an employee schema with attributes employeeID, name, job, and departmentID, if employeeID values are unique then employeeID combined with any or all of the other attributes can uniquely identify tuples in the table. Each combination, {employeeID}, {employeeID, name}, {employeeID, name, job}, and so on is a superkey. {employeeID} is a candidate key, since no subset of its attributes is also a superkey. {employeeID, name, job, departmentID} is the trivial superkey.
If attribute set K is a superkey of relation R, then at all times it is the case that the projection of R over K has the same cardinality as R itself.
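This projection-cardinality property gives a direct brute-force test for superkeys; the sketch below applies it to the monarch relation of the example that follows (one extra row, Henry III, is added for illustration):

```python
from itertools import combinations

# Rows of the example relation: (Monarch Name, Monarch Number, Royal House).
rows = [("Edward", "II", "Plantagenet"),
        ("Edward", "III", "Plantagenet"),
        ("Henry", "III", "Plantagenet")]   # illustrative extra row
attrs = ("Monarch Name", "Monarch Number", "Royal House")

def is_superkey(key_idx):
    """K is a superkey iff projecting the rows onto K loses no tuples."""
    projected = {tuple(r[i] for i in key_idx) for r in rows}
    return len(projected) == len(rows)

for r in range(len(attrs) + 1):
    for key_idx in combinations(range(len(attrs)), r):
        if is_superkey(key_idx):
            print({attrs[i] for i in key_idx})
# Prints {Monarch Name, Monarch Number} and the trivial superkey of all three.
```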
Example
First, list out all the sets of attributes:
• {}
• {Monarch Name}
• {Monarch Number}
• {Royal House}
• {Monarch Name, Monarch Number}
• {Monarch Name, Royal House}
• {Monarch Number, Royal House}
• {Monarch Name, Monarch Number, Royal House}
Second, eliminate all the sets which do not meet superkey's requirement. For example, {Monarch Name, Royal House} cannot be a superkey because for the same attribute values (Edward, Plantagenet), there are two distinct tuples:
(Edward, II, Plantagenet)
(Edward, III, Plantagenet)
Finally, after elimination, the remaining sets of attributes are the only possible superkeys in this example:
{Monarch Name, Monarch Number} — this is als
|
https://en.wikipedia.org/wiki/Sweet%20Track
|
The Sweet Track is an ancient trackway, or causeway, in the Somerset Levels, England, named after its finder, Ray Sweet. It was built in 3807 BC (determined using dendrochronology) and is the second-oldest timber trackway discovered in the British Isles, dating to the Neolithic. The Sweet Track was predominantly built along the course of an earlier structure, the Post Track.
The track extended across the now largely drained marsh between what was then an island at Westhay and a ridge of high ground at Shapwick, a distance close to or around . The track is one of a network that once crossed the Somerset Levels. Various artifacts and prehistoric finds, including a jadeitite ceremonial axe head, have been found in the peat bogs along its length.
Construction was of crossed wooden poles, driven into the waterlogged soil to support a walkway that consisted mainly of planks of oak, laid end-to-end. The track was used for a period of only around ten years and was then abandoned, probably due to rising water levels. Following its discovery in 1970, most of the track has been left in its original location, with active conservation measures taken, including a water pumping and distribution system to maintain the wood in its damp condition. Some of the track is stored at the British Museum and at the Museum of Somerset in Taunton. A reconstruction has been made on which visitors can walk, on the same line as the original, in Shapwick Heath National Nature Reserve.
Location
In the early fourth millennium BC the track was built between an island at Westhay and a ridge of high ground at Shapwick close to the River Brue. A group of mounds at Westhay mark the site of prehistoric lake dwellings, which were likely to have been similar to those found in the Iron Age Glastonbury Lake Village near Godney, itself built on a morass on an artificial foundation of timber filled with brushwood, bracken, rubble, and clay.
The remains of similar tracks have been uncovered nearby, connecti
|
https://en.wikipedia.org/wiki/Single-frequency%20network
|
A single-frequency network or SFN is a broadcast network where several transmitters simultaneously send the same signal over the same frequency channel.
Analog AM and FM radio broadcast networks as well as digital broadcast networks can operate in this manner. SFNs are not generally compatible with analog television transmission, since the SFN results in ghosting due to echoes of the same signal.
A simplified form of SFN can be achieved by a low power co-channel repeater, booster or broadcast translator, which is utilized as a gap filler transmitter.
The aim of SFNs is efficient utilization of the radio spectrum, allowing a higher number of radio and TV programs in comparison to traditional multi-frequency network (MFN) transmission. An SFN may also increase the coverage area and decrease the outage probability in comparison to an MFN, since the total received signal strength may increase to positions midway between the transmitters.
SFN schemes are somewhat analogous to what in non-broadcast wireless communication, for example cellular networks and wireless computer networks, is called transmitter macrodiversity, CDMA soft handoff and Dynamic Single Frequency Networks (DSFN).
SFN transmission can be considered as creating a severe form of multipath propagation. The radio receiver receives several echoes of the same signal, and the constructive or destructive interference among these echoes (also known as self-interference) may result in fading. This is problematic especially in wideband communication and high-data rate digital communications, since the fading in that case is frequency-selective (as opposed to flat fading), and since the time spreading of the echoes may result in intersymbol interference (ISI). Fading and ISI can be avoided by means of diversity schemes and equalization filters.
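The frequency-selective nature of this self-interference is easy to see numerically (assuming NumPy; the 5 µs echo delay and the sample frequencies are arbitrary illustrative values):

```python
import numpy as np

delay = 5e-6                              # 5 µs path difference between two SFN transmitters
freqs = np.linspace(470e6, 470.4e6, 5)    # hypothetical UHF frequencies

for f in freqs:
    # Sum of the direct signal and a unit-amplitude echo, as complex phasors.
    h = 1 + np.exp(-1j * 2 * np.pi * f * delay)
    print(f"{f/1e6:.2f} MHz -> |H| = {abs(h):.2f}")
# |H| swings between ~2 (constructive) and ~0 (a deep, frequency-selective fade)
```

With a 5 µs echo the fades repeat every 1/delay = 200 kHz, so a wideband signal inevitably straddles both peaks and nulls, which is why the equalization and diversity techniques mentioned above are needed.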
Transmitters, which are part of a SFN, should not be used for navigation via direction finding as the direction of signal minima or signal maxima can differ from the d
|
https://en.wikipedia.org/wiki/Source-code%20editor
|
A source-code editor is a text editor program designed specifically for editing source code of computer programs. It may be a standalone application or it may be built into an integrated development environment (IDE).
Characteristics
Source-code editors have characteristics specifically designed to simplify and speed up typing of source code, such as syntax highlighting, indentation, autocomplete and brace matching functionality. These editors also provide a convenient way to run a compiler, interpreter, debugger, or other program relevant for the software-development process. So, while many text editors like Notepad can be used to edit source code, if they don't enhance, automate or ease the editing of code, they are not source-code editors.
Structure editors are a different form of source-code editor, where instead of editing raw text, one manipulates the code's structure, generally the abstract syntax tree. In this case features such as syntax highlighting, validation, and code formatting are easily and efficiently implemented from the concrete syntax tree or abstract syntax tree, but editing is often more rigid than free-form text. Structure editors also require extensive support for each language, and thus are harder to extend to new languages than text editors, where basic support only requires supporting syntax highlighting or indentation. For this reason, strict structure editors are not popular for source code editing, though some IDEs provide similar functionality.
A source-code editor can check syntax while code is being entered and immediately warn of syntax problems. A few source-code editors compress source code, typically converting common keywords into single-byte tokens, removing unnecessary whitespace, and converting numbers to a binary form. Such tokenizing editors later uncompress the source code when viewing it, possibly prettyprinting it with consistent capitalization and spacing. A few source-code editors do both.
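A minimal sketch of such a tokenizing scheme (illustrative only, not any particular editor's on-disk format): common keywords are stored as single out-of-range characters and expanded again for viewing:

```python
import re

# Illustrative tokenizing-editor compression: common keywords become
# single code points >= 0x80, and are expanded again on viewing.
KEYWORDS = ["def", "return", "if", "else", "while", "import"]
ENCODE = {kw: chr(0x80 + i) for i, kw in enumerate(KEYWORDS)}
DECODE = {v: k for k, v in ENCODE.items()}

def compress(src: str) -> str:
    return re.sub(r"\b(" + "|".join(KEYWORDS) + r")\b",
                  lambda m: ENCODE[m.group(1)], src)

def expand(stored: str) -> str:
    return re.sub("[" + "".join(DECODE) + "]",
                  lambda m: DECODE[m.group(0)], stored)

code = "def f(x):\n    if x:\n        return x\n"
stored = compress(code)
assert expand(stored) == code                 # round-trips losslessly
print(len(code), "->", len(stored), "characters")
```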
The Language Server Pro
|