https://en.wikipedia.org/wiki/Wristband
|
Wristbands are encircling strips worn on the wrist or lower forearm. The term may refer to a bracelet-like band, similar to that of a wristwatch, to the cuff or other part of a sleeve that covers the wrist, or decorative or functional bands worn on the wrist for many different reasons. Wristbands are often worn and used similarly to event passes such as lanyards, to convey information or to allow people entry to events. These wristbands are made from loops of plastic that are placed around the wrist and are used for identification purposes (demonstrating the wearer's authorization to be at a venue, for example).
Another type of wristband is the sweatband; usually made of a towel-like terrycloth material. These are usually used to wipe sweat from the forehead during sport but have been known to be used as a badge or fashion statement. A practice common in mid-1980s punk subculture was to cut the top off of a sock and fashion the elastic into this type of wristband.
Silicone wristbands
In the early-to-mid-2000s (decade), bracelets often made of silicone became popular. They are worn to demonstrate the wearer's support of a cause or charitable organization, similar to awareness ribbons. Such wristbands are sometimes called awareness bracelets to distinguish them from other types of wristbands. In early 2007 they became an increasingly popular item sold as merchandise at concerts and sporting events worldwide. Wristbands bearing official logos or trademarks enabled the seller to offer fans a low-price-point merchandise option. Silicone wristbands may also be called gel wristbands, jelly wristbands, rubber wristbands and fundraising wristbands. All of these wristbands are made from the same silicone material.
UV (ultraviolet) wristbands
UV-sensitive silicone wristbands appear clear/white when out of UV light, but when exposed to ultraviolet light such as sunlight the wristbands' color changes to blue or fuchsia. These bands can be used as reminders fo
|
https://en.wikipedia.org/wiki/NetBIOS%20over%20TCP/IP
|
NetBIOS over TCP/IP (NBT, or sometimes NetBT) is a networking protocol that allows legacy computer applications relying on the NetBIOS API to be used on modern TCP/IP networks.
NetBIOS was developed in the early 1980s, targeting very small networks (about a dozen computers). Some applications still use NetBIOS, and do not scale well in today's networks of hundreds of computers when NetBIOS is run over NBF. When properly configured, NBT allows those applications to be run on large TCP/IP networks (including the whole Internet, although that is likely to be subject to security problems) without change.
NBT is defined by the RFC 1001 and RFC 1002 standard documents.
Services
NetBIOS provides three distinct services:
Name service for name registration and resolution (ports: 137/udp and 137/tcp)
Datagram distribution service for connectionless communication (port: 138/udp)
Session service for connection-oriented communication (port: 139/tcp)
NBT implements all of those services.
Name service
In NetBIOS, each participant must register on the network using a unique name of at most 15 characters. In legacy networks, when a new application wanted to register a name, it had to broadcast a message saying "Is anyone currently using that name?" and wait for an answer. If no answer came back, it was safe to assume that the name was not in use. However, the wait timeout was a few seconds, making the name registration a very lengthy process, as the only way of knowing that a name was not registered was to not receive any answer.
NBT can implement a central repository, or Name Service, that records all name registrations. An application wanting to register a name would therefore contact the name server (which has a known network address) and ask whether the name is already registered, using a "Name Query" packet. This is much faster, as the name server returns a negative response immediately if the name is not already in the database, meaning it is available. The Name Serv
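As a concrete illustration of the name service machinery, here is a minimal Python sketch (not part of the source) of the "first-level" name encoding that RFC 1001 defines for the names carried in Name Query packets; the host name used is hypothetical.
# NetBIOS first-level name encoding (RFC 1001): the name is padded to 15
# characters, a one-byte suffix is appended, and each byte is split into two
# nibbles, each mapped to a letter in 'A'..'P'.
def encode_netbios_name(name: str, suffix: int = 0x00) -> bytes:
    """Return the 32-byte first-level encoding of a NetBIOS name."""
    raw = name.upper().ljust(15)[:15].encode("ascii") + bytes([suffix])
    out = bytearray()
    for byte in raw:
        out.append(ord("A") + (byte >> 4))    # high nibble
        out.append(ord("A") + (byte & 0x0F))  # low nibble
    return bytes(out)

print(encode_netbios_name("EXAMPLEHOST"))  # hypothetical host name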
|
https://en.wikipedia.org/wiki/Titer
|
Titer (American English) or titre (British English) is a way of expressing concentration. Titer testing employs serial dilution to obtain approximate quantitative information from an analytical procedure that inherently only evaluates as positive or negative. The titre corresponds to the highest dilution factor that still yields a positive reading. For example, positive readings in the first 8 serial, twofold dilutions translate into a titer of 1:256 (i.e., 2^−8 = 1/256). Titres are sometimes expressed by the denominator only, for example 1:256 is written 256.
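As a small worked example of the dilution arithmetic above, the sketch below (not from the source; the readings are hypothetical) reports the reciprocal titre of a twofold dilution series.
def titre(readings, start_factor=2):
    """Return the reciprocal titre: the highest twofold dilution factor that
    still reads positive, or None if every reading is negative."""
    factor, last_positive = start_factor, None
    for positive in readings:
        if positive:
            last_positive = factor
        factor *= 2
    return last_positive

# Eight positive readings in twofold dilutions give a titre of 1:256,
# conventionally reported as 256.
print(titre([True] * 8))  # 256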
The term also has two other, conflicting meanings. In titration, the titer is the ratio of actual to nominal concentration of a titrant, e.g. a titer of 0.5 would require 1/0.5 = 2 times more titrant than nominal. This is to compensate for possible degradation of the titrant solution. Second, in textile engineering, titre is also a synonym for linear density.
Etymology
Titer has the same origin as the word "title", from the French word titre, meaning "title" but referring to the documented purity of a substance, often gold or silver. This comes from the Latin word titulus, also meaning "title".
Examples
Antibody titer
An antibody titer is a measurement of how much antibody an organism has produced that recognizes a particular epitope. It is conventionally expressed as the inverse of the greatest dilution level that still gives a positive result on some test. ELISA is a common means of determining antibody titers. For example, the indirect Coombs test detects the presence of anti-Rh antibodies in a pregnant woman's blood serum. A patient might be reported to have an "indirect Coombs titer" of 16. This means that the patient's serum gives a positive indirect Coombs test at any dilution down to 1/16 (1 part serum to 15 parts diluent). At greater dilutions the indirect Coombs test is negative. If a few weeks later the same patient had an indirect Coombs titer of 32 (1/32 dilution which is 1 part serum to 31 parts dilu
|
https://en.wikipedia.org/wiki/Kerr%E2%80%93Newman%20metric
|
The Kerr–Newman metric is the most general asymptotically flat, stationary solution of the Einstein–Maxwell equations in general relativity that describes the spacetime geometry in the region surrounding an electrically charged, rotating mass. It generalizes the Kerr metric by taking into account the field energy of an electromagnetic field, in addition to describing rotation. It is one of a large number of different electrovacuum solutions, that is, of solutions to the Einstein–Maxwell equations which account for the field energy of an electromagnetic field. Such solutions do not include any electric charges other than that associated with the gravitational field, and are thus termed vacuum solutions.
This solution has not been especially useful for describing astrophysical phenomena, because observed astronomical objects do not possess an appreciable net electric charge, and the magnetic fields of stars arise through other processes. As a model of realistic black holes, it omits any description of infalling baryonic matter, light (null dusts) or dark matter, and thus provides at best an incomplete description of stellar mass black holes and active galactic nuclei. The solution is of theoretical and mathematical interest as it does provide a fairly simple cornerstone for further exploration.
The Kerr–Newman solution is a special case of more general exact solutions of the Einstein–Maxwell equations with non-zero cosmological constant.
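For reference, a commonly quoted form of the solution (not given in this excerpt, and stated here as standard textbook material rather than as part of the source) is the line element in Boyer–Lindquist coordinates, using geometrized units G = c = 1 with the charge Q also expressed in geometrized units:
% Kerr–Newman line element in Boyer–Lindquist coordinates (G = c = 1)
ds^2 = -\frac{\Delta}{\rho^2}\left(dt - a\sin^2\theta\,d\phi\right)^2
     + \frac{\sin^2\theta}{\rho^2}\left[(r^2+a^2)\,d\phi - a\,dt\right]^2
     + \frac{\rho^2}{\Delta}\,dr^2 + \rho^2\,d\theta^2,
\qquad \Delta = r^2 - 2Mr + a^2 + Q^2, \quad \rho^2 = r^2 + a^2\cos^2\theta,
where M is the mass and a = J/M the angular momentum per unit mass; setting Q = 0 recovers the Kerr metric.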
History
In Dec 1963 Kerr and Schild found the Kerr–Schild metrics that gave all Einstein spaces that are exact linear perturbations of Minkowski space. In early 1964 Roy Kerr looked for all Einstein–Maxwell spaces with this same property. By Feb 1964 the special case where the Kerr–Schild spaces were charged (this includes the Kerr–Newman solution) was known but the general case where the special directions were not geodesics of the underlying Minkowski space proved very difficult. The problem was given to George Debney to
|
https://en.wikipedia.org/wiki/Robert%20S.%20Barton
|
Robert Stanley "Bob" Barton (February 13, 1925 – January 28, 2009) was the chief architect of the Burroughs B5000 and other computers such as the B1700, a co-inventor of dataflow architecture, and an influential professor at the University of Utah.
His students at Utah have had a large role in the development of computer science.
Barton designed machines at a more abstract level, not tied to the technology constraints of the time. He employed high-level languages and a stack machine in his design of the B5000 computer. Its design survives in the modern Unisys ClearPath MCP systems. His stack machine architecture was the first to be implemented in a mainframe computer.
Barton died on January 28, 2009, in Portland, Oregon, aged 83.
Career
Barton was born in New Britain, Connecticut in 1925 and received his BA in 1948 and his MS in Mathematics in 1949, both from the University of Iowa. He gained his early experience with computers working in the IBM Applied Science Department in 1951.
In 1954, he joined the Shell Oil Company Technical Services, working on programming applications. He worked at Shell Development, a research group in Texas where he worked with a Burroughs/Datatron 205 computer. In 1958, he studied Irving Copi and Jan Łukasiewicz's work on symbolic logic and Polish notation, and considered its application to arithmetic expression processing on a computer.
Barton joined Burroughs Corporation, ElectroData Division, in Pasadena, California in the late 1950s. He managed a system programming group in 1959 which developed a compiler named BALGOL for the language ALGOL 58 on the Burroughs 220 computer.
In 1960, he became a consultant for Beckman Instruments working on data collection from satellite systems, for Lockheed Corporation working on satellite systems and organizing of data processing services, and for Burroughs continuing to work on the design concepts of the B5000.
After an assignment in Australia in 1963 for Control Data Corporation, he
|
https://en.wikipedia.org/wiki/Burroughs%20B6x00-7x00%20instruction%20set
|
The Burroughs B6x00-7x00 instruction set includes the set of valid operations for the Burroughs B6500,
B7500 and later Burroughs large systems, including the current (as of 2006) Unisys Clearpath/MCP systems; it does not include the instructions for other Burroughs large systems including the B5000, B5500, B5700 and the B8500. These unique machines have a distinctive design and instruction set. Each word of data is associated with a type, and the effect of an operation on that word can depend on the type. Further, the machines are stack based to the point that they have no user-addressable registers.
Overview
As you would expect from the unique architecture used in these systems, they also have an interesting instruction set. Programs are made up of 8-bit syllables, which may be a Name Call, a Value Call, or an operator, which may be from one to twelve syllables in length. There are fewer than 200 operators, all of which fit into 8-bit syllables. If we ignore the powerful string scanning, transfer, and edit operators, the basic set is only about 120 operators. If we remove the operators reserved for the operating system such as MVST and HALT, the set of operators commonly used by user-level programs numbers fewer than 100. The Name Call and Value Call syllables contain address couples; the Operator syllables either use no addresses or use control words and descriptors on the stack.
Since there are no programmer-addressable registers, most of the register manipulating operations required in other architectures are not needed, nor are variants for performing operations between pairs of registers, since all operations are applied to the top of the stack. This also makes code files very compact, since operators are zero-address and do not need to include the address of registers or memory locations in the code stream. Some of the code density was due to moving vital operand information elsewhere, to 'tags' on every data word or into tables of pointers. Many of the operat
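The zero-address style described above can be illustrated with a toy evaluator (a Python sketch, not the real Burroughs operator set; the operator names are used loosely): operands are pushed by value calls and every arithmetic operator works on the top of the stack, so the code stream carries no register or memory addresses.
def run(program):
    stack = []
    for op, *arg in program:
        if op == "VALC":              # value call: push an operand
            stack.append(arg[0])
        elif op == "ADD":             # zero-address add: pop two, push the sum
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MULT":            # zero-address multiply
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
        else:
            raise ValueError(f"unknown operator {op}")
    return stack.pop()

# (2 + 3) * 4, with no register names or operand addresses in the code stream.
print(run([("VALC", 2), ("VALC", 3), ("ADD",), ("VALC", 4), ("MULT",)]))  # 20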
|
https://en.wikipedia.org/wiki/Enterprise%20architecture
|
Enterprise architecture (EA) is a business function concerned with the structures and behaviours of a business, especially business roles and processes that create and use business data. The international definition according to the Federation of Enterprise Architecture Professional Organizations is "a well-defined practice for conducting enterprise analysis, design, planning, and implementation, using a comprehensive approach at all times, for the successful development and execution of strategy. Enterprise architecture applies architecture principles and practices to guide organizations through the business, information, process, and technology changes necessary to execute their strategies. These practices utilize the various aspects of an enterprise to identify, motivate, and achieve these changes."
The United States Federal Government is an example of an organization that practices EA, in this case with its Capital Planning and Investment Control processes. Companies such as Independence Blue Cross, Intel, Volkswagen AG, and InterContinental Hotels Group also use EA to improve their business architectures as well as to improve business performance and productivity. Additionally, the Federal Enterprise Architecture's reference guide aids federal agencies in the development of their architectures.
Introduction
As a discipline, EA "proactively and holistically lead[s] enterprise responses to disruptive forces by identifying and analyzing the execution of change" towards organizational goals. EA gives business and IT leaders recommendations for policy adjustments and provides best strategies to support and enable business development and change within the information systems the business depends on. EA provides a guide for decision making towards these objectives. The National Computing Centre's EA best practice guidance states that an EA typically "takes the form of a comprehensive set of cohesive models that describe the structure and functions of an enterprise
|
https://en.wikipedia.org/wiki/NHK%20Science%20%26%20Technology%20Research%20Laboratories
|
NHK Science & Technology Research Laboratories (STRL), headquartered in Setagaya, Tokyo, Japan, is responsible for technical research at NHK, Japan's public broadcaster.
Work done by the STRL includes research on direct-broadcast satellite (BS), Integrated Services Digital Broadcasting, high-definition television, and ultra-high-definition television.
On May 9, 2013, NHK and Mitsubishi Electric announced that they had jointly developed the first High Efficiency Video Coding (HEVC) encoder for 8K Ultra HD TV, which is also called Super Hi-Vision (SHV). The HEVC encoder supports the Main 10 profile at Level 6.1 allowing it to encode 10-bit video with a resolution of 7680 × 4320 at 60 fps. The HEVC encoder has 17 3G-SDI inputs and uses 17 boards for parallel processing with each board encoding a row of 7680 × 256 pixels to allow for real time video encoding. The HEVC encoder was shown at the NHK Science & Technology Research Laboratories Open House 2013 that took place from May 30 to June 2.
See also
NHK
NHK Twinscam
Ultra-high-definition television (UHDTV)
High Efficiency Video Coding (HEVC) – Video codec that supports resolutions up to 8K UHDTV (7680 × 4320)
References
External links
STRL - Japanese
STRL - English
NHK Open House 2013 - English
NHK
Mass media in Tokyo
Television technology
Engineering research institutes
Scientific organizations established in 1930
Audio engineering
Radio technology
Research institutes in Japan
Sound production technology
Sound recording technology
1930 establishments in Japan
|
https://en.wikipedia.org/wiki/Visibility%20graph
|
In computational geometry and robot motion planning, a visibility graph is a graph of intervisible locations, typically for a set of points and obstacles in the Euclidean plane. Each node in the graph represents a point location, and each edge represents a visible connection between them. That is, if the line segment connecting two locations does not pass through any obstacle, an edge is drawn between them in the graph. When the set of locations lies in a line, this can be understood as an ordered series. Visibility graphs have therefore been extended to the realm of time series analysis.
Applications
Visibility graphs may be used to find Euclidean shortest paths among a set of polygonal obstacles in the plane: the shortest path between two points follows straight line segments except at the vertices of the obstacles, where it may turn, so the Euclidean shortest path is the shortest path in a visibility graph that has as its nodes the start and destination points and the vertices of the obstacles. Therefore, the Euclidean shortest path problem may be decomposed into two simpler subproblems: constructing the visibility graph, and applying a shortest path algorithm such as Dijkstra's algorithm to the graph. For planning the motion of a robot that has non-negligible size compared to the obstacles, a similar approach may be used after expanding the obstacles to compensate for the size of the robot. The visibility graph method for Euclidean shortest paths has been attributed to 1969 research by Nils Nilsson on motion planning for Shakey the robot; a 1973 description of the method by the Russian mathematicians M. B. Ignat'yev, F. M. Kulakov, and A. M. Pokrovskiy is also cited.
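The two-step decomposition can be sketched in a few lines of Python (not from the source). To keep the visibility test simple, the sketch assumes obstacles are opaque line segments rather than filled polygons, and the coordinates are made up.
import heapq
from itertools import combinations

def ccw(a, b, c):
    return (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])

def segments_cross(p, q, r, s):
    """True if segments pq and rs properly intersect (cross in their interiors)."""
    d1, d2 = ccw(p, q, r), ccw(p, q, s)
    d3, d4 = ccw(r, s, p), ccw(r, s, q)
    return (d1*d2 < 0) and (d3*d4 < 0)

def visibility_graph(points, obstacles):
    """Edge between every pair of points whose connecting segment crosses no obstacle."""
    graph = {p: [] for p in points}
    for a, b in combinations(points, 2):
        if not any(segments_cross(a, b, r, s) for r, s in obstacles):
            d = ((a[0]-b[0])**2 + (a[1]-b[1])**2) ** 0.5
            graph[a].append((b, d))
            graph[b].append((a, d))
    return graph

def dijkstra(graph, start, goal):
    dist, heap = {start: 0.0}, [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < dist.get(v, float("inf")):
                dist[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# One wall between start and goal; the shortest path must route via a wall endpoint.
wall = ((1.0, -1.0), (1.0, 1.0))
nodes = [(0.0, 0.0), (2.0, 0.0), wall[0], wall[1]]
g = visibility_graph(nodes, [wall])
print(dijkstra(g, (0.0, 0.0), (2.0, 0.0)))  # about 2.83 instead of the blocked 2.0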
Visibility graphs may also be used to calculate the placement of radio antennas, or as a tool used within architecture and urban planning through visibility graph analysis.
The visibility graph of a set of locations that lie in a line can be interpreted as a graph-theoretical representation of a time
|
https://en.wikipedia.org/wiki/Laplacian%20matrix
|
In the mathematical field of graph theory, the Laplacian matrix, also called the graph Laplacian, admittance matrix, Kirchhoff matrix or discrete Laplacian, is a matrix representation of a graph. Named after Pierre-Simon Laplace, the graph Laplacian matrix can be viewed as a matrix form of the negative discrete Laplace operator on a graph approximating the negative continuous Laplacian obtained by the finite difference method.
The Laplacian matrix relates to many useful properties of a graph. Together with Kirchhoff's theorem, it can be used to calculate the number of spanning trees for a given graph. The sparsest cut of a graph can be approximated through the Fiedler vector — the eigenvector corresponding to the second smallest eigenvalue of the graph Laplacian — as established by Cheeger's inequality. The spectral decomposition of the Laplacian matrix allows constructing low dimensional embeddings that appear in many machine learning applications and determines a spectral layout in graph drawing. Graph-based signal processing is based on the graph Fourier transform that extends the traditional discrete Fourier transform by substituting the standard basis of complex sinusoids for eigenvectors of the Laplacian matrix of a graph corresponding to the signal.
The Laplacian matrix is the easiest to define for a simple graph, but more common in applications for an edge-weighted graph, i.e., with weights on its edges — the entries of the graph adjacency matrix. Spectral graph theory relates properties of a graph to a spectrum, i.e., eigenvalues, and eigenvectors of matrices associated with the graph, such as its adjacency matrix or Laplacian matrix. Imbalanced weights may undesirably affect the matrix spectrum, leading to the need of normalization — a column/row scaling of the matrix entries — resulting in normalized adjacency and Laplacian matrices.
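A small numpy illustration (not from the source; the 4-vertex path graph is a made-up example) of the unnormalized Laplacian L = D − A and the Fiedler value mentioned above:
import numpy as np

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # adjacency matrix of the path 1-2-3-4
D = np.diag(A.sum(axis=1))                  # degree matrix
L = D - A                                   # graph Laplacian

eigenvalues, eigenvectors = np.linalg.eigh(L)
print(eigenvalues[0])       # ~0: the smallest Laplacian eigenvalue is always 0
print(eigenvalues[1])       # algebraic connectivity (Fiedler value)
print(eigenvectors[:, 1])   # Fiedler vector, used for spectral partitioning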
Definitions for simple graphs
Laplacian matrix
Given a simple graph with n vertices, its n × n Laplacian matrix is
defin
|
https://en.wikipedia.org/wiki/Black%20%26%20Lane%27s%20Ident%20Tones%20for%20Surround
|
Black & Lane's Ident Tones for Surround (BLITS) is a way of keeping track of channels in a mixed surround-sound, stereo, and mono world. It was developed by Martin Black and Keith Lane of Sky TV London in 2004. BLITS is used by Sky, the BBC and other European and US broadcasters to identify and lineup 5.1 broadcast circuits. It is also an EBU standard: EBU Tech 3304. It is designed to function as a 5.1 identification and phase-checking signal and to be meaningful in stereo when an automated downmix to stereo is employed.
BLITS is a set of tones designed for television 5.1 sound line-up.
It consists of three distinct sections.
The first section is made up from short tones at -18 dBFS to identify each channel individually:
L/R: Front LEFT and Front RIGHT - 880 Hz
C: CENTRE - 1320 Hz
LFE (Low Frequency Effects) - 82.5 Hz
Ls/Rs: Surround LEFT and Surround RIGHT - 660 Hz.
The second section identifies front left and right channels (L/R) only:
A 1 kHz tone at -18 dBFS is interrupted four times on the left channel and is continuous on the right. This pattern of interrupts has been chosen to prevent confusion with either the EBU stereo ident or BBC GLITS tone after stereo mix down.
The last section consists of a 2 kHz tone at -24 dBFS on all six channels. This can be used to check phase between any of the 5.1 legs.
When the tone is summed to stereo using default down-mix values this section should produce tones of approximately -18 dBFS on each channel.
The BLITS sequence repeats approximately every 14 seconds.
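A sketch of generating the identification tones of the first section at -18 dBFS follows (Python, not from the source); the segment durations are placeholders, since the official timing is defined in EBU Tech 3304.
import numpy as np

SAMPLE_RATE = 48000
LEVEL = 10 ** (-18 / 20)   # -18 dBFS taken here as the linear peak amplitude of the sine

def tone(freq_hz, seconds):
    t = np.arange(int(SAMPLE_RATE * seconds)) / SAMPLE_RATE
    return LEVEL * np.sin(2 * np.pi * freq_hz * t)

ident_tones = {
    "front L/R":    tone(880.0, 0.5),   # placeholder duration
    "centre":       tone(1320.0, 0.5),
    "LFE":          tone(82.5, 0.5),
    "surround L/R": tone(660.0, 0.5),
}
for channel, samples in ident_tones.items():
    print(channel, samples.shape)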
See also
Glits
References
EBU Tech.3304 - BLITS Ident
External links
A zipped .wav file (interleaved multichannel format) of the BLITS 5.1 ident sequence is available from Sky.
Broadcast engineering
Test items
Telecommunications-related introductions in 2004
2004 in British television
2004 establishments in the United Kingdom
British inventions
|
https://en.wikipedia.org/wiki/Iterated%20function
|
In mathematics, an iterated function is a function (that is, a function from some set X to itself) which is obtained by composing another function f : X → X with itself a certain number of times. The process of repeatedly applying the same function is called iteration. In this process, starting from some initial object, the result of applying a given function is fed again into the function as input, and this process is repeated. For example, the second iterate of f is f^2(x) = f(f(x)) = (f ∘ f)(x),
with ∘ the circle-shaped symbol of function composition.
Iterated functions are objects of study in computer science, fractals, dynamical systems, mathematics and renormalization group physics.
Definition
The formal definition of an iterated function on a set X follows.
Let X be a set and f : X → X be a function.
Define f^n as the n-th iterate of f (a notation introduced by Hans Heinrich Bürmann and John Frederick William Herschel), where n is a non-negative integer, by
f^0 = id_X
and
f^(n+1) = f ∘ f^n,
where id_X is the identity function on X and f ∘ g denotes function composition. That is,
(f ∘ g)(x) = f(g(x)),
which is always associative.
Because the notation f^n may refer both to iteration (composition) of the function f and to exponentiation of the function (the latter is commonly used in trigonometry), some mathematicians choose to write f^∘n for the compositional meaning, writing f^∘n(x) for the n-th iterate of the function f(x), as in, for example, f^∘3(x) meaning f(f(f(x))). For the same purpose, f^[n](x) was used by Benjamin Peirce whereas Alfred Pringsheim and Jules Molk suggested ⁿf(x) instead.
Abelian property and iteration sequences
In general, the following identity holds for all non-negative integers m and n,
f^m ∘ f^n = f^n ∘ f^m = f^(m+n).
This is structurally identical to the property of exponentiation that a^m a^n = a^(m+n), i.e. the special case f(x) = ax.
In general, for arbitrary general (negative, non-integer, etc.) indices m and n, this relation is called the translation functional equation, cf. Schröder's equation and Abel equation. On a logarithmic scale, this reduces to the nesting property of Chebyshev polynomials, T_m(T_n(x)) = T_(mn)(x), since T_n(x) = cos(n arccos(x)).
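A tiny Python sketch (not from the source; the example function is arbitrary) of compositional iteration and of the identity f^m ∘ f^n = f^(m+n):
def iterate(f, n):
    """Return the n-th compositional iterate of f (n a non-negative integer)."""
    def fn(x):
        for _ in range(n):
            x = f(x)
        return x
    return fn

f = lambda x: x * x + 1                  # an arbitrary example function
x0 = 0.5
print(iterate(f, 3)(x0))                 # f(f(f(x0)))
print(iterate(f, 2)(iterate(f, 1)(x0)))  # same value: f^2(f^1(x0)) = f^3(x0)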
The relation also holds, anal
|
https://en.wikipedia.org/wiki/Art%20gallery%20problem
|
The art gallery problem or museum problem is a well-studied visibility problem in computational geometry. It originates from the following real-world problem:
"In an art gallery, what is the minimum number of guards who together can observe the whole gallery?"
In the geometric version of the problem, the layout of the art gallery is represented by a simple polygon and each guard is represented by a point in the polygon. A set S of points is said to guard a polygon if, for every point p in the polygon, there is some q in S such that the line segment between p and q does not leave the polygon.
The art gallery problem can be applied in several domains such as in robotics, when artificial intelligences (AI) need to execute movements depending on their surroundings. Other domains, where this problem is applied, are in image editing, lighting problems of a stage or installation of infrastructures for the warning of natural disasters.
Two dimensions
There are numerous variations of the original problem that are also referred to as the art gallery problem. In some versions guards are restricted to the perimeter, or even to the vertices of the polygon. Some versions require only the perimeter or a subset of the perimeter to be guarded.
Solving the version in which guards must be placed on vertices and only vertices need to be guarded is equivalent to solving the dominating set problem on the visibility graph of the polygon.
Chvátal's art gallery theorem
Chvátal's art gallery theorem, named after Václav Chvátal, gives an upper bound on the minimal number of guards. It states:
"To guard a simple polygon with vertices, guards are always sufficient and sometimes necessary."
History
The question about how many vertices/watchmen/guards were needed, was posed to Chvátal by Victor Klee in 1973. Chvátal proved it shortly thereafter. Chvátal's proof was later simplified by Steve Fisk, via a 3-coloring argument. Chvátal has a more geometrical approach, whereas Fisk uses well-k
|
https://en.wikipedia.org/wiki/Degree%20matrix
|
In the mathematical field of algebraic graph theory, the degree matrix of an undirected graph is a diagonal matrix which contains information about the degree of each vertex—that is, the number of edges attached to each vertex. It is used together with the adjacency matrix to construct the Laplacian matrix of a graph: the Laplacian matrix is the difference of the degree matrix and the adjacency matrix.
Definition
Given a graph G = (V, E) with |V| = n, the degree matrix D for G is an n × n diagonal matrix with D(i,i) = deg(v_i) and D(i,j) = 0 for i ≠ j,
where the degree of a vertex counts the number of times an edge terminates at that vertex. In an undirected graph, this means that each loop increases the degree of a vertex by two. In a directed graph, the term degree may refer either to indegree (the number of incoming edges at each vertex) or outdegree (the number of outgoing edges at each vertex).
Example
The following undirected graph has a 6x6 degree matrix with values:
Note that in the case of undirected graphs, an edge that starts and ends in the same node increases the corresponding degree value by 2 (i.e. it is counted twice).
Properties
The degree matrix of a k-regular graph has a constant diagonal of k.
According to the degree sum formula, the trace of the degree matrix is twice the number of edges of the considered graph.
References
Algebraic graph theory
Matrices
|
https://en.wikipedia.org/wiki/MicroStrategy
|
MicroStrategy Incorporated is an American company that provides business intelligence (BI), mobile software, and cloud-based services. Founded in 1989 by Michael J. Saylor, Sanju Bansal, and Thomas Spahr, the firm develops software to analyze internal and external data in order to make business decisions and to develop mobile apps. It is a public company headquartered in Tysons Corner, Virginia, in the Washington metropolitan area. Its primary business analytics competitors include SAP AG Business Objects, IBM Cognos, and Oracle Corporation's BI Platform. Saylor is the Executive Chairman and, from 1989 to 2022, was the CEO.
History
Saylor started MicroStrategy in 1989 with a consulting contract from DuPont, which provided Saylor with $250,000 in start-up capital and office space in Wilmington, Delaware. Saylor was soon joined by company co-founder Sanju Bansal, whom he had met while the two were students at Massachusetts Institute of Technology (MIT). The company produced software for data mining and business intelligence using nonlinear mathematics, an idea inspired by a course on systems-dynamics theory that they took at MIT.
In 1992, MicroStrategy gained its first major client when it signed a $10 million contract with McDonald's. It increased revenues by 100% each year between 1990 and 1996. In 1994, the company's offices and its 50 employees moved from Delaware to Tysons Corner, Virginia.
On June 11, 1998, MicroStrategy became a public company via an initial public offering.
In 2000, the company founded Alarm.com as part of its research and development unit.
On March 20, 2000, after a review of its accounting practices, the company announced that it would restate its financial results for the preceding two years. Its stock price, which had risen from $7 per share to as high as $333 per share in a year, fell $120 per share, or 62%, in a day in what is regarded as the bursting of the dot-com bubble.
In December 2000, the U.S. Securities and Exchange Commis
|
https://en.wikipedia.org/wiki/Mist%20net
|
Mist nets are used by hunters and poachers, but also by ornithologists and chiropterologists to capture wild birds and bats for banding or other research projects. Mist nets are typically made of nylon or polyester mesh suspended between two poles, resembling a volleyball net. When properly deployed in the correct habitat, the nets are virtually invisible. Mist nets have shelves created by horizontally strung lines that create a loose, baggy pocket. When a bird or bat hits the net, it falls into this pocket, where it becomes tangled.
The mesh size of the netting varies according to the size of the species targeted for capture. Mesh sizes can be measured along one side of the edge of a single mesh square, or along the diagonal of that square. Measures given here are along the diagonal. Small passerines are typically captured with 30–38 mm mesh, while larger birds, like hawks and ducks, are captured using mesh sizes of ~127 mm. Net dimensions can vary widely depending on the proposed use. Net height for avian mist netting is typically 1.2 - 2.6 m. Net width may vary from 3 to 18 m, although longer nets may also be used. A dho-gazza is a type of mist net that can be used for larger birds, such as raptors. This net lacks shelves.
The purchase and use of mist nets requires permits, which vary according to a country or state's wildlife regulations. Mist net handling requires skill for optimal placement, avoiding entangling nets in vegetation, and proper storage. Bird and bat handling requires extensive training to avoid injury to the captured animals. Bat handling may be especially difficult since bats are captured at night and may bite. A 2011 research survey found mist netting to result in low rates of injury while providing high scientific value.
Usage of mist nets
Mist nets have been used by Japanese hunters for nearly 300 years to capture birds. They were first introduced into use for ornithology in the United States of America by Oliver L. Austin in 1947.
Mis
|
https://en.wikipedia.org/wiki/On%20Formally%20Undecidable%20Propositions%20of%20Principia%20Mathematica%20and%20Related%20Systems
|
"Über formal unentscheidbare Sätze der Principia Mathematica und verwandter Systeme I" ("On Formally Undecidable Propositions of Principia Mathematica and Related Systems I") is a paper in mathematical logic by Kurt Gödel. Submitted November 17, 1930, it was originally published in German in the 1931 volume of Monatshefte für Mathematik und Physik. Several English translations have appeared in print, and the paper has been included in two collections of classic mathematical logic papers. The paper contains Gödel's incompleteness theorems, now fundamental results in logic that have many implications for consistency proofs in mathematics. The paper is also known for introducing new techniques that Gödel invented to prove the incompleteness theorems.
Outline and key results
The main results established are Gödel's first and second incompleteness theorems, which have had an enormous impact on the field of mathematical logic. These appear as theorems VI and XI, respectively, in the paper.
In order to prove these results, Gödel introduced a method now known as Gödel numbering. In this method, each sentence and formal proof in first-order arithmetic is assigned a particular natural number. Gödel shows that many properties of these proofs can be defined within any theory of arithmetic that is strong enough to define the primitive recursive functions. (The contemporary terminology for recursive functions and primitive recursive functions had not yet been established when the paper was published; Gödel used the word rekursiv ("recursive") for what are now known as primitive recursive functions.) The method of Gödel numbering has since become common in mathematical logic.
Because the method of Gödel numbering was novel, and to avoid any ambiguity, Gödel presented a list of 45 explicit formal definitions of primitive recursive functions and relations used to manipulate and test Gödel numbers. He used these to give an explicit definition of a formula that is true if and only if
|
https://en.wikipedia.org/wiki/Bidiagonal%20matrix
|
In mathematics, a bidiagonal matrix is a banded matrix with non-zero entries along the main diagonal and either the diagonal above or the diagonal below. This means there are exactly two non-zero diagonals in the matrix.
When the diagonal above the main diagonal has the non-zero entries the matrix is upper bidiagonal. When the diagonal below the main diagonal has the non-zero entries the matrix is lower bidiagonal.
For example, the following matrix is upper bidiagonal:
( 1 4 0 0 )
( 0 4 1 0 )
( 0 0 3 4 )
( 0 0 0 3 )
and the following matrix is lower bidiagonal:
( 1 0 0 0 )
( 2 4 0 0 )
( 0 3 3 0 )
( 0 0 4 3 )
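In numpy, such matrices can be assembled from their two diagonals (a sketch, using the illustrative values above):
import numpy as np

d = np.array([1.0, 4.0, 3.0, 3.0])                     # main diagonal
upper = np.diag(d) + np.diag([4.0, 1.0, 4.0], k=1)     # upper bidiagonal
lower = np.diag(d) + np.diag([2.0, 3.0, 4.0], k=-1)    # lower bidiagonal
print(upper)
print(lower)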
Usage
One variant of the QR algorithm starts with reducing a general matrix into a bidiagonal one,
and the singular value decomposition (SVD) uses this method as well.
Bidiagonalization
Bidiagonalization allows guaranteed accuracy when using floating-point arithmetic to compute singular values.
See also
List of matrices
LAPACK
Hessenberg form – The Hessenberg form is similar, but has more than two non-zero diagonals.
References
Stewart, G. W. (2001). Matrix Algorithms, Volume II: Eigensystems. Society for Industrial and Applied Mathematics.
External links
High performance algorithms for reduction to condensed (Hessenberg, tridiagonal, bidiagonal) form
Linear algebra
Sparse matrices
|
https://en.wikipedia.org/wiki/Block%20reflector
|
"A block reflector is an orthogonal, symmetric matrix that reverses a subspace whose dimension may be greater than one."
It is built out of many elementary reflectors.
It is also associated with a triangular factor, a triangular matrix that is used when applying Householder transformations in block form.
A reflector Q belonging to R^(n×n) can be written in the form:
Q = I − a u u^T,
where I is the identity matrix of R^(n×n), a is a scalar and u belongs to R^n.
LAPACK routines
Here are some of the LAPACK routines that apply to block reflectors:
"*larft" forms the triangular factor T of a block reflector H = I − V T V^H.
"*larzb" applies a block reflector or its transpose/conjugate transpose as returned by "*tzrzf" to a general matrix.
"*larzt" forms the triangular factor T of a block reflector H = I − V T V^H as returned by "*tzrzf".
"*larfb" applies a block reflector or its transpose/conjugate transpose to a general rectangular matrix.
See also
Reflection (mathematics)
Householder transformation
Unitary matrix
Triangular matrix
References
Matrices
|
https://en.wikipedia.org/wiki/Nick%20Katz
|
Nicholas Michael Katz (born December 7, 1943) is an American mathematician, working in arithmetic geometry, particularly on p-adic methods, monodromy and moduli problems, and number theory. He is currently a professor of Mathematics at Princeton University and an editor of the journal Annals of Mathematics.
Life and work
Katz graduated from Johns Hopkins University (BA 1964) and from Princeton University, where in 1965 he received his master's degree and in 1966 he received his doctorate under supervision of Bernard Dwork with thesis On the Differential Equations Satisfied by Period Matrices. After that, at Princeton, he was an instructor, an assistant professor in 1968, associate professor in 1971 and professor in 1974. From 2002 to 2005 he was the chairman of faculty there. He was also a visiting scholar at the University of Minnesota, the University of Kyoto, Paris VI, Orsay Faculty of Sciences, the Institute for Advanced Study and the IHES. While in France, he adapted methods of scheme theory and category theory to the theory of modular forms. Subsequently, he has applied geometric methods to various exponential sums.
From 1968 to 1969, he was a NATO Postdoctoral Fellow, from 1975 to 1976 and from 1987–1988 Guggenheim Fellow and from 1971 to 1972 Sloan Fellow. In 1970 he was an invited speaker at the International Congress of Mathematicians in Nice (The regularity theorem in algebraic geometry) and in 1978 in Helsinki (p-adic L functions, Serre-Tate local moduli and ratios of solutions of differential equations).
Since 2003 he has been a member of the American Academy of Arts and Sciences and since 2004 of the National Academy of Sciences. In 2003 he was awarded, jointly with Peter Sarnak, the Levi L. Conant Prize of the American Mathematical Society (AMS) for the essay "Zeroes of Zeta Functions and Symmetry" in the Bulletin of the American Mathematical Society. Since 2004 he has been an editor of the Annals of Mathematics. In 2023 he received from the AMS the Leroy P. Steele Priz
|
https://en.wikipedia.org/wiki/Band%20matrix
|
In mathematics, particularly matrix theory, a band matrix or banded matrix is a sparse matrix whose non-zero entries are confined to a diagonal band, comprising the main diagonal and zero or more diagonals on either side.
Band matrix
Bandwidth
Formally, consider an n×n matrix A = (a(i,j)). If all matrix elements are zero outside a diagonally bordered band whose range is determined by constants k1 and k2:
a(i,j) = 0 if j < i − k1 or j > i + k2, with k1, k2 ≥ 0,
then the quantities k1 and k2 are called the lower bandwidth and upper bandwidth, respectively. The bandwidth of the matrix is the maximum of k1 and k2; in other words, it is the number k such that a(i,j) = 0 if |i − j| > k.
Examples
A band matrix with k1 = k2 = 0 is a diagonal matrix
A band matrix with k1 = k2 = 1 is a tridiagonal matrix
For k1 = k2 = 2 one has a pentadiagonal matrix and so on.
Triangular matrices
For k1 = 0, k2 = n−1, one obtains the definition of an upper triangular matrix
similarly, for k1 = n−1, k2 = 0 one obtains a lower triangular matrix.
Upper and lower Hessenberg matrices
Toeplitz matrices when bandwidth is limited.
Block diagonal matrices
Shift matrices and shear matrices
Matrices in Jordan normal form
A skyline matrix, also called a "variable band matrix", is a generalization of a band matrix.
The inverses of Lehmer matrices are constant tridiagonal matrices, and are thus band matrices.
Applications
In numerical analysis, matrices from finite element or finite difference problems are often banded. Such matrices can be viewed as descriptions of the coupling between the problem variables; the banded property corresponds to the fact that variables are not coupled over arbitrarily large distances. Such matrices can be further divided; for instance, banded matrices exist where every element in the band is nonzero. These often arise when discretising one-dimensional problems.
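As an illustration of exploiting bandedness (a sketch, not from the source; the tridiagonal system is a made-up 1D discrete Laplacian), SciPy's banded solver accepts only the diagonals:
import numpy as np
from scipy.linalg import solve_banded

n = 6
main = 2.0 * np.ones(n)        # main diagonal
off = -1.0 * np.ones(n - 1)    # sub- and superdiagonal

# Banded storage for one subdiagonal and one superdiagonal: shape (3, n),
# with row u + i - j of ab holding A[i, j].
ab = np.zeros((3, n))
ab[0, 1:] = off    # superdiagonal
ab[1, :] = main    # main diagonal
ab[2, :-1] = off   # subdiagonal

b = np.ones(n)
x = solve_banded((1, 1), ab, b)

# Check against the dense solve.
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
print(np.allclose(A @ x, b))   # True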
Problems in higher dimensions also lead to banded matrices, in which case the band itself also tends to be sparse. For instance, a partial differential equation on a square domain (using central differences) will yield a matrix wi
|
https://en.wikipedia.org/wiki/Packed%20storage%20matrix
|
A packed storage matrix, also known as packed matrix, is a term used in programming for representing a matrix in a more compact way than the usual m-by-n rectangular array, by exploiting a special structure of the matrix.
Typical examples of matrices that can take advantage of packed storage include:
symmetric or hermitian matrix
Triangular matrix
Banded matrix.
Code examples (Fortran)
Both of the following storage schemes are used extensively in BLAS and LAPACK.
An example of packed storage for a hermitian matrix:
complex :: A(n,n)         ! a hermitian matrix
complex :: AP(n*(n+1)/2)  ! packed storage for A
! The upper triangle of A is stored column-by-column in AP;
! the lower triangle is recovered by conjugate symmetry.
! Unpacking the matrix AP to A:
do j = 1, n
  k = j*(j-1)/2
  A(1:j,j) = AP(1+k:j+k)              ! column j of the upper triangle
  A(j,1:j-1) = conjg(AP(1+k:j-1+k))   ! fill the lower triangle by symmetry
end do
An example of packed storage for a banded matrix:
real :: A(m,n)        ! a banded matrix with kl subdiagonals and ku superdiagonals
real :: AP(-ku:kl,n)  ! packed storage for A
! The band of A is stored column-by-column in AP. Some elements of AP are unused.
! Unpacking the matrix AP to A:
do j = 1, n
  forall(i = max(1,j-ku):min(m,j+kl)) A(i,j) = AP(i-j,j)
end do
print *, AP(0,:)      ! the main diagonal
Arrays
Matrices
|
https://en.wikipedia.org/wiki/Bad%20command%20or%20file%20name
|
"Bad command or file name" is a common and ambiguous error message in MS-DOS and some other operating systems.
COMMAND.COM, the primary user interface of MS-DOS, produces this error message when the first word of a command could not be interpreted. For MS-DOS, this word must be the name of an internal command, executable file or batch file, so the error message provided an accurate description of the problem but easily confused novices. Though the source of the error had to be the first word (often a mistyped command), the wording gave the impression that files named in later words were damaged or had illegal filenames. Later, the wording of the error message was changed for clarity. Windows NT displays the following error message instead (where "foo" is replaced by the word causing the error):
'foo' is not recognized as an internal or external command, operable program or batch file.
Some early Unix shells produced the equally cryptic "foo: not found" for the same reasons. Most modern shells produce an error message similar to "foo: command not found".
See also
Abort, Retry, Fail?
List of DOS commands
References
Computer error messages
DOS on IBM PC compatibles
|
https://en.wikipedia.org/wiki/Laser%20guidance
|
Laser guidance directs a robotics system to a target position by means of a laser beam. The laser guidance of a robot is accomplished by projecting laser light, combined with image processing and communication, to improve the accuracy of guidance. The key idea is to show goal positions to the robot by laser light projection instead of communicating them numerically. This intuitive interface simplifies directing the robot while the visual feedback improves the positioning accuracy and allows for implicit localization. The guidance system may also serve as a mediator for cooperative multiple robots.
Examples of proof-of-concept experiments of directing a robot by a laser pointer are shown on video.
Laser guidance spans areas of robotics, computer vision, user interface, video games, communication and smart home technologies.
Commercial systems
Samsung Electronics Co., Ltd. may have been using this technology in robotic vacuum cleaners since 2014.
Google Inc. applied for a patent with USPTO on using visual light or laser beam between devices to represent connections and interactions between them (Appl. No. 13/659,493, Pub. No. 2014/0363168).
However, no patent was granted to Google on this application.
Military use
Laser guidance is used by military to guide a missile or other projectile or vehicle to a target by means of a laser beam, either beam riding guidance or semi-active laser homing (SALH). With this technique, a laser is kept pointed at the target and the laser radiation bounces off the target and is scattered in all directions (this is known as "painting the target", or "laser painting"). The missile, bomb, etc. is launched or dropped somewhere near the target. When it is close enough for some of the reflected laser energy from the target to reach it, a laser seeker detects which direction this energy is coming from and adjusts the projectile trajectory towards the source. While the projectile is in the general area and the laser is kept aimed at the target, the
|
https://en.wikipedia.org/wiki/Seth%20Lloyd
|
Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology.
His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.
Biography
Lloyd was born on August 2, 1960. He graduated from Phillips Academy in 1978 and received a bachelor of arts degree from Harvard College in 1982. He earned a certificate of advanced study in mathematics and a master of philosophy degree from Cambridge University in 1983 and 1984, while on a Marshall Scholarship. Lloyd was awarded a doctorate by Rockefeller University in 1988 (advisor Heinz Pagels) after submitting a thesis on Black Holes, Demons, and the Loss of Coherence: How Complex Systems Get Information, and What They Do With It.
From 1988 to 1991, Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Starting in 1988, Lloyd was an external faculty member at the Santa Fe Institute for more than 30 years.
In his 2006 book, Programming the Universe, Lloyd contends that the universe itself is one big quantum computer producing what we see around us, and ourselves, as it runs a cosmic program. According to Lloyd, once we understand the laws of
|
https://en.wikipedia.org/wiki/Evo%20%28board%20game%29
|
Evo: The Last Gasp of the Dinosaurs is a German-style board game for three to five players, designed by Philippe Keyaerts and published by Eurogames. The game won the GAMES Magazine award for Game of the year 2002. It was nominated for the Origins Award for Best Graphic Presentation of a Board Game 2000. In 2004 it was nominated for the Hra Roku. The game went out of print in 2007, and a second edition was released in 2011.
Gameplay
The main game board is made of two reversible sections; on each, the two sides contain differently-sized halves of a prehistoric island. The board can therefore be assembled in four ways:
small-small for three players
small-large or large-small for four players
large-large for five players.
The island itself is made up of hexes of four different terrain types – desert, plains, hills and mountains. The game also uses a separate board for marking the current climate and round number, another for players scoring and bidding progress, and each player has a board to mark their dinosaur's mutations. The players' scores are also used as money during bidding phases.
Players start with three Event cards, a stack of dino tokens and a player board showing a dinosaur with: one egg, one leg, a tail, a horn-less face, one fur and one parasol, corresponding to most of the available mutations. For example the fur and parasol correspond to the species' ability to withstand cold and heat respectively. Each player places a dino token on their starting hex and a scoring marker on "10". Play proceeds through various phases:
Initiative – the order players will act in is determined by the number of tail mutations they have. Ties are resolved first in order of population size and then by roll-offs on a six-sided die.
Climate – a six-sided die is rolled to determine how the climate changes. Normally it proceeds in a cycle from hot to cold and back again, but on a 2 it stays the same, and on a 1 it moves in the opposite direction to the expected one.
Moveme
|
https://en.wikipedia.org/wiki/RapidIO
|
The RapidIO architecture is a high-performance packet-switched electrical connection technology. RapidIO supports messaging, read/write and cache coherency semantics. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect.
History
The RapidIO protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies.
Releases
The RapidIO specification revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption.
The RapidIO specification revision 1.2, released in June 2002, defined a serial interconnect based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, imaging and military compute.
The RapidIO specification revision 1.3 was released in June 2005.
The RapidIO specification revision 2.0 (6xN Gen2), released in March 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 has repeated and expanded the commercial success of the 1.2 specification.
The RapidIO specification revision 2.1 was released in September 2009.
The RapidIO specification revision 2.2 was released in May 2011.
The RapidIO specification revision 3.0 (10xN Gen3), released in October 2013, has the following changes and improvements compared to the 2.x specifications:
Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications
Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach
|
https://en.wikipedia.org/wiki/IODBC
|
iODBC is an open-source initiative managed by OpenLink Software. It is a platform-independent ODBC SDK and runtime offering that enables the development of ODBC-compliant applications and drivers outside the Windows platform. The prime goals of this project are as follows:
Simplify the effort of porting ODBC applications from Windows to other platforms
Simplify the effort of porting ODBC drivers from Windows to other platforms
Create consistent ODBC-utilization experience across all platforms
History
iODBC emerged from a cooperative effort between OpenLink Software and Ke Jin. OpenLink Software produced a Driver Manager-less ODBC SDK that it branded as Universal DataBase Connectivity (UDBC) in 1993, because of the sporadic nature of shared library implementations across Unix platforms. Ke Jin used UDBC as inspiration for building a Driver Manager for ODBC outside the Windows platform.
Over time Ke Jin and OpenLink Software decided to merge this effort into a single open-source offering under the LGPL license.
This process occurred at a time when the Free Software Foundation sought to have iODBC as a GPL offering. The delay in determining final licensing status for iODBC led to the emergence of UnixODBC and led to a fork in the platform-independent ODBC SDK and runtime that exists today. Drivers and applications written using either SDK have remained compatible (a tribute to both projects).
External links
iODBC homepage
References
SQL data access
Middleware
Database APIs
|
https://en.wikipedia.org/wiki/Boundary%20scan
|
Boundary scan is a method for testing interconnects (wire lines) on printed circuit boards or sub-blocks inside an integrated circuit. Boundary scan is also widely used as a debugging method to watch integrated circuit pin states, measure voltage, or analyze sub-blocks inside an integrated circuit.
The Joint Test Action Group (JTAG) developed a specification for boundary scan testing that was standardized in 1990 as the IEEE Std. 1149.1-1990. In 1994, a supplement that contains a description of the Boundary Scan Description Language (BSDL) was added which describes the boundary-scan logic content of IEEE Std 1149.1 compliant devices. Since then, this standard has been adopted by electronic device companies all over the world. Boundary scan is now mostly synonymous with JTAG.
Testing
The boundary scan architecture provides a means to test interconnects (including clusters of logic, memories, etc.) without using physical test probes; this involves the addition of at least one test cell that is connected to each pin of the device and that can selectively override the functionality of that pin. Each test cell may be programmed via the JTAG scan chain to drive a signal onto a pin and thus across an individual trace on the board; the cell at the destination of the board trace can then be read, verifying that the board trace properly connects the two pins. If the trace is shorted to another signal or if the trace is open, the correct signal value does not show up at the destination pin, indicating a fault.
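The drive-and-capture idea can be illustrated with a toy Python model (not real JTAG, which shifts these bits serially through the TAP; the device names, pins and netlist are hypothetical):
# Toy interconnect test: drive a pattern from one device's boundary-scan cells,
# capture what arrives at the other device, and compare against the netlist.

# Hypothetical netlist: board trace connects driver pin -> receiver pin.
netlist = {"U1.pin3": "U2.pin7", "U1.pin4": "U2.pin9"}

def board_with_fault(driven):
    # Injected fault: a short makes U2.pin9 receive U1.pin3's value instead.
    return {"U2.pin7": driven["U1.pin3"], "U2.pin9": driven["U1.pin3"]}

def interconnect_test(drive_pattern):
    captured = board_with_fault(drive_pattern)
    faults = []
    for src, dst in netlist.items():
        if captured[dst] != drive_pattern[src]:
            faults.append((src, dst))
    return faults

# A walking-ones style pattern distinguishes the two nets and exposes the fault.
print(interconnect_test({"U1.pin3": 0, "U1.pin4": 1}))  # [('U1.pin4', 'U2.pin9')]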
On-chip infrastructure
To provide the boundary scan capability, IC vendors add additional logic to each of their devices, including scan cells for each of the external traces. These cells are then connected together to form the external boundary scan shift register (BSR), and combined with JTAG Test Access Port (TAP) controller support comprising four (or sometimes more) additional pins plus control circuitry.
Some TAP controllers support scan chains between on-chi
|
https://en.wikipedia.org/wiki/Picture%20line-up%20generation%20equipment
|
For televisions, picture line-up generation equipment (PLUGE or pluge) refers to the greyscale test patterns used to adjust the black level and contrast of a picture monitor. Various PLUGE patterns can be generated, the most common consisting of three vertical bars of super-black, normal black, and near-black, and two rectangles of mid-gray and white (these levels are sometimes specified in IRE units). These three PLUGE pulses are included in the SMPTE color bars (at the bottom, near the right) used for NTSC, PAL, and SÉCAM.
External links
Television technology
|
https://en.wikipedia.org/wiki/Daktronics
|
Daktronics is an American company based in Brookings, South Dakota, that designs, manufactures, sells, and services video displays, scoreboards, digital billboards, dynamic message signs, sound systems, and related products. It was founded in 1968 by two South Dakota State University professors.
History
Daktronics was founded in 1968 by Al Kurtenbach and Duane Sander, professors of electrical engineering at South Dakota State University in Brookings, South Dakota. The name is a portmanteau of "Dakota" and "electronics". The company initially wanted to get into the medical instrument field, but the company's founders found the field to be too large for them to serve, so they changed their focus to providing electronic voting systems for state legislatures; their first client was the State of Utah's legislature.
Shortly after, South Dakota State University's wrestling coach, Warren Williamson reached out to the company and asked them to devise a better scoreboard for wrestling. The result was Daktronics' first entry into the scoreboard field, developing the Matside wrestling scoreboard, the first product in the company's line. The company's scoreboards were later used at the 1976 Olympic Games. In 1980, Daktronics developed scoreboards which were used at the 1980 Winter Olympics in Lake Placid, New York. Daktronics displays have since been used at the 1992, 1996 and 2000 Summer Olympics.
In 1984, a new manufacturing facility was built. In 1987, the company developed a mobile scoring system for the PGA tour. In 1994, Daktronics, Inc. became a publicly traded company, offering shares under the symbol DAKT on the NASDAQ National Market system. The company also established an office in Germany in 2003, and in Hong Kong and the United Kingdom in 2004. In 2000, Daktronics acquired Keyframe services, and established an office in Canada. The following year, they installed their first LED video display in Times Square for TDK Financial Services Firm.
The company upg
|
https://en.wikipedia.org/wiki/Part%20number
|
A part number (often abbreviated PN, P/N, part no., or part #) is an identifier of a particular part design or material used in a particular industry. Its purpose is to simplify reference to that item. A part number unambiguously identifies a part design within a single corporation, and sometimes across several corporations.
For example, when specifying a screw, it is easier to refer to "HSC0424PP" than to say "Hardware, screw, machine, 4-40, 3/4" long, pan head, Phillips". In this example, "HSC0424PP" is the part number. It may be prefixed in database fields as "PN HSC0424PP" or "P/N HSC0424PP". The term "part number" is often used loosely to refer to items or components (assemblies or parts); it is equivalent to "item number" and overlaps with other terms such as SKU (stock keeping unit).
The part design versus instantiations of it
Whereas a part number is an identifier of a part design (independent of its instantiations), a serial number is a unique identifier of a particular instantiation of that part design. In other words, a part number identifies any particular (physical) part as being made to that one unique design, while a serial number, when used, identifies a particular (physical) part (one physical instance), as differentiated from the next unit that was stamped, machined, or extruded right after it. This distinction is not always clear, as natural language blurs it by typically referring both to part designs and to particular instantiations of those designs by the same word, "part(s)". Thus if you buy a muffler of P/N 12345 today, and another muffler of P/N 12345 next Tuesday, you have bought "two copies of the same part", or "two parts", depending on the sense implied.
User part numbers versus manufacturing part numbers (MPN)
A business using a part will often use a different part number than the various manufacturers of that part do. This is especially common for catalog hardware, because the same or similar part design (say, a screw with a certain standard thread, o
|
https://en.wikipedia.org/wiki/Zn%C3%A1m%27s%20problem
|
In number theory, Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972, although other mathematicians had considered similar problems around the same time.
The initial terms of Sylvester's sequence almost solve this problem, except that the last chosen term equals one plus the product of the others, rather than being a proper divisor. Sun (1983) showed that there is at least one solution to the (proper) Znám problem for each k ≥ 5. Sun's solution is based on a recurrence similar to that for Sylvester's sequence, but with a different set of initial values.
The Znám problem is closely related to Egyptian fractions. It is known that there are only finitely many solutions for any fixed k. It is unknown whether there are any solutions to Znám's problem using only odd numbers, and there remain several other open questions.
The problem
Znám's problem asks which sets of integers have the property that each integer in the set is a proper divisor of the product of the other integers in the set, plus 1. That is, given k, what sets of integers {n_1, ..., n_k} are there such that, for each i, n_i divides (the product of the other n_j) + 1 but is not equal to it?
A closely related problem concerns sets of integers in which each integer in the set is a divisor, but not necessarily a proper divisor, of one plus the product of the other integers in the set. This problem does not seem to have been named in the literature, and will be referred to as the improper Znám problem. Any solution to Znám's problem is also a solution to the improper Znám problem, but not necessarily vice versa.
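From these definitions, membership in either problem can be checked by brute force; the following sketch does exactly that, using small known examples as test cases.

```python
# Brute-force checkers for Znám's problem and its "improper" variant,
# written directly from the definitions above.
from math import prod

def improper_znam(nums):
    """Each n divides (product of the others) + 1."""
    return all((prod(nums) // n + 1) % n == 0 for n in nums)

def proper_znam(nums):
    """Each n is a *proper* divisor of (product of the others) + 1."""
    return all(
        (prod(nums) // n + 1) % n == 0 and prod(nums) // n + 1 != n
        for n in nums
    )

# {2, 3, 7, 43} begins Sylvester's sequence: it solves the improper problem
# but not the proper one, because 43 equals 2*3*7 + 1 exactly.
print(improper_znam([2, 3, 7, 43]))     # True
print(proper_znam([2, 3, 7, 43]))       # False
# {2, 3, 7, 47, 395} is a known solution of the proper problem for k = 5.
print(proper_znam([2, 3, 7, 47, 395]))  # True
```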
History
Znám's problem is named after the Slovak mathematician Štefan Znám, who suggested it in 1972. had posed the improper Znám problem for , and , independently of Znám, found all solutions to the improper problem for . showed that Znám's problem is unsolvable f
|
https://en.wikipedia.org/wiki/Virtual%20channel
|
In most telecommunications organizations, a virtual channel is a method of remapping the program number as used in H.222 Program Association Tables and Program Map Tables to a channel number that can be entered as digits on a receiver's remote control.
Often, virtual channels are implemented in digital television to help users to go to channels easily and in general to ease the transition from analogue to digital broadcasting. Assigning virtual channels is most common in parts of the world where TV stations were colloquially named after the RF channel they were transmitting on ("Channel 6 Springfield"), as was common in North America during the analogue TV era. In other parts of the world, such as Europe, virtual channels are rarely used or needed, because TV stations there identify themselves by name, not by RF channel or callsign.
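At its core the remapping is simply a lookup from the number a viewer keys in to the physical tuning parameters. The toy sketch below illustrates the idea; the table entries are invented, and real receivers build such tables from PSIP or LCN data carried in the broadcast stream.

```python
# Toy illustration of virtual-channel remapping (all entries invented).

# Services actually received, keyed by (RF channel, MPEG program number)
# as announced in the multiplex's Program Association Table.
received = {(25, 3): "news service", (25, 4): "weather service"}

# Virtual channel table: what the viewer types -> (RF channel, program number).
virtual = {"6.1": (25, 3), "6.2": (25, 4)}

def tune(keyed_in: str) -> str:
    rf_channel, program = virtual[keyed_in]
    return received[(rf_channel, program)]

print(tune("6.1"))   # viewer types the legacy "channel 6" number, gets the RF-25 service
```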
A "virtual channel" was first used for DigiCipher 2 in North America. It was later called a logical channel number (LCN) and used for private European Digital Video Broadcasting extensions widely used by the NDS Group and by NorDig in other markets.
Pay television operators were the first to use these systems for channel reassignment and rearrangement to meet their need to group channels by their content or origin and, to a lesser extent, to localize advertising.
Free-to-air stations using Advanced Television Systems Committee standards (ATSC) used the same television frequency channel allocation that the NTSC channel was using when both were simulcasting. They achieved this by the DigiCipher 2 method. Viewers could then use the same number to bring up either service.
Free-to-air DVB network operators, such as DTV Services Ltd. (d.b.a. Freeview) and Freeview New Zealand Ltd., use the NorDig method and follow the same practice as pay-TV operators. The exception is Freeview Australia Ltd., which also uses the NorDig method and partly follows the ATSC practice of using the same VHF radio-frequency channel allocation that the PAL chan
|
https://en.wikipedia.org/wiki/Physics%20engine
|
A physics engine is computer software that provides an approximate simulation of certain physical systems, such as rigid body dynamics (including collision detection), soft body dynamics, and fluid dynamics, of use in the domains of computer graphics, video games and film (CGI). Their main uses are in video games (typically as middleware), in which case the simulations are in real-time. The term is sometimes used more generally to describe any software system for simulating physical phenomena, such as high-performance scientific simulation.
Description
There are generally two classes of physics engines: real-time and high-precision. High-precision physics engines require more processing power to calculate very precise physics and are usually used by scientists and in computer-animated movies. Real-time physics engines—as used in video games and other forms of interactive computing—use simplified calculations and decreased accuracy to compute in time for the game to respond at an appropriate rate for game play. A physics engine is essentially a big calculator that does the mathematics needed to simulate physics.
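A minimal flavour of the real-time approach is a fixed-timestep integration loop. The sketch below (a gravity-only point mass with a crude ground bounce, all values illustrative) shows the kind of simplified per-frame calculation involved; it is not any particular engine's API.

```python
# Minimal real-time-style physics step: semi-implicit Euler for a point mass
# under gravity, with a crude ground-plane collision response.

GRAVITY = -9.81   # m/s^2
DT = 1.0 / 60.0   # fixed 60 Hz timestep, typical for real-time simulation

def step(position, velocity, dt=DT):
    velocity = velocity + GRAVITY * dt        # integrate acceleration first
    position = position + velocity * dt       # then integrate velocity
    if position < 0.0:                        # crude ground-plane collision
        position, velocity = 0.0, -0.5 * velocity   # bounce with energy loss
    return position, velocity

pos, vel = 10.0, 0.0
for frame in range(240):                      # simulate 4 seconds of game time
    pos, vel = step(pos, vel)
print(round(pos, 3), round(vel, 3))
```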
Scientific engines
One of the first general purpose computers, ENIAC, was used as a very simple type of physics engine. It was used to design ballistics tables to help the United States military estimate where artillery shells of various mass would land when fired at varying angles and gunpowder charges, also accounting for drift caused by wind. The results were calculated a single time only, and were tabulated into printed tables handed out to the artillery commanders.
Physics engines have been commonly used on supercomputers since the 1980s to perform computational fluid dynamics modeling, where particles are assigned force vectors that are combined to show circulation. Due to the requirements of speed and high precision, special computer processors known as vector processors were developed to accelerate the calculations. The techniques can be used to model
|
https://en.wikipedia.org/wiki/WJXT
|
WJXT (channel 4) is an independent television station in Jacksonville, Florida, United States. It is owned by Graham Media Group alongside CW affiliate WCWJ (channel 17). The two stations share studios at 4 Broadcast Place on the south bank of the St. Johns River in Jacksonville; WJXT's transmitter is located on Anders Boulevard in the city's Killarney Shores section.
History
As a CBS affiliate
WJXT originally signed on the air on September 15, 1949, as WMBR-TV. It was Jacksonville's first television station, the second television station in Florida and a primary CBS affiliate on VHF channel 4 after WTVJ (also on channel 4, now an NBC owned-and-operated station on channel 6) in Miami–Fort Lauderdale. The station was co-owned alongside WMBR radio (1460 AM, now WQOP; and 96.1 FM, now WEJZ). Though the station was originally a primary CBS affiliate, it also maintained secondary affiliations with NBC, ABC and the DuMont Television Network. In 1953, the WMBR stations were purchased by The Washington Post Company. WMBR-TV dropped the DuMont affiliation in 1955, less than a year before the network ceased operations. Since its only competition in the Jacksonville market came from UHF station WJHP-TV (which signed on in 1953 and went dark three years later), channel 4 had a virtual television monopoly in northern Florida until September 1957, when it lost the NBC affiliation to upstart WFGA (channel 12, now WTLV).
The Washington Post Company sold WMBR-AM-FM in 1958, while it kept the television station, whose callsign it changed to the current WJXT. WJXT remained a primary CBS and secondary ABC affiliate until WJKS-TV (channel 17, now CW sister station WCWJ) took the ABC affiliation upon its sign-on in February 1966, leaving WJXT exclusively with CBS. For much of its tenure as a CBS affiliate, WJXT was the only station affiliated with the network that was located between Savannah, Georgia, and Orlando, Florida, and was thus carried on many cable systems between Jacksonvil
|
https://en.wikipedia.org/wiki/Predicate%20transformer%20semantics
|
Predicate transformer semantics were introduced by Edsger Dijkstra in his seminal paper "Guarded commands, nondeterminacy and formal derivation of programs". They define the semantics of an imperative programming paradigm by assigning to each statement in this language a corresponding predicate transformer: a total function between two predicates on the state space of the statement. In this sense, predicate transformer semantics are a kind of denotational semantics. Actually, in guarded commands, Dijkstra uses only one kind of predicate transformer: the well-known weakest preconditions (see below).
Moreover, predicate transformer semantics are a reformulation of Floyd–Hoare logic. Whereas Hoare logic is presented as a deductive system, predicate transformer semantics (whether by weakest-preconditions or by strongest-postconditions; see below) are complete strategies to build valid deductions of Hoare logic. In other words, they provide an effective algorithm to reduce the problem of verifying a Hoare triple to the problem of proving a first-order formula. Technically, predicate transformer semantics perform a kind of symbolic execution of statements into predicates: execution runs backward in the case of weakest-preconditions, or runs forward in the case of strongest-postconditions.
Weakest preconditions
Definition
For a statement S and a postcondition R, a weakest precondition is a predicate Q such that for any precondition P, the triple {P} S {R} holds if and only if P implies Q. In other words, it is the "loosest" or least restrictive requirement needed to guarantee that R holds after S. Uniqueness follows easily from the definition: if both Q and Q' are weakest preconditions, then by the definition Q implies Q' and Q' implies Q, and thus Q and Q' are equivalent. We often write wp(S, R) to denote the weakest precondition for statement S with respect to a postcondition R.
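For the assignment statement, the weakest precondition is obtained by substituting the assigned expression into the postcondition, and sequencing composes the transformers backwards. The sketch below carries out these substitutions with SymPy; the toy statements and postcondition are chosen purely for illustration.

```python
# Sketch of weakest-precondition calculations for assignment and sequencing,
# using SymPy substitution.
import sympy as sp

x = sp.Symbol("x", integer=True)

# wp(x := x + 1, R) is R with x replaced by x + 1.
R = sp.Le(x, 10)              # postcondition: x <= 10
wp = R.subs(x, x + 1)
print(wp)                     # x + 1 <= 10, i.e. x <= 9

# Sequencing composes backwards: wp(S1; S2, R) = wp(S1, wp(S2, R)).
wp_seq = wp.subs(x, 2 * x)    # wp(x := 2*x; x := x + 1, x <= 10)
print(wp_seq)                 # 2*x + 1 <= 10, i.e. x <= 4 for integer x
```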
Conventions
We use T to denote the predicate that is everywhere true and F to denote the one that is everywhere false. We shouldn't at least conceptually confuse ourselve
|
https://en.wikipedia.org/wiki/Null%20vector
|
In mathematics, given a vector space X with an associated quadratic form q, written (X, q), a null vector or isotropic vector is a non-zero element x of X for which q(x) = 0.
In the theory of real bilinear forms, definite quadratic forms and isotropic quadratic forms are distinct. They are distinguished in that only for the latter does there exist a nonzero null vector.
A quadratic space which has a null vector is called a pseudo-Euclidean space.
A pseudo-Euclidean vector space may be decomposed (non-uniquely) into orthogonal subspaces A and B, X = A + B, where q is positive-definite on A and negative-definite on B. The null cone, or isotropic cone, of X consists of the union of balanced spheres: the set of all x = a + b with a in A, b in B and q(a) = −q(b) = r, taken over every r ≥ 0.
The null cone is also the union of the isotropic lines through the origin.
Split algebras
A composition algebra with a null vector is a split algebra.
In a composition algebra (A, +, ×, *), the quadratic form is q(x) = x x*. When x is a null vector then there is no multiplicative inverse for x, and since x ≠ 0, A is not a division algebra.
In the Cayley–Dickson construction, the split algebras arise in the series bicomplex numbers, biquaternions, and bioctonions, which uses the complex number field as the foundation of this doubling construction due to L. E. Dickson (1919). In particular, these algebras have two imaginary units, h and i, which commute, so that their product, when squared, yields +1: (hi)(hi) = h²i² = (−1)(−1) = +1. Then
q(1 + hi) = (1 + hi)(1 − hi) = 1 − (hi)² = 1 − 1 = 0,
so 1 + hi is a null vector.
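This computation can be checked numerically. In the sketch below a bicomplex number a + bh is represented as a pair of ordinary complex numbers (a, b); the pair representation is a convenience chosen for illustration rather than any standard library type.

```python
# Numeric check that 1 + hi is a null vector in the bicomplex numbers.
# A bicomplex number a + b*h is stored as the pair (a, b), with a, b ordinary
# complex numbers and h a second, commuting imaginary unit (h*h = -1).

def mul(x, y):
    a, b = x
    c, d = y
    # (a + b h)(c + d h) = (ac + bd h^2) + (ad + bc) h, with h^2 = -1
    return (a * c - b * d, a * d + b * c)

def conj(x):
    a, b = x
    return (a, -b)          # conjugation sends h to -h

def q(x):
    return mul(x, conj(x))  # quadratic form q(x) = x x*

one_plus_hi = (1, 1j)       # 1 + h*i: "1" part is 1, "h" part is i
print(q(one_plus_hi))       # (0j, 0j): a nonzero element with q(x) = 0
```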
The real subalgebras, split complex numbers, split quaternions, and split-octonions, with their null cones representing the light tracking into and out of 0 ∈ A, suggest spacetime topology.
Examples
The light-like vectors of Minkowski space are null vectors.
The four linearly independent biquaternions , , , and are null vectors and can serve as a basis for the subspace used to represent spacetime. Null vectors are also used in the Newman–Penrose formalism approach to spacetime manifolds.
In the Verma module of a Lie algebra there are null vectors.
References
|
https://en.wikipedia.org/wiki/Slashed%20zero
|
The slashed zero 0̷ is a representation of the Arabic digit "0" (zero) with a slash through it. The slashed zero glyph is often used to distinguish the digit "zero" ("0") from the Latin script letter "O" anywhere that the distinction needs emphasis, particularly in encoding systems, scientific and engineering applications, computer programming (such as software development), and telecommunications. It thus helps to differentiate characters that would otherwise be homoglyphs. It was commonly used during the punch card era, when programs were typically written out by hand, to avoid ambiguity when the character was later typed on a card punch.
Usage
The slashed zero is used in a number of fields in order to avoid confusion with the letter 'O'. It is used by computer programmers, in recording amateur radio call signs and in military radio, as logs of such contacts tend to contain both letters and numerals.
The slashed zero was used on teleprinter circuits for weather applications. In this usage it was sometimes called communications zero.
The slashed zero can be used in stoichiometry to avoid confusion with the symbol for oxygen (capital O).
The slashed zero is also used in charting and documenting in the medical and healthcare fields to avoid confusion with the letter 'O'. It also denotes an absence of something (similar to the usage of an 'empty set' character), such as a sign or a symptom.
Slashed zeroes can also be used on cheques to deter fraud, for example the alteration of a 0 into an 8.
Slashed zeros are used on New Zealand number plates.
History
The slashed zero predates computers, and is known to have been used in the twelfth and thirteenth centuries.
In the days of the typewriter, there was no key for the slashed zero. Typists could generate it by typing either an uppercase "O" or a zero, backspacing, and then typing the slash key. The result would look very much like a slashed zero.
It is used in many Baudot teleprinter applications
|
https://en.wikipedia.org/wiki/Boxcar%20function
|
In mathematics, a boxcar function is any function which is zero over the entire real line except for a single interval where it is equal to a constant, A. The function is named after its graph's resemblance to a boxcar, a type of railroad car. The boxcar function can be expressed in terms of the uniform distribution as
boxcar(x) = (b − a) A f(a, b; x) = A (H(x − a) − H(x − b)),
where f(a, b; x) is the uniform distribution of x for the interval [a, b] and H(x) is the Heaviside step function. As with most such discontinuous functions, there is a question of the value at the transition points. These values are probably best chosen for each individual application.
When a boxcar function is selected as the impulse response of a filter, the result is a simple moving average filter, whose frequency response is a sinc-in-frequency, a type of low-pass filter.
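A short numerical sketch of both ideas follows; the interval, amplitude, and test signal are arbitrary illustrative choices.

```python
# Sketch of a boxcar function and its use as a moving-average (low-pass) filter.
import numpy as np

def boxcar(x, a, b, A=1.0):
    """A on [a, b], zero elsewhere (endpoint values chosen as A here)."""
    x = np.asarray(x, dtype=float)
    return np.where((x >= a) & (x <= b), A, 0.0)

# Using a length-N boxcar as an impulse response gives a simple moving average.
N = 5
h = np.ones(N) / N                       # normalised boxcar impulse response
signal = np.sin(np.linspace(0, 4 * np.pi, 200)) + 0.3 * np.random.randn(200)
smoothed = np.convolve(signal, h, mode="same")

# The magnitude response of h is a sinc-like (Dirichlet) shape: a crude low-pass.
H = np.abs(np.fft.rfft(h, 256))
print(H[:5].round(3))   # largest near DC, falling off at higher frequencies
```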
See also
Boxcar averager
Rectangular function
Step function
Top-hat filter
References
Special functions
|
https://en.wikipedia.org/wiki/Lineage%20%28evolution%29
|
An evolutionary lineage is a temporal series of populations, organisms, cells, or genes connected by a continuous line of descent from ancestor to descendant. Lineages are subsets of the evolutionary tree of life. Lineages are often determined by the techniques of molecular systematics.
Phylogenetic representation of lineages
Lineages are typically visualized as subsets of a phylogenetic tree. A lineage is a single line of descent or linear chain within the tree, while a clade is a (usually branched) monophyletic group, containing a single ancestor and all its descendants. Phylogenetic trees are typically created from DNA, RNA or protein sequence data. Apart from this, morphological differences and similarities have been, and still are used to create phylogenetic trees. Sequences from different individuals are collected and their similarity is quantified. Mathematical procedures are used to cluster individuals by similarity.
Just as a map is a scaled approximation of true geography, a phylogenetic tree is an approximation of the true complete evolutionary relationships. For example, in a full tree of life, the entire clade of animals can be collapsed to a single branch of the tree. However, this is merely a limitation of rendering space. In theory, a true and complete tree for all living organisms or for any DNA sequence could be generated.
See also
Clade
Linnaean taxonomy
References
External links
Phylogenetics
|
https://en.wikipedia.org/wiki/Anti-phishing%20software
|
Anti-phishing software consists of computer programs that attempt to identify phishing content contained in websites, e-mail, or other forms used to access data (usually from the internet) and block the content, usually with a warning to the user (and often an option to view the content regardless). It is often integrated with web browsers and email clients as a toolbar that displays the real domain name for the website the viewer is visiting, in an attempt to prevent fraudulent websites from masquerading as other legitimate websites.
Most popular web browsers come with built-in anti-phishing and anti-malware protection services, but almost none of the alternate web browsers have such protections.
Password managers can also be used to help defend against phishing, as can some mutual authentication techniques.
Types of anti-phishing software
Email security
According to Gartner, "email security refers collectively to the prediction, prevention, detection and response framework used to provide attack protection and access protection for email." Email security spans gateways, email systems, user behavior, content security, and various supporting processes, services and adjacent security architecture.
Security awareness computer-based training
According to Gartner, security awareness training includes one or more of the following capabilities: ready-to-use training and educational content, employee testing and knowledge checks, availability in multiple languages, phishing and other social engineering attack simulations, and platform and awareness analytics to help measure the efficacy of the awareness program.
Notable client-based anti-phishing programs
avast!
Avira Premium Security Suite
Earthlink ScamBlocker (discontinued)
eBay Toolbar
Egress Defend
ESET Smart Security
G Data Software G DATA Antivirus
GeoTrust TrustWatch
Google Safe Browsing (used in Mozilla Firefox, Google Chrome, Opera, Safari, and Vivaldi)
Kaspersky Internet
|
https://en.wikipedia.org/wiki/Empirical%20risk%20minimization
|
Empirical risk minimization (ERM) is a principle in statistical learning theory which defines a family of learning algorithms and is used to give theoretical bounds on their performance. The core idea is that we cannot know exactly how well an algorithm will work in practice (the true "risk") because we don't know the true distribution of data that the algorithm will work on, but we can instead measure its performance on a known set of training data (the "empirical" risk).
Background
Consider the following situation, which is a general setting of many supervised learning problems. We have two spaces of objects X and Y and would like to learn a function h : X → Y (often called a hypothesis) which outputs an object y in Y, given x in X. To do so, we have at our disposal a training set of n examples (x_1, y_1), ..., (x_n, y_n), where x_i in X is an input and y_i in Y is the corresponding response that we wish to get from h(x_i).
To put it more formally, we assume that there is a joint probability distribution P(x, y) over X and Y, and that the training set consists of n instances drawn i.i.d. from P(x, y). Note that the assumption of a joint probability distribution allows us to model uncertainty in predictions (e.g. from noise in data) because y is not a deterministic function of x but rather a random variable with conditional distribution P(y | x) for a fixed x.
We also assume that we are given a non-negative real-valued loss function L(ŷ, y) which measures how different the prediction ŷ of a hypothesis is from the true outcome y. For classification tasks these loss functions can be scoring rules.
The risk associated with hypothesis h(x) is then defined as the expectation of the loss function:
R(h) = E[L(h(x), y)].
A loss function commonly used in theory is the 0-1 loss function: L(ŷ, y) = 1 if ŷ ≠ y, and 0 if ŷ = y.
The ultimate goal of a learning algorithm is to find a hypothesis h* among a fixed class of functions H for which the risk R(h) is minimal:
h* = arg min over h in H of R(h).
For classification problems, the Bayes classifier is defined to be the classifier minimizing the risk defined with the 0–1 loss function.
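To make these definitions concrete, the sketch below estimates the risk empirically on a finite sample and minimises it over a small, finite hypothesis class of threshold classifiers; the data-generating rule, the noise level, and the hypothesis class are all invented for illustration.

```python
# Toy sketch of empirical risk minimisation under the 0-1 loss: a
# one-dimensional threshold classifier chosen by minimising the average
# 0-1 loss on a noisy sample.
import random

random.seed(0)

def sample(n):
    data = []
    for _ in range(n):
        x = random.uniform(0, 1)
        y = 1 if x > 0.6 else 0          # "true" labelling rule, unknown to the learner
        if random.random() < 0.1:        # 10% label noise
            y = 1 - y
        data.append((x, y))
    return data

def empirical_risk(threshold, data):
    # average 0-1 loss of the classifier h(x) = [x > threshold] on the sample
    return sum((x > threshold) != y for x, y in data) / len(data)

train = sample(200)
thresholds = [i / 100 for i in range(101)]            # finite hypothesis class
best = min(thresholds, key=lambda t: empirical_risk(t, train))
print(best, round(empirical_risk(best, train), 3))    # ERM pick lands near 0.6
```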
Empirical risk minimization
In general, th
|
https://en.wikipedia.org/wiki/Pseudotensor
|
In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinate transformation (e.g. an improper rotation), that is, a transformation that can be expressed as a proper rotation followed by reflection. This is a generalization of a pseudovector. To evaluate the sign of a tensor or pseudotensor, it has to be contracted with as many vectors as its rank, belonging to the space in which the rotation is made, while the tensor coordinates themselves are kept unchanged (unlike in the case of a change of basis). Under an improper rotation, a pseudotensor and a proper tensor of the same rank acquire different signs, depending on whether the rank is even or odd. Inversion of the axes is sometimes used as an example of an improper rotation to illustrate the behaviour of a pseudotensor, but this works only if the dimension of the vector space is odd; otherwise inversion is a proper rotation, involving no additional reflection.
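A standard concrete illustration of this first meaning is the three-dimensional Levi-Civita symbol. The sketch below checks numerically that transforming its components as an ordinary rank-3 tensor under a reflection flips their sign, while multiplying by the determinant of the transformation (the pseudotensor rule) restores them.

```python
# Levi-Civita symbol under an orientation-reversing transformation.
import numpy as np

eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k] = 1.0    # even permutations
    eps[k, j, i] = -1.0   # odd permutations

R = np.diag([-1.0, 1.0, 1.0])            # a reflection: det(R) = -1

# Transform eps as if it were an ordinary rank-3 tensor.
eps_tensor = np.einsum("ia,jb,kc,abc->ijk", R, R, R, eps)

print(np.allclose(eps_tensor, -eps))                     # True: sign flipped
print(np.allclose(np.linalg.det(R) * eps_tensor, eps))   # True: pseudotensor rule
```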
There is a second meaning for pseudotensor (and likewise for pseudovector), restricted to general relativity. Tensors obey strict transformation laws, but pseudotensors in this sense are not so constrained. Consequently, the form of a pseudotensor will, in general, change as the frame of reference is altered. An equation containing pseudotensors which holds in one frame will not necessarily hold in a different frame. This makes pseudotensors of limited relevance because equations in which they appear are not invariant in form.
Definition
Two quite different mathematical objects are called a pseudotensor in different contexts.
The first context is essentially a tensor multiplied by an extra sign factor, such that the pseudotensor changes sign under reflections when a normal tensor does not. According to one definition, a pseudotensor P of the type is a geometric object whose components in a
|
https://en.wikipedia.org/wiki/Novell%20Storage%20Services
|
Novell Storage Services (NSS) is a file system used by the Novell NetWare operating system. Support for NSS was added to SUSE Linux in 2004 via the low-level NCPFS network protocol. It has some unique features that make it especially useful for setting up shared volumes on a file server in a local area network.
NSS is a 64-bit journaling file system with a balanced tree algorithm for the directory structure. Its published specifications (as of NetWare 6.5) are:
Maximum file size: 8 EB
Maximum partition size: 8 EB
Maximum device size (Physical or Logical): 8 EB
Maximum pool size: 8 EB
Maximum volume size: 8 EB
Maximum files per volume: 8 trillion
Maximum mounted volumes per server: unlimited if all are NSS
Maximum open files per server: no practical limit
Maximum directory tree depth: limited only by client
Maximum volumes per partition: unlimited
Maximum extended attributes: no limit on number of attributes.
Maximum data streams: no limit on number of data streams.
Unicode characters supported by default
Support for different name spaces: DOS, Microsoft Windows Long names (loaded by default), Unix, Apple Macintosh
Support for restoring deleted files (salvage)
Support for transparent compression
Support for encrypted volumes
Support for data shredding
See also
NetWare File System (NWFS)
Comparison of file systems
List of file systems
External links
Article about NSS
Novell Storage Services - Features
Compression file systems
Disk file systems
Novell NetWare
|
https://en.wikipedia.org/wiki/XKMS
|
XML Key Management Specification (XKMS) uses the web services framework to make it easier for developers to secure inter-application communication using public key infrastructure (PKI). XML Key Management Specification is a protocol developed by W3C which describes the distribution and registration of public keys. Services can access an XKMS compliant server in order to receive updated key information for encryption and authentication.
Architecture
XKMS consists of two parts:
X-KISS XML Key Information Service Specification
X-KRSS XML Key Registration Service Specification
The X-KRSS defines the protocols needed to register public key information. X-KRSS can generate the key material, making key recovery easier than when created manually.
The X-KISS outlines the syntax that applications should use to delegate some or all of the tasks needed to process the key information element of an XML signature to a trust service.
In both cases the goal of XKMS is to allow all the complexity of traditional PKI implementations to be offloaded from the client to an external service. While this approach was originally suggested by Diffie and Hellman in their New Directions paper, it was generally considered impractical at the time, leading commercial development to focus on the certificate-based approach proposed by Loren Kohnfelder.
Development history
The team that developed the original XKMS proposal submitted to the W3C included Warwick Ford, Phillip Hallam-Baker (editor) and Brian LaMacchia. The architectural approach is closely related to the MIT PGP Key server originally created and maintained by Brian LaMacchia. The realization in XML is closely related to SAML, the first edition of which was also edited by Hallam-Baker.
At the time XKMS was proposed no security infrastructure was defined for the then entirely new SOAP protocol for Web Services. As a result, a large part of the XKMS specification is concerned with the definition of security 'bindings' for spe
|
https://en.wikipedia.org/wiki/Setpoint%20%28control%20system%29
|
In cybernetics and control theory, a setpoint (SP; also set point) is the desired or target value for an essential variable, or process value (PV) of a control system, which may differ from the actual measured value of the variable. Departure of such a variable from its setpoint is one basis for error-controlled regulation using negative feedback for automatic control. A setpoint can be any physical quantity or parameter that a control system seeks to regulate, such as temperature, pressure, flow rate, position, speed, or any other measurable attribute.
In the context of a PID controller, the setpoint represents the reference or goal for the controlled process variable. It serves as the benchmark against which the actual process variable (PV) is continuously compared. The PID controller calculates an error signal by taking the difference between the setpoint and the current value of the process variable. Mathematically, this error is expressed as:
e(t) = SP − PV(t),
where e(t) is the error at a given time t, SP is the setpoint, and PV(t) is the process variable at time t.
The PID controller uses this error signal to determine how to adjust the control output to bring the process variable as close as possible to the setpoint while maintaining stability and minimizing overshoot.
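A minimal discrete-time sketch of this loop follows; the gains, the toy first-order plant, and the time step are illustrative choices, not values from any specific controller.

```python
# Minimal discrete PID sketch driving a toy first-order process toward a setpoint.
SP = 50.0                 # setpoint
Kp, Ki, Kd = 2.0, 0.5, 0.1
dt = 0.1

pv, integral, prev_error = 20.0, 0.0, SP - 20.0
for _ in range(200):
    error = SP - pv                       # e(t) = SP - PV(t)
    integral += error * dt
    derivative = (error - prev_error) / dt
    output = Kp * error + Ki * integral + Kd * derivative
    prev_error = error
    # toy first-order plant: PV relaxes toward the controller output
    pv += (output - pv) * dt

print(round(pv, 2))   # approaches the setpoint of 50
```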
Examples
Cruise control
The error can be used to return a system to its norm. An everyday example is the cruise control on a road vehicle; where external influences such as gradients cause speed changes (PV), and the driver also alters the desired set speed (SP). The automatic control algorithm restores the actual speed to the desired speed in the optimum way, without delay or overshoot, by altering the power output of the vehicle's engine. In this way the error is used to control the PV so that it equals the SP. This use of the error signal is classically embodied in the PID controller.
Industrial applications
Special consideration must be given for engineering applications. In industrial systems, physical or process restraints
|
https://en.wikipedia.org/wiki/WAXN-TV
|
WAXN-TV (channel 64) is an independent television station licensed to Kannapolis, North Carolina, United States, serving the Charlotte area. It is owned by Cox Media Group alongside dual ABC/Telemundo affiliate WSOC-TV (channel 9). Both stations share studios on West 23rd Street north of uptown Charlotte, while WAXN-TV's transmitter is located near Reedy Creek Park in the Newell section of the city.
History
The station first signed on the air on October 15, 1994, as WKAY-TV. It was originally owned by Kannapolis Television Company, a subsidiary of Truth Temple in Kannapolis. It had originally received a construction permit as WDZH, but changed the call letters to WKAY on November 15, 1989. The pastor of Truth Temple, Garland Faw, named the station WKAY after his wife Kay. The station aired a mix of religious programming, older movies, and barter syndicated programs, as well as the locally produced special Magic or Something in 1995. Kannapolis Television entered into a joint sales agreement (JSA) with WSOC-TV owner Cox Enterprises, and formally changed the call letters to WAXN-TV in August 1996.
Under the agreement, channel 9 took over channel 64's operations and re-branded the station as "Action 64." The "Action" branding had also been used at the time on Cox's two other independent stations, WRDQ in Orlando and KICU-TV in San Jose, California, the latter of which has been owned by Fox Television Stations since 2014. Cox invested over $3 million toward relaunching the station and making other improvements. The station moved its operations to WSOC-TV's facilities and underwent a significant technical overhaul, boosting its transmitting power to a level comparable with other Charlotte area stations. Previously, it could only be seen on cable television in most of the market, as its over-the-air analog signal barely made it out of Cabarrus County.
WSOC-TV owned the rights to a large amount of syndicated programming, but increased local news commitments left channel 9
|
https://en.wikipedia.org/wiki/Num%C3%A9raire
|
The numéraire (or numeraire) is a basic standard by which value is computed. In mathematical economics it is a tradable economic entity in terms of whose price the relative prices of all other tradables are expressed. In a monetary economy, one of the functions of money is to act as the numéraire, i.e. to serve as a unit of account and therefore provide a common benchmark relative to which the value of various goods and services can be measured.
Using a numeraire, whether monetary or some consumable good, facilitates value comparisons when only the relative prices are relevant, as in general equilibrium theory. When economic analysis refers to a particular good as the numéraire, one says that all other prices are normalized by the price of that good. For example, if a unit of good g has twice the market value of a unit of the numeraire, then the (relative) price of g is 2. Since the value of one unit of the numeraire relative to one unit of itself is 1, the price of the numeraire is always 1.
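The normalisation itself is trivial arithmetic; a tiny sketch (the goods and money prices below are made up):

```python
# Normalising money prices by a chosen numéraire good.
money_prices = {"bread": 2.0, "wine": 8.0, "cloth": 4.0}

numeraire = "bread"
relative_prices = {g: p / money_prices[numeraire] for g, p in money_prices.items()}

print(relative_prices)   # {'bread': 1.0, 'wine': 4.0, 'cloth': 2.0}
```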
Change of numéraire
In a financial market with traded securities, one may use a numéraire to price assets. For instance, let M(t) be the price at time t of $1 that was invested in the money market at time 0. The fundamental theorem of asset pricing says that all assets S(t) priced in terms of the numéraire (in this case, M(t)) are martingales with respect to a risk-neutral measure, say Q. That is:
S(t) / M(t) = E_Q[ S(T) / M(T) | F(t) ], for t ≤ T.
Now, suppose that N(t) is another strictly positive traded asset (and hence a martingale when priced in terms of the money market). Then we can define a new probability measure Q^N by the Radon–Nikodym derivative
dQ^N / dQ = ( N(T) / N(0) ) / ( M(T) / M(0) ).
Then it can be shown that S(t) is a martingale under Q^N when priced in terms of the new numéraire N(t):
S(t) / N(t) = E_{Q^N}[ S(T) / N(T) | F(t) ], for t ≤ T.
This technique has many important applications in LIBOR and swap market models, as well as commodity markets. Jamshidian (1989) first used it in the context of the Vasicek model for interest rates in order to calculate bond options prices. Geman, El Karoui and Rochet (1995) introduced the
|
https://en.wikipedia.org/wiki/Clustering%20coefficient
|
In graph theory, a clustering coefficient is a measure of the degree to which nodes in a graph tend to cluster together. Evidence suggests that in most real-world networks, and in particular social networks, nodes tend to create tightly knit groups characterised by a relatively high density of ties; this likelihood tends to be greater than the average probability of a tie randomly established between two nodes (Holland and Leinhardt, 1971; Watts and Strogatz, 1998).
Two versions of this measure exist: the global and the local. The global version was designed to give an overall indication of the clustering in the network, whereas the local gives an indication of the embeddedness of single nodes.
Local clustering coefficient
The local clustering coefficient of a vertex (node) in a graph quantifies how close its neighbours are to being a clique (complete graph). Duncan J. Watts and Steven Strogatz introduced the measure in 1998 to determine whether a graph is a small-world network.
A graph formally consists of a set of vertices V and a set of edges E between them. An edge e_ij connects vertex v_i with vertex v_j.
The neighbourhood N_i for a vertex v_i is defined as its immediately connected neighbours as follows: N_i = { v_j : e_ij ∈ E or e_ji ∈ E }.
We define k_i as the number of vertices, |N_i|, in the neighbourhood, N_i, of a vertex.
The local clustering coefficient C_i for a vertex v_i is then given by a proportion of the number of links between the vertices within its neighbourhood divided by the number of links that could possibly exist between them. For a directed graph, e_ij is distinct from e_ji, and therefore for each neighbourhood N_i there are k_i(k_i − 1) links that could exist among the vertices within the neighbourhood (k_i is the number of neighbours of a vertex). Thus, the local clustering coefficient for directed graphs is given as C_i = |{ e_jk : v_j, v_k ∈ N_i, e_jk ∈ E }| / (k_i(k_i − 1)).
An undirected graph has the property that e_ij and e_ji are considered identical. Therefore, if a vertex v_i has k_i neighbours, k_i(k_i − 1)/2 edges could exist among the vertices within the neighbourhood. Thus, the local clustering coefficient for undirected graphs can be defined as C_i = 2 |{ e_jk : v_j, v_k ∈ N_i, e_jk ∈ E }| / (k_i(k_i − 1)).
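A direct pure-Python rendering of the undirected definition (the example graph is arbitrary):

```python
# Local clustering coefficient of an undirected graph, computed directly
# from the definition above.
def local_clustering(adj, v):
    neighbours = adj[v]
    k = len(neighbours)
    if k < 2:
        return 0.0
    links = sum(1 for a in neighbours for b in neighbours
                if a < b and b in adj[a])          # edges among the neighbours
    return 2 * links / (k * (k - 1))

# Example: vertex 0 has neighbours {1, 2, 3}; only the edge (1, 2) exists among them.
adj = {0: {1, 2, 3}, 1: {0, 2}, 2: {0, 1}, 3: {0}}
print(local_clustering(adj, 0))   # 1 link out of 3 possible -> 0.333...
```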
|
https://en.wikipedia.org/wiki/Gotcha%20%28programming%29
|
In programming, a gotcha is a valid construct in a system, program or programming language that works as documented but is counter-intuitive and almost invites mistakes because it is both easy to invoke and unexpected or unreasonable in its outcome.
Example
The classic gotcha in C/C++ is the construct
if (a = b) code;
It is syntactically valid: it assigns the value of b to a and then executes code if a is non-zero. Sometimes this is even intended. More commonly, however, it is a typo: the programmer probably meant
if (a == b) code;
which executes code if a and b are equal. Modern compilers will usually generate a warning when encountering the former construct (conditional branch on assignment, not comparison), depending on compiler options (e.g., the -Wall option for gcc). To avoid this gotcha, a common recommendation is to keep constants on the left side of the comparison, e.g. 42 == x rather than x == 42. This way, using = instead of == will cause a compiler error (see Yoda conditions). Many kinds of gotchas are not detected by compilers, however.
See also
Usability
References
Further reading
External links
C Traps and Pitfalls by Andrew Koenig
C++ Gotchas A programmer's guide to avoiding and correcting ninety-nine of the most common, destructive, and interesting C++ design and programming errors, by Stephen C. Dewhurst
Computer programming folklore
Programming language folklore
Programming language design
|
https://en.wikipedia.org/wiki/Timeworks%20Publisher
|
Timeworks Publisher was a desktop publishing (DTP) program produced by GST Software in the United Kingdom.
It is notable as the first affordable DTP program for the IBM PC. In appearance and operation, it was a Ventura Publisher clone, but it was possible to run it on a computer without a hard disk.
Versions
Timeworks Desktop Publisher
Timeworks Publisher 1 for Atari TOS relied on the GDOS software components, which were available from Atari but were often distributed with applications that required them. GDOS provided TOS/GEM with a standardized method for installing printer drivers and additional fonts, although these were limited to bitmapped fonts in all but the later releases. GDOS had a reputation for being difficult to configure, used a lot of system resources and was fairly buggy, meaning that Timeworks could struggle to run on systems without a hard disk and less than 2 MB of memory - but it was possible, and for many users Timeworks was an inexpensive introduction to desktop publishing.
For the IBM PC, Timeworks ran on Digital Research's GEM Desktop (supplied with the program) as a runtime system. Later versions ran on Microsoft Windows.
Timeworks Publisher 2 included full WYSIWYG, paragraph tagging, manual control of kerning, text and graphics imports and more fonts. Timeworks Publisher 2.1 with GEM/5 is known to have supported Bézier curves already.
Acorn Desktop Publisher
In mid-1988, following on from the release of GST's word processor, First Word Plus, Acorn Computers announced that it had commissioned GST to port and enhance the Timeworks product for the Archimedes series. Being designed for use with RISC OS, using the anti-aliased font technology already demonstrated on the Archimedes, utilising the multi-tasking capabilities of the RISC OS desktop environment, and offering printed output support for laser and dot-matrix printers, availability was deferred until the release of RISC OS in April 1989. The delivered product, Acorn Desktop Publi
|
https://en.wikipedia.org/wiki/Toy%20theorem
|
In mathematics, a toy theorem is a simplified instance (special case) of a more general theorem, which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem.
In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise.
Toy theorems can also have educational value. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds by proving a toy version of the theorem.
Examples
A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem.
Another example of toy theorem is Rolle's theorem, which is obtained from the mean value theorem by equating the function values at the endpoints.
See also
Corollary
Fundamental theorem
Lemma (mathematics)
Toy model
References
Mathematical theorems
Mathematical terminology
|
https://en.wikipedia.org/wiki/Strict
|
In mathematical writing, the term strict refers to the property of excluding equality and equivalence and often occurs in the context of inequality and monotonic functions. It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict, which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word "proper" can also be used as a mathematical synonym for "strict".
Use
This term is commonly used in the context of inequalities — the phrase "strictly less than" means "less than and not equal to" (likewise "strictly greater than" means "greater than and not equal to"). More generally, a strict partial order, strict total order, and strict weak order exclude equality and equivalence.
When comparing numbers to zero, the phrases "strictly positive" and "strictly negative" mean "positive and not equal to zero" and "negative and not equal to zero", respectively. In the context of functions, the adverb "strictly" is used to modify the terms "monotonic", "increasing", and "decreasing".
On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases "non-negative", "non-positive", "non-increasing", and "non-decreasing" to make it clear that the inclusive sense of the terms is being used.
The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase "x is positive", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing "x is strictly positive" for x > 0, and "x is non-negative" for x ≥ 0. (A precise term like non-negative is never used with the word negative in the wider sense that includes zero.)
The word "proper" is often used in the same way as "strict". For example, a "proper subset" of
|
https://en.wikipedia.org/wiki/Green%E2%80%93Kubo%20relations
|
The Green–Kubo relations (Melville S. Green 1954, Ryogo Kubo 1957) give exact mathematical expressions for transport coefficients in terms of integrals of time correlation functions.
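A commonly quoted general form (the notation below is the standard convention for such relations, not taken from this text) expresses a linear transport coefficient as the time integral of an equilibrium autocorrelation function of the corresponding flux:

```latex
\gamma \;=\; \int_{0}^{\infty} \bigl\langle \dot{A}(t)\,\dot{A}(0) \bigr\rangle \,\mathrm{d}t
```

Here the angle brackets denote an equilibrium ensemble average and the dotted quantity is the flux associated with the transport process of interest.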
Thermal and mechanical transport processes
Thermodynamic systems may be prevented from relaxing to equilibrium because of the application of a field (e.g. electric or magnetic field), or because the boundaries of the system are in relative motion (shear) or maintained at different temperatures, etc. This generates two classes of nonequilibrium system: mechanical nonequilibrium systems and thermal nonequilibrium systems.
The standard example of an electrical transport process is Ohm's law, which states that, at least for sufficiently small applied voltages, the current I is linearly proportional to the applied voltage V,
As the applied voltage increases one expects to see deviations from linear behavior. The coefficient of proportionality is the electrical conductance which is the reciprocal of the electrical resistance.
The standard example of a mechanical transport process is Newton's law of viscosity, which states that the shear stress is linearly proportional to the strain rate. The strain rate is the rate of change of the streaming velocity in the x-direction with respect to the y-coordinate. Newton's law of viscosity states that the shear stress is proportional to this strain rate, with the shear viscosity as the coefficient of proportionality.
As the strain rate increases we expect to see deviations from linear behavior
Another well known thermal transport process is Fourier's law of heat conduction, stating that the heat flux between two bodies maintained at different temperatures is proportional to the temperature gradient (the temperature difference divided by the spatial separation).
Linear constitutive relation
Regardless of whether transport processes are stimulated thermally or mechanically, in the small field limit it is expected that a flux will be linearly proportional to an applied field. In the linear case the flux and the force are said to be conjugate to each other. The relati
|
https://en.wikipedia.org/wiki/Fireplane
|
Fireplane is a computer internal interconnect created by Sun Microsystems.
The Fireplane interconnect architecture is an evolutionary development of Sun's previous Ultra Port Architecture (UPA). It was introduced in October 2000 as the processor I/O interconnect in the Sun Blade 1000 workstation, followed in early 2001 by its use in the Sun Fire and Sun Fire 15K series enterprise servers. These coincided with the popular expansion of the web in the dot com boom and a shift of Sun's main market from Unix workstations to datacenter servers such as the Starfire, supporting high traffic web sites.
Peak performance (in the Sun Blade 1000) reached 67.2 GBytes/second or a sustained 9.6 Gbit/s (2.4 Gbit/s for each processor).
Each generation of Sun architecture had involved upgraded processors and matching upgrades to the bus or interconnect architectures that supported them. By this time, fast access to memory was becoming more important than simple CPU instruction speed for overall performance. Multiprocessors, shared memory, memory caching and switching between CPU and memory were technologies necessary to achieve this.
The Sun Fire 15K series frame allows 18 combined processor and memory expander boards. Each board comprises four processors, four memory modules and I/O processors. The Fireplane interconnect uses 18×18 crossbar switches to connect between them. Overall peak bandwidth through the interconnect is 43 Gbytes per second.
As memory architectures increase in complexity, maintaining cache coherence becomes a greater problem than simple connectivity. Fireplane represents a substantial advance over previous interconnects in this aspect. It combines both snoopy cache and point-to-point directory-based models to give a two-level cache coherence model. Snoopy buses are used primarily for single buses with small numbers of processors; directory models are used for larger numbers of processors. Fireplane combines both, to give a scalable shared memory architecture
|
https://en.wikipedia.org/wiki/Filter%20bank
|
In signal processing, a filter bank (or filterbank) is an array of bandpass filters that separates the input signal into multiple components, each one carrying a single frequency sub-band of the original signal. One application of a filter bank is a graphic equalizer, which can attenuate the components differently and recombine them into a modified version of the original signal. The process of decomposition performed by the filter bank is called analysis (meaning analysis of the signal in terms of its components in each sub-band); the output of analysis is referred to as a subband signal with as many subbands as there are filters in the filter bank. The reconstruction process is called synthesis, meaning reconstitution of a complete signal resulting from the filtering process.
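As a small illustrative sketch (band edges, filter order, and test signal are arbitrary choices), an analysis filter bank can be built from a few band-limited filters whose outputs are the subband signals; summing them back together is a crude form of synthesis.

```python
# Sketch of a small analysis filter bank: split a signal into low, mid, and
# high sub-bands with Butterworth filters.
import numpy as np
from scipy.signal import butter, sosfilt

fs = 8000.0                                   # sample rate, Hz
t = np.arange(0, 1.0, 1 / fs)
signal = np.sin(2 * np.pi * 200 * t) + np.sin(2 * np.pi * 1500 * t)

bands = {"low": (None, 400), "mid": (400, 1200), "high": (1200, None)}
subbands = {}
for name, (lo, hi) in bands.items():
    if lo is None:
        sos = butter(4, hi, btype="lowpass", fs=fs, output="sos")
    elif hi is None:
        sos = butter(4, lo, btype="highpass", fs=fs, output="sos")
    else:
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
    subbands[name] = sosfilt(sos, signal)

# Naive synthesis for this simple bank: sum the sub-band signals back up.
reconstructed = sum(subbands.values())
print({k: round(float(np.max(np.abs(v))), 2) for k, v in subbands.items()})
```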
In digital signal processing, the term filter bank is also commonly applied to a bank of receivers. The difference is that receivers also down-convert the subbands to a low center frequency that can be re-sampled at a reduced rate. The same result can sometimes be achieved by undersampling the bandpass subbands.
Another application of filter banks is signal compression when some frequencies are more important than others. After decomposition, the important frequencies can be coded with a fine resolution. Small differences at these frequencies are significant and a coding scheme that preserves these differences must be used. On the other hand, less important frequencies do not have to be exact. A coarser coding scheme can be used, even though some of the finer (but less important) details will be lost in the coding.
The vocoder uses a filter bank to determine the amplitude information of the subbands of a modulator signal (such as a voice) and uses them to control the amplitude of the subbands of a carrier signal (such as the output of a guitar or synthesizer), thus imposing the dynamic characteristics of the modulator on the carrier.
Some filter banks work almost entirely in the tim
|
https://en.wikipedia.org/wiki/Celtiberian%20script
|
The Celtiberian script is a Paleohispanic script that was the main writing system of the Celtiberian language, an extinct Continental Celtic language, which was also occasionally written using the Latin alphabet. This script is a direct adaptation of the northeastern Iberian script, the most frequently used of the Iberian scripts.
Origins
All the Paleohispanic scripts, with the exception of the Greco-Iberian alphabet, share a common distinctive typological characteristic: they represent syllabic values for the stop consonants, and monophonemic values for the rest of consonants and vowels. They are thus to be classed as neither alphabets nor syllabaries; rather, they are mixed scripts normally identified as semi-syllabaries. There is no agreement about how the Paleohispanic semi-syllabaries originated; some researchers conclude that they derive only from the Phoenician alphabet, while others believe the Greek alphabet was also involved.
Typology and variants
The basic Celtiberian signary contains 26 signs rather than the 28 signs of the original model, the northeastern Iberian script, since the Celtiberians omitted one of the two rhotic and one of the three nasals. The remaining 26 signs comprised 5 vowels, 15 syllabic signs and 6 consonants (one lateral, two sibilants, one rhotic and two nasals). The sign equivalent to Iberian s is transcribed as z in Celtiberian, because it is assumed that it sometimes expresses the fricative result of an ancient dental stop (d), while the Iberian sign ś is transcribed as s. As for the use of the nasal signs, there are two variants of the Celtiberian script: In the eastern variant, the excluded nasal sign was the Iberian sign ḿ, while in the western variant, the excluded nasal sign was the Iberian sign m. This is interpreted as evidence of a double origin of the Celtiberian script. Like one variant of the northeastern Iberian script, the western variant of Celtiberian shows evidence of having allowed the voiced stops g and d to b
|
https://en.wikipedia.org/wiki/Rigidity%20%28mathematics%29
|
In mathematics, a rigid collection C of mathematical objects (for instance sets or functions) is one in which every c ∈ C is uniquely determined by less information about c than one would expect.
The above statement does not define a mathematical property; instead, it describes in what sense the adjective "rigid" is typically used in mathematics, by mathematicians.
Examples
Some examples include:
Harmonic functions on the unit disk are rigid in the sense that they are uniquely determined by their boundary values.
Holomorphic functions are determined by the set of all derivatives at a single point. A smooth function from the real line to the complex plane is not, in general, determined by all its derivatives at a single point, but it is if we require additionally that it be possible to extend the function to one on a neighbourhood of the real line in the complex plane. The Schwarz lemma is an example of such a rigidity theorem.
By the fundamental theorem of algebra, polynomials in C are rigid in the sense that any polynomial is completely determined by its values on any infinite set, say N, or the unit disk. By the previous example, a polynomial is also determined within the set of holomorphic functions by the finite set of its non-zero derivatives at any single point.
Linear maps L(X, Y) between vector spaces X, Y are rigid in the sense that any L ∈ L(X, Y) is completely determined by its values on any set of basis vectors of X.
Mostow's rigidity theorem, which states that the geometric structure of negatively curved manifolds is determined by their topological structure.
A well-ordered set is rigid in the sense that the only (order-preserving) automorphism on it is the identity function. Consequently, an isomorphism between two given well-ordered sets will be unique.
Cauchy's theorem on geometry of convex polytopes states that a convex polytope is uniquely determined by the geometry of its faces and combinatorial adjacency rules.
Alexandrov's uniqueness theor
|
https://en.wikipedia.org/wiki/Stationary%20phase%20approximation
|
In mathematics, the stationary phase approximation is a basic principle of asymptotic analysis, applying to functions given by integration against a rapidly-varying complex exponential.
This method originates from the 19th century, and is due to George Gabriel Stokes and Lord Kelvin.
It is closely related to Laplace's method and the method of steepest descent, but Laplace's contribution precedes the others.
Basics
The main idea of stationary phase methods relies on the cancellation of sinusoids with rapidly varying phase. If many sinusoids have the same phase and they are added together, they will add constructively. If, however, these same sinusoids have phases which change rapidly as the frequency changes, they will add incoherently, varying between constructive and destructive addition at different times.
Formula
Letting Σ denote the set of critical points of the phase function (i.e. points where its gradient vanishes), under the assumption that the amplitude function is either compactly supported or has exponential decay, and that all critical points are nondegenerate (i.e. the Hessian of the phase is non-singular at each of them), there is an asymptotic formula for the integral, as the large frequency parameter tends to infinity, expressed as a sum of contributions from the critical points.
Each contribution involves the Hessian of the phase at the critical point and the signature of that Hessian, i.e. the number of positive eigenvalues minus the number of negative eigenvalues.
In one dimension, the formula reduces to a sum over the critical points involving only the second derivative of the phase there.
In this case the assumptions reduce to all the critical points being non-degenerate.
This is just the Wick-rotated version of the formula for the method of steepest descent.
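For reference, the standard one-dimensional statement (with notation chosen here: amplitude g, phase f, large parameter k, and a single nondegenerate critical point x_0 with f'(x_0) = 0 and f''(x_0) ≠ 0) reads:

```latex
\int_{-\infty}^{\infty} g(x)\, e^{i k f(x)}\, \mathrm{d}x
  \;\sim\; g(x_0)\, e^{i k f(x_0)}\, e^{\pm i\pi/4}
  \sqrt{\frac{2\pi}{k\,\lvert f''(x_0)\rvert}},
  \qquad k \to \infty
```

with the sign in the factor exp(± iπ/4) given by the sign of f''(x_0); when there are several nondegenerate critical points, their contributions are summed.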
An example
Consider a function
f(x, t) = (1/2π) ∫ F(ω) e^{i(k(ω)x − ωt)} dω.
The phase term in this function, φ = k(ω)x − ωt, is stationary when
(dk/dω) x − t = 0,
or equivalently,
dk/dω = t/x.
Solutions to this equation yield dominant frequencies ω_0 for some x and t. If we expand φ as a Taylor series about ω_0 and neglect terms of order higher than (ω − ω_0)², we have
φ = [k(ω_0) x − ω_0 t] + (1/2) x k''(ω_0) (ω − ω_0)²,
where k'' denotes the second derivative of k. When x is relatively large, even a small difference (ω − ω_0) will generate rapid oscillations within the integral, leading to cancellation. Therefore we can extend the limits of integration beyond the limit for a Taylor expan
|
https://en.wikipedia.org/wiki/Mac%20OS%20Cyrillic%20encoding
|
Mac OS Cyrillic is a character encoding used on Apple Macintosh computers to represent texts in the Cyrillic script.
The original version lacked the letter Ґ, which is used in Ukrainian, although its use was limited during the Soviet era to regions outside Ukraine. The closely related MacUkrainian resolved this, differing only by replacing two less commonly used symbols with its uppercase and lowercase forms. The euro sign update of the Mac OS scripts incorporated these changes back into MacCyrillic.
Other related code pages include Mac OS Turkic Cyrillic and Mac OS Barents Cyrillic, introduced by Michael Everson in fonts for languages unsupported by standard MacCyrillic.
Layout
Each character is shown with its equivalent Unicode code point and its decimal code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as Mac OS Roman.
{| class="wikitable chset nounderlines" frame="box" style="text-align: center; border-collapse: collapse"
|-
|style="text-align: left; font-family: sans-serif" |
|width=22px | A2
|width=22px | B6
|width=22px | FF
|-
|style="text-align: left" | Macintosh Cyrillic before Mac OS 9.0also Microsoft code page 10007 and IBM code page/CCSID 1283
|
|
|rowspan=2
|-
|style="text-align: left" | Macintosh Ukrainian before Mac OS 9.0also Microsoft code page 10017
|rowspan=2
|rowspan=2
|-
|style="text-align: left" | Macintosh Cyrillic since Mac OS 9.0
|
|}
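Python's standard library registers this encoding under the codec name "mac_cyrillic". A minimal round-trip sketch (the sample string is chosen for the example and is not from the article):
# Round-trip a Cyrillic string through the Mac OS Cyrillic codec.
text = "Привет, мир"
raw = text.encode("mac_cyrillic")
print(raw.hex(" "))                        # the Mac OS Cyrillic byte values
print(raw.decode("mac_cyrillic") == text)  # True: the round trip is lossless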
References
Character sets
Cyrillic
|
https://en.wikipedia.org/wiki/Mac%20OS%20Central%20European%20encoding
|
Mac OS Central European is a character encoding used on Apple Macintosh computers to represent texts in Central European and Southeastern European languages that use the Latin script. This encoding is also known as Code Page 10029. IBM assigns code page/CCSID 1282 to this encoding. This codepage contains diacritical letters that ISO 8859-2 does not have, and vice versa (This encoding supports Estonian, Lithuanian and Latvian while ISO 8859-2 supports Albanian, Croatian and Romanian).
Although a few of the characters which are in Mac OS Central European but not Mac OS Roman are also supported by Mac OS Croatian, these are not encoded at the same positions.
Code page layout
The following table shows the Macintosh Central European encoding. Each character is shown with its equivalent Unicode code point. Only the second half of the table (code points 128–255) is shown, the first half (code points 0–127) being the same as MacRoman or ASCII.
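The coverage difference from ISO 8859-2 noted above can be seen with Python's built-in codecs, where this encoding is registered as "mac_latin2" (an illustrative sketch; the sample word is an assumption chosen for the example):
# The Latvian word "ēka" uses ē (U+0113), which Mac OS Central European
# covers but ISO 8859-2 does not.
word = "ēka"
print(word.encode("mac_latin2").hex(" "))   # encodes without error
try:
    word.encode("iso8859-2")
except UnicodeEncodeError as exc:
    print("ISO 8859-2 cannot encode:", exc)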
References
Character sets
Central European
|
https://en.wikipedia.org/wiki/DOS%20Navigator
|
DOS Navigator (DN) is an orthodox file manager for DOS, OS/2, and Windows.
Influence
DOS Navigator is an influential early implementation of orthodox file manager (OFM). By implementing three additional types of virtual file systems (VFS): XTree, Briefcase and list-based, DN launched a new generation of OFMs. It offers unlimited panels and many new important features, making it one of the most powerful (and complex) OFMs.
History
The initial version of DN I (v 0.90) was released in 1991 and written by Stefan Tanurkov, Andrew Zabolotny and Sergey Melnik (all from Chișinău, Moldova). After that, DN was rewritten using Turbo Vision by Stefan Tanurkov and Dmitry Dotsenko (Dotsenko developed DN at Moscow State University). These versions are sometimes referred to as DN II.
In 1993, Slava Filimonov invited Stefan to join him to continue producing and publishing DN with joint efforts. Filimonov programmed new components, contributed to the design, and made numerous optimizations and improvements. He wrote a new software key protection system that remained unbroken for almost four years after its introduction.
DN II was actively developed until the start of 1995, up to version 1.35. Several other programmers participated in development after version 1.35. Starting from version 1.37, Filimonov and Ilya Bagdasarov were in charge of bug-fixing. Filimonov and Bagdasarov solely maintained, developed and released versions 1.37 through 1.39. After they left, DN was maintained again by Tanurkov and Maxim Masiutin.
In 1998, the development mostly took a bug-fixing direction as Ritlabs' product The Bat! became a more promising software product with much better commercial potential. The last shareware version was 1.50. In late 1999, Ritlabs decided to make version 1.51 of the DOS Navigator completely free with freely available source code.
Several open source DN branches currently exist including win32/dpmi/os2 version "dn/2" and Linux port attempt "dn2l".
Disadvantages
The original DN contains a la
|
https://en.wikipedia.org/wiki/Fetal%20pig
|
Fetal pigs are unborn pigs used in elementary as well as advanced biology classes as objects for dissection. Pigs, as a mammalian species, provide a good specimen for the study of physiological systems and processes due to the similarities between many pig and human organs.
Use in biology labs
Along with frogs and earthworms, fetal pigs are among the most common animals used in classroom dissection. There are several reasons for this, the main reason being that pigs, like humans, are mammals. Shared traits include common hair, mammary glands, live birth, similar organ systems, metabolic levels, and basic body form. They also allow for the study of fetal circulation, which differs from that of an adult. Secondly, fetal pigs are easy to obtain because they are by-products of the pork industry. Fetal pigs are the unborn piglets of sows that were killed by the meat-packing industry. These pigs are not bred and killed for this purpose, but are extracted from the deceased sow’s uterus. Fetal pigs not used in classroom dissections are often used in fertilizer or simply discarded. Thirdly, fetal pigs are cheap, which is an essential component for dissection use by schools. They can be ordered for about $30 at biological product companies. Fourthly, fetal pigs are easy to dissect because of their soft tissue and incompletely developed bones that are still made of cartilage. In addition, they are relatively large with well-developed organs that are easily visible. As long as the pork industry exists, fetal pigs will be relatively abundant, making them the prime choice for classroom dissections.
Alternatives
Several peer-reviewed comparative studies have concluded that the educational outcomes of students who are taught basic and advanced biomedical concepts and skills using non-animal methods are equivalent or superior to those of their peers who use animal-based laboratories such as animal dissection.
A systematic review concluded that students taught using non-animal m
|
https://en.wikipedia.org/wiki/Dippin%27%20Dots
|
Dippin' Dots is an ice cream snack invented by Curt Jones in 1988. The confection is created by flash freezing ice cream mix in liquid nitrogen. The snack is made by Dippin' Dots, Inc., headquartered in Paducah, Kentucky. Dippin' Dots are sold in 14 countries, including Honduras and Luxembourg.
Because the product requires storage at temperatures below , it is not sold in most grocery stores, as most cannot meet such extreme cooling requirements.
Dippin' Dots are sold in individual servings at franchised outlets. Many are in stadiums, arenas, shopping malls, and in vending machines, though there are also locations at aquariums, zoos, museums and theme parks.
History
Dippin' Dots was founded in Paducah, Kentucky, in 1988. Jones began the company in his parents' garage. It was originally invented as cow feed when Jones, who specialized in cryogenics, was trying to make efficient fodder for farm animals.
The company is now headquartered in Paducah, Kentucky.
In 1992, Dippin' Dots received a patent for its ice cream making process, and in 1996 sued its main competitor, Mini Melts, for patent infringement.
Japan became the first international licensee of Dippin' Dots in 1995.
In 2007, the U.S. Patent and Trademark Office ruled against Dippin' Dots because the process of creating the ice cream was "obvious" rather than proprietary.
On November 4, 2011, the company filed for Chapter 11 bankruptcy protection, after failing to reach an agreement with their lender, Regions Bank. According to The New York Times, the bank had been trying to foreclose on Dippin' Dots for over a year.
On May 18, 2012, U.S. Bankruptcy Court approved the purchase of the company by Scott Fischer and his father Mark Fischer. The Fischers had co-founded Chaparral Energy in Oklahoma City, Oklahoma. They retained company founder Curt Jones as CEO, and planned to expand from 1,600 sales locations to 2,000 locations, keeping the production and headquarters in Paducah, where it employed 165 people.
In mid-201
|
https://en.wikipedia.org/wiki/Continuous%20modelling
|
Continuous modelling is the mathematical practice of applying a model to continuous data (data which has a potentially infinite number, and divisibility, of attributes). Such models often use differential equations and are the counterpart of discrete modelling.
Modelling is generally broken down into several steps:
Making assumptions about the data: The modeller decides what is influencing the data and what can be safely ignored.
Making equations to fit the assumptions.
Solving the equations.
Verifying the results: Various statistical tests are applied to the data and the model and compared.
If the model passes the verification process, putting it into practice.
If the model fails the verification process, altering it and subjecting it again to verification; if it persists in fitting the data more poorly than a competing model, it is abandoned.
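A minimal sketch of these steps (the data, the model and all numbers below are invented for illustration): assume exponentially decaying data, write the continuous model dy/dt = -k*y, solve it as y(t) = y0*exp(-k*t), estimate the parameters, and verify the fit against the data.
import numpy as np
# 1. Assumption: the quantity decays at a rate proportional to itself.
# 2. Equation:   dy/dt = -k * y, whose solution is y(t) = y0 * exp(-k * t).
t = np.array([0.0, 1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([10.0, 6.1, 3.6, 2.2, 1.4, 0.8])   # invented observations
# 3. Solve: estimate k and y0 by least squares on log(y) (a log-linear fit).
slope, intercept = np.polyfit(t, np.log(y), 1)
k_hat, y0_hat = -slope, np.exp(intercept)
# 4. Verify: compare the fitted continuous model against the data.
residuals = y - y0_hat * np.exp(-k_hat * t)
print(f"k ≈ {k_hat:.3f}, y0 ≈ {y0_hat:.2f}, max residual = {np.abs(residuals).max():.3f}")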
External links
Definition by the UK National Physical Laboratory
Applied mathematics
|
https://en.wikipedia.org/wiki/Cyclic%20homology
|
In noncommutative geometry and related branches of mathematics, cyclic homology and cyclic cohomology are certain (co)homology theories for associative algebras which generalize the de Rham (co)homology of manifolds. These notions were independently introduced by Boris Tsygan (homology) and Alain Connes (cohomology) in the 1980s. These invariants have many interesting relationships with several older branches of mathematics, including de Rham theory, Hochschild (co)homology, group cohomology, and K-theory. Contributors to the development of the theory include Max Karoubi, Yuri L. Daletskii, Boris Feigin, Jean-Luc Brylinski, Mariusz Wodzicki, Jean-Louis Loday, Victor Nistor, Daniel Quillen, Joachim Cuntz, Ryszard Nest, Ralf Meyer, and Michael Puschnigg.
Hints about definition
The first definition of the cyclic homology of a ring A over a field of characteristic zero, denoted
HCn(A) or Hnλ(A),
proceeded by the means of the following explicit chain complex related to the Hochschild homology complex of A, called the Connes complex:
For any natural number n ≥ 0, define the operator which generates the natural cyclic action of on the n-th tensor product of A:
Recall that the Hochschild complex groups of A with coefficients in A itself are given by setting for all n ≥ 0. Then the components of the Connes complex are defined as , and the differential is the restriction of the Hochschild differential to this quotient. One can check that the Hochschild differential does indeed factor through to this space of coinvariants.
Connes later found a more categorical approach to cyclic homology using a notion of cyclic object in an abelian category, which is analogous to the notion of simplicial object. In this way, cyclic homology (and cohomology) may be interpreted as a derived functor, which can be explicitly computed by the means of the (b, B)-bicomplex. If the field k contains the rational numbers, the definition in terms of the Connes complex calculates the sa
|
https://en.wikipedia.org/wiki/Swale%20%28landform%29
|
A swale is a shady spot, or a sunken or marshy place. In US usage in particular, it is a shallow channel with gently sloping sides. Such a swale may be either natural or human-made. Artificial swales are often infiltration basins, designed to manage water runoff, filter pollutants, and increase rainwater infiltration. Bioswales are swales that involve the inclusion of plants or vegetation in their construction, specifically.
On land
The use of swales has been popularized as a rainwater-harvesting and soil-conservation strategy by Bill Mollison, David Holmgren, and other advocates of permaculture. In this context a swale is usually a water-harvesting ditch on contour, also called a contour bund.
Swales as used in permaculture are designed to slow and capture runoff by spreading it horizontally across the landscape (along an elevation contour line), facilitating runoff infiltration into the soil. This archetypal form of swale is a dug-out, sloped, often grassed "ditch" or "lull" in the landform. One option involves piling the spoil onto a new bank on the still lower slope, in which case a bund or berm is formed, mitigating the natural (and often hardscape-increased) risks to slopes below and to any linked watercourse from flash flooding.
In arid and seasonally dry places, vegetation (existing or planted) in the swale benefits heavily from the concentration of runoff. Trees and shrubs along the swale can provide shade and mulch which decrease evaporation.
On beaches
The term "swale" or "beach swale" is also used to describe long, narrow, usually shallow troughs between ridges or sandbars on a beach, that run parallel to the shoreline.
See also
Contour trenching
Gutter
Keyline design
Rain garden
Stormwater
Water-sensitive urban design
References
External links
Fact Sheet: Dry and Wet Vegetated Swales from Federal Highway Administration
Wetlands of the Great Lakes: The Beach Swale & Dune and Swale Types from Michigan State U
|
https://en.wikipedia.org/wiki/Indeterminate%20%28variable%29
|
In mathematics, particularly in formal algebra, an indeterminate is a symbol that is treated as a variable, but does not stand for anything else except itself. It may be used as a placeholder in objects such as polynomials and formal power series. In particular:
It does not designate a constant or a parameter of the problem.
It is not an unknown that could be solved for.
It is not a variable designating a function argument, or a variable being summed or integrated over.
It is not any type of bound variable.
It is just a symbol used in an entirely formal way.
When indeterminates are used as placeholders, a common operation is to substitute mathematical expressions (of an appropriate type) for them.
By a common abuse of language, mathematical texts may not clearly distinguish indeterminates from ordinary variables.
Polynomials
A polynomial in an indeterminate is an expression of the form , where the are called the coefficients of the polynomial. Two such polynomials are equal only if the corresponding coefficients are equal. In contrast, two polynomial functions in a variable may be equal or not at a particular value of .
For example, the functions
are equal when and not equal otherwise. But the two polynomials
are unequal, since 2 does not equal 5, and 3 does not equal 2. In fact,
does not hold unless and . This is because is not, and does not designate, a number.
The distinction is subtle, since a polynomial in can be changed to a function in by substitution. But the distinction is important because information may be lost when this substitution is made. For example, when working in modulo 2, we have that:
so the polynomial function is identically equal to 0 for having any value in the modulo-2 system. However, the polynomial is not the zero polynomial, since the coefficients, 0, 1 and −1, respectively, are not all zero.
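The distinction can be made concrete with a short sketch (illustrative code, not from the article): represent the polynomial X − X² by its coefficient list and compare it with the function it induces modulo 2.
# The polynomial X - X^2 as a formal object: coefficients of X^0, X^1, X^2.
coeffs = [0, 1, -1]
def as_function(x, p=2):
    # Evaluate the induced polynomial function at x in the integers modulo p.
    return sum(c * x**i for i, c in enumerate(coeffs)) % p
print([as_function(x) for x in range(2)])   # [0, 0]: the zero *function* mod 2
print(any(c % 2 for c in coeffs))           # True: not the zero *polynomial*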
Formal power series
A formal power series in an indeterminate is an expression of the form , where no value is assigned to
|
https://en.wikipedia.org/wiki/Oncovirus
|
An oncovirus or oncogenic virus is a virus that can cause cancer. This term originated from studies of acutely transforming retroviruses in the 1950–60s, when the term "oncornaviruses" was used to denote their RNA virus origin. With the letters "RNA" removed, it now refers to any virus with a DNA or RNA genome causing cancer and is synonymous with "tumor virus" or "cancer virus". The vast majority of human and animal viruses do not cause cancer, probably because of longstanding co-evolution between the virus and its host. Oncoviruses have been important not only in epidemiology, but also in investigations of cell cycle control mechanisms such as the retinoblastoma protein.
The World Health Organization's International Agency for Research on Cancer estimated that in 2002, infection caused 17.8% of human cancers, with 11.9% caused by one of seven viruses. A 2020 study of 2,658 samples from 38 different types of cancer found that 16% were associated with a virus. These cancers might be easily prevented through vaccination (e.g., papillomavirus vaccines), diagnosed with simple blood tests, and treated with less-toxic antiviral compounds.
Causality
Generally, tumor viruses cause little or no disease after infection in their hosts, or cause non-neoplastic diseases such as acute hepatitis for hepatitis B virus or mononucleosis for Epstein–Barr virus. A minority of persons (or animals) will go on to develop cancers after infection. This has complicated efforts to determine whether or not a given virus causes cancer. The well-known Koch's postulates, 19th-century constructs developed by Robert Koch to establish the likelihood that Bacillus anthracis will cause anthrax disease, are not applicable to viral diseases. Firstly, this is because viruses cannot truly be isolated in pure culture—even stringent isolation techniques cannot exclude undetected contaminating viruses with similar density characteristics, and viruses must be grown on cells. Secondly, asymptomatic virus in
|
https://en.wikipedia.org/wiki/Yakov%20Perelman
|
Yakov Isidorovich Perelman (; – 16 March 1942) was a Russian Empire and Soviet science writer and author of many popular science books, including Physics Can Be Fun and Mathematics Can Be Fun (both translated from Russian into English).
Life and work
Perelman was born in 1882 in the town of Białystok, Russian Empire. He obtained the Diploma in Forestry from the Imperial Forestry Institute (Now Saint-Petersburg State Forestry University) in Saint Petersburg, in 1909. He was influenced by Ernst Mach and probably the Russian Machist Alexander Bogdanov in his pedagogical approach to popularising science. After the success of "Physics for Entertainment", Perelman set out to produce other books, in which he showed himself to be an imaginative populariser of science. Especially popular were "Arithmetic for entertainment", "Mechanics for entertainment", "Geometry for Entertainment", "Astronomy for entertainment", "Lively Mathematics", " Physics Everywhere", and "Tricks and Amusements".
His famous books on physics and astronomy were translated into various languages by the erstwhile Soviet Union.
The scientist Konstantin Tsiolkovsky thought highly of Perelman's talents and creative genius, writing of him in the preface of Interplanetary Journeys: "The author has long been known by his popular, witty and quite scientific works on physics, astronomy and mathematics, which are, moreover written in a marvelous language and are very readable."
Perelman has also authored a number of textbooks and articles in Soviet popular science magazines.
In addition to his educational and scientific writings, he also worked as an editor of science magazines, including Nature and People and In the Workshop of Nature.
Perelman died from starvation in 1942, during the German Siege of Leningrad. The siege started on 9 September 1941 and lasted 872 days, until
27 January 1944. The Siege of Leningrad was one of the longest, most destructive sieges of a major city in modern history and one of
|
https://en.wikipedia.org/wiki/Sensor%20fusion
|
Sensor fusion is the process of combining sensor data or data derived from disparate sources such that the resulting information has less uncertainty than would be possible if these sources were used individually. For instance, one could potentially obtain a more accurate location estimate of an indoor object by combining multiple data sources such as video cameras and WiFi localization signals. The term uncertainty reduction in this case can mean more accurate, more complete, or more dependable, or refer to the result of an emerging view, such as stereoscopic vision (calculation of depth information by combining two-dimensional images from two cameras at slightly different viewpoints).
The data sources for a fusion process are not specified to originate from identical sensors. One can distinguish direct fusion, indirect fusion and fusion of the outputs of the former two. Direct fusion is the fusion of sensor data from a set of heterogeneous or homogeneous sensors, soft sensors, and history values of sensor data, while indirect fusion uses information sources like a priori knowledge about the environment and human input.
Sensor fusion is also known as (multi-sensor) data fusion and is a subset of information fusion.
Examples of sensors
Accelerometers
Electronic Support Measures (ESM)
Flash LIDAR
Global Positioning System (GPS)
Infrared / thermal imaging camera
Magnetic sensors
MEMS
Phased array
Radar
Radiotelescopes, such as the proposed Square Kilometre Array, the largest sensor ever to be built
Scanning LIDAR
Seismic sensors
Sonar and other acoustic
Sonobuoys
TV cameras
Additional examples: see List of sensors
Algorithms
Sensor fusion is a term that covers a number of methods and algorithms, including:
Kalman filter
Bayesian networks
Dempster–Shafer
Convolutional neural network
Gaussian processes
Example calculations
Two example sensor fusion calculations are illustrated below.
Let and denote two sensor measurements with noise variances a
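The first such calculation is typically the inverse-variance weighted combination of two measurements. A minimal sketch of that standard computation with invented numbers (the readings and variances below are assumptions for the example):
# Inverse-variance (minimum-variance) fusion of two scalar measurements.
# z1, z2: sensor readings; var1, var2: their noise variances (invented values).
z1, var1 = 10.2, 0.5**2
z2, var2 = 9.8, 1.0**2
w1 = var2 / (var1 + var2)          # weight of the more precise sensor
w2 = var1 / (var1 + var2)
z_fused = w1 * z1 + w2 * z2
var_fused = (var1 * var2) / (var1 + var2)   # always <= min(var1, var2)
print(f"fused estimate = {z_fused:.3f}, fused variance = {var_fused:.3f}")
The fused variance is never larger than that of either sensor alone, which is the sense in which fusion reduces uncertainty.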
|
https://en.wikipedia.org/wiki/Borel%27s%20lemma
|
In mathematics, Borel's lemma, named after Émile Borel, is an important result used in the theory of asymptotic expansions and partial differential equations.
Statement
Suppose U is an open set in the Euclidean space Rn, and suppose that f0, f1, ... is a sequence of smooth functions on U.
If I is any open interval in R containing 0 (possibly I = R), then there exists a smooth function F(t, x) defined on I×U, such that
\(\displaystyle \frac{\partial^{k} F}{\partial t^{k}}(0, x) = f_k(x)\)
for k ≥ 0 and x in U.
Proof
Proofs of Borel's lemma can be found in many text books on analysis, including and , from which the proof below is taken.
Note that it suffices to prove the result for a small interval I = (−ε,ε), since if ψ(t) is a smooth bump function with compact support in (−ε,ε) equal identically to 1 near 0, then ψ(t) ⋅ F(t, x) gives a solution on R × U. Similarly using a smooth partition of unity on Rn subordinate to a covering by open balls with centres at δ⋅Zn, it can be assumed that all the fm have compact support in some fixed closed ball C. For each m, let
where εm is chosen sufficiently small that
for |α| < m. These estimates imply that each sum
is uniformly convergent and hence that
is a smooth function with
By construction
Note: Exactly the same construction can be applied, without the auxiliary space U, to produce a smooth function on the interval I for which the derivatives at 0 form an arbitrary sequence.
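For reference, a hedged reconstruction of the summands used above, following the usual textbook argument (the exact constants are an assumption, not a quotation):
\[
F_m(t, x) = \frac{t^m}{m!}\, \psi\!\left(\frac{t}{\varepsilon_m}\right) f_m(x),
\qquad
\sup_{t,\,x} \bigl|\partial^{\alpha} F_m(t, x)\bigr| \le 2^{-m} \quad \text{for } |\alpha| < m,
\]
so that \(F = \sum_{m \ge 0} F_m\) converges, together with all of its derivatives, uniformly on \(I \times U\); since \(\psi \equiv 1\) near 0, only the term with \(m = k\) contributes to \(\partial_t^{k} F(0, x)\), giving \(\partial_t^{k} F(0, x) = f_k(x)\).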
See also
References
Partial differential equations
Lemmas in analysis
Asymptotic analysis
|
https://en.wikipedia.org/wiki/Ecotope
|
Ecotopes are the smallest ecologically distinct landscape features in a landscape mapping and classification system. As such, they represent relatively homogeneous, spatially explicit landscape functional units that are useful for stratifying landscapes into ecologically distinct features for the measurement and mapping of landscape structure, function and change.
Like ecosystems, ecotopes are identified using flexible criteria, in the case of ecotopes, by criteria defined within a specific ecological mapping and classification system. Just as ecosystems are defined by the interaction of biotic and abiotic components, ecotope classification should stratify landscapes based on a combination of both biotic and abiotic factors, including vegetation, soils, hydrology, and other factors. Other parameters that must be considered in the classification of ecotopes include their period of stability (such as the number of years that a feature might persist), and their spatial scale (minimum mapping unit).
The first definition of ecotope was made by Thorvald Sørensen in 1936. Arthur Tansley picked this definition up in 1939 and elaborated it. He stated that an ecotope is "the particular portion, [...], of the physical world that forms a home for the organisms which inhabit it". In 1945 Carl Troll first applied the term to landscape ecology "the smallest spatial object or component of a geographical landscape". Other academics clarified this to suggest that an ecotope is ecologically homogeneous and is the smallest ecological land unit that is relevant.
The term "patch" was used in place of the term "ecotope", by Foreman and Godron (1986), who defined a patch as "a nonlinear surface area differing in appearance from its surroundings". However, by definition, ecotopes must be identified using a full suite of ecosystem characteristics: patches are a more general type of spatial unit than ecotopes.
In ecology an ecotope has also been defined as "The species relation
|
https://en.wikipedia.org/wiki/Network%20processor
|
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain.
Network processors are typically software programmable devices and would have generic characteristics similar to general purpose central processing units that are commonly used in many different types of equipment and products.
History of development
In modern telecommunications networks, information (voice, video, data) is transferred as packet data (termed packet switching) which is in contrast to older telecommunications networks that carried information as analog signals such as in the public switched telephone network (PSTN) or analog TV/Radio networks. The processing of these packets has resulted in the creation of integrated circuits (IC) that are optimised to deal with this form of packet data. Network processors have specific features or architectures that are provided to enhance and optimise packet processing within these networks.
Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, where the appropriate software is installed.
Network processors are used in the manufacture of many different types of network equipment such as:
Routers, software routers and switches (Inter-network processors)
Firewalls
Session border controllers
Intrusion detection devices
Intrusion prevention devices
Network monitoring systems
Network security (secure cryptoprocessors)
Reconfigurable Match-Tables
Reconfigurable Match-Tables were introduced in 2013 to allow switches to operate at high speeds while maintaining flexibility when it comes to the network protocols running on them, or the processing done to them. P4 is used to program the chips. The company Barefoot Networks was based around these processors and was later
|
https://en.wikipedia.org/wiki/Riemann%E2%80%93Lebesgue%20lemma
|
In mathematics, the Riemann–Lebesgue lemma, named after Bernhard Riemann and Henri Lebesgue, states that the Fourier transform or Laplace transform of an L1 function vanishes at infinity. It is of importance in harmonic analysis and asymptotic analysis.
Statement
Let \(f \in L^1(\mathbb{R}^n)\) be an integrable function, i.e. \(f\) is a measurable function such that
\(\displaystyle \int_{\mathbb{R}^n} |f(x)|\,dx < \infty,\)
and let \(\hat{f}\) be the Fourier transform of \(f\), i.e.
\(\displaystyle \hat{f}(\xi) = \int_{\mathbb{R}^n} f(x)\, e^{-i\xi\cdot x}\,dx.\)
Then \(\hat{f}\) vanishes at infinity: \(|\hat{f}(\xi)| \to 0\) as \(|\xi| \to \infty\).
Because the Fourier transform of an integrable function is continuous, the Fourier transform \(\hat{f}\) is a continuous function vanishing at infinity. If \(C_0(\mathbb{R}^n)\) denotes the vector space of continuous functions vanishing at infinity, the Riemann–Lebesgue lemma may be formulated as follows: The Fourier transformation maps \(L^1(\mathbb{R}^n)\) to \(C_0(\mathbb{R}^n)\).
Proof
We will focus on the one-dimensional case , the proof in higher dimensions is similar. First, suppose that is continuous and compactly supported. For , the substitution leads to
.
This gives a second formula for . Taking the mean of both formulas, we arrive at the following estimate:
.
Because is continuous, converges to as for all . Thus, converges to 0 as due to the dominated convergence theorem.
If is an arbitrary integrable function, it may be approximated in the norm by a compactly supported continuous function. For , pick a compactly supported continuous function such that . Then
Because this holds for any , it follows that as .
Other versions
The Riemann–Lebesgue lemma holds in a variety of other situations.
If , then the Riemann–Lebesgue lemma also holds for the Laplace transform of , that is,
as within the half-plane .
A version holds for Fourier series as well: if is an integrable function on a bounded interval, then the Fourier coefficients of tend to 0 as . This follows by extending by zero outside the interval, and then applying the version of the Riemann–Lebesgue lemma on the entire real line.
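The Fourier-series version is easy to illustrate numerically (an illustrative sketch; the integrable function and the truncation orders are chosen arbitrarily for the example):
# Fourier coefficients of an integrable function on [-pi, pi] tend to 0 as |n| grows.
import numpy as np
x = np.linspace(-np.pi, np.pi, 200_001)
dx = x[1] - x[0]
f = np.sign(x) * np.sqrt(np.abs(x))        # an arbitrary integrable example
for n in (1, 10, 100, 1000):
    c_n = (f * np.exp(-1j * n * x)).sum() * dx / (2 * np.pi)
    print(f"n={n:5d}  |c_n| = {abs(c_n):.5f}")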
However, the Riemann–Lebesgue lemma does not hold for arbitrary distributions. For example, the Dirac delta
|
https://en.wikipedia.org/wiki/Coombs%20test
|
The direct and indirect Coombs tests, also known as antiglobulin test (AGT), are blood tests used in immunohematology. The direct Coombs test detects antibodies that are stuck to the surface of the red blood cells. Since these antibodies sometimes destroy red blood cells they can cause anemia; this test can help clarify the condition. The indirect Coombs test detects antibodies that are floating freely in the blood. These antibodies could act against certain red blood cells; the test can be carried out to diagnose reactions to a blood transfusion.
The direct Coombs test is used to test for autoimmune hemolytic anemia, a condition where the immune system breaks down red blood cells, leading to anemia. The direct Coombs test is used to detect antibodies or complement proteins attached to the surface of red blood cells. To perform the test, a blood sample is taken and the red blood cells are washed (removing the patient's plasma and unbound antibodies from the red blood cells) and then incubated with anti-human globulin ("Coombs reagent"). If the red cells then agglutinate, the test is positive, a visual indication that antibodies or complement proteins are bound to the surface of red blood cells and may be causing destruction of those cells.
The indirect Coombs test is used in prenatal testing of pregnant women and in testing prior to a blood transfusion. The test detects antibodies against foreign red blood cells. In this case, serum is extracted from a blood sample taken from the patient. The serum is incubated with foreign red blood cells of known antigenicity. Finally, anti-human globulin is added. If agglutination occurs, the indirect Coombs test is positive.
Mechanism
The two Coombs tests are based on anti-human antibodies binding to human antibodies, commonly IgG or IgM. These anti-human antibodies are produced by plasma cells of non-human animals after immunizing them with human plasma. Additionally, these anti-human antibodies will also bind to human anti
|
https://en.wikipedia.org/wiki/Dirichlet%E2%80%93Jordan%20test
|
In mathematics, the Dirichlet–Jordan test gives sufficient conditions for a real-valued, periodic function f to be equal to the sum of its Fourier series at a point of continuity. Moreover, the behavior of the Fourier series at points of discontinuity is determined as well (it is the midpoint of the values of the discontinuity). It is one of many conditions for the convergence of Fourier series.
The original test was established by Peter Gustav Lejeune Dirichlet in 1829, for piecewise monotone functions. It was extended in the late 19th century by Camille Jordan to functions of bounded variation (any function of bounded variation is the difference of two increasing functions).
Dirichlet–Jordan test for Fourier series
The Dirichlet–Jordan test states that if a periodic function \(f\) is of bounded variation on a period, then the Fourier series \(S_N f\) converges, as \(N \to \infty\), at each point \(x\) of the domain to the average of the one-sided limits, \(\tfrac{1}{2}\bigl(f(x^{+}) + f(x^{-})\bigr).\)
In particular, if \(f\) is continuous at \(x\), then the Fourier series converges to \(f(x)\). Moreover, if \(f\) is continuous everywhere, then the convergence is uniform.
Stated in terms of a periodic function \(f\) of period 2π, the Fourier series coefficients are defined as
\(\displaystyle \hat{f}(n) = \frac{1}{2\pi}\int_{-\pi}^{\pi} f(x)\, e^{-inx}\,dx,\)
and the partial sums of the Fourier series are
\(\displaystyle S_N f(x) = \sum_{n=-N}^{N} \hat{f}(n)\, e^{inx}.\)
The analogous statement holds irrespective of what the period of f is, or which version of the Fourier series is chosen.
There is also a pointwise version of the test: if is a periodic function in , and is of bounded variation in a neighborhood of , then the Fourier series at converges to the limit as above
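A small numerical illustration of the pointwise statement (an illustrative sketch; the square wave and the truncation orders are chosen for the example): at a jump of a bounded-variation function the partial sums approach the midpoint of the one-sided limits, and at a point of continuity they approach the function value.
# Partial sums of the Fourier series of the odd square wave at x = 0 (a jump,
# one-sided limits -1 and +1, predicted limit 0) and at x = 1 (continuity, value 1).
import numpy as np
def partial_sum(x, N):
    # S_N of the odd square wave: sum of (4/pi) * sin((2k+1)x) / (2k+1).
    k = np.arange(N)
    return (4 / np.pi) * np.sum(np.sin((2 * k + 1) * x) / (2 * k + 1))
for N in (10, 100, 1000):
    print(f"N={N:5d}  S_N(0) = {partial_sum(0.0, N):+.4f}  S_N(1) = {partial_sum(1.0, N):+.4f}")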
Jordan test for Fourier integrals
For the Fourier transform on the real line, there is a version of the test as well. Suppose that is in and of bounded variation in a neighborhood of the point . Then
If is continuous in an open interval, then the integral on the left-hand side converges uniformly in the interval, and the limit on the right-hand side is .
This version of the test (although not satisfying modern demands for rigor) is historically prior to Dirichlet, being due
|
https://en.wikipedia.org/wiki/Fixed-point%20space
|
In mathematics, a Hausdorff space X is called a fixed-point space if every continuous function has a fixed point.
For example, any closed interval [a, b] in \(\mathbb{R}\) is a fixed point space, and this can be proved from the intermediate value property of real continuous functions. The open interval (a, b), however, is not a fixed point space. To see this, consider for example the map
\(x \mapsto \tfrac{x+b}{2}\), which sends (a, b) into itself but whose only fixed point, b, lies outside the interval.
Any linearly ordered space that is connected and has a top and a bottom element is a fixed point space.
Note that, in the definition, we could easily have disposed of the condition that the space is Hausdorff.
References
Vasile I. Istratescu, Fixed Point Theory, An Introduction, D. Reidel, the Netherlands (1981).
Andrzej Granas and James Dugundji, Fixed Point Theory (2003) Springer-Verlag, New York,
William A. Kirk and Brailey Sims, Handbook of Metric Fixed Point Theory (2001), Kluwer Academic, London
Fixed points (mathematics)
Topology
Topological spaces
|
https://en.wikipedia.org/wiki/Wirtinger%27s%20inequality%20for%20functions
|
For other inequalities named after Wirtinger, see Wirtinger's inequality.
In the mathematical field of analysis, the Wirtinger inequality is an important inequality for functions of a single variable, named after Wilhelm Wirtinger. It was used by Adolf Hurwitz in 1901 to give a new proof of the isoperimetric inequality for curves in the plane. A variety of closely related results are today known as Wirtinger's inequality, all of which can be viewed as certain forms of the Poincaré inequality.
Theorem
There are several inequivalent versions of the Wirtinger inequality:
Let be a continuous and differentiable function on the interval with average value zero and with . Then
and equality holds if and only if for some numbers and .
Let be a continuous and differentiable function on the interval with . Then
and equality holds if and only if for some number .
Let be a continuous and differentiable function on the interval with average value zero. Then
and equality holds if and only if for some number .
Despite their differences, these are closely related to one another, as can be seen from the account given below in terms of spectral geometry. They can also all be regarded as special cases of various forms of the Poincaré inequality, with the optimal Poincaré constant identified explicitly. The middle version is also a special case of the Friedrichs inequality, again with the optimal constant identified.
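A quick numerical check of the zero-mean version on the interval [0, 2π] (an illustrative sketch; the test functions and grid are chosen for the example, and the first function realizes the equality case):
# Check  int y^2 dx <= int (y')^2 dx  on [0, 2*pi] for zero-mean test functions.
import numpy as np
x = np.linspace(0.0, 2 * np.pi, 100_001)
dx = x[1] - x[0]
for label, y in (("sin(x)  (equality case)", np.sin(x)),
                 ("sin(3x) + 0.5*cos(2x)", np.sin(3 * x) + 0.5 * np.cos(2 * x))):
    y = y - y.mean()                 # enforce average value zero
    dy = np.gradient(y, dx)          # numerical derivative
    lhs = np.sum(y**2) * dx
    rhs = np.sum(dy**2) * dx
    print(f"{label:26s}  int y^2 = {lhs:.4f}   int y'^2 = {rhs:.4f}")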
Proofs
The three versions of the Wirtinger inequality can all be proved by various means. This is illustrated in the following by a different kind of proof for each of the three Wirtinger inequalities given above. In each case, by a linear change of variables in the integrals involved, there is no loss of generality in only proving the theorem for one particular choice of .
Fourier series
Consider the first Wirtinger inequality given above. Take to be . Since Dirichlet's conditions are met, we can write
and the fact that the average value of is ze
|
https://en.wikipedia.org/wiki/White%20pages%20schema
|
A white pages schema is a data model, specifically a logical schema, for organizing the data contained in entries in a directory service, database, or application, such as an address book. In a white pages directory, each entry typically represents an individual person that makes use of network resources, such as by receiving email or having an account to log into a system.
In some environments, the schema may also include the representation of organizational divisions, roles, groups, and devices. The term is derived from the white pages, the listing of individuals in a telephone directory, typically sorted by the individual's home location (e.g. city) and then by
their name.
While many telephone service providers have for decades published a list of their subscribers in a telephone directory, and similarly corporations published a list of their employees in an internal directory, it was not until the rise of electronic mail systems that a requirement for standards for the electronic exchange of subscriber information between different systems appeared.
A white pages schema typically defines, for each real-world object being represented:
what attributes of that object are to be represented in the entry for that object
what relationships of that object to other objects are to be represented
how the entry is to be named in a DIT
how an entry is to be located by a client searching for it
how similar entries are to be distinguished
how entries are to be ordered when displayed in a list
One of the earliest attempts to standardize a white pages schema for electronic mail use was in X.520 and X.521, part of the X.500 specifications,
that was derived from the addressing requirements of X.400 and defined a Directory Information Tree that mirrored the international telephone system, with entries representing residential and organizational subscribers. This evolved into the Lightweight Directory Access Protocol standard schema in . One of the most widely deployed
|
https://en.wikipedia.org/wiki/100%2C000
|
100,000 (one hundred thousand) is the natural number following 99,999 and preceding 100,001. In scientific notation, it is written as 10^5.
Terms for 100,000
In Bangladesh, India, Pakistan and South Asia, one hundred thousand is called a lakh, and is written as 1,00,000. The Thai, Lao, Khmer and Vietnamese languages also have separate words for this number: , , (all saen), and respectively. The Malagasy word is .
In Cyrillic numerals, it is known as the legion (): or .
Values of 100,000
In astronomy, 100,000 metres (100 km or 62 miles) is the altitude at which the Fédération Aéronautique Internationale (FAI) defines spaceflight to begin.
In the Irish language, () is a popular greeting meaning "a hundred thousand welcomes".
Selected 6-digit numbers (100,001–999,999)
100,001 to 199,999
100,003 = smallest 6-digit prime number
100,128 = smallest triangular number with 6 digits and the 447th triangular number
100,151 = twin prime with 100,153
100,153 = twin prime with 100,151
100,255 = Friedman number
100,489 = 317^2, the smallest 6-digit square
101,101 = smallest palindromic Carmichael number
101,723 = smallest prime number whose square is a pandigital number containing each digit from 0 to 9
102,564 = the smallest parasitic number
103,049 = little Schroeder number
103,680 = highly totient number
103,769 = the number of combinatorial types of 5-dimensional parallelohedra
103,823 = 47^3, the smallest 6-digit cube and nice Friedman number (−1 + 0 + 3×8×2)^3
104,480 = number of non-isomorphic set-systems of weight 14.
104,723 = the 9,999th prime number
104,729 = the 10,000th prime number
104,869 = the smallest prime number containing every non-prime digit
104,976 = 18^4, 3-smooth number
105,071 = number of triangle-free graphs on 11 vertices
105,664 = harmonic divisor number
109,376 = 1-automorphic number
110,880 = highly composite number
111,111 = repunit
111,777 = smallest natural number requiring 17 syllables in American E
|
https://en.wikipedia.org/wiki/GNU%20coding%20standards
|
The GNU coding standards are a set of rules and guidelines for writing programs that work consistently within the GNU system. The GNU Coding Standards were written by Richard Stallman and other GNU Project volunteers. The standards document is part of the GNU Project and is available from the GNU website. Though it focuses on writing free software for GNU in C, much of it can be applied more generally. In particular, the GNU Project encourages its contributors to always try to follow the standards—whether or not their programs are implemented in C.
Code formatting
The GNU Coding Standards specify exactly how to format most C programming language constructs. Here is a characteristic example:
int
main (int argc, char *argv[])
{
  struct gizmo foo;
  fetch_gizmo (&foo, argv[1]);
 check:
  if (foo.type == MOOMIN)
    puts ("It's a moomin.");
  else if (foo.bar < GIZMO_SNUFKIN_THRESHOLD / 2
           || (strcmp (foo.class_name, "snufkin") == 0)
              && foo.bar < GIZMO_SNUFKIN_THRESHOLD)
    puts ("It's a snufkin.");
  else
    {
      char *barney; /* Pointer to the first character after
                       the last slash in the file name. */
      int wilma;    /* Approximate size of the universe. */
      int fred;     /* Max value of the `bar' field. */
      do
        {
          frobnicate (&foo, GIZMO_SNUFKIN_THRESHOLD,
                      &barney, &wilma, &fred);
          twiddle (&foo, barney, wilma + fred);
        }
      while (foo.bar >= GIZMO_SNUFKIN_THRESHOLD);
      store_size (wilma);
      goto check;
    }
  return 0;
}
The consistent treatment of blocks as statements (for the purpose of indentation) is a very distinctive feature of the GNU C code formatting style; as is the mandatory space before parentheses. All code formatted in the GNU style has the property that each closing brace, bracket or parenthesis appears to the right of its corresponding opening delimiter, or in the same column.
As a general principle
|
https://en.wikipedia.org/wiki/Wireless%20USB
|
Wireless USB (Universal Serial Bus) is a short-range, high-bandwidth wireless radio communication protocol created by the Wireless USB Promoter Group which intended to increase the availability of general USB-based technologies. It is unrelated to Wi-Fi, and different from the Cypress WirelessUSB offerings. It was maintained by the WiMedia Alliance which ceased operations in 2009. Wireless USB is sometimes abbreviated as WUSB, although the USB Implementers Forum discouraged this practice and instead prefers to call the technology Certified Wireless USB to distinguish it from the competing UWB standard.
Wireless USB was based on the WiMedia Alliance's Ultra-WideBand (UWB) common radio platform, which is capable of sending 480 Mbit/s at distances up to 3 meters and 110 Mbit/s at up to 10 meters. It was designed to operate in the 3.1 to 10.6 GHz frequency range, although local regulatory policies may restrict the legal operating range in some countries.
The standard is now obsolete, and no new hardware has been produced for many years.
Support for the standard was deprecated in Linux 5.4 and removed in Linux 5.7.
Overview
The rationale for this specification was the overwhelming success of USB as a base for peripherals everywhere: cited reasons include extreme ease of use and low cost, which allow the existence of a ubiquitous bidirectional, fast port architecture. The definition of Ultra-WideBand (UWB) matches the capabilities and transfer rates of USB very closely (from 1.5 and 12 Mbit/s up to 480 Mbit/s for USB 2.0) and makes for a natural wireless extension of USB in the short range (3 meters, up to 10 at a reduced rate of 110 Mbit/s). Still, there was no physical bus to power the peripherals any more, and the absence of wires means that some properties that are usually taken for granted in USB systems need to be achieved by other means.
The goal of the specification was to preserve the functional model of USB, based on intelligent hosts and behaviorally simple devices, while
|
https://en.wikipedia.org/wiki/Bertrand%20competition
|
Bertrand competition is a model of competition used in economics, named after Joseph Louis François Bertrand (1822–1900). It describes interactions among firms (sellers) that set prices and their customers (buyers) that choose quantities at the prices set. The model was formulated in 1883 by Bertrand in a review of Antoine Augustin Cournot's book Recherches sur les Principes Mathématiques de la Théorie des Richesses (1838) in which Cournot had put forward the Cournot model. Cournot's model argued that each firm should maximise its profit by selecting a quantity level and then adjusting price level to sell that quantity. The outcome of the model equilibrium involved firms pricing above marginal cost; hence, above the competitive price. In his review, Bertrand argued that each firm should instead maximise its profits by selecting a price level that undercuts its competitors' prices, when their prices exceed marginal cost. The model was not formalized by Bertrand; however, the idea was developed into a mathematical model by Francis Ysidro Edgeworth in 1889.
Underlying assumptions of Bertrand competition
Considering the simple framework, the underlying assumptions that the Bertrand model makes is as follows:
there are firms () competing in the market that produce homogeneous goods; that is, identical products;
the market demand function , where Q is the summation of quantity produced by firms , is continuous and downward sloping with ;
the marginal cost is symmetric, ;
it is a static game; firms simultaneously set price, without knowing the other firm's decision; and
firms don't have a capacity constraint; that is, each firm has the capability to produce enough goods to meet market demand.
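Under these assumptions, the classic result that both firms end up pricing at marginal cost can be illustrated with a toy best-response simulation (an illustrative sketch; the marginal cost, starting prices and price step are invented, and the demand side is left implicit):
# Toy Bertrand duopoly: identical firms, marginal cost c, price grid of step 0.01.
# Each firm best-responds by slightly undercutting its rival while that remains
# profitable; the process converges to p1 = p2 = c (the Bertrand equilibrium).
c, step = 10.0, 0.01
p1, p2 = 20.0, 18.0          # arbitrary starting prices above marginal cost
for _ in range(10_000):
    new_p1 = max(c, p2 - step) if p2 > c else c
    new_p2 = max(c, new_p1 - step) if new_p1 > c else c
    if (new_p1, new_p2) == (p1, p2):
        break
    p1, p2 = new_p1, new_p2
print(f"prices converge to p1 = {p1:.2f}, p2 = {p2:.2f} (marginal cost = {c:.2f})")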
Furthermore, it is intuitively deducible, when considering the law of demand of firms' competition in the market:
the firm that sets the lowest price will acquire the whole market, since the product is homogeneous and there is no switching cost for customers; and
if the pri
|
https://en.wikipedia.org/wiki/Centrality
|
In graph theory and network analysis, indicators of centrality assign numbers or rankings to nodes within a graph corresponding to their network position. Applications include identifying the most influential person(s) in a social network, key infrastructure nodes in the Internet or urban networks, super-spreaders of disease, and brain networks. Centrality concepts were first developed in social network analysis, and many of the terms used to measure centrality reflect their sociological origin.
Definition and characterization of centrality indices
Centrality indices are answers to the question "What characterizes an important vertex?" The answer is given in terms of a real-valued function on the vertices of a graph, where the values produced are expected to provide a ranking which identifies the most important nodes.
The word "importance" has a wide number of meanings, leading to many different definitions of centrality. Two categorization schemes have been proposed. "Importance" can be conceived in relation to a type of flow or transfer across the network. This allows centralities to be classified by the type of flow they consider important. "Importance" can alternatively be conceived as involvement in the cohesiveness of the network. This allows centralities to be classified based on how they measure cohesiveness. Both of these approaches divide centralities in distinct categories. A further conclusion is that a centrality which is appropriate for one category will often "get it wrong" when applied to a different category.
Many, though not all, centrality measures effectively count the number of paths (also called walks) of some type going through a given vertex; the measures differ in how the relevant walks are defined and counted. Restricting consideration to this group allows for taxonomy which places many centralities on a spectrum from those concerned with walks of length one (degree centrality) to infinite walks (eigenvector centrality). Other centralit
|
https://en.wikipedia.org/wiki/Examples%20of%20vector%20spaces
|
This page lists some examples of vector spaces. See vector space for the definitions of terms used on this page. See also: dimension, basis.
Notation. Let F denote an arbitrary field such as the real numbers R or the complex numbers C.
Trivial or zero vector space
The simplest example of a vector space is the trivial one: {0}, which contains only the zero vector (see the third axiom in the Vector space article). Both vector addition and scalar multiplication are trivial. A basis for this vector space is the empty set, so that {0} is the 0-dimensional vector space over F. Every vector space over F contains a subspace isomorphic to this one.
The zero vector space is conceptually different from the null space of a linear operator L, which is the kernel of L. (Incidentally, the null space of L is a zero space if and only if L is injective.)
Field
The next simplest example is the field F itself. Vector addition is just field addition, and scalar multiplication is just field multiplication. This property can be used to prove that a field is a vector space. Any non-zero element of F serves as a basis so F is a 1-dimensional vector space over itself.
The field is a rather special vector space; in fact it is the simplest example of a commutative algebra over F. Also, F has just two subspaces: {0} and F itself.
Coordinate space
A basic example of a vector space is the following. For any positive integer n, the set of all n-tuples of elements of F forms an n-dimensional vector space over F sometimes called coordinate space and denoted Fn. An element of Fn is written
where each xi is an element of F. The operations on Fn are defined by
Commonly, F is the field of real numbers, in which case we obtain real coordinate space Rn. The field of complex numbers gives complex coordinate space Cn. The a + bi form of a complex number shows that C itself is a two-dimensional real vector space with coordinates (a,b). Similarly, the quaternions and the octonions are respectively
|
https://en.wikipedia.org/wiki/Zipping%20%28computer%20science%29
|
In computer science, zipping is a function which maps a tuple of sequences into a sequence of tuples. This name zip derives from the action of a zipper in that it interleaves two formerly disjoint sequences. The inverse function is unzip.
Example
Given the three words cat, fish and be, where |cat| is 3, |fish| is 4 and |be| is 2, let \(\ell\) denote the length of the longest word, which is fish; \(\ell = 4\). The zip of cat, fish, be is then \(\ell = 4\) tuples of elements:
(c,f,b)(a,i,e)(t,s,#)(#,h,#)
where # is a symbol not in the original alphabet. In Haskell this truncates to the shortest sequence, of length min(|cat|, |fish|, |be|) = 2:
zip3 "cat" "fish" "be"
-- [('c','f','b'),('a','i','e')]
Definition
Let Σ be an alphabet, # a symbol not in Σ.
Let x1x2... x|x|, y1y2... y|y|, z1z2... z|z|, ... be n words (i.e. finite sequences) of elements of Σ. Let denote the length of the longest word, i.e. the maximum of |x|, |y|, |z|, ... .
The zip of these words is a finite sequence of n-tuples of elements of , i.e. an element of :
,
where for any index , the wi is #.
The zip of x, y, z, ... is denoted zip(x, y, z, ...) or x ⋆ y ⋆ z ⋆ ...
The inverse to zip is sometimes denoted unzip.
A variation of the zip operation is defined by:
where is the minimum length of the input words. It avoids the use of an adjoined element , but destroys information about elements of the input sequences beyond .
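Both conventions can be illustrated in Python (used here purely as an additional illustration; it is not mentioned in the text above): itertools.zip_longest pads with an adjoined fill element, matching the first definition, while the built-in zip truncates to the shortest input, matching the variation.
from itertools import zip_longest
words = ("cat", "fish", "be")
print(list(zip_longest(*words, fillvalue="#")))
# [('c', 'f', 'b'), ('a', 'i', 'e'), ('t', 's', '#'), ('#', 'h', '#')]
print(list(zip(*words)))
# [('c', 'f', 'b'), ('a', 'i', 'e')]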
In programming languages
Zip functions are often available in programming languages, often referred to as zip. In Lisp dialects one can simply map the desired function over the desired lists; map is variadic in Lisp so it can take an arbitrary number of lists as arguments. An example from Clojure:
;; `nums' contains an infinite list of numbers (0 1 2 3 ...)
(def nums (range))
(def tens [10 20 30])
(def firstname "Alice")
;; To zip (0 1 2 3 ...) and [10 20 30] into a vector, invoke `map vector' on them; same with list
(map vector nums tens) ; ⇒ ([0 10] [1 20] [2 30])
(map list nums tens) ; ⇒ ((0 10) (1 20) (2 30))
(map str nums tens)
|
https://en.wikipedia.org/wiki/Wool%20combing%20machine
|
The wool combing machine was invented by Edmund Cartwright, the inventor of the power loom, in Doncaster. The machine was used to arrange and lay parallel by length the fibers of wool, prior to further treatment.
Cartwright's invention, nicknamed "Big Ben," was originally patented in April 1790, with subsequent patents following in December 1790 and May 1792 as the machine's design was refined by Cartwright. This machine is the first example of mechanization of the wool combing stage of the textile manufacturing process, and a significant achievement for the textile industry. Cartwright's machine was described as doing the work of 20 hand-combers.
The wool combing machine was improved and refined by many later inventors, including Josué Heilmann, Samuel Cunliffe Lister, Isaac Holden, and James Noble.
References
English inventions
History of Nottinghamshire
Textile machinery
Weaving equipment
Wool industry
|
https://en.wikipedia.org/wiki/Snowflake%20schema
|
In computing, a snowflake schema is a logical arrangement of tables in a multidimensional database such that the entity relationship diagram resembles a snowflake shape. The snowflake schema is represented by centralized fact tables which are connected to multiple dimensions. "Snowflaking" is a method of normalizing the dimension tables in a star schema. When it is completely normalized along all the dimension tables, the resultant structure resembles a snowflake with the fact table in the middle. The principle behind snowflaking is normalization of the dimension tables by removing low cardinality attributes and forming separate tables.
The snowflake schema is similar to the star schema. However, in the snowflake schema, dimensions are normalized into multiple related tables, whereas the star schema's dimensions are denormalized with each dimension represented by a single table. A complex snowflake shape emerges when the dimensions of a snowflake schema are elaborate, having multiple levels of relationships, and the child tables have multiple parent tables ("forks in the road").
Common uses
Star and snowflake schemas are most commonly found in dimensional data warehouses and data marts where speed of data retrieval is more important than the efficiency of data manipulations. As such, the tables in these schemas are not normalized much, and are frequently designed at a level of normalization short of third normal form.
Data normalization and storage
Normalization splits up data to avoid redundancy (duplication) by moving commonly repeating groups of data into new tables. Normalization therefore tends to increase the number of tables that need to be joined in order to perform a given query, but reduces the space required to hold the data and the number of places where it needs to be updated if the data changes.
From a space storage point of view, dimensional tables are typically small compared to fact tables. This often negates the potential storage-space benef
|
https://en.wikipedia.org/wiki/Superadditivity
|
In mathematics, a function \(f\) is superadditive if
\(f(x + y) \geq f(x) + f(y)\)
for all \(x\) and \(y\) in the domain of \(f.\)
Similarly, a sequence \(a_1, a_2, \ldots\) is called superadditive if it satisfies the inequality
\(a_{n+m} \geq a_n + a_m\)
for all \(m\) and \(n.\)
The term "superadditive" is also applied to functions from a boolean algebra to the real numbers where such as lower probabilities.
Examples of superadditive functions
The map \(f(x) = x^2\) is a superadditive function for nonnegative real numbers because the square of \(x + y\) is always greater than or equal to the square of \(x\) plus the square of \(y\), for nonnegative real numbers \(x\) and \(y\): \((x+y)^2 = x^2 + y^2 + 2xy \geq x^2 + y^2.\)
The determinant is superadditive for nonnegative Hermitian matrix, that is, if are nonnegative Hermitian then This follows from the Minkowski determinant theorem, which more generally states that is superadditive (equivalently, concave) for nonnegative Hermitian matrices of size : If are nonnegative Hermitian then
Horst Alzer proved that Hadamard's gamma function is superadditive for all real numbers with
Mutual information
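The determinant example above can be spot-checked numerically (an illustrative sketch; the random positive semidefinite matrices are generated only for the example):
# det(A + B) >= det(A) + det(B) for positive semidefinite (nonnegative Hermitian) A, B.
import numpy as np
rng = np.random.default_rng(0)
n = 4
for trial in range(5):
    X = rng.normal(size=(n, n))
    Y = rng.normal(size=(n, n))
    A, B = X @ X.T, Y @ Y.T          # positive semidefinite by construction
    lhs = np.linalg.det(A + B)
    rhs = np.linalg.det(A) + np.linalg.det(B)
    print(f"trial {trial}: det(A+B) = {lhs:10.2f} >= det(A)+det(B) = {rhs:10.2f}  {lhs >= rhs}")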
Properties
If is a superadditive function whose domain contains then To see this, take the inequality at the top: Hence
The negative of a superadditive function is subadditive.
Fekete's lemma
The major reason for the use of superadditive sequences is the following lemma due to Michael Fekete.
Lemma: (Fekete) For every superadditive sequence \(a_1, a_2, \ldots\), the limit \(\lim_{n\to\infty} a_n/n\) is equal to the supremum \(\sup_n a_n/n\). (The limit may be positive infinity, as is the case with the sequence \(a_n = \log n!\) for example.)
The analogue of Fekete's lemma holds for subadditive functions as well.
There are extensions of Fekete's lemma that do not require the definition of superadditivity above to hold for all and
There are also results that allow one to deduce the rate of convergence to the limit whose existence is stated in Fekete's lemma if some kind of both superadditivity and subadditivity is present. A good exposition of this topic may be found in Steele (1997).
See also
References
Notes
Mathematical analysis
Sequences and series
Types
|
https://en.wikipedia.org/wiki/Developmental%20Studies%20Hybridoma%20Bank
|
The Developmental Studies Hybridoma Bank (DSHB) is a National Resource established by the National Institutes of Health (NIH) in 1986 to bank and distribute at cost hybridomas and the monoclonal antibodies (mAbs) they produce to the basic science community worldwide. It is housed in the Department of Biology at the University of Iowa.
Mission
The mission of the DSHB is four-fold:
Keep product prices low to facilitate research (currently 40.00 USD per ml of supernatant).
Serve as a repository to relieve scientists of the time and expense of distributing hybridomas and the mAbs they produce.
Assure the scientific community that mAbs with limited demand remain available.
Maintain the highest product quality, provide prompt customer service and technical assistance.
Description
The DSHB is directed by David R. Soll at the University of Iowa. There are currently over 5000 hybridomas in the DSHB collection. The DSHB has obtained hybridomas from a variety of individuals and institutions, the latter including the Muscular Dystrophy Association, the National Cancer Institute, the NIH Common Fund, and the European Molecular Biology Laboratory (EMBL). The DSHB welcomes new contributions. First-time customers must agree to the DSHB terms of use, which state that products will be used for research purposes only and that they cannot be commercialized or distributed to a third party. Researchers also agree to acknowledge both the DSHB and the contributing investigator and institution in publications that benefit from the use of DSHB products, and to provide the DSHB with citations of all such publications. Individuals or institutions can deposit hybridomas for distribution at no cost. Contributing to the DSHB does not preclude the depositor from licensing cell lines for commercial purposes. The DSHB does not own any contributed intellectual property. The intellectual property remains that of the scientist and/or institution that banks the hybridomas. The DSHB covers the operating cost
|
https://en.wikipedia.org/wiki/Plateau%20%28mathematics%29
|
A plateau of a function is a part of its domain where the function has constant value.
More formally, let U, V be topological spaces. A plateau for a function f: U → V is a path-connected set of points P of U such that for some y we have
f (p) = y
for all p in P.
Examples
Plateaus can be observed in mathematical models as well as natural systems. In nature, plateaus can be observed in physical, chemical and biological systems. An example of an observed plateau in the natural world is in the tabulation of biodiversity of life through time.
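As a concrete illustration of the definition (a sketch with a made-up function), the clamped ramp below is constant on two plateaus: (−∞, 0], where it equals 0, and [1, ∞), where it equals 1.

```python
def clamped_ramp(x):
    # Constant (value 0) for x <= 0 and constant (value 1) for x >= 1:
    # both regions are path-connected sets on which f takes a single value,
    # i.e. plateaus. The function is strictly increasing in between,
    # so there is no plateau on (0, 1).
    return min(1.0, max(0.0, x))

assert {clamped_ramp(x) for x in (-5.0, -1.0, 0.0)} == {0.0}   # plateau with value 0
assert {clamped_ramp(x) for x in (1.0, 2.0, 10.0)} == {1.0}    # plateau with value 1
```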
See also
Level set
Contour line
Minimal surface
References
Topology
|
https://en.wikipedia.org/wiki/Product%20requirements%20document
|
A product requirements document (PRD) is a document containing all the requirements for a certain product.
It is written to allow people to understand what a product should do. A PRD should, however, generally avoid defining how the product will do it, so that interface designers and engineers can later use their expertise to provide the optimal solution to the requirements.
PRDs are most frequently written for software products, but they can be used for any type of product and also for services.
Typically, a PRD is created from a user's point-of-view by a user/client or a company's marketing department (in the latter case it may also be called a Marketing Requirements Document (MRD)). The requirements are then analyzed by a (potential) maker/supplier from a more technical point of view, broken down and detailed in a Functional Specification (sometimes also called Technical Requirements Document).
Components
Typical components of a product requirements document (PRD) are:
Title & author information
Purpose and scope, from both a technical and business perspective
Stakeholder identification
Market assessment and target demographics
Product overview and use cases
Requirements, including
functional requirements (e.g. what a product should do)
usability requirements
technical requirements (e.g. security, network, platform, integration, client)
environmental requirements
support requirements
interaction requirements (e.g. how the product should work with other systems)
Assumptions
Constraints
Dependencies
High level workflow plans, timelines and milestones (more detail is defined through a project plan)
Evaluation plan and performance metrics
Not all PRDs have all of these components. In particular, PRDs for other types of products (manufactured goods, etc.) will eliminate the software-specific elements from the list above, and may add in additional elements that pertain to their domain, e.g. manufacturing requirements.
See
|
https://en.wikipedia.org/wiki/List%20of%20clinically%20important%20bacteria
|
This is a list of bacteria that are significant in medicine. It is not intended as an exhaustive list of all bacterial species; for that, see List of bacteria. For viruses, see list of viruses.
A
Acetobacter aurantius
Acinetobacter baumannii
Actinomyces israelii
Agrobacterium radiobacter
Agrobacterium tumefaciens
Anaplasma
Anaplasma phagocytophilum
Azorhizobium caulinodans
Azotobacter vinelandii
viridans streptococci
B
Bacillus
Bacillus anthracis
Bacillus brevis
Bacillus cereus
Bacillus fusiformis
Bacillus licheniformis
Bacillus megaterium
Bacillus mycoides
Bacillus stearothermophilus
Bacillus subtilis
Bacillus thuringiensis
Bacteroides
Bacteroides fragilis
Bacteroides gingivalis
Bacteroides melaninogenicus (now known as Prevotella melaninogenica)
Bartonella
Bartonella henselae
Bartonella quintana
Bordetella
Bordetella bronchiseptica
Bordetella pertussis
Borrelia burgdorferi
Brucella
Brucella abortus
Brucella melitensis
Brucella suis
Burkholderia
Burkholderia mallei
Burkholderia pseudomallei
Burkholderia cepacia
C
Calymmatobacterium granulomatis
Campylobacter
Campylobacter coli
Campylobacter fetus
Campylobacter jejuni
Campylobacter pylori
Chlamydia
Chlamydia trachomatis
Chlamydophila
Chlamydophila pneumoniae (previously called Chlamydia pneumoniae)
Chlamydophila psittaci (previously called Chlamydia psittaci)
Clostridium
Clostridium botulinum
Clostridium difficile
Clostridium perfringens (previously called Clostridium welchii)
Clostridium tetani
Corynebacterium
Corynebacterium diphtheriae
Corynebacterium fusiforme
Coxiella burnetii
E
Ehrlichia chaffeensis
Ehrlichia ewingii
Eikenella corrodens
Enterobacter cloacae
Enterococcus
Enterococcus avium
Enterococcus durans
Enterococcus faecalis
Enterococcus faecium
Enterococcus gallinarum
Enterococcus maloratus
Escherichia coli
F
Fusobacterium necrophorum
Fusobacterium nucleatum
G
Gardnerella vaginalis
H
Haemophilus
Haemophilus ducreyi
Haemophilus influenzae
Haemophilus parainfluenzae
Haemophilus pertuss
|
https://en.wikipedia.org/wiki/Film%20plane
|
A film plane is the surface of an image recording device such as a camera, upon which the lens creates the focused image. In cameras from different manufacturers, the film plane varies in distance from the lens. Thus each lens used has to be chosen carefully to ensure that the image is focused on the exact place where the individual frame of film or digital sensor is positioned during exposure. It is sometimes marked on a camera body with the 'Φ' symbol, where the vertical bar represents the exact location.
Movie cameras often also have small focus hooks where the focus puller can attach one side of a tape measure to quickly gauge the distance to objects that he intends to bring into focus. The measurement is taken from the film plane to the subject.
Due to Petzval field curvature, the film plane upon which a lens focuses may not be a literal plane. Cameras may bend the film stock or even plate stock slightly to compensate, improving the area of critical focus and sharpness. Nevertheless, the general concept of a focal plane is understood to refer to this position in the camera sensor relative to the lens.
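As an illustrative calculation (a sketch using the ideal thin-lens model, which the article does not itself invoke), one can relate the focus puller's tape measurement, taken from the film plane to the subject, to the image distance the lens must produce at that plane. The function name and example numbers below are hypothetical.

```python
import math

def image_distance_from_film_plane_measure(focal_length_mm, subject_to_film_plane_mm):
    """Solve 1/f = 1/d_o + 1/d_i with d_o + d_i = D (thin-lens idealization).

    D is the tape-measured distance from the film plane to the subject.
    Returns the image distance d_i (lens to film plane) for the ordinary
    photographic solution.
    """
    f, D = focal_length_mm, subject_to_film_plane_mm
    disc = D * D - 4.0 * f * D
    if disc < 0:
        raise ValueError("subject too close to focus with this focal length")
    # The smaller root is the usual case: the image forms just behind the
    # focal point while the subject sits far in front of the lens.
    return (D - math.sqrt(disc)) / 2.0

# Hypothetical numbers: 50 mm lens, subject 3 m from the film plane.
print(round(image_distance_from_film_plane_measure(50.0, 3000.0), 2), "mm")
```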
See also
Cardinal point (optics)
Flange focal distance
Photography equipment
Planes (geometry)
|
https://en.wikipedia.org/wiki/Allee%20effect
|
The Allee effect is a phenomenon in biology characterized by a correlation between population size or density and the mean individual fitness (often measured as per capita population growth rate) of a population or species.
History and background
Although the concept had not yet been given a name at the time, the Allee effect was first described in the 1930s by its namesake, Warder Clyde Allee. Through experimental studies, Allee was able to demonstrate that goldfish have a greater survival rate when there are more individuals within the tank. This led him to conclude that aggregation can improve the survival rate of individuals, and that cooperation may be crucial in the overall evolution of social structure. The term "Allee principle" was introduced in the 1950s, a time when the field of ecology was heavily focused on the role of competition among and within species. The classical view of population dynamics stated that due to competition for resources, a population will experience a reduced overall growth rate at higher density and an increased growth rate at lower density. In other words, individuals in a population would be better off when there are fewer individuals around, due to a limited amount of resources (see logistic growth). However, the concept of the Allee effect introduced the idea that the reverse holds true when the population density is low. Individuals within a species often require the assistance of another individual for more than simple reproductive reasons in order to persist. The most obvious example of this is observed in animals that hunt for prey or defend against predators as a group.
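The contrast can be sketched with standard textbook models (an illustration, not taken from the article; parameter values are made up): under logistic growth the per capita growth rate is highest at low density, while under a strong Allee effect it becomes negative below a critical threshold A.

```python
def logistic_per_capita(N, r=1.0, K=100.0):
    # Classical view: per capita growth rate r*(1 - N/K) is largest when N is small.
    return r * (1.0 - N / K)

def strong_allee_per_capita(N, r=1.0, K=100.0, A=20.0):
    # Strong Allee effect: r*(1 - N/K)*(N/A - 1) is negative below the
    # threshold A, so sparse populations decline instead of growing fastest.
    return r * (1.0 - N / K) * (N / A - 1.0)

for N in (5, 20, 50, 100):
    print(N, round(logistic_per_capita(N), 3), round(strong_allee_per_capita(N), 3))
# At N = 5 the logistic rate is near its maximum, while the Allee rate is negative;
# at N = 20 (the threshold A) the Allee rate crosses zero.
```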
Definition
The generally accepted definition of Allee effect is positive density dependence, or the positive correlation between population density and individual fitness. It is sometimes referred to as "undercrowding" and it is analogous (or even considered synonymous by some) to "depensation" in the field of fishery sciences. Listed below are a few significant subcateg
|
https://en.wikipedia.org/wiki/Triquetra
|
The triquetra ( ; from the Latin adjective triquetrus "three-cornered") is a triangular figure composed of three interlaced arcs, or (equivalently) three overlapping vesicae piscis lens shapes. It is used as an ornamental design in architecture, and in medieval manuscript illumination (particularly in the Insular tradition). Its depiction as interlaced is common in Insular ornaments from about the 7th century. In this interpretation, the triquetra represents the topologically simplest possible knot.
History
Iron Age
In archaeology, the term triquetra is used for any figure consisting of three arcs, including pinwheel designs of the triskeles type. Such symbols become frequent from about the 4th century BC on ornamented ceramics of Anatolia and Persia, and the figure appears on early Lycian coins.
The triquetra is found on runestones in Northern Europe, such as the Funbo Runestones, and on early Germanic coins. It bears a resemblance to the valknut, a design of three interlacing triangles, found in the same context.
Insular art
The triquetra is often found in insular art, most notably metal work and in illuminated manuscripts like the Book of Kells. It is a "minor though recurring theme" in the secondary phase of Anglo-Saxon sceatta production (c. 710–760). It is found in similar artwork on early Christian
High Crosses and slabs. An example from early medieval stonework is the Anglo-Saxon frithstool at Hexham Abbey.
The symbol has been interpreted as representing the Holy Trinity, especially since the Celtic revival of the 19th century. The original intention by the early medieval artists is unknown and experts warn against over-interpretation. It is, however, regularly used as a Trinitarian symbol in contemporary Catholic iconography.
Buddhist tradition
In Japan, the triquetra has been known as a symbol called Musubi Mitsugashiwa. Being one of the forms of the Iakšaku dynasty signs, it reached Japan with the dynasty's Kāśyapīya spreading technology and Buddhism via the
|
https://en.wikipedia.org/wiki/Advanced%20glycation%20end-product
|
Advanced glycation end products (AGEs) are proteins or lipids that become glycated as a result of exposure to sugars. They are a bio-marker implicated in aging and the development, or worsening, of many degenerative diseases, such as diabetes, atherosclerosis, chronic kidney disease, and Alzheimer's disease.
Dietary sources
Animal-derived foods that are high in fat and protein are generally AGE-rich and are prone to further AGE formation during cooking. However, only low molecular weight AGEs are absorbed through diet, and vegetarians have been found to have higher concentrations of overall AGEs compared to non-vegetarians. Therefore, it is unclear whether dietary AGEs contribute to disease and aging, or whether only endogenous AGEs (those produced in the body) matter. This does not rule out a negative influence of diet on AGE levels, but it suggests that dietary AGEs may deserve less attention than other aspects of diet that lead to elevated blood sugar levels and the formation of AGEs.
Effects
AGEs affect nearly every type of cell and molecule in the body and are thought to be one factor in aging and some age-related chronic diseases. They are also believed to play a causative role in the vascular complications of diabetes mellitus.
AGEs arise under certain pathologic conditions, such as oxidative stress due to hyperglycemia in patients with diabetes. AGEs play a role as proinflammatory mediators in gestational diabetes as well.
In the context of cardiovascular disease, AGEs can induce crosslinking of collagen, which can cause vascular stiffening and entrapment of low-density lipoprotein particles (LDL) in the artery walls. AGEs can also cause glycation of LDL which can promote its oxidation. Oxidized LDL is one of the major factors in the development of atherosclerosis. Finally, AGEs can bind to RAGE (receptor for advanced glycation end products) and cause oxidative stress as well as activation of inflammatory pathways in vascular endothelial cells.
I
|
https://en.wikipedia.org/wiki/Secure%20communication
|
Secure communication occurs when two entities communicate and do not want a third party to listen in. For this to be the case, the entities need to communicate in a way that is not susceptible to eavesdropping or interception. Secure communication includes means by which people can share information with varying degrees of certainty that third parties cannot intercept what is said. Other than spoken face-to-face communication with no possible eavesdropper, it is probable that no communication is guaranteed to be secure in this sense, although practical obstacles such as legislation, resources, technical issues (interception and encryption), and the sheer volume of communication serve to limit surveillance.
With many communications taking place over long distance and mediated by technology, and increasing awareness of the importance of interception issues, technology and its compromise are at the heart of this debate. For this reason, this article focuses on communications mediated or intercepted by technology.
See also Trusted Computing, an approach under current development that achieves security in general at the potential cost of compelling obligatory trust in corporate and government bodies.
History
In 1898, Nikola Tesla demonstrated a radio controlled boat in Madison Square Garden that allowed secure communication between transmitter and receiver.
One of the most famous systems of secure communication was the Green Hornet. During WWII, Winston Churchill had to discuss vital matters with Franklin D. Roosevelt. In the beginning, the calls were made using a voice scrambler, as this was thought to be secure. When this was found to be untrue, engineers started to work on a whole new system, which resulted in the Green Hornet or SIGSALY. With the Green Hornet, any unauthorized party listening in would just hear white noise, but the conversation would remain clear to authorized parties. As secrecy was paramount, the location of the Green Hornet was only known by t
|