https://en.wikipedia.org/wiki/Computer%20hardware
Computer hardware includes the physical parts of a computer, such as the case, central processing unit (CPU), random access memory (RAM), monitor, mouse, keyboard, computer data storage, graphics card, sound card, speakers and motherboard. By contrast, software is the set of instructions that can be stored and run by hardware. Hardware is so-termed because it is "hard" or rigid with respect to changes, whereas software is "soft" because it is easy to change. Hardware is typically directed by the software to execute any command or instruction. A combination of hardware and software forms a usable computing system, although other systems exist with only hardware. Von Neumann architecture The template for all modern computers is the Von Neumann architecture, detailed in a 1945 paper by Hungarian mathematician John von Neumann. This describes a design architecture for an electronic digital computer with subdivisions of a processing unit consisting of an arithmetic logic unit and processor registers, a control unit containing an instruction register and program counter, a memory to store both data and instructions, external mass storage, and input and output mechanisms. The meaning of the term has evolved to mean a stored-program computer in which an instruction fetch and a data operation cannot occur at the same time because they share a common bus. This is referred to as the Von Neumann bottleneck and often limits the performance of the system. Types of computer systems Personal computer The personal computer is one of the most common types of computer due to its versatility and relatively low price. Desktop personal computers have a monitor, a keyboard, a mouse, and a computer case. The computer case holds the motherboard, fixed or removable disk drives for data storage, the power supply, and may contain other peripheral devices such as modems or network interfaces. Some models of desktop computers integrated the monitor and keyboard into the same case as the
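To make the shared instruction/data path concrete, here is a minimal editor-added Python sketch of a stored-program machine with a hypothetical three-instruction ISA. It only illustrates the idea that instructions and data live in the same memory and share one access path (the Von Neumann bottleneck); it does not describe any particular CPU.

```python
# Minimal sketch of a stored-program (von Neumann) machine: instructions and
# data live in the SAME memory and share one access path, so an instruction
# fetch and a data access cannot overlap. Hypothetical ISA for illustration.
memory = [
    ("LOAD", 10),   # acc <- memory[10]
    ("ADD", 11),    # acc <- acc + memory[11]
    ("STORE", 12),  # memory[12] <- acc
    ("HALT", None),
] + [0] * 6 + [2, 3, 0]            # addresses 10..12 hold the data

pc, acc = 0, 0
while True:
    op, addr = memory[pc]          # instruction fetch uses the shared memory
    pc += 1
    if op == "LOAD":
        acc = memory[addr]         # data access uses the same memory
    elif op == "ADD":
        acc += memory[addr]
    elif op == "STORE":
        memory[addr] = acc
    elif op == "HALT":
        break
print(memory[12])                  # -> 5
```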
https://en.wikipedia.org/wiki/Network%20browser
A network browser is a tool used to browse a computer network. An example of this is My Network Places (or Network Neighborhood in earlier versions of Microsoft Windows). An actual program called Network Browser is offered in Mac OS 9. See also Browser service Computer networking
https://en.wikipedia.org/wiki/List%20of%20named%20matrices
This article lists some important classes of matrices used in mathematics, science and engineering. A matrix (plural matrices, or less commonly matrixes) is a rectangular array of numbers called entries. Matrices have a long history of both study and application, leading to diverse ways of classifying matrices. A first group is matrices satisfying concrete conditions of the entries, including constant matrices. Important examples include the identity matrix given by and the zero matrix of dimension . For example: . Further ways of classifying matrices are according to their eigenvalues, or by imposing conditions on the product of the matrix with other matrices. Finally, many domains, both in mathematics and other sciences including physics and chemistry, have particular matrices that are applied chiefly in these areas. Constant matrices The list below comprises matrices whose elements are constant for any given dimension (size) of matrix. The matrix entries will be denoted aij. The table below uses the Kronecker delta δij for two integers i and j which is 1 if i = j and 0 else. Specific patterns for entries The following lists matrices whose entries are subject to certain conditions. Many of them apply to square matrices only, that is matrices with the same number of columns and rows. The main diagonal of a square matrix is the diagonal joining the upper left corner and the lower right one or equivalently the entries ai,i. The other diagonal is called anti-diagonal (or counter-diagonal). Matrices satisfying some equations A number of matrix-related notions is about properties of products or inverses of the given matrix. The matrix product of a m-by-n matrix A and a n-by-k matrix B is the m-by-k matrix C given by This matrix product is denoted AB. Unlike the product of numbers, matrix products are not commutative, that is to say AB need not be equal to BA. A number of notions are concerned with the failure of this commutativity. An inverse of square matrix
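For reference, the standard definitions mentioned above (identity matrix, zero matrix, Kronecker delta, and matrix product) can be written as:

```latex
(I_n)_{ij} \;=\; \delta_{ij}, \qquad
\delta_{ij} \;=\; \begin{cases} 1, & i = j,\\ 0, & i \ne j, \end{cases} \qquad
(0_{m \times n})_{ij} \;=\; 0;
\qquad
(AB)_{ij} \;=\; \sum_{l=1}^{n} a_{il}\, b_{lj}, \quad 1 \le i \le m,\; 1 \le j \le k.
```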
https://en.wikipedia.org/wiki/In-band%20control
In-band control is a characteristic of network protocols with which data control is regulated. In-band control passes control data on the same connection as main data. Protocols that use in-band control include HTTP and SMTP. This is as opposed to Out-of-band control used by protocols such as FTP. Example Here is an example of an SMTP client-server interaction:
Server: 220 example.com
Client: HELO example.net
Server: 250 Hello example.net, pleased to meet you
Client: MAIL FROM: <jane.doe@example.net>
Server: 250 jane.doe@example.net... Sender ok
Client: RCPT TO: <john.doe@example.com>
Server: 250 john.doe@example.com ... Recipient ok
Client: DATA
Server: 354 Enter mail, end with "." on a line by itself
Client: Do you like ketchup?
Client: How about pickles?
Client: .
Server: 250 Message accepted for delivery
Client: QUIT
Server: 221 example.com closing connection
SMTP is in-band because the control messages, such as "HELO" and "MAIL FROM", are sent in the same stream as the actual message content. See also Out-of-band control Computer networks
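As an illustration of in-band control, the following Python sketch drives a similar SMTP exchange over a single TCP socket, so the control commands (HELO, MAIL FROM, DATA) and the message body share one connection. It assumes a test SMTP server listening on localhost port 25, which is a placeholder, not part of the original text.

```python
import socket

def send_line(sock, line):
    sock.sendall(line.encode("ascii") + b"\r\n")   # control and data share this socket
    print("C:", line)

def read_reply(sock):
    reply = sock.recv(4096).decode("ascii", errors="replace").strip()
    print("S:", reply)
    return reply

# Assumed test server; replace with a host you are allowed to talk to.
with socket.create_connection(("localhost", 25)) as s:
    read_reply(s)                                   # 220 greeting
    send_line(s, "HELO example.net");  read_reply(s)            # control
    send_line(s, "MAIL FROM:<jane.doe@example.net>"); read_reply(s)
    send_line(s, "RCPT TO:<john.doe@example.com>");   read_reply(s)
    send_line(s, "DATA");              read_reply(s)            # control
    send_line(s, "Do you like ketchup?")            # message content, same socket
    send_line(s, ".");                 read_reply(s)            # end-of-data marker
    send_line(s, "QUIT");              read_reply(s)            # control
```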
https://en.wikipedia.org/wiki/IEEE%201451
IEEE 1451 is a set of smart transducer interface standards developed by the Institute of Electrical and Electronics Engineers (IEEE) Instrumentation and Measurement Society's Sensor Technology Technical Committee describing a set of open, common, network-independent communication interfaces for connecting transducers (sensors or actuators) to microprocessors, instrumentation systems, and control/field networks. One of the key elements of these standards is the definition of Transducer electronic data sheets (TEDS) for each transducer. The TEDS is a memory device attached to the transducer, which stores transducer identification, calibration, correction data, and manufacturer-related information. The goal of the IEEE 1451 family of standards is to allow the access of transducer data through a common set of interfaces whether the transducers are connected to systems or networks via a wired or wireless means. Transducer electronic data sheet A transducer electronic data sheet (TEDS) is a standardized method of storing transducer (sensors or actuators) identification, calibration, correction data, and manufacturer-related information. TEDS formats are defined in the IEEE 1451 set of smart transducer interface standards developed by the IEEE Instrumentation and Measurement Society's Sensor Technology Technical Committee that describe a set of open, common, network-independent communication interfaces for connecting transducers to microprocessors, instrumentation systems, and control/field networks. One of the key elements of the IEEE 1451 standards is the definition of TEDS for each transducer. The TEDS can be implemented as a memory device attached to the transducer and containing information needed by a measurement instrument or control system to interface with a transducer. TEDS can, however, be implemented in two ways. First, the TEDS can reside in embedded memory, typically an EEPROM, within the transducer itself which is connected to the measurement instrume
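As a rough illustration of the kind of information a TEDS carries, the following Python sketch models a simplified, hypothetical TEDS-like record. The field names and values are illustrative stand-ins chosen here; they are not the binary TEDS templates defined by IEEE 1451.

```python
from dataclasses import dataclass

# Illustrative only: a simplified, hypothetical TEDS-like record. The real
# IEEE 1451 TEDS templates define compact binary fields; these names are
# stand-ins, not the standard's actual layout.
@dataclass
class TransducerElectronicDataSheet:
    manufacturer_id: int             # manufacturer-related information
    model_number: int
    serial_number: int               # transducer identification
    calibration_date: str
    calibration_coefficients: tuple  # calibration/correction data
    measurement_range: tuple         # e.g. (min, max) in engineering units

teds = TransducerElectronicDataSheet(
    manufacturer_id=42, model_number=1001, serial_number=987654,
    calibration_date="2024-01-15",
    calibration_coefficients=(0.0, 1.0),
    measurement_range=(-40.0, 125.0),
)
print(teds)
```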
https://en.wikipedia.org/wiki/Node%20%28networking%29
In telecommunications networks, a node (Latin nodus, ‘knot’) is either a redistribution point or a communication endpoint. The definition of a node depends on the network and protocol layer referred to. A physical network node is an electronic device that is attached to a network, and is capable of creating, receiving, or transmitting information over a communication channel. A passive distribution point such as a distribution frame or patch panel is consequently not a node. Computer networks In data communication, a physical network node may either be data communication equipment (DCE) such as a modem, hub, bridge or switch; or data terminal equipment (DTE) such as a digital telephone handset, a printer or a host computer. If the network in question is a local area network (LAN) or wide area network (WAN), every LAN or WAN node that participates on the data link layer must have a network address, typically one for each network interface controller it possesses. Examples are computers, a DSL modem with Ethernet interface and wireless access point. Equipment, such as an Ethernet hub or modem with serial interface, that operates only below the data link layer does not require a network address. If the network in question is the Internet or an intranet, many physical network nodes are host computers, also known as Internet nodes, identified by an IP address, and all hosts are physical network nodes. However, some data-link-layer devices such as switches, bridges and wireless access points do not have an IP host address (except sometimes for administrative purposes), and are not considered to be Internet nodes or hosts, but are considered physical network nodes and LAN nodes. Telecommunications In the fixed telephone network, a node may be a public or private telephone exchange, a remote concentrator or a computer providing some intelligent network service. In cellular communication, switching points and databases such as the base station controller, home location registe
https://en.wikipedia.org/wiki/Xputer
The Xputer is a design for a reconfigurable computer, proposed by computer scientist Reiner Hartenstein. Hartenstein uses various terms to describe the innovations in the design, including config-ware, flow-ware, morph-ware, and "anti-machine". The Xputer represents a move away from the traditional von Neumann computer architecture to a coarse-grained "soft arithmetic logic unit (ALU)" architecture. Parallelism is achieved by configurable elements known as reconfigurable datapath arrays (rDPAs), organized as a two-dimensional array of ALUs similar to the KressArray. Architecture The Xputer architecture is data-stream-based, and is the counterpart of the instruction-based von Neumann computer architecture. It was one of the first coarse-grained reconfigurable architectures, and consists of a reconfigurable datapath array (rDPA) organized as a two-dimensional array of ALUs (rDPUs). The bus width between ALUs was 32 bits in the first version of the Xputer. The ALUs (also known as rDPUs) each compute a single mathematical operation, such as addition, subtraction or multiplication, and can also be used purely for routing. ALUs are mesh-connected via three types of connections, and data flow along these connections is managed by an address generation unit:
Nearest neighbour (connections between neighbouring ALUs)
Row/column back-buses
Global bus (a single global bus for interconnection between more distant ALUs)
Programs for the Xputer are written in the C language, and compiled for usage on the Xputer using the CoDeX compiler written by the author. The CoDeX compiler maps suitable portions of the C program onto the Xputer's rDPA fabric. The remainder of the program is executed on the host system, such as a personal computer. rDPA A reconfigurable datapath array (rDPA) is a semiconductor device containing reconfigurable data path units and programmable interconnects, first proposed by Rainer Kress in 1993, at the University of K
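The following Python sketch is an editor-added toy model of a coarse-grained datapath array: a two-dimensional grid of single-operation ALUs, each reading operands from its west and north neighbours. It illustrates the mesh-connected rDPA idea only; it is not Hartenstein's or Kress's actual design, toolchain, or interconnect scheme.

```python
import operator

# Toy model: each grid cell is configured with one operation and reads its
# operands from its west and north neighbours; border cells read from the
# input streams. "route" simply forwards the west operand.
OPS = {"add": operator.add, "mul": operator.mul, "route": lambda a, b: a}

def run_rdpa(config, west_inputs, north_inputs):
    """config: 2D list of op names; returns the grid of cell outputs."""
    rows, cols = len(config), len(config[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            a = out[r][c - 1] if c > 0 else west_inputs[r]
            b = out[r - 1][c] if r > 0 else north_inputs[c]
            out[r][c] = OPS[config[r][c]](a, b)
    return out

# A 2x2 fabric: top-right cell computes (w0 + n0) * n1 = (1 + 3) * 4 = 16.
cfg = [["add", "mul"],
       ["add", "add"]]
print(run_rdpa(cfg, west_inputs=[1, 2], north_inputs=[3, 4]))
```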
https://en.wikipedia.org/wiki/Cahen%27s%20constant
In mathematics, Cahen's constant is defined as the value of an infinite series of unit fractions with alternating signs: Here denotes Sylvester's sequence, which is defined recursively by Combining these fractions in pairs leads to an alternative expansion of Cahen's constant as a series of positive unit fractions formed from the terms in even positions of Sylvester's sequence. This series for Cahen's constant forms its greedy Egyptian expansion: This constant is named after Eugène Cahen (also known for the Cahen–Mellin integral), who was the first to introduce it and prove its irrationality. Continued fraction expansion The majority of naturally occurring mathematical constants have no known simple patterns in their continued fraction expansions. Nevertheless, the complete continued fraction expansion of Cahen's constant is known: it is where the sequence of coefficients is defined by the recurrence relation All the partial quotients of this expansion are squares of integers. Davison and Shallit made use of the continued fraction expansion to prove that is transcendental. Alternatively, one may express the partial quotients in the continued fraction expansion of Cahen's constant through the terms of Sylvester's sequence: To see this, we prove by induction on that . Indeed, we have , and if holds for some , then where we used the recursion for in the first step respectively the recursion for in the final step. As a consequence, holds for every , from which it is easy to conclude that . Best approximation order Cahen's constant has best approximation order . That means, there exist constants such that the inequality has infinitely many solutions , while the inequality has at most finitely many solutions . This implies (but is not equivalent to) the fact that has irrationality measure 3, which was first observed by . To give a proof, denote by the sequence of convergents to Cahen's constant (that means, ). But now it follows from and the recursi
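For reference, the series and recurrence referred to above are standard and can be written as:

```latex
C \;=\; \sum_{i=0}^{\infty} \frac{(-1)^{i}}{s_i - 1}
\;=\; \frac{1}{1} - \frac{1}{2} + \frac{1}{6} - \frac{1}{42} + \frac{1}{1806} - \cdots
\;\approx\; 0.6434105\ldots,
\qquad s_0 = 2,\quad s_{i+1} = s_i^{2} - s_i + 1,
```

and pairing consecutive terms gives the greedy Egyptian expansion

```latex
C \;=\; \sum_{i=0}^{\infty} \frac{1}{s_{2i}} \;=\; \frac{1}{2} + \frac{1}{7} + \frac{1}{1807} + \cdots
```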
https://en.wikipedia.org/wiki/Mathematical%20Cranks
Mathematical Cranks is a book on pseudomathematics and the cranks who create it, written by Underwood Dudley. It was published by the Mathematical Association of America in their MAA Spectrum book series in 1992. Topics Previously, Augustus De Morgan wrote in A Budget of Paradoxes about cranks in multiple subjects, and Dudley wrote a book about angle trisection. However, this is the first book to focus on mathematical crankery as a whole. The book consists of 57 essays, loosely organized by the most common topics in mathematics for cranks to focus their attention on. The "top ten" of these topics, as listed by reviewer Ian Stewart, are, in order: squaring the circle, angle trisection, Fermat's Last Theorem, non-Euclidean geometry and the parallel postulate, the golden ratio, perfect numbers, the four color theorem, advocacy for duodecimal and other non-standard number systems, Cantor's diagonal argument for the uncountability of the real numbers, and doubling the cube. Other common topics for crankery, collected by Dudley, include calculations for the perimeter of an ellipse, roots of quintic equations, Fermat's little theorem, Gödel's incompleteness theorems, Goldbach's conjecture, magic squares, divisibility rules, constructible polygons, twin primes, set theory, statistics, and the Van der Pol oscillator. As David Singmaster writes, many of these topics are the subject of mainstream mathematics "and only become crankery in extreme cases". The book omits or passes lightly over other topics that apply mathematics to crankery in other areas, such as numerology and pyramidology. Its attitude towards the cranks it covers is one of "sympathy and understanding", and in order to keep the focus on their crankery it names them only by initials. The book also attempts to analyze the motivation and psychology behind crankery, and to provide advice to professional mathematicians on how to respond to cranks. Despite his work on the subject, which has "become
https://en.wikipedia.org/wiki/Interdigitation
Interdigitation is the interlinking of biological components that resembles the fingers of two hands being locked together. It can be a naturally occurring or man-made state. Examples Naturally occurring interdigitation includes skull sutures that develop during periods of brain growth, and which remain thin and straight, and later develop complex fractal interdigitations that provide interlocking strength. A layer of the retina where photoreception occurs is called the interdigitation zone. Adhesion or diffusive bonding occurs when sections of polymer chains from one surface interdigitate with those of an adjacent surface. In the dermis, dermal papillae (DP) (singular papilla, diminutive of Latin papula, 'pimple') are small, nipple-like extensions of the dermis into the epidermis, also known as interdigitations. The distal convoluted tubule (DCT), a portion of kidney nephron, can be recognized by several distinct features, including lateral membrane interdigitations with neighboring cells. Some hypotheses contend that crown shyness, the interdigitation of canopy branches, leads to "reciprocal pruning" of adjacent trees. Interdigitation is also found in biological research. Interdigitation fusion is a method of preparing calcium- and phosphate-loaded liposomes. Drugs inserted in the bilayer biomembrane may influence the lateral organization of the lipid membrane, with interdigitation of the membrane to fill volume voids. A similar interdigitation process involves investigating dissipative particle dynamics (DPD) simulations by adding alcohol molecules to the bilayers of double-tail lipids. Pressure-induced interdigitation is used to study hydrostatic pressure of bicellular dispersions containing anionic lipids.
https://en.wikipedia.org/wiki/Phenomics
Phenomics is the systematic study of traits that make up a phenotype. The term was coined by UC Berkeley and LBNL scientist Steven A. Garan. As such, it is a transdisciplinary area of research that involves biology, data sciences, engineering and other fields. Phenomics is concerned with the measurement of the phenome, the set of physical and biochemical traits that can be produced by a given organism over the course of development and in response to genetic mutation and environmental influences. It is also important to remember that an organism's phenotype changes with time. The relationship between phenotype and genotype enables researchers to understand and study pleiotropy. Phenomics concepts are used in functional genomics, pharmaceutical research, metabolic engineering, agricultural research, and increasingly in phylogenetics. Technical challenges involve improving, both qualitatively and quantitatively, the capacity to measure phenomes. Applications Plant sciences In plant sciences, phenomics research occurs in both field and controlled environments. Field phenomics encompasses the measurement of phenotypes that occur in both cultivated and natural conditions, whereas controlled environment phenomics research involves the use of glass houses, growth chambers, and other systems where growth conditions can be manipulated. The University of Arizona's Field Scanner in Maricopa, Arizona is a platform developed to measure field phenotypes. Controlled environment systems include the Enviratron at Iowa State University, the Plant Cultivation Hall under construction at IPK, and platforms at the Donald Danforth Plant Science Center, the University of Nebraska-Lincoln, and elsewhere. Standards, methods, tools, and instrumentation A Minimal Information About a Plant Phenotyping Experiment (MIAPPE) standard is available and in use among many researchers collecting and organizing plant phenomics data. A diverse set of computer vision methods exist
https://en.wikipedia.org/wiki/Upper%20and%20lower%20bounds
In mathematics, particularly in order theory, an upper bound or majorant of a subset of some preordered set is an element of that is greater than or equal to every element of . Dually, a lower bound or minorant of is defined to be an element of that is less than or equal to every element of . A set with an upper (respectively, lower) bound is said to be bounded from above or majorized (respectively bounded from below or minorized) by that bound. The terms bounded above (bounded below) are also used in the mathematical literature for sets that have upper (respectively lower) bounds. Examples For example, is a lower bound for the set (as a subset of the integers or of the real numbers, etc.), and so is . On the other hand, is not a lower bound for since it is not smaller than every element in . The set has as both an upper bound and a lower bound; all other numbers are either an upper bound or a lower bound for that . Every subset of the natural numbers has a lower bound since the natural numbers have a least element (0 or 1, depending on convention). An infinite subset of the natural numbers cannot be bounded from above. An infinite subset of the integers may be bounded from below or bounded from above, but not both. An infinite subset of the rational numbers may or may not be bounded from below, and may or may not be bounded from above. Every finite subset of a non-empty totally ordered set has both upper and lower bounds. Bounds of functions The definitions can be generalized to functions and even to sets of functions. Given a function with domain and a preordered set as codomain, an element of is an upper bound of if for each in . The upper bound is called sharp if equality holds for at least one value of . It indicates that the constraint is optimal, and thus cannot be further reduced without invalidating the inequality. Similarly, a function defined on domain and having the same codomain is an upper bound of , if for each in
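An editor-supplied worked example restating the definitions in symbols; the specific set and function below are illustrative choices, not taken from the original article.

```latex
% Illustrative set: S = \{2, 4, 6\} \subseteq \mathbb{R}.
% Every y \ge 6 is an upper bound of S and every y \le 2 is a lower bound of S.
% For a function f : D \to (\mathbb{R}, \le):
y \text{ is an upper bound of } f \;\iff\; f(x) \le y \ \text{ for all } x \in D,
% and the bound is sharp if f(x) = y for at least one x \in D
% (e.g. y = 1 is a sharp upper bound of \sin on \mathbb{R}).
```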
https://en.wikipedia.org/wiki/Secure%20element
A secure element (SE) is a secure operating system (OS) in a tamper-resistant processor chip or secure component. It can protect assets (root of trust, sensitive data, keys, certificates, applications) against high level software and hardware attacks. Applications that process this sensitive data on an SE are isolated and so operate within a controlled environment not impacted by software (including possible malware) found elsewhere on the OS. The hardware and embedded software meet the requirements of the Security IC Platform Protection Profile [PP 0084] including resistance to physical tampering scenarios described within it. More than 96 billion secure elements have been produced and shipped between 2010 and 2021. SEs exist in different form factors; as devices such as smart card, SIM/UICC, smart microSD, or as part of a larger device as an embedded or integrated SE. SEs are an evolution of the traditional chip that was powering smart cards, which have been adapted to suit the needs of numerous use cases, such as smartphones, tablets, set top boxes, wearables, connected cars, and other internet of things (IoT) devices. The technology is widely used by technology firms such as Oracle, Apple and Samsung. SEs provide secure isolation, storage and processing for applications (called applets) they host while being isolated from the external world (e.g. rich OS and application processor when embedded in a smartphone) and from other applications running on the SE. Java Card and MULTOS are the most deployed standardized multi-application operating systems currently used to develop applications running on SE. Since 1999, GlobalPlatform has been the body responsible for standardizing secure element technologies to support a dynamic model of application management in a multi actor model. GlobalPlatform also runs Functional and Security Certification programmes for secure elements, and hosts a list of Functional Certified and Security Certified products. GlobalPlatform t
https://en.wikipedia.org/wiki/Inequation
In mathematics, an inequation is a statement that an inequality holds between two values. It is usually written in the form of a pair of expressions denoting the values in question, with a relational sign between them indicating the specific inequality relation. Some examples of inequations are: In some cases, the term "inequation" can be considered synonymous to the term "inequality", while in other cases, an inequation is reserved only for statements whose inequality relation is "not equal to" (≠). Chains of inequations A shorthand notation is used for the conjunction of several inequations involving common expressions, by chaining them together. For example, the chain is shorthand for which also implies that and . In rare cases, chains without such implications about distant terms are used. For example is shorthand for , which does not imply Similarly, is shorthand for , which does not imply any order of and . Solving inequations Similar to equation solving, inequation solving means finding what values (numbers, functions, sets, etc.) fulfill a condition stated in the form of an inequation or a conjunction of several inequations. These expressions contain one or more unknowns, which are free variables for which values are sought that cause the condition to be fulfilled. To be precise, what is sought are often not necessarily actual values, but, more generally, expressions. A solution of the inequation is an assignment of expressions to the unknowns that satisfies the inequation(s); in other words, expressions that, when substituted for the unknowns, make the inequations true propositions. Often, an additional objective expression (i.e., an optimization equation) is given, that is to be minimized or maximized by an optimal solution. For example, is a conjunction of inequations, partly written as chains (where can be read as "and"); the set of its solutions is shown in blue in the picture (the red, green, and orange line corre
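Illustrative instances of the chain notation described above; the particular relations are editor-chosen examples of the standard conventions.

```latex
a < b \le c \;\;\text{abbreviates}\;\; (a < b) \wedge (b \le c), \;\text{which implies}\; a < c;
\qquad
a \ne b \ne c \;\;\text{abbreviates}\;\; (a \ne b) \wedge (b \ne c), \;\text{which does not imply}\; a \ne c;
\qquad
a < b > c \;\;\text{implies no order between } a \text{ and } c.
```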
https://en.wikipedia.org/wiki/Kernel%20principal%20component%20analysis
In the field of multivariate statistics, kernel principal component analysis (kernel PCA) is an extension of principal component analysis (PCA) using techniques of kernel methods. Using a kernel, the originally linear operations of PCA are performed in a reproducing kernel Hilbert space. Background: Linear PCA Recall that conventional PCA operates on zero-centered data; that is, , where is one of the multivariate observations. It operates by diagonalizing the covariance matrix, in other words, it gives an eigendecomposition of the covariance matrix: which can be rewritten as . (See also: Covariance matrix as a linear operator) Introduction of the Kernel to PCA To understand the utility of kernel PCA, particularly for clustering, observe that, while N points cannot, in general, be linearly separated in dimensions, they can almost always be linearly separated in dimensions. That is, given N points, , if we map them to an N-dimensional space with where , it is easy to construct a hyperplane that divides the points into arbitrary clusters. Of course, this creates linearly independent vectors, so there is no covariance on which to perform eigendecomposition explicitly as we would in linear PCA. Instead, in kernel PCA, a non-trivial, arbitrary function is 'chosen' that is never calculated explicitly, allowing the possibility to use very-high-dimensional 's if we never have to actually evaluate the data in that space. Since we generally try to avoid working in the -space, which we will call the 'feature space', we can create the N-by-N kernel which represents the inner product space (see Gramian matrix) of the otherwise intractable feature space. The dual form that arises in the creation of a kernel allows us to mathematically formulate a version of PCA in which we never actually solve the eigenvectors and eigenvalues of the covariance matrix in the -space (see Kernel trick). The N-elements in each column of K represent the dot product of one point of the tr
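The following Python/NumPy sketch shows the usual kernel PCA recipe implied above: build an N-by-N kernel matrix, center it in feature space, eigendecompose it, and read off the projections of the training points. The RBF kernel and the parameter gamma are illustrative assumptions; they are not prescribed by the text.

```python
import numpy as np

def rbf_kernel(X, gamma=1.0):
    # Gaussian (RBF) kernel matrix; gamma is an illustrative choice.
    sq = np.sum(X ** 2, axis=1)
    return np.exp(-gamma * (sq[:, None] + sq[None, :] - 2 * X @ X.T))

def kernel_pca(X, n_components=2, gamma=1.0):
    N = X.shape[0]
    K = rbf_kernel(X, gamma)
    # Center the kernel matrix, i.e. center the (implicit) feature vectors.
    one_n = np.full((N, N), 1.0 / N)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition of the centered kernel (eigh: ascending order).
    eigvals, eigvecs = np.linalg.eigh(Kc)
    eigvals, eigvecs = eigvals[::-1], eigvecs[:, ::-1]
    # Projections of the training points onto the leading components:
    # scaling each eigenvector by sqrt(eigenvalue) equals Kc @ alpha.
    lam = np.maximum(eigvals[:n_components], 0.0)
    return eigvecs[:, :n_components] * np.sqrt(lam)

# Example: two concentric rings become separable along the leading components.
rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 200)
radius = np.where(rng.random(200) < 0.5, 1.0, 3.0)
X = np.column_stack([radius * np.cos(theta), radius * np.sin(theta)])
print(kernel_pca(X, n_components=2, gamma=2.0).shape)   # (200, 2)
```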
https://en.wikipedia.org/wiki/Footprinting
Footprinting (also known as reconnaissance) is the technique used for gathering information about computer systems and the entities they belong to. To get this information, a hacker might use various tools and technologies. This information is very useful to a hacker who is trying to crack a whole system. In the computer security lexicon, "footprinting" generally refers to one of the pre-attack phases: tasks performed before the actual attack. Some of the tools used for footprinting are Sam Spade, nslookup, traceroute, Nmap and neotrace.
Techniques used
DNS queries
Network enumeration
Network queries
Operating system identification
Software used
Wireshark
Uses
Footprinting allows a hacker to gain information about the target system or network, which can then be used to carry out attacks on the system. This is why it is considered part of the pre-attack phase: the information is gathered and reviewed so that the attack can be planned completely and carried out successfully. Footprinting is also used by ethical hackers and penetration testers to find security flaws and vulnerabilities within their own company's network before a malicious hacker does.
Types
There are two types of footprinting that can be used: active footprinting and passive footprinting. Active footprinting is the process of using tools and techniques, such as performing a ping sweep or using the traceroute command, to gather information on a target. Active footprinting can trigger a target's intrusion detection system (IDS) and may be logged, and thus requires a level of stealth to do successfully. Passive footprinting is the process of gathering information on a target by innocuous, or passive, means. Browsing the target's website, visiting social media profiles of employees, searching for the website on WHOIS, and performing a Google search of the target are all ways of passive footprinting. Passive footprinting is the stealthier method since it will not trigger a target's IDS or otherwise
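As a minimal illustration of the DNS-query technique listed above, the Python sketch below performs a forward and a reverse lookup using only the standard library. "example.com" is a placeholder target, and such probing should only be done with authorization.

```python
import socket

# Simple DNS footprinting sketch: forward and reverse lookups.
target = "example.com"                                  # placeholder target
hostname, aliases, addresses = socket.gethostbyname_ex(target)
print(hostname, aliases, addresses)

for ip in addresses:
    try:
        rev_name, _, _ = socket.gethostbyaddr(ip)       # reverse lookup
        print(ip, "->", rev_name)
    except socket.herror:
        print(ip, "-> no reverse DNS record")
```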
https://en.wikipedia.org/wiki/Data%20processing%20unit
A data processing unit (DPU) is a programmable computer processor that tightly integrates a general-purpose CPU with network interface hardware. Sometimes they are called "IPUs" (for "infrastructure processing unit") or "SmartNICs". They can be used in place of traditional NICs to relieve the main CPU of complex networking responsibilities and other "infrastructural" duties; although their features vary, they may be used to perform encryption/decryption, serve as a firewall, handle TCP/IP, process HTTP requests, or even function as a hypervisor or storage controller. These devices can be attractive to cloud computing providers whose servers might otherwise spend a significant amount of CPU time on these tasks, cutting into the cycles they can provide to guests. See also Compute Express Link (CXL)
https://en.wikipedia.org/wiki/Touch%20%28American%20TV%20series%29
Touch is an American drama television series that ran on Fox from January 25, 2012, to May 10, 2013. The series was created by Tim Kring and starred Kiefer Sutherland. During its first season the series aired regularly on Thursday nights beginning March 22, 2012. Thirteen episodes were ordered for the first season, with the two-episode season finale airing on Thursday, May 31, 2012. On May 9, 2012, Fox renewed the show for a second season. The second season was originally scheduled to begin Friday, October 26, 2012, but was pushed back to Friday, February 8, 2013. On May 9, 2013, Fox canceled the series after two seasons. Plot Touch centers on former reporter Martin Bohm (Kiefer Sutherland) and his 11-year-old son, Jake (David Mazouz), who has been diagnosed as autistic. Martin's wife died in the World Trade Center during the September 11 attacks, and he has been struggling to raise Jake since then, moving from job to job while tending to Jake's special needs. Jake has never spoken a word, but is fascinated by numbers and patterns relating to numbers, spending much of his days writing them down in notebooks or his touch-screen tablet and sometimes using objects (for instance popcorn kernels). Season 1 Jake's repeated escapes from special schools put Martin's capacity to raise the child in question, and social worker Clea Hopkins (Gugu Mbatha-Raw) arrives to perform an evaluation of Jake's living conditions. Martin, worried that he might lose his son, attempts to communicate with him, but the boy only continues to write down a specific pattern of numbers. This leads Martin to discover Professor Arthur Teller (Danny Glover), who has seen and worked with cases like this before, claiming that Jake is one of the few who can see the "pain of the universe" through the numbers. Teller also alludes to the interconnectivity of humanity as envisioned by the Chinese legend of the red string of fate, whereby actions, seen and unseen, can change the fate of people across the
https://en.wikipedia.org/wiki/List%20of%20numerical%20computational%20geometry%20topics
List of numerical computational geometry topics enumerates the topics of computational geometry that deal with geometric objects as continuous entities and apply methods and algorithms characteristic of numerical analysis. This area is also called "machine geometry", computer-aided geometric design, and geometric modelling. See List of combinatorial computational geometry topics for another flavor of computational geometry that states problems in terms of geometric objects as discrete entities, so that the methods of their solution are mostly theories and algorithms of combinatorial character.
Curves
In the list of curves topics, the following ones are fundamental to geometric modelling.
Parametric curve
Bézier curve
Spline
Hermite spline
Beta spline
B-spline
Higher-order spline
NURBS
Contour line
Surfaces
Bézier surface
Isosurface
Parametric surface
Other
Level-set method
Computational topology
https://en.wikipedia.org/wiki/Elliptic%20surface
In mathematics, an elliptic surface is a surface that has an elliptic fibration, in other words a proper morphism with connected fibers to an algebraic curve such that almost all fibers are smooth curves of genus 1. (Over an algebraically closed field such as the complex numbers, these fibers are elliptic curves, perhaps without a chosen origin.) This is equivalent to the generic fiber being a smooth curve of genus one. This follows from proper base change. The surface and the base curve are assumed to be non-singular (complex manifolds or regular schemes, depending on the context). The fibers that are not elliptic curves are called the singular fibers and were classified by Kunihiko Kodaira. Both elliptic and singular fibers are important in string theory, especially in F-theory. Elliptic surfaces form a large class of surfaces that contains many of the interesting examples of surfaces, and are relatively well understood in the theories of complex manifolds and smooth 4-manifolds. They are similar to (have analogies with, that is), elliptic curves over number fields. Examples The product of any elliptic curve with any curve is an elliptic surface (with no singular fibers). All surfaces of Kodaira dimension 1 are elliptic surfaces. Every complex Enriques surface is elliptic, and has an elliptic fibration over the projective line. Kodaira surfaces Dolgachev surfaces Shioda modular surfaces Kodaira's table of singular fibers Most of the fibers of an elliptic fibration are (non-singular) elliptic curves. The remaining fibers are called singular fibers: there are a finite number of them, and each one consists of a union of rational curves, possibly with singularities or non-zero multiplicities (so the fibers may be non-reduced schemes). Kodaira and Néron independently classified the possible fibers, and Tate's algorithm can be used to find the type of the fibers of an elliptic curve over a number field. The following table lists the possible fibers of a minimal el
https://en.wikipedia.org/wiki/Reverberation%20mapping
Reverberation mapping (or Echo mapping) is an astrophysical technique for measuring the structure of the broad-line region (BLR) around a supermassive black hole at the center of an active galaxy, and thus estimating the hole's mass. It is considered a "primary" mass estimation technique, i.e., the mass is measured directly from the motion that its gravitational force induces in the nearby gas. Newton's law of gravity defines a direct relation between the mass of a central object and the speed of a smaller object in orbit around the central mass. Thus, for matter orbiting a black hole, the black-hole mass is related by the formula to the RMS velocity ΔV of gas moving near the black hole in the broad emission-line region, measured from the Doppler broadening of the gaseous emission lines. In this formula, RBLR is the radius of the broad-line region; G is the constant of gravitation; and f is a poorly known "form factor" that depends on the shape of the BLR. While ΔV can be measured directly using spectroscopy, the necessary determination of RBLR is much less straightforward. This is where reverberation mapping comes into play. It utilizes the fact that the emission-line fluxes vary strongly in response to changes in the continuum, i.e., the light from the accretion disk near the black hole. Put simply, if the brightness of the accretion disk varies, the emission lines, which are excited in response to the accretion disk's light, will "reverberate", that is, vary in response. But it will take some time for light from the accretion disk to reach the broad-line region. Thus, the emission-line response is delayed with respect to changes in the continuum. Assuming that this delay is solely due to light travel times, the distance traveled by the light, corresponding to the radius of the broad emission-line region, can be measured. Only a small handful (less than 40) of active galactic nuclei have been accurately "mapped" in this way. An alternative approach is to use
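One common way to write the virial relation described above (the exact definition of the factor f varies with the assumed BLR geometry):

```latex
M_{\mathrm{BH}} \;=\; f\,\frac{R_{\mathrm{BLR}}\,(\Delta V)^{2}}{G},
\qquad
R_{\mathrm{BLR}} \;=\; c\,\tau,
```

where τ is the measured delay (lag) of the emission-line response behind the continuum variations and c is the speed of light.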
https://en.wikipedia.org/wiki/Lua
Lua or LUA may refer to:
Science and technology
Lua (programming language)
Latvia University of Agriculture
Last universal ancestor, in evolution
Ethnicity and language
Lua people, of Laos
Lawa people, of Thailand sometimes referred to as Lua
Lua language (disambiguation), several languages (including Lua’)
Luba-Kasai language, ISO 639 code
Lai (surname) (賴), Chinese, sometimes romanised as Lua
Places
Tenzing-Hillary Airport (IATA code), in Lukla, Nepal
One of the Duff Islands
People
Lua (goddess), a Roman goddess
Saint Lua (died c 609)
Lua Blanco (born 1987), Brazilian actress and singer
Lua Getsinger (1871–1916)
A member of Weki Meki band
Other uses
Lua (martial art), of Hawaii
"Lua" (song), by Bright Eyes
https://en.wikipedia.org/wiki/Financial%20signal%20processing
Financial signal processing is a branch of signal processing technologies that applies to signals within financial markets. These techniques are often used by quantitative analysts to make the best possible estimates of the movement of financial markets, such as stock prices, options prices, or other types of derivatives. History The modern start of financial signal processing is often credited to Claude Shannon, the founder of modern communication theory, who discovered the capacity of a communication channel by analyzing the entropy of information. For a long time, financial signal processing technologies have been used by different hedge funds, such as Jim Simons' Renaissance Technologies. However, hedge funds usually do not reveal their trade secrets. Some early research results in this area are summarized by R.H. Tütüncü and M. Koenig and by T.M. Cover and J.A. Thomas. A.N. Akansu and M.U. Torun published a book on financial signal processing entitled A Primer for Financial Engineering: Financial Signal Processing and Electronic Trading. An edited volume on the subject with the title Financial Signal Processing and Machine Learning was also published. The first IEEE International Conference on Acoustics, Speech, and Signal Processing session on financial signal processing was organized at ICASSP 2011 in Prague, Czech Republic. Two special issues of the IEEE Journal of Selected Topics in Signal Processing were published on Signal Processing Methods in Finance and Electronic Trading in 2012 and on Financial Signal Processing and Machine Learning for Electronic Trading in 2016, in addition to a special section on Signal Processing for Financial Applications that appeared in the IEEE Signal Processing Magazine in 2011. Financial Signal Processing in Academia Recently, a new research group focusing on financial signal processing has been formed at Imperial College London, as part of the Communication and Signal Processing Group of the Electrical and Electronic Engineering depa
https://en.wikipedia.org/wiki/Network%20domain
A network domain is an administrative grouping of multiple private computer networks or local hosts within the same infrastructure. Domains can be identified using a domain name; domains which need to be accessible from the public Internet can be assigned a globally unique name within the Domain Name System (DNS). A domain controller is a server that automates the logins, user groups, and architecture of a domain, rather than manually coding this information on each host in the domain. It is common practice, but not required, to have the domain controller act as a DNS server. That is, it would assign names to hosts in the network based on their IP addresses. Example Half of the staff of Building A uses Network 1, . This network has the VLAN identifier of VLAN 10. The other half of the staff of Building A uses Network 2, . This network has the VLAN identifier of VLAN 20. All of the staff of Building B uses Network 3, . This has the VLAN identifier of VLAN 11. The router R1 serves as the gateway for all three networks, and the whole infrastructure is connected physically via ethernet. Network 2 and 3 are routed through R1 and have full access to each other. Network 1 is completely separate from the other two, and does not have access to either of them. Network 2 and 3 are therefore in the same network domain, while Network 1 is in its own network domain, albeit alone. A network administrator can then suitably name these network domains to match the infrastructure topology. Usage Use of the term network domain first appeared in 1965 and saw increasing usage beginning in 1985. It initially applied to the naming of radio stations based on broadcast frequency and geographic area. It entered its current usage by network theorists to describe solutions to the problems of subdividing a single homogeneous LAN and joining multiple networks, possibly constituted of different network architectures.
https://en.wikipedia.org/wiki/SREC%20%28file%20format%29
Motorola S-record is a file format, created by Motorola in the mid-1970s, that conveys binary information as hex values in ASCII text form. This file format may also be known as SRECORD, SREC, S19, S28, or S37. It is commonly used for programming flash memory in microcontrollers, EPROMs, EEPROMs, and other types of programmable logic devices. In a typical application, a compiler or assembler converts a program's source code (such as C or assembly language) to machine code and outputs it into a HEX file. The HEX file is then imported by a programmer to "burn" the machine code into non-volatile memory, or is transferred to the target system for loading and execution. Overview History The S-record format was created in the mid-1970s for the Motorola 6800 processor. Software development tools for that and other embedded processors would produce executable code and data in the S-record format. PROM programmers would then read the S-record format and "burn" the data into the PROMs or EPROMs used in the embedded system. Other hex formats There are other ASCII encodings with a similar purpose. BPNF, BHLF, and B10F were early binary formats, but they are neither compact nor flexible. Hexadecimal formats are more compact because they represent 4 bits rather than 1 bit per character. Many, such as S-record, are more flexible because they include address information so they can specify just a portion of a PROM. The Intel HEX format was often used with Intel processors. TekHex is another hex format that can include a symbol table for debugging. Format Record structure An SREC format file consists of a series of ASCII text records. The records have the following structure from left to right:
Record start - each record begins with an uppercase letter "S" (ASCII 0x53), which stands for "Start-of-Record".
Record type - a single numeric digit "0" to "9" (ASCII 0x30 to 0x39), defining the type of record. See table below.
Byte count - two hex digits ("00" to "FF"), ind
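A small, editor-added Python sketch of a parser for a single S-record line, assuming the common per-type address sizes and the usual one's-complement checksum; it is an illustration, not a full implementation of the format.

```python
# Address width in bytes for the common record types (S6 omitted for brevity).
ADDRESS_BYTES = {"0": 2, "1": 2, "2": 3, "3": 4, "5": 2, "7": 4, "8": 3, "9": 2}

def parse_srec(line):
    assert line[0] == "S", "records start with an uppercase 'S'"
    rec_type = line[1]
    body = bytes.fromhex(line[2:])            # byte count + address + data + checksum
    count, payload, checksum = body[0], body[1:-1], body[-1]
    assert count == len(payload) + 1, "byte count covers address, data and checksum"
    # Checksum: one's complement of the low byte of the sum of count/address/data.
    assert checksum == 0xFF - ((count + sum(payload)) & 0xFF), "bad checksum"
    n_addr = ADDRESS_BYTES[rec_type]
    address = int.from_bytes(payload[:n_addr], "big")
    return rec_type, address, payload[n_addr:]

# Example: an S0 header record whose data field holds the ASCII bytes "HDR".
print(parse_srec("S00600004844521B"))         # ('0', 0, b'HDR')
```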
https://en.wikipedia.org/wiki/List%20of%20probabilistic%20proofs%20of%20non-probabilistic%20theorems
Probability theory routinely uses results from other fields of mathematics (mostly, analysis). The opposite cases, collected below, are relatively rare; however, probability theory is used systematically in combinatorics via the probabilistic method. They are particularly used for non-constructive proofs. Analysis Normal numbers exist. Moreover, computable normal numbers exist. These non-probabilistic existence theorems follow from probabilistic results: (a) a number chosen at random (uniformly on (0,1)) is normal almost surely (which follows easily from the strong law of large numbers); (b) some probabilistic inequalities behind the strong law. The existence of a normal number follows from (a) immediately. The proof of the existence of computable normal numbers, based on (b), involves additional arguments. All known proofs use probabilistic arguments. Dvoretzky's theorem which states that high-dimensional convex bodies have ball-like slices is proved probabilistically. No deterministic construction is known, even for many specific bodies. The diameter of the Banach–Mazur compactum was calculated using a probabilistic construction. No deterministic construction is known. The original proof that the Hausdorff–Young inequality cannot be extended to is probabilistic. The proof of the de Leeuw–Kahane–Katznelson theorem (which is a stronger claim) is partially probabilistic. The first construction of a Salem set was probabilistic. Only in 1981 did Kaufman give a deterministic construction. Every continuous function on a compact interval can be uniformly approximated by polynomials, which is the Weierstrass approximation theorem. A probabilistic proof uses the weak law of large numbers. Non-probabilistic proofs were available earlier. Existence of a nowhere differentiable continuous function follows easily from properties of Wiener process. A non-probabilistic proof was available earlier. Stirling's formula was first discovered by Abraham de Moivre in his `The D
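For the Weierstrass approximation item above, the standard probabilistic argument uses Bernstein polynomials, written as an expectation over a binomial random variable:

```latex
B_n(f)(x) \;=\; \sum_{k=0}^{n} f\!\left(\tfrac{k}{n}\right) \binom{n}{k} x^{k} (1-x)^{n-k}
\;=\; \mathbb{E}\!\left[f\!\left(\tfrac{S_n}{n}\right)\right],
\qquad S_n \sim \operatorname{Binomial}(n, x).
```

By the weak law of large numbers, S_n/n converges to x in probability, and the uniform continuity of f on [0, 1] then yields uniform convergence of B_n(f) to f.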
https://en.wikipedia.org/wiki/Biosignal
A biosignal is any signal in living beings that can be continually measured and monitored. The term biosignal is often used to refer to bioelectrical signals, but it may refer to both electrical and non-electrical signals. The usual understanding is to refer only to time-varying signals, although spatial parameter variations (e.g. the nucleotide sequence determining the genetic code) are sometimes subsumed as well. Electrical biosignals Electrical biosignals, or bioelectrical time signals, usually refers to the change in electric current produced by the sum of an electrical potential difference across a specialized tissue, organ or cell system like the nervous system. Thus, among the best-known bioelectrical signals are: Electroencephalogram (EEG) Electrocardiogram (ECG) Electromyogram (EMG) Electrooculogram (EOG) Electroretinogram (ERG) Electrogastrogram (EGG) Galvanic skin response (GSR) or electrodermal activity (EDA) EEG, ECG, EOG and EMG are measured with a differential amplifier which registers the difference between two electrodes attached to the skin. However, the galvanic skin response measures electrical resistance and the Magnetoencephalography (MEG) measures the magnetic field induced by electrical currents (electroencephalogram) of the brain. With the development of methods for remote measurement of electric fields using new sensor technology, electric biosignals such as EEG and ECG can be measured without electric contact with the skin. This can be applied, for example, for remote monitoring of brain waves and heart beat of patients who must not be touched, in particular patients with serious burns. Electrical currents and changes in electrical resistances across tissues can also be measured from plants. Biosignals may also refer to any non-electrical signal that is capable of being monitored from biological beings, such as mechanical signals (e.g. the mechanomyogram or MMG), acoustic signals (e.g. phonetic and non-phonetic utterances, bre
https://en.wikipedia.org/wiki/Ptolemaic%20graph
In graph theory, a Ptolemaic graph is an undirected graph whose shortest path distances obey Ptolemy's inequality, which in turn was named after the Greek astronomer and mathematician Ptolemy. The Ptolemaic graphs are exactly the graphs that are both chordal and distance-hereditary; they include the block graphs and are a subclass of the perfect graphs. Characterization A graph is Ptolemaic if and only if it obeys any of the following equivalent conditions: The shortest path distances obey Ptolemy's inequality: for every four vertices , , , and , the inequality holds. For instance, the gem graph (3-fan) in the illustration is not Ptolemaic, because in this graph , greater than . For every two overlapping maximal cliques, the intersection of the two cliques is a separator that splits the differences of the two cliques. In the illustration of the gem graph, this is not true: cliques and are not separated by their intersection, , because there is an edge that connects the cliques but avoids the intersection. Every -vertex cycle has at least diagonals. The graph is both chordal (every cycle of length greater than three has a diagonal) and distance-hereditary (every connected induced subgraph has the same distances as the whole graph). The gem shown is chordal but not distance-hereditary: in the subgraph induced by , the distance from to is 3, greater than the distance between the same vertices in the whole graph. Because both chordal and distance-hereditary graphs are perfect graphs, so are the Ptolemaic graphs. The graph is chordal and does not contain an induced gem, a graph formed by adding two non-crossing diagonals to a pentagon. The graph is distance-hereditary and does not contain an induced 4-cycle. The graph can be constructed from a single vertex by a sequence of operations that add a new degree-one (pendant) vertex, or duplicate (twin) an existing vertex, with the exception that a twin operation in which the new duplicate vertex is not adjacent to its
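Written out, the distance condition referred to above is Ptolemy's inequality for the shortest-path metric d:

```latex
d(u,v)\,d(w,x) \;+\; d(u,x)\,d(v,w) \;\ge\; d(u,w)\,d(v,x)
\qquad \text{for all vertices } u, v, w, x.
```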
https://en.wikipedia.org/wiki/Hardware%20compatibility%20list
A hardware compatibility list (HCL) is a list of computer hardware (typically including many types of peripheral devices) that is compatible with a particular operating system or device management software. The list contains both whole computer systems and specific hardware elements including motherboards, sound cards, and video cards. A vast amount of computer hardware is in circulation, along with many operating systems; a hardware compatibility list is a database of hardware models and their compatibility with a certain operating system. HCLs can be centrally controlled (one person or team maintains the list of hardware) or user-driven (users submit reviews on hardware they have used). There are many HCLs. Usually, each operating system will have an official HCL on its website. See also System requirements
https://en.wikipedia.org/wiki/X2%20transceiver
The X2 transceiver format is a 10 gigabit per second modular fiber optic interface intended for use in routers, switches and optical transport platforms. It is an early generation 10 gigabit interface related to the similar XENPAK and XPAK formats. X2 may be used with 10 Gigabit Ethernet or OC-192/STM-64 speed SDH/SONET equipment. X2 modules are smaller and consume less power than first generation XENPAK modules, but are larger and consume more power than modules in the newer XFP and SFP+ standards. As of 2016 this format is relatively uncommon and has been replaced by 10Gbit/s SFP+ in most new equipment.
https://en.wikipedia.org/wiki/Common%20spatial%20pattern
Common spatial pattern (CSP) is a mathematical procedure used in signal processing for separating a multivariate signal into additive subcomponents which have maximum differences in variance between two windows. Details Let of size and of size be two windows of a multivariate signal, where is the number of signals and and are the respective number of samples. The CSP algorithm determines the component such that the ratio of variance (or second-order moment) is maximized between the two windows: The solution is given by computing the two covariance matrices: Then, the simultaneous diagonalization of those two matrices (also called generalized eigenvalue decomposition) is realized. We find the matrix of eigenvectors and the diagonal matrix of eigenvalues sorted by decreasing order such that: and with the identity matrix. This is equivalent to the eigendecomposition of : will correspond to the first column of : Discussion Relation between variance ratio and eigenvalue The eigenvectors composing are components with variance ratio between the two windows equal to their corresponding eigenvalue: Other components The vectorial subspace generated by the first eigenvectors will be the subspace maximizing the variance ratio of all components belonging to it: In the same way, the vectorial subspace generated by the last eigenvectors will be the subspace minimizing the variance ratio of all components belonging to it: Variance or second-order moment CSP can be applied after a mean subtraction (a.k.a. "mean centering") on signals in order to realize a variance ratio optimization. Otherwise, CSP optimizes the ratio of second-order moments. Choice of windows X1 and X2 The standard use consists of choosing the windows to correspond to two periods of time with different activation of sources (e.g. during rest and during a specific task). It is also possible to choose the two windows to correspond to two different frequency bands in order t
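A minimal Python/SciPy sketch of the procedure described above, using one common formulation in which a generalized eigenvalue problem is solved for the pair of covariance matrices; the exact normalization used by the article's (stripped) equations may differ.

```python
import numpy as np
from scipy.linalg import eigh

# Minimal CSP sketch. X1, X2 are (channels x samples) arrays for the two
# windows; mean-centering makes this a ratio-of-variances optimization.
def csp(X1, X2):
    X1 = X1 - X1.mean(axis=1, keepdims=True)
    X2 = X2 - X2.mean(axis=1, keepdims=True)
    C1 = X1 @ X1.T / X1.shape[1]
    C2 = X2 @ X2.T / X2.shape[1]
    # Generalized eigenvalue problem C1 w = lambda C2 w: each eigenvalue is
    # the ratio of the component's variance in window 1 to that in window 2.
    eigvals, W = eigh(C1, C2)
    order = np.argsort(eigvals)[::-1]       # sort by decreasing variance ratio
    return W[:, order], eigvals[order]

# Toy usage: 4 channels, 1000 samples per window of synthetic data.
rng = np.random.default_rng(0)
W, ratios = csp(rng.standard_normal((4, 1000)), rng.standard_normal((4, 1000)))
print(ratios)   # first filter maximizes the ratio, last filter minimizes it
```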
https://en.wikipedia.org/wiki/Cross-correlation%20matrix
The cross-correlation matrix of two random vectors is a matrix containing as elements the cross-correlations of all pairs of elements of the random vectors. The cross-correlation matrix is used in various digital signal processing algorithms. Definition For two random vectors and , each containing random elements whose expected value and variance exist, the cross-correlation matrix of and is defined by and has dimensions . Written component-wise: The random vectors and need not have the same dimension, and either might be a scalar value. Example For example, if and are random vectors, then is a matrix whose -th entry is . Complex random vectors If and are complex random vectors, each containing random variables whose expected value and variance exist, the cross-correlation matrix of and is defined by where denotes Hermitian transposition. Uncorrelatedness Two random vectors and are called uncorrelated if They are uncorrelated if and only if their cross-covariance matrix is zero. In the case of two complex random vectors and they are called uncorrelated if and Properties Relation to the cross-covariance matrix The cross-correlation is related to the cross-covariance matrix as follows: Respectively for complex random vectors: See also Autocorrelation Correlation does not imply causation Covariance function Pearson product-moment correlation coefficient Correlation function (astronomy) Correlation function (statistical mechanics) Correlation function (quantum field theory) Mutual information Rate distortion theory Radial distribution function
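A short NumPy sketch estimating the cross-correlation and cross-covariance matrices from samples; the data and the coupling between X and Y below are synthetic, chosen only for illustration.

```python
import numpy as np

# Sample estimate of the cross-correlation matrix R_XY = E[X Y^H] from n
# paired draws of random vectors X (dimension p) and Y (dimension q).
rng = np.random.default_rng(0)
n, p, q = 10_000, 3, 2
X = rng.standard_normal((p, n))
Y = rng.standard_normal((q, n)) + 0.5 * X[:q, :]          # synthetic coupling

R_xy = X @ Y.conj().T / n                                  # p-by-q, entry (i, j) ~ E[X_i Y_j*]
C_xy = R_xy - X.mean(axis=1, keepdims=True) @ Y.mean(axis=1, keepdims=True).conj().T
print(R_xy.shape)                                          # (3, 2)
# X and Y are uncorrelated exactly when the cross-covariance C_xy is zero.
```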
https://en.wikipedia.org/wiki/Information-centric%20networking
Information-centric networking (ICN) is an approach to evolve the Internet infrastructure away from a host-centric paradigm, based on perpetual connectivity and the end-to-end principle, to a network architecture in which the focal point is identified information (or content or data). Some of the application areas of ICN are web applications, multimedia streaming, the Internet of Things, wireless sensor networks, and vehicular networks, as well as emerging applications such as social networks and the industrial IoT. In this paradigm, connectivity may well be intermittent, and end-host and in-network storage can be capitalized upon transparently, since bits in the network and on data storage devices have exactly the same value; mobility and multi-access are the norm, and anycast, multicast, and broadcast are natively supported. Data becomes independent from location, application, storage, and means of transportation, enabling in-network caching and replication. The expected benefits are improved efficiency, better scalability with respect to information/bandwidth demand and better robustness in challenging communication scenarios. In information-centric networking the cache is a network-level solution, with rapidly changing cache states, higher request arrival rates and smaller cache sizes. In particular, information-centric networking caching policies should be fast and lightweight. IRTF Working Group The Internet Research Task Force (IRTF) is sponsoring a research group on Information-Centric Networking Research, which serves as a forum for the exchange and analysis of ICN research ideas and proposals. Current and future work items and outputs are managed on the ICNRG wiki.
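The article does not prescribe a caching policy; as an illustration of a "fast and lightweight" baseline of the kind often used in ICN studies, here is a tiny editor-added LRU content store sketch in Python, with content names as keys (an assumption, not part of the original text).

```python
from collections import OrderedDict

# Toy LRU content store: a simple, lightweight in-network caching policy.
class ContentStore:
    def __init__(self, capacity=4):
        self.capacity = capacity
        self.store = OrderedDict()

    def get(self, name):
        if name not in self.store:
            return None                       # miss: forward the request upstream
        self.store.move_to_end(name)          # mark as recently used
        return self.store[name]

    def put(self, name, data):
        self.store[name] = data
        self.store.move_to_end(name)
        if len(self.store) > self.capacity:
            self.store.popitem(last=False)    # evict the least recently used item

cs = ContentStore(capacity=2)
cs.put("/video/seg1", b"...")
cs.put("/video/seg2", b"...")
cs.get("/video/seg1")
cs.put("/video/seg3", b"...")                 # evicts /video/seg2
```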
https://en.wikipedia.org/wiki/List%20of%20impossible%20puzzles
This is a list of puzzles that cannot be solved. An impossible puzzle is a puzzle that cannot be resolved, either due to lack of sufficient information, or any number of logical impossibilities. 15 puzzle – Slide fifteen numbered tiles into numerical order. Impossible for half of the starting positions. Five room puzzle – Cross each wall of a diagram exactly once with a continuous line. MU puzzle – Transform the string to according to a set of rules. Mutilated chessboard problem – Place 31 dominoes of size 2×1 on a chessboard with two opposite corners removed. Coloring the edges of the Petersen graph with three colors. Seven Bridges of Königsberg – Walk through a city while crossing each of seven bridges exactly once. Three cups problem – Turn three cups right-side up after starting with one wrong and turning two at a time. Three utilities problem – Connect three cottages to gas, water, and electricity without crossing lines. Thirty-six officers problem – Arrange six regiments consisting of six officers each of different ranks in a 6 × 6 square so that no rank or regiment is repeated in any row or column. See also Impossible Puzzle, or "Sum and Product Puzzle", which is not impossible -gry, a word puzzle List of undecidable problems, no algorithm can exist to answer a yes–no question about the input Puzzles Mathematics-related lists
https://en.wikipedia.org/wiki/Wigner%20quasiprobability%20distribution
The Wigner quasiprobability distribution (also called the Wigner function or the Wigner–Ville distribution, after Eugene Wigner and Jean-André Ville) is a quasiprobability distribution. It was introduced by Eugene Wigner in 1932 to study quantum corrections to classical statistical mechanics. The goal was to link the wavefunction that appears in Schrödinger's equation to a probability distribution in phase space. It is a generating function for all spatial autocorrelation functions of a given quantum-mechanical wavefunction . Thus, it maps on the quantum density matrix in the map between real phase-space functions and Hermitian operators introduced by Hermann Weyl in 1927, in a context related to representation theory in mathematics (see Weyl quantization). In effect, it is the Wigner–Weyl transform of the density matrix, so the realization of that operator in phase space. It was later rederived by Jean Ville in 1948 as a quadratic (in signal) representation of the local time-frequency energy of a signal, effectively a spectrogram. In 1949, José Enrique Moyal, who had derived it independently, recognized it as the quantum moment-generating functional, and thus as the basis of an elegant encoding of all quantum expectation values, and hence quantum mechanics, in phase space (see Phase-space formulation). It has applications in statistical mechanics, quantum chemistry, quantum optics, classical optics and signal analysis in diverse fields, such as electrical engineering, seismology, time–frequency analysis for music signals, spectrograms in biology and speech processing, and engine design. Relation to classical mechanics A classical particle has a definite position and momentum, and hence it is represented by a point in phase space. Given a collection (ensemble) of particles, the probability of finding a particle at a certain position in phase space is specified by a probability distribution, the Liouville density. This strict interpretation fails for a quantum p
https://en.wikipedia.org/wiki/Sophomore%27s%20dream
In mathematics, the sophomore's dream is the pair of identities (especially the first) $\int_0^1 x^{-x}\,dx = \sum_{n=1}^\infty n^{-n}$ and $\int_0^1 x^{x}\,dx = \sum_{n=1}^\infty (-1)^{n+1} n^{-n} = -\sum_{n=1}^\infty (-n)^{-n}$ discovered in 1697 by Johann Bernoulli. The numerical values of these constants are approximately 1.291285997... and 0.7834305107..., respectively. The name "sophomore's dream" is in contrast to the name "freshman's dream" which is given to the incorrect identity $(x+y)^n = x^n + y^n$. The sophomore's dream has a similar too-good-to-be-true feel, but is true. Proof The proofs of the two identities are completely analogous, so only the proof of the second is presented here. The key ingredients of the proof are: to write $x^x = \exp(x\ln x)$ (using the notation $\ln$ for the natural logarithm and $\exp$ for the exponential function); to expand $\exp(x\ln x)$ using the power series for $\exp$; and to integrate termwise, using integration by substitution. In details, $x^x$ can be expanded as $x^x = \exp(x\ln x) = \sum_{n=0}^\infty \frac{x^n(\ln x)^n}{n!}$. Therefore, $\int_0^1 x^x\,dx = \int_0^1 \sum_{n=0}^\infty \frac{x^n(\ln x)^n}{n!}\,dx$. By uniform convergence of the power series, one may interchange summation and integration to yield $\int_0^1 x^x\,dx = \sum_{n=0}^\infty \int_0^1 \frac{x^n(\ln x)^n}{n!}\,dx$. To evaluate the above integrals, one may change the variable in the integral via the substitution $x = \exp\left(-\frac{u}{n+1}\right)$. With this substitution, the bounds of integration are transformed to $0 < u < \infty$, giving the identity $\int_0^1 x^n(\ln x)^n\,dx = \frac{(-1)^n}{(n+1)^{n+1}}\int_0^\infty u^n e^{-u}\,du$. By Euler's integral identity for the Gamma function, one has $\int_0^\infty u^n e^{-u}\,du = n!$, so that $\int_0^1 \frac{x^n(\ln x)^n}{n!}\,dx = \frac{(-1)^n}{(n+1)^{n+1}}$. Summing these (and changing indexing so it starts at $n = 1$ instead of $n = 0$) yields the formula. Historical proof The original proof, given in Bernoulli, and presented in modernized form in Dunham, differs from the one above in how the termwise integral is computed, but is otherwise the same, omitting technical details to justify steps (such as termwise integration). Rather than integrating by substitution, yielding the Gamma function (which was not yet known), Bernoulli used integration by parts to iteratively compute these terms. The integration by parts proceeds as follows, varying the two exponents independently to obtain a recursion. An indefinite integral is computed initially, omitting the constant of integration both because this was done historically, and because it drops out when computing
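A quick numerical check of the two identities (a sketch only; it uses SciPy quadrature and truncates each series at 20 terms, which is already far more than double precision needs):

```python
from scipy.integrate import quad

lhs1, _ = quad(lambda x: x ** (-x), 0, 1)                       # integral of x^(-x) on (0, 1)
rhs1 = sum(n ** (-float(n)) for n in range(1, 21))              # sum of n^(-n)

lhs2, _ = quad(lambda x: x ** x, 0, 1)                          # integral of x^x on (0, 1)
rhs2 = sum((-1) ** (n + 1) * n ** (-float(n)) for n in range(1, 21))

print(lhs1, rhs1)   # both approximately 1.2912859970...
print(lhs2, rhs2)   # both approximately 0.7834305107...
```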
https://en.wikipedia.org/wiki/Index%20of%20software%20engineering%20articles
This is an alphabetical list of articles pertaining specifically to software engineering. 0–9 2D computer graphics — 3D computer graphics A Abstract syntax tree — Abstraction — Accounting software — Ada — Addressing mode — Agile software development — Algorithm — Anti-pattern — Application framework — Application software — Artificial intelligence — Artificial neural network — ASCII — Aspect-oriented programming — Assembler — Assembly language — Assertion — Automata theory — Automotive software — Avionics software B Backward compatibility — BASIC — BCPL — Berkeley Software Distribution — Beta test — Boolean logic — Business software C C — C++ — C# — CAD — Canonical model — Capability Maturity Model — Capability Maturity Model Integration — COBOL — Code coverage — Cohesion — Compilers — Complexity — Computation — Computational complexity theory — Computer — Computer-aided design — Computer-aided manufacturing — Computer architecture — Computer bug — Computer file — Computer graphics — Computer model — Computer multitasking — Computer programming — Computer science — Computer software — Computer term etymologies — Concurrent programming — Configuration management — Coupling — Cyclomatic complexity D Data structure — Data-structured language — Database — Dead code — Decision table — Declarative programming — Design pattern — Development stage — Device driver — Disassembler — Disk image — Domain-specific language E EEPROM — Electronic design automation — Embedded system — Engineering — Engineering model — EPROM — Even-odd rule — Expert system — Extreme programming F FIFO (computing and electronics) — File system — Filename extension — Finite-state machine — Firmware — Formal methods — Forth — Fortran — Forward compatibility — Functional decomposition — Functional design — Functional programming G Game development — Game programming — Game tester — GIMP Toolkit — Graphical user interface H Hierarchical database — High-level language — Hoare logic — Human–compute
https://en.wikipedia.org/wiki/Surface%20stress
Surface stress was first defined by Josiah Willard Gibbs (1839-1903) as the amount of the reversible work per unit area needed to elastically stretch a pre-existing surface. In other words, surface stress is associated with stretching a surface that already exists, rather than with creating new surface area. A similar term called "surface free energy", which represents the excess free energy per unit area needed to create a new surface, is easily confused with "surface stress". Although surface stress and surface free energy of a liquid–gas or liquid–liquid interface are the same, they are very different for a solid–gas or solid–solid interface, which will be discussed in detail later. Since both terms represent a force per unit length, they have been referred to as "surface tension", which contributes further to the confusion in the literature. Thermodynamics of surface stress The surface free energy $\gamma$ is defined through the amount of reversible work $dw$ performed to create a new area $dA$ of surface, expressed as $dw = \gamma\,dA$. Gibbs was the first to define another surface quantity, different from the surface tension $\gamma$, that is associated with the reversible work per unit area needed to elastically stretch a pre-existing surface. Surface stress can be derived from surface free energy as follows: one can define a surface stress tensor $f_{ij}$ that relates the work associated with the variation in $\gamma A$, the total excess free energy of the surface, owing to the strain $\epsilon_{ij}$: $f_{ij} = \frac{1}{A}\frac{\partial(\gamma A)}{\partial \epsilon_{ij}}$. Now consider the two reversible paths shown in figure 0. In the first path (clockwise), the solid object is cut into two identical pieces, and then both pieces are elastically strained. The work associated with the first step (unstrained cutting) is $2\gamma_0 A_0$, where $\gamma_0$ and $A_0$ are the excess free energy and area of each of the new surfaces. For the second step, the work equals the work needed to elastically deform the total bulk volume and the four (two original and two newly formed) surfaces. In the second path (counter
https://en.wikipedia.org/wiki/Potentiometric%20surface
A potentiometric surface is the imaginary plane where a given reservoir of fluid will "equalize out to" if allowed to flow. A potentiometric surface is based on hydraulic principles. For example, two connected storage tanks with one full and one empty will gradually fill/drain to the same level. This is because of atmospheric pressure and gravity. This idea is heavily used in city water supplies - a tall water tower containing the water supply has a great enough potentiometric surface to provide flowing water at a decent pressure to the houses it supplies. For groundwater "potentiometric surface" is a synonym of "piezometric surface" which is an imaginary surface that defines the level to which water in a confined aquifer would rise were it completely pierced with wells. If the potentiometric surface lies above the ground surface, a flowing artesian well results. Contour maps and profiles of the potentiometric surface can be prepared from the well data. See also Hydraulic head
https://en.wikipedia.org/wiki/Low-pass%20filter
A low-pass filter is a filter that passes signals with a frequency lower than a selected cutoff frequency and attenuates signals with frequencies higher than the cutoff frequency. The exact frequency response of the filter depends on the filter design. The filter is sometimes called a high-cut filter, or treble-cut filter in audio applications. A low-pass filter is the complement of a high-pass filter. In optics, high-pass and low-pass may have different meanings, depending on whether referring to the frequency or wavelength of light, since these variables are inversely related. High-pass frequency filters would act as low-pass wavelength filters, and vice versa. For this reason, it is a good practice to refer to wavelength filters as short-pass and long-pass to avoid confusion, which would correspond to high-pass and low-pass frequencies. Low-pass filters exist in many different forms, including electronic circuits such as a hiss filter used in audio, anti-aliasing filters for conditioning signals before analog-to-digital conversion, digital filters for smoothing sets of data, acoustic barriers, blurring of images, and so on. The moving average operation used in fields such as finance is a particular kind of low-pass filter and can be analyzed with the same signal processing techniques as are used for other low-pass filters. Low-pass filters provide a smoother form of a signal, removing the short-term fluctuations and leaving the longer-term trend. Filter designers will often use the low-pass form as a prototype filter. That is a filter with unity bandwidth and impedance. The desired filter is obtained from the prototype by scaling for the desired bandwidth and impedance and transforming into the desired bandform (that is, low-pass, high-pass, band-pass or band-stop). Examples Examples of low-pass filters occur in acoustics, optics and electronics. A stiff physical barrier tends to reflect higher sound frequencies, acting as an acoustic low-pass filter for
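As a concrete (if simplistic) illustration, here is a first-order discrete-time low-pass filter, the usual discretization of an RC circuit; the function and parameter names are my own, not from the text:

```python
import numpy as np

def first_order_lowpass(x, cutoff_hz, fs_hz):
    """Simple RC-style low-pass: y[n] = y[n-1] + alpha * (x[n] - y[n-1])."""
    rc = 1.0 / (2 * np.pi * cutoff_hz)
    dt = 1.0 / fs_hz
    alpha = dt / (rc + dt)                  # smoothing factor in (0, 1)
    y = np.zeros_like(x, dtype=float)
    y[0] = alpha * x[0]
    for n in range(1, len(x)):
        y[n] = y[n - 1] + alpha * (x[n] - y[n - 1])
    return y

fs = 1000.0
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)   # slow tone + fast tone
y = first_order_lowpass(x, cutoff_hz=20.0, fs_hz=fs)
# The 5 Hz component passes nearly unchanged, while the 200 Hz component is
# attenuated by roughly a factor of ten: the short-term fluctuations are
# smoothed away and the longer-term trend remains.
```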
https://en.wikipedia.org/wiki/Time-invariant%20system
In control theory, a time-invariant (TI) system has a time-dependent system function that is not a direct function of time. Such systems are regarded as a class of systems in the field of system analysis. The time-dependent system function is a function of the time-dependent input function. If this function depends only indirectly on the time-domain (via the input function, for example), then that is a system that would be considered time-invariant. Conversely, any direct dependence on the time-domain of the system function could be considered as a "time-varying system". Mathematically speaking, "time-invariance" of a system is the following property: Given a system with a time-dependent output function , and a time-dependent input function , the system will be considered time-invariant if a time-delay on the input directly equates to a time-delay of the output function. For example, if time is "elapsed time", then "time-invariance" implies that the relationship between the input function and the output function is constant with respect to time In the language of signal processing, this property can be satisfied if the transfer function of the system is not a direct function of time except as expressed by the input and output. In the context of a system schematic, this property can also be stated as follows, as shown in the figure to the right: If a system is time-invariant then the system block commutes with an arbitrary delay. If a time-invariant system is also linear, it is the subject of linear time-invariant theory (linear time-invariant) with direct applications in NMR spectroscopy, seismology, circuits, signal processing, control theory, and other technical areas. Nonlinear time-invariant systems lack a comprehensive, governing theory. Discrete time-invariant systems are known as shift-invariant systems. Systems which lack the time-invariant property are studied as time-variant systems. Simple example To demonstrate how to determine if a syst
https://en.wikipedia.org/wiki/Superscalar%20processor
A superscalar processor is a CPU that implements a form of parallelism called instruction-level parallelism within a single processor. In contrast to a scalar processor, which can execute at most one single instruction per clock cycle, a superscalar processor can execute more than one instruction during a clock cycle by simultaneously dispatching multiple instructions to different execution units on the processor. It therefore allows more throughput (the number of instructions that can be executed in a unit of time) than would otherwise be possible at a given clock rate. Each execution unit is not a separate processor (or a core if the processor is a multi-core processor), but an execution resource within a single CPU such as an arithmetic logic unit. While a superscalar CPU is typically also pipelined, superscalar and pipelining execution are considered different performance enhancement techniques. The former executes multiple instructions in parallel by using multiple execution units, whereas the latter executes multiple instructions in the same execution unit in parallel by dividing the execution unit into different phases. The superscalar technique is traditionally associated with several identifying characteristics (within a given CPU): Instructions are issued from a sequential instruction stream The CPU dynamically checks for data dependencies between instructions at run time (versus software checking at compile time) The CPU can execute multiple instructions per clock cycle History Seymour Cray's CDC 6600 from 1964 is often mentioned as the first superscalar design. The 1967 IBM System/360 Model 91 was another superscalar mainframe. The Intel i960CA (1989), the AMD 29000-series 29050 (1990), and the Motorola MC88110 (1991), microprocessors were the first commercial single-chip superscalar microprocessors. RISC microprocessors like these were the first to have superscalar execution, because RISC architectures free transistors and die area which can be us
https://en.wikipedia.org/wiki/Log%20Gabor%20filter
In signal processing it is useful to simultaneously analyze the space and frequency characteristics of a signal. While the Fourier transform gives the frequency information of the signal, it is not localized. This means that we cannot determine which part of a (perhaps long) signal produced a particular frequency. It is possible to use a short time Fourier transform for this purpose, however the short time Fourier transform limits the basis functions to be sinusoidal. To provide a more flexible space-frequency signal decomposition several filters (including wavelets) have been proposed. The Log-Gabor filter is one such filter that is an improvement upon the original Gabor filter. The advantage of this filter over the many alternatives is that it better fits the statistics of natural images compared with Gabor filters and other wavelet filters. Applications The Log-Gabor filter is able to describe a signal in terms of the local frequency responses. Because this is a fundamental signal analysis technique, it has many applications in signal processing. Indeed, any application that uses Gabor filters, or other wavelet basis functions may benefit from the Log-Gabor filter. However, there may not be any benefit depending on the particulars of the design problem. Nevertheless, the Log-Gabor filter has been shown to be particularly useful in image processing applications, because it has been shown to better capture the statistics of natural images. In image processing, there are a few low-level examples of the use of Log-Gabor filters. Edge detection is one such primitive operation, where the edges of the image are labeled. Because edges appear in the frequency domain as high frequencies, it is natural to use a filter such as the Log-Gabor to pick out these edges. These detected edges can be used as the input to a segmentation algorithm or a recognition algorithm. A related problem is corner detection. In corner detection the goal is to find points in the image that are c
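A sketch of a commonly used 1-D log-Gabor frequency response (a Gaussian on a logarithmic frequency axis); the parameter names and the bandwidth ratio of 0.65 are my own illustrative choices, not prescriptions from the text:

```python
import numpy as np

def log_gabor(freqs, f0, sigma_ratio=0.65):
    """G(f) = exp(-(ln(f/f0))^2 / (2 * (ln k)^2)), with k = sigma/f0."""
    G = np.zeros_like(freqs, dtype=float)
    positive = freqs > 0                       # log(f/f0) is undefined at f = 0
    G[positive] = np.exp(-np.log(freqs[positive] / f0) ** 2
                         / (2 * np.log(sigma_ratio) ** 2))
    return G

freqs = np.fft.rfftfreq(1024, d=1.0)           # normalized frequencies for a 1024-sample signal
G = log_gabor(freqs, f0=0.1)
# Filtering a real signal x at this scale:
#   np.fft.irfft(np.fft.rfft(x) * G, n=1024)
# gives the even-symmetric (real) part of the log-Gabor response.
```

Setting the response to zero at f = 0 also gives the filter zero DC gain, one of the properties usually cited in favour of the log-Gabor form.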
https://en.wikipedia.org/wiki/WSDMA
WSDMA (Wideband Space Division Multiple Access) is a high bandwidth channel access method, developed for multi-transceiver systems such as active array antennas. WSDMA is a beamforming technique suitable for overlay on the latest air-interface protocols including WCDMA and OFDM. WSDMA enabled systems can determine the angle of arrival (AoA) of received signals to spatially divide a cell sector into many sub-sectors. This spatial awareness provides information necessary to maximise Carrier to Noise+Interference Ratio (CNIR) link budget, through a range of digital processing routines. WSDMA facilitates a flexible approach to how uplink and downlink beamforming is performed and is capable of spatial filtering known interference generating locations. Key features Transmit and receive beam shaping and steering Multiple sub-sector path processing Spatial interference filtering Sector activity scan Characteristics and principles of operation Active Panel Antenna Calibration Active Panel Antenna systems, comprising a planar array of micro-radios and associated antenna element, rely upon a comprehensive calibration scheme which is able to correct inter-path signal mismatches in phase, amplitude and latency. This facilitates precise control of the uplink and downlink RF beam pattern and avoids distortion effects that occur in the absence of calibration. Multiple Sub-Sector path processing By dividing the cell sector into a number of sub-sector beams, WSDMA provides the network with spatially filtered signals, maximising link budget through improved antenna gain and interference mitigation. This allows for mobile users in the cell to reduce their uplink power transmission, thereby further reducing interference and minimising both base station and UE power consumption. WSDMA provides simultaneous sector-wide and sub-sector beam processing to improve link performance in multipath environments. Sub-sector beam processing can optimise changing user demographics within th
https://en.wikipedia.org/wiki/Equals%20Pi
Equals Pi is a painting created by American artist Jean-Michel Basquiat in 1982. The painting was published in GQ magazine in 1983 and W magazine in 2018. History Equals Pi was executed by Jean-Michel Basquiat in 1982, which is considered his most coveted year. The robin egg blue painting contains Basquiat's signature crown motif and a head alongside his characteristic scrawled text with phrases such as "AMORITE," "TEN YEN" and "DUNCE." The title refers to the mathematical equations incorporated on the right side of the work. The cone refers to the pointed dunce caps depicted in the work. The painting was acquired in 1982 by Anne Dayton, who was the advertising manager of Artforum magazine. She purchased it for $7,000 from Basquiat's exhibition at the Fun Gallery in the East Village. At the time the painting was called Still Pi; however, when the work appeared in the March 1983 issue of GQ magazine, it was titled Knowledge of the Cone, which is written on the top of the painting. According to reports in August 2021, the luxury jewelry brand Tiffany & Co. had recently acquired the painting privately from the Sabbadini family, for a price in the range of $15 million to $20 million. The painting, which is the brand's signature blue color, is displayed in the Tiffany & Co. Landmark store on Fifth Avenue in New York City. Although initial reports claimed that the painting had never been seen before, it was previously offered at auction twice and had appeared in magazines. The work was first offered at a Sotheby's sale in London in June 1990, where it went unsold. In December 1996, the Sabbadinis, a Milan-based clan behind the eponymous jewelry house, purchased it during a Sotheby's London auction for $253,000. Mother and daughter Stefania and Micól Sabbadini posed in front of the painting in their living room for a 2018 feature in W magazine. Stephen Torton, a former assistant of Basquiat’s, posted an Instagram statement saying, “I designed and built stretchers, painted ba
https://en.wikipedia.org/wiki/Dark%20current%20%28physics%29
In physics and in electronic engineering, dark current is the relatively small electric current that flows through photosensitive devices such as a photomultiplier tube, photodiode, or charge-coupled device even when no photons enter the device; it consists of the charges generated in the detector when no outside radiation is entering the detector. It is referred to as reverse bias leakage current in non-optical devices and is present in all diodes. Physically, dark current is due to the random generation of electrons and holes within the depletion region of the device. The charge generation rate is related to specific crystallographic defects within the depletion region. Dark-current spectroscopy can be used to determine the defects present by monitoring the peaks in the dark current histogram's evolution with temperature. Dark current is one of the main sources for noise in image sensors such as charge-coupled devices. The pattern of different dark currents can result in a fixed-pattern noise; dark frame subtraction can remove an estimate of the mean fixed pattern, but there still remains a temporal noise, because the dark current itself has a shot noise. This dark current is the same that is studied in PN-Junction studies.
https://en.wikipedia.org/wiki/MCDRAM
Multi-Channel DRAM or MCDRAM (pronounced em cee dee ram) is a 3D-stacked DRAM that is used in the Intel Xeon Phi processor codenamed Knights Landing. It is a version of Hybrid Memory Cube developed in partnership with Micron Technology, and a competitor to High Bandwidth Memory. The many cores in the Xeon Phi processors, along with their associated vector processing units, enable them to consume many more gigabytes per second than traditional DRAM DIMMs can supply. The "Multi-channel" part of the MCDRAM full name reflects the cores having many more channels available to access the MCDRAM than processors have to access their attached DIMMs. This high channel count leads to MCDRAM's high bandwidth, up to 400+ GB/s, although the latencies are similar to a DIMM access. Its physical placement on the processor imposes some limits on capacity – up to 16 GB at launch, although speculated to go higher in the future. Programming The memory can be partitioned at boot time, with some used as cache for more distant DDR, and the remainder mapped into the physical address space. The application can request pages of virtual memory to be assigned to either the distant DDR directly, to the portion of DDR that is cached by the MCDRAM, or to the portion of the MCDRAM that is not being used as cache. One way to do this is via the memkind API. When used as cache, the latency of a miss accessing both the MCDRAM and DDR is slightly higher than going directly to DDR, and so applications may need to be tuned to avoid excessive cache misses.
https://en.wikipedia.org/wiki/Gate%20array
A gate array is an approach to the design and manufacture of application-specific integrated circuits (ASICs) using a prefabricated chip with components that are later interconnected into logic devices (e.g. NAND gates, flip-flops, etc.) according to custom order by adding metal interconnect layers in the factory. It was popular during the upheaval in the semiconductor industry in the 1980s, and its usage declined by the end of the 1990s. Similar technologies have also been employed to design and manufacture analog, analog-digital, and structured arrays, but, in general, these are not called gate arrays. Gate arrays have also been known as uncommitted logic arrays (ULAs), which also offered linear circuit functions, and semi-custom chips. History Development Gate arrays had several concurrent development paths. Ferranti in the UK pioneered commercializing bipolar ULA technology, offering circuits of "100 to 10,000 gates and above" by 1983. The company's early lead in semi-custom chips, with the initial application of a ULA integrated circuit involving a camera from Rollei in 1972, expanding to "practically all European camera manufacturers" as users of the technology, led to the company's dominance in this particular market throughout the 1970s. However, by 1982, as many as 30 companies had started to compete with Ferranti, reducing the company's market share to around 30 percent. Ferranti's "major competitors" were other British companies such as Marconi and Plessey, both of which had licensed technology from another British company, Micro Circuit Engineering. A contemporary initiative, UK5000, also sought to produce a CMOS gate array with "5,000 usable gates", with involvement from British Telecom and a number of other major British technology companies. IBM developed proprietary bipolar master slices that it used in mainframe manufacturing in the late 1970s and early 1980s, but never commercialized them externally. Fairchild Semiconductor also flirted brief
https://en.wikipedia.org/wiki/Harvard%20architecture
The Harvard architecture is a computer architecture with separate storage and signal pathways for instructions and data. It is often contrasted with the von Neumann architecture, where program instructions and data share the same memory and pathways. The term is often stated as having originated from the Harvard Mark I relay-based computer, which stored instructions on punched tape (24 bits wide) and data in electro-mechanical counters. These early machines had data storage entirely contained within the central processing unit, and provided no access to the instruction storage as data. Programs needed to be loaded by an operator; the processor could not initialize itself. However, in the only peer-reviewed published paper on the topic - The Myth of the Harvard Architecture published in the IEEE Annals of the History of Computing - the author demonstrates that: - 'The term “Harvard architecture” was coined decades later, in the context of microcontroller design' and only 'retrospectively applied to the Harvard machines and subsequently applied to RISC microprocessors with separated caches' - 'The so-called “Harvard” and “von Neumann” architectures are often portrayed as a dichotomy, but the various devices labeled as the former have far more in common with the latter than they do with each other.' - 'In short [the Harvard architecture] isn't an architecture and didn't derive from work at Harvard.' Modern processors appear to the user to be systems with von Neumann architectures, with the program code stored in the same main memory as the data. For performance reasons, internally and largely invisible to the user, most designs have separate processor caches for the instructions and data, with separate pathways into the processor for each. This is one form of what is known as the modified Harvard architecture. Harvard architecture is historically, and traditionally, split into two address spaces, but having three, i.e. two extra (and all accessed in each cycle)
https://en.wikipedia.org/wiki/List%20of%20geodesic%20polyhedra%20and%20Goldberg%20polyhedra
This is a list of selected geodesic polyhedra and Goldberg polyhedra, two infinite classes of polyhedra. Geodesic polyhedra and Goldberg polyhedra are duals of each other. The geodesic and Goldberg polyhedra are parameterized by integers m and n, with $m \ge 1$ and $n \ge 0$. T is the triangulation number, which is equal to $T = m^2 + mn + n^2$. Icosahedral Octahedral Tetrahedral
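A tiny helper reflecting the parameterization above (hypothetical code, only to make the triangulation number concrete):

```python
def triangulation_number(m, n):
    # T = m^2 + m*n + n^2 for a geodesic/Goldberg polyhedron with parameters (m, n)
    return m * m + m * n + n * n

for m, n in [(1, 0), (1, 1), (2, 0), (2, 1), (3, 0)]:
    print((m, n), triangulation_number(m, n))   # 1, 3, 4, 7, 9
```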
https://en.wikipedia.org/wiki/B5000%20instruction%20set
The Burroughs B5000 was the first stack machine and also the first computer with a segmented virtual memory. The Burroughs B5000 instruction set includes the set of valid operations for the B5000, B5500 and B5700. It is not compatible with the B6500, B7500, B8500 or their successors. Instruction streams on a B5000 contain 12-bit syllables, four to a word. The architecture has two modes, Word Mode and Character Mode, and each has a separate repertoire of syllables. A processor may be either Control State or Normal State, and certain syllables are only permissible in Control State. The architecture does not provide for addressing registers or storage directly; all references are through the 1024 word Program Reference Table (PRT), current code segment, marked locations within the stack or to the A and B registers holding the top two locations on the stack. Burroughs numbers bits in a syllable from 0 (high bit) to 11 (low bit) and in a word from 0 (high bit) to 47 (low bit). Word Mode In Word Mode, there are four types of syllables. The interpretation of the 10-bit relative address in Operand Call and Descriptor Call depends on the setting of several processor flags. For main programs (SALF off) it is always an offset into the Program Reference Table (PRT). Character Mode
https://en.wikipedia.org/wiki/U.S.%20National%20Vegetation%20Classification
The U.S. National Vegetation Classification (NVC or USNVC) is a scheme for classifying the natural and cultural vegetation communities of the United States. The purpose of this standardized vegetation classification system is to facilitate communication between land managers, scientists, and the public when managing, researching, and protecting plant communities. The non-profit group NatureServe maintains the NVC for the U.S. government. See also British National Vegetation Classification Vegetation classification External links The U.S. National Vegetation Classification website "National Vegetation Classification Standard, Version 2" FGDC-STD-005-2008, Vegetation Subcommittee, Federal Geographic Data Committee, February 2008 U.S. Geological Survey page about the Vegetation Characterization Program Federal Geographic Data Committee page about the NVC Environment of the United States Flora of the United States NatureServe Biological classification
https://en.wikipedia.org/wiki/Metatheorem
In logic, a metatheorem is a statement about a formal system proven in a metalanguage. Unlike theorems proved within a given formal system, a metatheorem is proved within a metatheory, and may reference concepts that are present in the metatheory but not the object theory. A formal system is determined by a formal language and a deductive system (axioms and rules of inference). The formal system can be used to prove particular sentences of the formal language with that system. Metatheorems, however, are proved externally to the system in question, in its metatheory. Common metatheories used in logic are set theory (especially in model theory) and primitive recursive arithmetic (especially in proof theory). Rather than demonstrating particular sentences to be provable, metatheorems may show that each of a broad class of sentences can be proved, or show that certain sentences cannot be proved. Examples Examples of metatheorems include: The deduction theorem for first-order logic says that a sentence of the form φ→ψ is provable from a set of axioms A if and only if the sentence ψ is provable from the system whose axioms consist of φ and all the axioms of A. The class existence theorem of von Neumann–Bernays–Gödel set theory states that for every formula whose quantifiers range only over sets, there is a class consisting of the sets satisfying the formula. Consistency proofs of systems such as Peano arithmetic. See also Metamathematics Use–mention distinction
https://en.wikipedia.org/wiki/Modulo%20%28mathematics%29
In mathematics, the term modulo ("with respect to a modulus of", the Latin ablative of modulus which itself means "a small measure") is often used to assert that two distinct mathematical objects can be regarded as equivalent—if their difference is accounted for by an additional factor. It was initially introduced into mathematics in the context of modular arithmetic by Carl Friedrich Gauss in 1801. Since then, the term has gained many meanings—some exact and some imprecise (such as equating "modulo" with "except for"). For the most part, the term often occurs in statements of the form: A is the same as B modulo C which is often equivalent to "A is the same as B up to C", and means A and B are the same—except for differences accounted for or explained by C. History Modulo is a mathematical jargon that was introduced into mathematics in the book Disquisitiones Arithmeticae by Carl Friedrich Gauss in 1801. Given the integers a, b and n, the expression "a ≡ b (mod n)", pronounced "a is congruent to b modulo n", means that a − b is an integer multiple of n, or equivalently, a and b both share the same remainder when divided by n. It is the Latin ablative of modulus, which itself means "a small measure." The term has gained many meanings over the years—some exact and some imprecise. The most general precise definition is simply in terms of an equivalence relation R, where a is equivalent (or congruent) to b modulo R if aRb. More informally, the term is found in statements of the form: A is the same as B modulo C which means A and B are the same—except for differences accounted for or explained by C. Usage Original use Gauss originally intended to use "modulo" as follows: given the integers a, b and n, the expression a ≡ b (mod n) (pronounced "a is congruent to b modulo n") means that a − b is an integer multiple of n, or equivalently, a and b both leave the same remainder when divided by n. For example: 13 is congruent to 63 modulo 10 means that 13 − 63 is a
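For instance, the congruence in the example above can be checked directly in Python, where the % operator returns the remainder:

```python
a, b, n = 13, 63, 10
print((a - b) % n == 0)   # True: 13 is congruent to 63 modulo 10
print(a % n, b % n)       # 3 3 -- both leave the same remainder on division by 10
```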
https://en.wikipedia.org/wiki/Mathematica%3A%20A%20World%20of%20Numbers...%20and%20Beyond
Mathematica: A World of Numbers... and Beyond is a kinetic and static exhibition of mathematical concepts designed by Charles and Ray Eames, originally debuted at the California Museum of Science and Industry in 1961. Duplicates have since been made, and they (as well as the original) have been moved to other institutions. History In March, 1961 a new science wing at the California Museum of Science and Industry in Los Angeles opened. The IBM Corporation had been asked by the Museum to make a contribution; IBM in turn asked the famous California designer team of Charles Eames and his wife Ray Eames to come up with a good proposal. The result was that the Eames Office was commissioned by IBM to design an interactive exhibition called Mathematica: A World of Numbers... and Beyond. This was the first of many exhibitions designed by the Eames Office. The exhibition stayed at the Museum until January 1998, making it the longest running of any corporate sponsored museum exhibition. Furthermore, it is the only one of the dozens of exhibitions designed by the Office of Charles and Ray Eames that is still extant. This original Mathematica exhibition was reassembled for display at the Alyce de Roulet Williamson Gallery at Art Center College of Design in Pasadena, California, July 30 through October 1, 2000. It is now owned by and on display at the New York Hall of Science, though it currently lacks the overhead plaques with quotations from mathematicians that were part of the original installation. Duplicates In November, 1961 an exact duplicate was made for Chicago's Museum of Science and Industry, where it was shown until late 1980. From there it was sold and relocated to the Museum of Science in Boston, Massachusetts, where it is permanently on display. The Boston installation bears the closest resemblance to the original Eames design, including numerous overhead plaques featuring historic quotations from famous mathematicians. As part of a refurbishment, a graphic p
https://en.wikipedia.org/wiki/Hann%20function
The Hann function is named after the Austrian meteorologist Julius von Hann. It is a window function used to perform Hann smoothing. The function, with length and amplitude is given by:   For digital signal processing, the function is sampled symmetrically (with spacing and amplitude ): which is a sequence of samples, and can be even or odd. (see ) It is also known as the raised cosine window, Hann filter, von Hann window, etc. Fourier transform The Fourier transform of is given by: Discrete transforms The Discrete-time Fourier transform (DTFT) of the length, time-shifted sequence is defined by a Fourier series, which also has a 3-term equivalent that is derived similarly to the Fourier transform derivation: The truncated sequence is a DFT-even (aka periodic) Hann window. Since the truncated sample has value zero, it is clear from the Fourier series definition that the DTFTs are equivalent. However, the approach followed above results in a significantly different-looking, but equivalent, 3-term expression: An N-length DFT of the window function samples the DTFT at frequencies for integer values of From the expression immediately above, it is easy to see that only 3 of the N DFT coefficients are non-zero. And from the other expression, it is apparent that all are real-valued. These properties are appealing for real-time applications that require both windowed and non-windowed (rectangularly windowed) transforms, because the windowed transforms can be efficiently derived from the non-windowed transforms by convolution. Name The function is named in honor of von Hann, who used the three-term weighted average smoothing technique on meteorological data. However, the term Hanning function is also conventionally used, derived from the paper in which the term hanning a signal was used to mean applying the Hann window to it. The confusion arose from the similar Hamming function, named after Richard Hamming. See also Window function Apod
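The claim that only 3 of the N DFT coefficients of the DFT-even (periodic) Hann window are non-zero, and that all of them are real, is easy to verify numerically (a small sketch; N = 16 is an arbitrary choice of mine):

```python
import numpy as np

N = 16
n = np.arange(N)
w = 0.5 - 0.5 * np.cos(2 * np.pi * n / N)   # periodic ("DFT-even") Hann window
W = np.fft.fft(w)
print(np.round(W, 10))
# Only bins 0, 1 and N-1 are non-zero: N/2 at bin 0 and -N/4 at bins 1 and N-1;
# the imaginary parts vanish to within rounding error.
```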
https://en.wikipedia.org/wiki/Great%20Elephant%20Census
The Great Elephant Census—the largest wildlife survey in history—was an African-wide census designed to provide accurate data about the number and distribution of African elephants by using standardized aerial surveys of hundreds of thousands of square miles of terrain in Africa. The census was completed and published in the online journal PeerJ on 31 August 2016 at a cost of US$7 million. History Scientists believe that there were as many as 20 million African elephants two centuries ago. By 1979, only 600,000 elephants remained on the continent. A pan-African elephant census had not been conducted since the 1970s. The idea of a modern census was devised by Elephants Without Borders and supported, both financially and logistically, by Paul G. Allen. It was also supported by other organizations and individuals, including African Parks, Frankfurt Zoological Society, Wildlife Conservation Society, The Nature Conservancy, IUCN African Elephant Specialist Group, Howard Frederick, Mike Norton-Griffith, Kevin Dunham, Chris Touless, and Curtice Griffin with the report released in September 2016. Mike Chase, the founder of Elephants Without Borders, was the lead scientist of the census. Chase led a group of 90 scientists and 286 crew in 18 African countries for over two years to collect the data. During this time the team flew a distance of over , equivalent to flying to the moon and a quarter of the way back, in over 10,000 hours of collecting data. The area covered represents 93% of the elephants' known range. Forest elephants, which live in central and western Africa, were excluded from the survey. Report The final report was released on 31 August 2016 in Honolulu at the IUCN World Conservation Congress. Data collected showed a 30 percent decline in the population of African savanna elephant in 15 of the 18 countries surveyed. The reduction occurred between 2007 and 2014, representing a loss of approximately 144,000 elephants. The total population of Africa's savan
https://en.wikipedia.org/wiki/Coherence%20%28signal%20processing%29
In signal processing, the coherence is a statistic that can be used to examine the relation between two signals or data sets. It is commonly used to estimate the power transfer between input and output of a linear system. If the signals are ergodic, and the system function is linear, it can be used to estimate the causality between the input and output. Definition and formulation The coherence (sometimes called magnitude-squared coherence) between two signals x(t) and y(t) is a real-valued function that is defined as: $C_{xy}(f) = \frac{|G_{xy}(f)|^2}{G_{xx}(f)\,G_{yy}(f)}$ where Gxy(f) is the Cross-spectral density between x and y, and Gxx(f) and Gyy(f) the auto spectral density of x and y respectively. The magnitude of the spectral density is denoted as |G|. Given the restrictions noted above (ergodicity, linearity) the coherence function estimates the extent to which y(t) may be predicted from x(t) by an optimum linear least squares function. Values of coherence will always satisfy $0 \le C_{xy}(f) \le 1$. For an ideal constant parameter linear system with a single input x(t) and single output y(t), the coherence will be equal to one. To see this, consider a linear system with an impulse response h(t) defined as: $y(t) = h(t) * x(t)$, where $*$ denotes convolution. In the Fourier domain this equation becomes $Y(f) = H(f)X(f)$, where Y(f) is the Fourier transform of y(t) and H(f) is the linear system transfer function. Since, for an ideal linear system: $G_{xy}(f) = H(f)G_{xx}(f)$ and $G_{yy}(f) = |H(f)|^2 G_{xx}(f)$, and since $G_{xx}(f)$ is real, the following identity holds, $C_{xy}(f) = \frac{|H(f)G_{xx}(f)|^2}{G_{xx}(f)\,|H(f)|^2 G_{xx}(f)} = 1$. However, in the physical world an ideal linear system is rarely realized, noise is an inherent component of system measurement, and it is likely that a single input, single output linear system is insufficient to capture the complete system dynamics. In cases where the ideal linear system assumptions are insufficient, the Cauchy–Schwarz inequality guarantees a value of $C_{xy}(f) \le 1$. If Cxy is less than one but greater than zero it is an indication that either: noise is entering the measurements, that the assumed function relating x(t) and y(t) is not linear, or that y(t) is produ
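A hedged sketch of estimating the magnitude-squared coherence with SciPy's Welch-based estimator; the low-pass system and the noise level are arbitrary choices of mine, used only to show the behaviour described above:

```python
import numpy as np
from scipy import signal

rng = np.random.default_rng(0)
fs = 1000.0
x = rng.standard_normal(20_000)                       # broadband input signal
b, a = signal.butter(4, 100, fs=fs)                   # a linear system h(t): 100 Hz low-pass
y = signal.lfilter(b, a, x) + 0.1 * rng.standard_normal(x.size)   # output plus measurement noise

f, Cxy = signal.coherence(x, y, fs=fs, nperseg=1024)
# Cxy is close to 1 below ~100 Hz, where y is well predicted from x by the
# linear system, and drops at higher frequencies, where the additive noise
# dominates y -- exactly the two regimes discussed in the text.
```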
https://en.wikipedia.org/wiki/Food%20safety
Food safety (or food hygiene) is used as a scientific method/discipline describing handling, preparation, and storage of food in ways that prevent foodborne illness. The occurrence of two or more cases of a similar illness resulting from the ingestion of a common food is known as a food-borne disease outbreak. This includes a number of routines that should be followed to avoid potential health hazards. In this way, food safety often overlaps with food defense to prevent harm to consumers. The tracks within this line of thought are safety between industry and the market and then between the market and the consumer. In considering industry-to-market practices, food safety considerations include the origins of food including the practices relating to food labeling, food hygiene, food additives and pesticide residues, as well as policies on biotechnology and food and guidelines for the management of governmental import and export inspection and certification systems for foods. In considering market-to-consumer practices, the usual thought is that food ought to be safe in the market and the concern is safe delivery and preparation of the food for the consumer. Food safety, nutrition and food security are closely related. Unhealthy food creates a cycle of disease and malnutrition that affects infants and adults as well. Food can transmit pathogens, which can result in the illness or death of the person or other animals. The main types of pathogens are bacteria, viruses, parasites, and fungus. The WHO Foodborne Disease Epidemiology Reference Group conducted the only study that solely and comprehensively focused on the global health burden of foodborne diseases. This study, which involved the work of over 60 experts for a decade, is the most comprehensive guide to the health burden of foodborne diseases. The first part of the study revealed that 31 foodborne hazards considered priority accounted for roughly 420,000 deaths in LMIC and posed a burden of about 33 million disa
https://en.wikipedia.org/wiki/Rolanet
Rolanet (Robotron Local Area Network) was a networking standard, developed in the former German Democratic Republic (GDR) and introduced in 1987 by the computer manufacturer Robotron. It enabled computer networking over coax cable and glass fiber with a range of . Networking speed was 500 kBd, comparable to other standards of the day. A maximum of 253 computers could be connected using Rolanet. Two variants of Rolanet existed: Rolanet 1, introduced in 1987, saw limited deployment; Rolanet 2 was planned as a successor to Rolanet 1, but presumably never got beyond the prototype stage. A scaled-down version of Rolanet, BICNet, was used for educational purposes. It is no longer possible to assemble a functioning Rolanet system today, due to lack of software and working hardware. External links More information about Robotron networking technologies on Robotrontechnik.de Computer networking Science and technology in East Germany
https://en.wikipedia.org/wiki/Total%20variation%20denoising
In signal processing, particularly image processing, total variation denoising, also known as total variation regularization or total variation filtering, is a noise removal process (filter). It is based on the principle that signals with excessive and possibly spurious detail have high total variation, that is, the integral of the image gradient magnitude is high. According to this principle, reducing the total variation of the signal—subject to it being a close match to the original signal—removes unwanted detail whilst preserving important details such as edges. The concept was pioneered by L. I. Rudin, S. Osher, and E. Fatemi in 1992 and so is today known as the ROF model. This noise removal technique has advantages over simple techniques such as linear smoothing or median filtering which reduce noise but at the same time smooth away edges to a greater or lesser degree. By contrast, total variation denoising is a remarkably effective edge-preserving filter, i.e., simultaneously preserving edges whilst smoothing away noise in flat regions, even at low signal-to-noise ratios. 1D signal series For a digital signal $y_n$, we can, for example, define the total variation as $V(y) = \sum_n |y_{n+1} - y_n|$. Given an input signal $x_n$, the goal of total variation denoising is to find an approximation, call it $y_n$, that has smaller total variation than $x_n$ but is "close" to $x_n$. One measure of closeness is the sum of square errors: $E(x, y) = \frac{1}{2}\sum_n (x_n - y_n)^2$. So the total-variation denoising problem amounts to minimizing the following discrete functional over the signal $y_n$: $E(x, y) + \lambda V(y)$. By differentiating this functional with respect to $y_n$, we can derive a corresponding Euler–Lagrange equation, that can be numerically integrated with the original signal $x_n$ as initial condition. This was the original approach. Alternatively, since this is a convex functional, techniques from convex optimization can be used to minimize it and find the solution $y_n$. Regularization properties The regularization parameter $\lambda$ plays a critical role in the denoising process. Wh
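The following is a deliberately naive sketch (not the ROF algorithm and not an efficient solver): plain gradient descent on the discrete functional above, with the absolute value in the total variation smoothed by a small ε so that it is differentiable. All parameter values are arbitrary choices of mine.

```python
import numpy as np

def tv_denoise_1d(x, lam=1.0, step=0.01, iters=5000, eps=1e-3):
    """Minimize 0.5*sum((y-x)^2) + lam*sum(sqrt((y[i+1]-y[i])^2 + eps))."""
    y = x.astype(float).copy()
    for _ in range(iters):
        d = np.diff(y)                          # d[i] = y[i+1] - y[i]
        t = d / np.sqrt(d * d + eps)            # derivative of the smoothed |d|
        grad = y - x                            # gradient of the data-fidelity term
        grad[:-1] -= lam * t                    # contribution of d[i] to y[i]
        grad[1:] += lam * t                     # contribution of d[i] to y[i+1]
        y -= step * grad
    return y

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.2, 0.8], 100)    # piecewise-constant signal with 3 jumps
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = tv_denoise_1d(noisy, lam=1.0)
# The flat regions come back nearly flat while the jumps stay sharp,
# illustrating the edge-preserving behaviour described above.
```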
https://en.wikipedia.org/wiki/Vkernel
A virtual kernel architecture (vkernel) is an operating system virtualisation paradigm where kernel code can be compiled to run in the user space, for example, to ease debugging of various kernel-level components, in addition to general-purpose virtualisation and compartmentalisation of system resources. It is used by DragonFly BSD in its vkernel implementation since DragonFly 1.7, having been first revealed in , and first released in the stable branch with DragonFly 1.8 in . The long-term goal, in addition to easing kernel development, is to make it easier to support internet-connected computer clusters without compromising local security. Similar concepts exist in other operating systems as well; in Linux, a similar virtualisation concept is known as user-mode Linux; whereas in NetBSD since the summer of 2007, it has been the initial focus of the rump kernel infrastructure. The virtual kernel concept is nearly the exact opposite of the unikernel concept — with vkernel, kernel components get to run in userspace to ease kernel development and debugging, supported by a regular operating system kernel; whereas with a unikernel, userspace-level components get to run directly in kernel space for extra performance, supported by baremetal hardware or a hardware virtualisation stack. However, both vkernels and unikernels can be used for similar tasks as well, for example, to self-contain software to a virtualised environment with low overhead. In fact, NetBSD's rump kernel, originally having a focus of running kernel components in userspace, has since shifted into the unikernel space as well (going after the anykernel moniker for supporting both paradigms). The vkernel concept is different from a FreeBSD jail in that a jail is only meant for resource isolation, and cannot be used to develop and test new kernel functionality in the userland, because each jail is sharing the same kernel. (DragonFly, however, still has FreeBSD jail support as well.) In DragonFly, the v
https://en.wikipedia.org/wiki/Beraha%20constants
The Beraha constants are a series of mathematical constants by which the $n$th Beraha constant is given by $B(n) = 2 + 2\cos\left(\frac{2\pi}{n}\right)$. Notable examples of Beraha constants include $B(5)$, which is $\varphi + 1$, where $\varphi$ is the golden ratio, $B(7)$, which is the silver constant (also known as the silver root), and $B(10)$, which is $\varphi + 2$. The following table summarizes the first ten Beraha constants. See also Chromatic polynomial Notes
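A short computation of the first ten constants from the closed form above:

```python
import math

def beraha(n):
    # B(n) = 2 + 2*cos(2*pi/n)
    return 2 + 2 * math.cos(2 * math.pi / n)

for n in range(1, 11):
    print(n, round(beraha(n), 6))
# B(1)=4, B(2)=0, B(3)=1, B(4)=2, B(5)=2.618034 (phi + 1), B(6)=3,
# B(7)=3.246980 (the silver constant), ..., B(10)=3.618034 (phi + 2).
```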
https://en.wikipedia.org/wiki/Gardner%E2%80%93Salinas%20braille%20codes
The Gardner–Salinas braille codes are a method of encoding mathematical and scientific notation linearly using braille cells for tactile reading by the visually impaired. The most common form of Gardner–Salinas braille is the 8-cell variety, commonly called GS8. There is also a corresponding 6-cell form called GS6. The codes were developed as a replacement for Nemeth Braille by John A. Gardner, a physicist at Oregon State University, and Norberto Salinas, an Argentinian mathematician. The Gardner–Salinas braille codes are an example of a compact human-readable markup language. The syntax is based on the LaTeX system for scientific typesetting. Table of Gardner–Salinas 8-dot (GS8) braille The set of lower-case letters, the period, comma, semicolon, colon, exclamation mark, apostrophe, and opening and closing double quotes are the same as in Grade-2 English Braille. Digits Apart from 0, this is the same as the Antoine notation used in French and Luxembourgish Braille. Upper-case letters GS8 upper-case letters are indicated by the same cell as standard English braille (and GS8) lower-case letters, with dot #7 added. Compare Luxembourgish Braille. Greek letters Dot 8 is added to the letter forms of International Greek Braille to derive Greek letters: Characters differing from English Braille ASCII symbols and mathematical operators Text symbols Math and science symbols Markup * Encodes the fraction-slash for the single adjacent digits/letters as numerator and denominator. * Used for any > 1 digit radicand. ** Used for markup to represent inkprint text. Typeface indicators Shape symbols Set theory
https://en.wikipedia.org/wiki/Hostname
In computer networking, a hostname (archaically nodename) is a label that is assigned to a device connected to a computer network and that is used to identify the device in various forms of electronic communication, such as the World Wide Web. Hostnames may be simple names consisting of a single word or phrase, or they may be structured. Each hostname usually has at least one numeric network address associated with it for routing packets for performance and other reasons. Internet hostnames may have appended the name of a Domain Name System (DNS) domain, separated from the host-specific label by a period ("dot"). In the latter form, a hostname is also called a domain name. If the domain name is completely specified, including a top-level domain of the Internet, then the hostname is said to be a fully qualified domain name (FQDN). Hostnames that include DNS domains are often stored in the Domain Name System together with the IP addresses of the host they represent for the purpose of mapping the hostname to an address, or the reverse process. Internet hostnames In the Internet, a hostname is a domain name assigned to a host computer. This is usually a combination of the host's local name with its parent domain's name. For example, en.wikipedia.org consists of a local hostname (en) and the domain name wikipedia.org. This kind of hostname is translated into an IP address via the local hosts file, or the Domain Name System (DNS) resolver. It is possible for a single host computer to have several hostnames; but generally the operating system of the host prefers to have one hostname that the host uses for itself. Any domain name can also be a hostname, as long as the restrictions mentioned below are followed. So, for example, both en.wikipedia.org and wikipedia.org are hostnames because they both have IP addresses assigned to them. A hostname may be a domain name, if it is properly organized into the domain name system. A domain name may be a hostname if it has been a
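A small illustration with Python's standard library (it needs network access and a working system resolver; the hostnames queried are just the examples from the text):

```python
import socket

print(socket.gethostname())                       # this machine's own hostname
print(socket.gethostbyname("en.wikipedia.org"))   # one IPv4 address for the FQDN
for family, _, _, _, sockaddr in socket.getaddrinfo("wikipedia.org", 443):
    print(family.name, sockaddr[0])               # all resolved addresses (IPv4 and IPv6)
```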
https://en.wikipedia.org/wiki/Recognition%20signal
A recognition signal is a signal whereby a person, a ship, an airplane or something else is recognized. They can be used during war or can be used to help the police recognize each other during undercover operations. It can also be used in biology to signal that a molecule or chemical is to be bound to another molecule. War These signals are often used to recognize friends and enemies in a war. For military use these signals often use colored lights or the International marine signal flags. Police Other uses of the signal include the police who sometimes use a recognition signal so that officers in uniform can recognize officers in normal clothing (undercover). The NYPD often use headbands, wristbands or colored clothing as recognition signals which are known as the "color of the day". Biology A recognition signal is also a chemical signal used in biology to signal the end of a section of DNA or RNA during gene duplication in cells. See also Communication International Code of Signals Notes External links Signalman manual Communication Biological techniques and tools Military communications
https://en.wikipedia.org/wiki/Mathematics%20of%20apportionment
Mathematics of apportionment describes mathematical principles and algorithms for fair allocation of identical items among parties with different entitlements. Such principles are used to apportion seats in parliaments among federal states or political parties. See apportionment (politics) for the more concrete principles and issues related to apportionment, and apportionment by country for practical methods used around the world. Mathematically, an apportionment method is just a method of rounding fractions to integers. As simple as it may sound, each and every method for rounding suffers from one or more paradoxes. The mathematical theory of apportionment aims to decide what paradoxes can be avoided, or in other words, what properties can be expected from an apportionment method. The mathematical theory of apportionment was studied as early as 1907 by the mathematician Agner Krarup Erlang. It was later developed to a great detail by the mathematician Michel Balinsky and the economist Peyton Young. Besides its application to political parties, it is also applicable to fair item allocation when agents have different entitlements. It is also relevant in manpower planning - where jobs should be allocated in proportion to characteristics of the labor pool, to statistics - where the reported rounded numbers of percentages should sum up to 100%, and to bankruptcy problems. Definitions Input The inputs to an apportionment method are: A positive integer representing the total number of items to allocate. It is also called the house size, since in many cases, the items to allocate are seats in a house of representatives. A positive integer representing the number of agents to which items should be allocated. For example, these can be federal states or political parties. A vector of numbers representing entitlements - represents the entitlement of agent , that is, the amount of items to which is entitled (out of the total of ). These entitlements are often norma
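To make the "rounding fractions to integers" view concrete, here is a sketch of one particular apportionment method, the largest-remainder (Hamilton) method; the article surveys many methods, and this choice is mine for illustration, not something singled out by the text:

```python
def largest_remainder(house_size, entitlements):
    """Allocate house_size identical items in proportion to the entitlements."""
    total = sum(entitlements)
    quotas = [house_size * t / total for t in entitlements]     # fractional fair shares
    seats = [int(q) for q in quotas]                            # round each quota down
    remainders = [q - s for q, s in zip(quotas, seats)]
    # hand out the leftover items to the agents with the largest remainders
    for i in sorted(range(len(quotas)), key=lambda i: remainders[i], reverse=True):
        if sum(seats) == house_size:
            break
        seats[i] += 1
    return seats

print(largest_remainder(10, [47, 33, 20]))   # quotas 4.7, 3.3, 2.0 -> [5, 3, 2]
```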
https://en.wikipedia.org/wiki/Constant%20amplitude%20zero%20autocorrelation%20waveform
In signal processing, a Constant Amplitude Zero AutoCorrelation waveform (CAZAC) is a periodic complex-valued signal with modulus one and out-of-phase periodic (cyclic) autocorrelations equal to zero. CAZAC sequences find application in wireless communication systems, for example in 3GPP Long Term Evolution for synchronization of mobile phones with base stations. Zadoff–Chu sequences are well-known CAZAC sequences with special properties. Example CAZAC Sequence For a CAZAC sequence of length N, where M is relatively prime to N, the k-th symbol is given by: Even N: a_k = exp(jπM k^2 / N) Odd N: a_k = exp(jπM k(k+1) / N) Power Spectrum of CAZAC Sequence The power spectrum of a CAZAC sequence is flat. If we have a CAZAC sequence a_k, its time-domain cyclic autocorrelation is an impulse, r_m = N δ_m. The discrete Fourier transform of the autocorrelation is therefore flat, R_f = N for all f. The power spectrum is related to the autocorrelation by |A_f|^2 = R_f. As a result, the power spectrum is also flat.
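These properties can be checked numerically. The sketch below, assuming the quadratic-phase (Zadoff–Chu style) form given above, builds a short CAZAC sequence and verifies constant modulus, an impulse-like cyclic autocorrelation, and a flat power spectrum.

```python
import numpy as np

def cazac(N: int, M: int = 1) -> np.ndarray:
    """Quadratic-phase CAZAC sequence of length N; M must be coprime with N."""
    k = np.arange(N)
    phase = M * k**2 / N if N % 2 == 0 else M * k * (k + 1) / N
    return np.exp(1j * np.pi * phase)

s = cazac(16, M=3)
print(np.allclose(np.abs(s), 1.0))                        # constant amplitude
autocorr = np.array([np.vdot(s, np.roll(s, m)) for m in range(len(s))])
print(np.allclose(autocorr[1:], 0.0, atol=1e-9))          # zero off-peak cyclic autocorrelation
spectrum = np.abs(np.fft.fft(s)) ** 2
print(np.allclose(spectrum, spectrum[0]))                 # flat power spectrum
```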
https://en.wikipedia.org/wiki/Cambrian%20explosion
The Cambrian explosion, Cambrian radiation, Cambrian diversification, or the Biological Big Bang refers to an interval of time approximately in the Cambrian Period of early Paleozoic when there was a sudden radiation of complex life and practically all major animal phyla started appearing in the fossil record. It lasted for about 13 – 25 million years and resulted in the divergence of most modern metazoan phyla. The event was accompanied by major diversification in other groups of organisms as well. Before early Cambrian diversification, most organisms were relatively simple, composed of individual cells, or small multicellular organisms, occasionally organized into colonies. As the rate of diversification subsequently accelerated, the variety of life became much more complex, and began to resemble that of today. Almost all present-day animal phyla appeared during this period, including the earliest chordates. A 2019 paper suggests that the timing should be expanded back to include the late Ediacaran, where another diverse soft-bodied biota existed and possibly persisted into the Cambrian, rather than just the narrower timeframe of the "Cambrian Explosion" event visible in the fossil record, based on analysis of chemicals that would have laid the building blocks for a progression of transitional radiations starting with the Ediacaran period and continuing at a similar rate into the Cambrian. History and significance The seemingly rapid appearance of fossils in the "Primordial Strata" was noted by William Buckland in the 1840s, and in his 1859 book On the Origin of Species, Charles Darwin discussed the then-inexplicable lack of earlier fossils as one of the main difficulties for his theory of descent with slow modification through natural selection. The long-running puzzlement about the seemingly-sudden appearance of the Cambrian fauna without evident precursor(s) centers on three key points: whether there really was a mass diversification of complex organisms
https://en.wikipedia.org/wiki/Wafer-scale%20integration
Wafer-scale integration (WSI) is a rarely used system of building very-large integrated circuit (commonly called a "chip") networks from an entire silicon wafer to produce a single "super-chip". Combining large size and reduced packaging, WSI was expected to lead to dramatically reduced costs for some systems, notably massively parallel supercomputers. The name is taken from the term very-large-scale integration, the state of the art when WSI was being developed. Overview In the normal integrated circuit manufacturing process, a single large cylindrical crystal (boule) of silicon is produced and then cut into disks known as wafers. The wafers are then cleaned and polished in preparation for the fabrication process. A photographic process is used to pattern the surface where material ought to be deposited on top of the wafer and where not to. The desired material is deposited and the photographic mask is removed for the next layer. From then on the wafer is repeatedly processed in this fashion, putting on layer after layer of circuitry on the surface. Multiple copies of these patterns are deposited on the wafer in a grid fashion across the surface of the wafer. After all the possible locations are patterned, the wafer surface appears like a sheet of graph paper, with grid lines delineating the individual chips. Each of these grid locations is tested for manufacturing defects by automated equipment. Those locations that are found to be defective are recorded and marked with a dot of paint (this process is referred to as "inking a die" and more modern wafer fabrication techniques no longer require physical markings to identify defective die). The wafer is then sawed apart to cut out the individual chips. Those defective chips are thrown away, or recycled, while the working chips are placed into packaging and re-tested for any damage that might occur during the packaging process. Flaws on the surface of the wafers and problems during the layering/depositing process a
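The defect problem sketched above is the central obstacle for wafer-scale parts: the chance that a die is defect-free falls off sharply with area. The snippet below uses a first-order Poisson yield model, a standard textbook approximation rather than anything stated in the text, with hypothetical numbers.

```python
import math

def poisson_yield(defect_density_per_cm2: float, die_area_cm2: float) -> float:
    """First-order Poisson yield model: probability that a die of the given area
    contains zero random defects."""
    return math.exp(-defect_density_per_cm2 * die_area_cm2)

d = 0.5                                    # hypothetical defects per cm^2
print(poisson_yield(d, 1.0))               # ordinary ~1 cm^2 die: ~0.61
print(poisson_yield(d, 300.0))             # whole-wafer "die": effectively zero yield
```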
https://en.wikipedia.org/wiki/Joan%20Mott%20Prize%20Lecture
The Joan Mott Prize Lecture is a prize lecture awarded annually by The Physiological Society in honour of Joan Mott. Laureates Laureates of the award have included: - Intestinal absorption of sugars and peptides: from textbook to surprises See also Physiological Society Annual Review Prize Lecture
https://en.wikipedia.org/wiki/Kernel-phase
Kernel-phases are observable quantities used in high resolution astronomical imaging used for superresolution image creation. It can be seen as a generalization of closure phases for redundant arrays. For this reason, when the wavefront quality requirement are met, it is an alternative to aperture masking interferometry that can be executed without a mask while retaining phase error rejection properties. The observables are computed through linear algebra from the Fourier transform of direct images. They can then be used for statistical testing, model fitting, or image reconstruction. Prerequisites In order to extract kernel-phases from an image, some requirements must be met: Images are nyquist-sampled (at least 2 pixels per resolution element ()) Images are taken in near monochromatic light Exposure time is shorter than the timescale of aberrations Strehl ratio is high (good adaptive optics) Linearity of the pixel response (i.e. no saturation) Deviations from these requirements are known to be acceptable, but lead to observational bias that should be corrected by the observation of calibrators. Definition The method relies on a discrete model of the instrument's pupil plane and the corresponding list of baselines to provide corresponding vectors of pupil plane errors and of image plane Fourier Phases. When the wavefront error in the pupil plane is small enough (i.e. when the Strehl ratio of the imaging system is sufficiently high), the complex amplitude associated to the instrumental phase in one point of the pupil , can be approximated by . This permits the expression of the pupil-plane phase aberrations to the image plane Fourier phase as a linear transformation described by the matrix : Where is the theoretical Fourier phase vector of the object. In this formalism, singular value decomposition can be used to find a matrix satisfying . The rows of constitute a basis of the kernel of . The vector is called the kernel-phase vector of observab
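A minimal numerical sketch of the linear-algebra step described above: given a phase transfer matrix A (here randomly generated, purely for illustration; a real one comes from the discrete pupil model), the left null space found by singular value decomposition yields an operator K with K·A = 0, so the resulting kernel phases are insensitive to small pupil-plane phase errors.

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy phase-transfer matrix A mapping pupil-plane phase errors (n_pupil points)
# to image-plane Fourier phases (n_uv baselines).
n_uv, n_pupil = 20, 12
A = rng.standard_normal((n_uv, n_pupil))

# Left null space of A via SVD: rows of K span {x : x @ A = 0}.
U, s, Vt = np.linalg.svd(A)
rank = int(np.sum(s > 1e-10))
K = U[:, rank:].T                         # kernel operator, shape (n_uv - rank, n_uv)

phi = A @ rng.standard_normal(n_pupil)    # Fourier-phase perturbation from pupil errors
kernel_phases = K @ phi                   # observables insensitive to those errors
print(np.allclose(kernel_phases, 0.0))    # True: the pupil aberrations cancel
```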
https://en.wikipedia.org/wiki/MPLAB
MPLAB is a proprietary freeware integrated development environment for the development of embedded applications on PIC and dsPIC microcontrollers, and is developed by Microchip Technology. MPLAB X is the latest edition of MPLAB, and is developed on the NetBeans platform. MPLAB and MPLAB X support project management, code editing, debugging and programming of Microchip 8-bit PIC and AVR (including ATMEGA) microcontrollers, 16-bit PIC24 and dsPIC microcontrollers, as well as 32-bit SAM (ARM) and PIC32 (MIPS) microcontrollers. MPLAB is designed to work with MPLAB-certified devices such as the MPLAB ICD 3 and MPLAB REAL ICE, for programming and debugging PIC microcontrollers using a personal computer. PICKit programmers are also supported by MPLAB. MPLAB X supports automatic code generation with the MPLAB Code Configurator and the MPLAB Harmony Configurator plugins. MPLAB X MPLAB X is the latest version of the MPLAB IDE built by Microchip Technology, and is based on the open-source NetBeans platform. MPLAB X supports editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB X is the first version of the IDE to include cross-platform support for macOS and Linux operating systems, in addition to Microsoft Windows. MPLAB X supports the following compilers: MPLAB XC8 — C compiler for 8-bit PIC and AVR devices MPLAB XC16 — C compiler for 16-bit PIC devices MPLAB XC32 — C/C++ compiler for 32-bit MIPS-based PIC32 and ARM-based SAM devices HI-TECH C — C compiler for 8-bit PIC devices (discontinued) SDCC — open-source C compiler MPLAB 8.x MPLAB 8.x is the last version of the legacy MPLAB IDE technology, custom built by Microchip Technology in Microsoft Visual C++. MPLAB supports project management, editing, debugging and programming of Microchip 8-bit, 16-bit and 32-bit PIC microcontrollers. MPLAB only works on Microsoft Windows. MPLAB is still available from Microchip's archives, but is not recommended for new projects. MP
https://en.wikipedia.org/wiki/List%20of%20operator%20splitting%20topics
This is a list of operator splitting topics. General Alternating direction implicit method — finite difference method for parabolic, hyperbolic, and elliptic partial differential equations GRADELA — simple gradient elasticity model Matrix splitting — general method of splitting a matrix operator into a sum or difference of matrices Paul Tseng — resolved question on convergence of matrix splitting algorithms PISO algorithm — pressure-velocity calculation for Navier-Stokes equations Projection method (fluid dynamics) — computational fluid dynamics method Reactive transport modeling in porous media — modeling of chemical reactions and fluid flow through the Earth's crust Richard S. Varga — developed matrix splitting Strang splitting — specific numerical method for solving differential equations using operator splitting Numerical analysis Mathematics-related lists Outlines of mathematics and logic Outlines
https://en.wikipedia.org/wiki/Conventionally%20grown
Conventionally grown is an agricultural term referring to a method of growing edible plants (such as fruit and vegetables) and other products. It is the opposite of organic growing methods, which attempt to produce without synthetic chemicals (fertilizers, pesticides, antibiotics, hormones) or genetically modified organisms. Conventionally grown products, meanwhile, often use fertilizers and pesticides, which allow for higher yield, out-of-season growth, greater resistance, greater longevity and a generally greater mass. Conventionally grown fruit: PLU code consists of 4 numbers (e.g. 4012). Organically grown fruit: PLU code consists of 5 numbers and begins with 9 (e.g. 94012). Genetically engineered fruit: PLU code consists of 5 numbers and begins with 8 (e.g. 84012). Food science
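The PLU numbering rules listed above can be captured in a few lines; the sketch below simply restates those rules as a lookup function.

```python
def plu_category(plu: str) -> str:
    """Classify a produce PLU code using the numbering rules described above."""
    if len(plu) == 4:
        return "conventionally grown"
    if len(plu) == 5 and plu.startswith("9"):
        return "organically grown"
    if len(plu) == 5 and plu.startswith("8"):
        return "genetically engineered"
    return "unknown"

print(plu_category("4012"), plu_category("94012"), plu_category("84012"))
```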
https://en.wikipedia.org/wiki/List%20of%20polyhedral%20stellations
In the geometry of three dimensions, a stellation extends a polyhedron to form a new figure that is also a polyhedron. The following is a list of stellations of various polyhedra. See also List of Wenninger polyhedron models The Fifty-Nine Icosahedra Footnotes
https://en.wikipedia.org/wiki/Pole%E2%80%93zero%20plot
In mathematics, signal processing and control theory, a pole–zero plot is a graphical representation of a rational transfer function in the complex plane which helps to convey certain properties of the system such as: Stability Causal system / anticausal system Region of convergence (ROC) Minimum phase / non minimum phase A pole-zero plot shows the location in the complex plane of the poles and zeros of the transfer function of a dynamic system, such as a controller, compensator, sensor, equalizer, filter, or communications channel. By convention, the poles of the system are indicated in the plot by an X while the zeros are indicated by a circle or O. A pole-zero plot is plotted in the plane of a complex frequency domain, which can represent either a continuous-time or a discrete-time system: Continuous-time systems use the Laplace transform and are plotted in the s-plane: Real frequency components are along its vertical axis (the imaginary line where ) Discrete-time systems use the Z-transform and are plotted in the z-plane: Real frequency components are along its unit circle Continuous-time systems In general, a rational transfer function for a continuous-time LTI system has the form: where and are polynomials in , is the order of the numerator polynomial, is the coefficient of the numerator polynomial, is the order of the denominator polynomial, and is the coefficient of the denominator polynomial. Either or or both may be zero, but in real systems, it should be the case that ; otherwise the gain would be unbounded at high frequencies. Poles and zeros the zeros of the system are roots of the numerator polynomial: such that the poles of the system are roots of the denominator polynomial: such that Region of convergence The region of convergence (ROC) for a given continuous-time transfer function is a half-plane or vertical strip, either of which contains no poles. In general, the ROC is not unique, and the particular ROC
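For a concrete continuous-time example (the transfer function below is hypothetical, not from the text), the zeros and poles are simply the roots of the numerator and denominator polynomials, and stability can be read off from the pole locations.

```python
import numpy as np

# Continuous-time transfer function H(s) = B(s)/A(s), coefficients in descending powers of s.
num = [1.0, 2.0]             # B(s) = s + 2
den = [1.0, 3.0, 2.0]        # A(s) = s^2 + 3s + 2

zeros = np.roots(num)        # roots of the numerator  -> zeros
poles = np.roots(den)        # roots of the denominator -> poles

# A causal continuous-time LTI system is BIBO-stable when all poles lie in the
# open left half of the s-plane (strictly negative real parts).
print("zeros:", zeros)                    # [-2.]
print("poles:", poles)                    # [-1., -2.] (in some order)
print("stable:", bool(np.all(poles.real < 0)))
```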
https://en.wikipedia.org/wiki/List%20of%20order%20theory%20topics
Order theory is a branch of mathematics that studies various kinds of objects (often binary relations) that capture the intuitive notion of ordering, providing a framework for saying when one thing is "less than" or "precedes" another. An alphabetical list of many notions of order theory can be found in the order theory glossary. See also inequality, extreme value and mathematical optimization. Overview Partially ordered set Preorder Totally ordered set Total preorder Chain Trichotomy Extended real number line Antichain Strict order Hasse diagram Directed acyclic graph Duality (order theory) Product order Distinguished elements of partial orders Greatest element (maximum, top, unit), Least element (minimum, bottom, zero) Maximal element, minimal element Upper bound Least upper bound (supremum, join) Greatest lower bound (infimum, meet) Limit superior and limit inferior Irreducible element Prime element Compact element Subsets of partial orders Cofinal and coinitial set, sometimes also called dense Meet-dense set and join-dense set Linked set (upwards and downwards) Directed set (upwards and downwards) centered and σ-centered set Net (mathematics) Upper set and lower set Ideal and filter Ultrafilter Special types of partial orders Completeness (order theory) Dense order Distributivity (order theory) modular lattice distributive lattice completely distributive lattice Ascending chain condition Infinite descending chain Countable chain condition, often abbreviated as ccc Knaster's condition, sometimes denoted property (K) Well-orders Well-founded relation Ordinal number Well-quasi-ordering Completeness properties Semilattice Lattice (Directed) complete partial order, (d)cpo Bounded complete Complete lattice Knaster–Tarski theorem Infinite divisibility Orders with further algebraic operations Heyting algebra Relatively complemented lattice Complete Heyting algebra Pointless topology MV-algebra Ockham algebras: Stone algebra De Morgan algebra Kleene alg
https://en.wikipedia.org/wiki/SEMAT
SEMAT (Software Engineering Method and Theory) is an initiative to reshape software engineering such that software engineering qualifies as a rigorous discipline. The initiative was launched in December 2009 by Ivar Jacobson, Bertrand Meyer, and Richard Soley with a call for action statement and a vision statement. The initiative was envisioned as a multi-year effort for bridging the gap between the developer community and the academic community and for creating a community giving value to the whole software community. The work is now structured in four different but strongly related areas: Practice, Education, Theory, and Community. The Practice area primarily addresses practices. The Education area is concerned with all issues related to training for both the developers and the academics including students. The Theory area is primarily addressing the search for a General Theory in Software Engineering. Finally, the Community area works with setting up legal entities, creating websites and community growth. It was expected that the Practice area, the Education area and the Theory area would at some point in time integrate in a way of value to all of them: the Practice area would be a "customer" of the Theory area, and direct the research to useful results for the developer community. The Theory area would give a solid and practical platform for the Practice area. And, the Education area would communicate the results in proper ways. Practice area The first step was here to develop a common ground or a kernel including the essence of software engineering – things we always have, always do, always produce when developing software. The second step was envisioned to add value on top of this kernel in the form of a library of practices to be composed to become specific methods, specific for all kinds of reasons such as the preferences of the team using it, kind of software being built, etc. The first step is as of this writing just about to be concluded. The res
https://en.wikipedia.org/wiki/Bridging%20fault
In electronic engineering, a bridging fault consists of two signals that are connected when they should not be. Depending on the logic circuitry employed, this may result in a wired-OR or wired-AND logic function. Since there are O(n^2) potential bridging faults, they are normally restricted to signals that are physically adjacent in the design. Modeling bridging faults Bridging to VDD or VSS is equivalent to the stuck-at fault model. Traditionally, bridged signals were modeled with a logic AND or OR of the signals. If one driver dominates the other driver in a bridging situation, the dominant driver forces its logic value onto the other one; in such a case a dominant bridging fault model is used. To better reflect the reality of CMOS VLSI devices, a dominant-AND or dominant-OR bridging fault model is used, where the dominant driver keeps its value while the other signal takes the AND (or OR) of its own value with that of the dominant driver.
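The fault models described above can be expressed as a small truth-value transformation; the sketch below is an illustrative two-signal model, not any particular tool's fault simulator.

```python
def bridge(a: int, b: int, model: str = "wired_and") -> tuple[int, int]:
    """Return the bridged values (a', b') of two shorted signals under common fault models."""
    if model == "wired_and":      # both nets take the AND of the two drivers
        v = a & b; return v, v
    if model == "wired_or":       # both nets take the OR of the two drivers
        v = a | b; return v, v
    if model == "a_dominates":    # driver a overpowers driver b entirely
        return a, a
    if model == "a_dom_and":      # a keeps its value, b becomes a AND b
        return a, a & b
    if model == "a_dom_or":       # a keeps its value, b becomes a OR b
        return a, a | b
    raise ValueError(model)

print(bridge(1, 0, "wired_and"))   # (0, 0)
print(bridge(1, 0, "a_dom_and"))   # (1, 0)
```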
https://en.wikipedia.org/wiki/Catalan%27s%20constant
In mathematics, Catalan's constant , is defined by where is the Dirichlet beta function. Its numerical value is approximately It is not known whether is irrational, let alone transcendental. has been called "arguably the most basic constant whose irrationality and transcendence (though strongly suspected) remain unproven". Catalan's constant was named after Eugène Charles Catalan, who found quickly-converging series for its calculation and published a memoir on it in 1865. Uses In low-dimensional topology, Catalan's constant is 1/4 of the volume of an ideal hyperbolic octahedron, and therefore 1/4 of the hyperbolic volume of the complement of the Whitehead link. It is 1/8 of the volume of the complement of the Borromean rings. In combinatorics and statistical mechanics, it arises in connection with counting domino tilings, spanning trees, and Hamiltonian cycles of grid graphs. In number theory, Catalan's constant appears in a conjectured formula for the asymptotic number of primes of the form according to Hardy and Littlewood's Conjecture F. However, it is an unsolved problem (one of Landau's problems) whether there are even infinitely many primes of this form. Catalan's constant also appears in the calculation of the mass distribution of spiral galaxies. Known digits The number of known digits of Catalan's constant has increased dramatically during the last decades. This is due both to the increase of performance of computers as well as to algorithmic improvements. Integral identities As Seán Stewart writes, "There is a rich and seemingly endless source of definite integrals that can be equated to or expressed in terms of Catalan's constant." Some of these expressions include: where the last three formulas are related to Malmsten's integrals. If is the complete elliptic integral of the first kind, as a function of the elliptic modulus , then If is the complete elliptic integral of the second kind, as a function of the elliptic modulus , th
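Since G = β(2) is an alternating series, a direct partial sum already gives a usable approximation; the sketch below is only a naive illustration, far slower than the quickly converging series Catalan himself published.

```python
from math import fsum

def catalan_partial_sum(terms: int = 100_000) -> float:
    """Approximate Catalan's constant G = beta(2) = sum_{n>=0} (-1)^n / (2n+1)^2."""
    return fsum((-1) ** n / (2 * n + 1) ** 2 for n in range(terms))

print(catalan_partial_sum())      # ~0.91596559...
# For an alternating series the truncation error is below the first omitted term,
# here 1/(2*100000 + 1)^2, i.e. roughly 2.5e-11.
```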
https://en.wikipedia.org/wiki/List%20of%20cohomology%20theories
This is a list of some of the ordinary and generalized (or extraordinary) homology and cohomology theories in algebraic topology that are defined on the categories of CW complexes or spectra. For other sorts of homology theories see the links at the end of this article. Notation S = π = S0 is the sphere spectrum. Sn is the spectrum of the n-dimensional sphere SnY = Sn∧Y is the nth suspension of a spectrum Y. [X,Y] is the abelian group of morphisms from the spectrum X to the spectrum Y, given (roughly) as homotopy classes of maps. [X,Y]n = [SnX,Y] [X,Y]* is the graded abelian group given as the sum of the groups [X,Y]n. πn(X) = [Sn, X] = [S, X]n is the nth stable homotopy group of X. π*(X) is the sum of the groups πn(X), and is called the coefficient ring of X when X is a ring spectrum. X∧Y is the smash product of two spectra. If X is a spectrum, then it defines generalized homology and cohomology theories on the category of spectra as follows. Xn(Y) = [S, X∧Y]n = [Sn, X∧Y] is the generalized homology of Y, Xn(Y) = [Y, X]−n = [S−nY, X] is the generalized cohomology of Y Ordinary homology theories These are the theories satisfying the "dimension axiom" of the Eilenberg–Steenrod axioms that the homology of a point vanishes in dimension other than 0. They are determined by an abelian coefficient group G, and denoted by H(X, G) (where G is sometimes omitted, especially if it is Z). Usually G is the integers, the rationals, the reals, the complex numbers, or the integers mod a prime p. The cohomology functors of ordinary cohomology theories are represented by Eilenberg–MacLane spaces. On simplicial complexes, these theories coincide with singular homology and cohomology. Homology and cohomology with integer coefficients. Spectrum: H (Eilenberg–MacLane spectrum of the integers.) Coefficient ring: πn(H) = Z if n = 0, 0 otherwise. The original homology theory. Homology and cohomology with rational (or real or complex) coefficients. Spectrum: HQ (Eilenberg–Mac
https://en.wikipedia.org/wiki/Arithmetic%20and%20geometric%20Frobenius
In mathematics, the Frobenius endomorphism is defined in any commutative ring R that has characteristic p, where p is a prime number. Namely, the mapping φ that takes r in R to rp is a ring endomorphism of R. The image of φ is then Rp, the subring of R consisting of p-th powers. In some important cases, for example finite fields, φ is surjective. Otherwise φ is an endomorphism but not a ring automorphism. The terminology of geometric Frobenius arises by applying the spectrum of a ring construction to φ. This gives a mapping φ*: Spec(Rp) → Spec(R) of affine schemes. Even in cases where Rp = R this is not the identity, unless R is the prime field. Mappings created by fibre product with φ*, i.e. base changes, tend in scheme theory to be called geometric Frobenius. The reason for a careful terminology is that the Frobenius automorphism in Galois groups, or defined by transport of structure, is often the inverse mapping of the geometric Frobenius. As in the case of a cyclic group in which a generator is also the inverse of a generator, there are in many situations two possible definitions of Frobenius, and without a consistent convention some problem of a minus sign may appear.
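A quick sanity check of two facts used above — that φ: r ↦ r^p respects addition in characteristic p, and that it is the identity on the prime field — can be run over Z/pZ (p = 7 is an arbitrary choice):

```python
p = 7

def frob(x: int) -> int:
    """Frobenius endomorphism on Z/pZ: x -> x^p (mod p)."""
    return pow(x, p, p)

# Additivity (a + b)^p = a^p + b^p holds mod p because the binomial
# coefficients C(p, k) for 0 < k < p are all divisible by p.
print(all(frob((a + b) % p) == (frob(a) + frob(b)) % p
          for a in range(p) for b in range(p)))
# On the prime field Z/pZ itself, Fermat's little theorem makes phi the identity.
print(all(frob(a) == a for a in range(p)))
```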
https://en.wikipedia.org/wiki/Gating%20signal
Signal gating is a concept commonly used in the field of electronics and signal processing. It refers to the process of controlling the flow of signals based on certain conditions or criteria. The goal of signal gating is to selectively allow or block the transmission of signals through a circuit or system. In signal gating, a gating signal is used to modulate the passage of the main signal. The gating signal acts as a control mechanism, determining when the main signal can pass through the gate and when it is blocked. The gating signal can be generated by various means, such as an external trigger, a specific voltage level, or a specific frequency range. Signal gating is often employed in applications where precise control over the transmission of signals is required. Here are a few examples of how signal gating is used in different fields: 1. Telecommunications: In telecommunications systems, signal gating is used to regulate the flow of data packets. By opening and closing the gate based on specific criteria, such as error detection or network congestion, signal gating helps ensure that the data is transmitted efficiently and reliably. 2. Audio processing: In audio applications, signal gating is used to reduce background noise or eliminate unwanted sounds. For example, in live sound reinforcement, a noise gate is often employed to mute or attenuate the microphone signal when the sound level falls below a certain threshold. This helps minimize the pickup of ambient noise and unwanted signals. 3. Radar systems: Signal gating plays a crucial role in radar systems, particularly in pulse-Doppler radar. Gating is used to control the transmission and reception of radar pulses, allowing the system to focus on specific ranges or angles of interest while ignoring other signals. This helps improve target detection and reduces interference from unwanted reflections. 4. Medical imaging: Signal gating is utilized in medical imaging techniques like computed tomography (CT
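As an illustration of the audio case (item 2), here is an idealized per-sample noise gate; real gates smooth the gating signal with attack and release envelopes, which this sketch omits, and the input signal is synthetic.

```python
import numpy as np

def noise_gate(x: np.ndarray, threshold: float) -> np.ndarray:
    """Idealized noise gate: the gating signal opens only where |x| exceeds the threshold."""
    gate = np.abs(x) >= threshold      # gating signal: True = open, False = closed
    return x * gate

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 1000)
x = 0.02 * rng.standard_normal(1000)                     # low-level background noise
x[300:400] += np.sin(2 * np.pi * 50.0 * t[300:400])      # a loud burst that should pass
y = noise_gate(x, threshold=0.1)
print(np.count_nonzero(y))                               # roughly the 100 samples of the burst
```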
https://en.wikipedia.org/wiki/Retrogradation%20%28starch%29
Retrogradation is a reaction that takes place when the amylose and amylopectin chains in cooked, gelatinized starch realign themselves as the cooked starch cools. When native starch is heated and dissolved in water, the crystalline structure of amylose and amylopectin molecules is lost and they hydrate to form a viscous solution. If the viscous solution is cooled or left at lower temperature for a long enough period, the linear molecules, amylose, and linear parts of amylopectin molecules retrograde and rearrange themselves again to a more crystalline structure. The linear chains place themselves parallel and form hydrogen bridges. In viscous solutions the viscosity increases to form a gel. At temperatures between and , the aging process is enhanced drastically. Amylose crystallization occurs much faster than crystallization of the amylopectin. The crystal melting temperature of amylose is much higher (about ) than amylopectin (about ). The temperature range between cooking starch and storing in room temperature is optimum for amylose crystallization, and therefore amylose crystallization is responsible for the development of initial hardness of the starch gel. On the other hand, amylopectin has a narrower temperature range for crystallization as crystallization does not occur at a temperature higher than its melting temperature. Therefore, amylopectin is responsible for development of the long-term crystallinity and gel structure. Retrogradation can expel water from the polymer network. This process is known as syneresis. A small amount of water can be seen on top of the gel. Retrogradation is directly related to the staling or aging of bread. Retrograded starch is less digestible (see resistant starch). Chemical modification of starches can reduce or enhance the retrogradation. Waxy, high amylopectin, starches also have much less of a tendency to retrogradate. Additives such as fat, glucose, sodium nitrate and emulsifier can reduce retrogradation of starch.
https://en.wikipedia.org/wiki/Diagnostic%20board
In electronic systems, a diagnostic board is a specialized device with diagnostic circuitry on a printed circuit board that connects to a computer or other electronic equipment, replacing an existing module or plugging into an expansion card slot. A multi-board electronic system such as a computer comprises multiple printed circuit boards or cards connected via connectors. When a fault occurs in the system, it is sometimes possible to isolate or identify the fault by replacing one of the boards with a diagnostic board. A diagnostic board can range from extremely simple to extremely sophisticated. Simple standard diagnostic plug-in boards for computers are available that display numeric codes to assist in identifying issues detected during the power-on self-test executed automatically during system startup. Dummy board A dummy board provides a minimal interface. This type of diagnostic board is intended to confirm that the interface is correctly implemented. For example, a PC motherboard manufacturer can test PCI functionality of a PC motherboard by connecting a dummy PCI board into each PCI slot on the motherboard. Extender board An extender board (or board extender, card extender, extender card) is a simple circuit board that interposes between a card cage backplane and the circuit board of interest to physically 'extend' the circuit board of interest out from the card cage, allowing access to both sides of the circuit board to connect diagnostic equipment such as an oscilloscope or systems analyzer. For example, a PCI extender board can be plugged into a PCI slot on a computer motherboard, and then a PCI card connected to the extender board to 'extend' the board into free space for access. This approach was common in the 1970s and 1980s, particularly on S-100 bus systems. The concept can become unworkable when signal timing is affected by the length of the signal paths on the diagnostic board, as well as introducing Radio Frequency Interference (RFI) into the ci

https://en.wikipedia.org/wiki/System%20context%20diagram
A system context diagram in engineering is a diagram that defines the boundary between the system, or part of a system, and its environment, showing the entities that interact with it. This diagram is a high level view of a system. It is similar to a block diagram. Overview System context diagrams show a system, as a whole and its inputs and outputs from/to external factors. According to Kossiakoff and Sweet (2011): System context diagrams are used early in a project to get agreement on the scope under investigation. Context diagrams are typically included in a requirements document. These diagrams must be read by all project stakeholders and thus should be written in plain language, so the stakeholders can understand items within the document. Building blocks Context diagrams can be developed with the use of two types of building blocks: Entities (Actors): labeled boxes; one in the center representing the system, and around it multiple boxes for each external actor Relationships: labeled lines between the entities and system For example, "customer places order." Context diagrams can also use many different drawing types to represent external entities. They can use ovals, stick figures, pictures, clip art or any other representation to convey meaning. Decision trees and data storage are represented in system flow diagrams. A context diagram can also list the classifications of the external entities as one of a set of simple categories (Examples:), which add clarity to the level of involvement of the entity with regards to the system. These categories include: Active: Dynamic to achieve some goal or purpose (Examples: "Article readers" or "customers"). Passive: Static external entities which infrequently interact with the system (Examples: "Article editors" or "database administrator"). Cooperative: Predictable external entities which are used by the system to bring about some desired outcome (Examples: "Internet service providers" or "shipping companie
https://en.wikipedia.org/wiki/Dn42
dn42 is a decentralized peer-to-peer network built using VPNs and software/hardware BGP routers. While other darknets try to establish anonymity for their participants, that is not what dn42 aims for. It is a network to explore routing technologies used in the Internet and tries to establish direct non-NAT-ed connections between the members. The network is not fully meshed. dn42 mostly uses tunnels instead of physical links between the individual networks. Each participant is connected to one or more other participants. Over the VPN or the physical links, BGP is used for inter-AS routing. While OSPF is the most commonly used protocol for intra-AS routing, each participant is free to choose any other IGP, like Babel, inside their AS. History The DN42 project grew out of the popular PeerIX project started by HardForum members in mid-2009. The PeerIX project, while small in initial numbers, grew to over 50 active members with a backlog of 100 requests to join the network. Ultimately the project was unable to meet the demand of user scale and was eventually deprecated (though many of the core member team still have their networks online). The founding members of the DN42 project tried unsuccessfully to rekindle the PeerIX project (through the private Google group) and instead formed their own IPv6-only network, successfully scaling it to the size it is today. Technical setup Address space Network address space for IPv4 consists of private subnets: 172.20.0.0/14 is the main subnet. Note that other private address ranges may also be announced in dn42, as the network is interconnected with other similar projects. Most notably, ChaosVPN uses 172.31.0.0/16 and parts of 10.0.0.0/8, Freifunk ICVPN uses 10.0.0.0/8 and NeoNetwork uses 10.127.0.0/16. For IPv6, Unique Local Addresses (ULA, the IPv6 equivalent of a private address range) from fd00::/8 are used. Note that other networks use IPv6 addresses in this range as well, including NeoNetwork's use of fd10:127::/32. AS nu
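The address ranges mentioned above are easy to check programmatically; the sketch below uses Python's standard ipaddress module with a couple of made-up example addresses.

```python
import ipaddress

DN42_V4 = ipaddress.ip_network("172.20.0.0/14")   # main dn42 IPv4 subnet
DN42_V6 = ipaddress.ip_network("fd00::/8")        # IPv6 ULA range used by dn42

def in_dn42_space(addr: str) -> bool:
    """Check whether an address falls in the main dn42 IPv4 range or the IPv6 ULA range."""
    ip = ipaddress.ip_address(addr)
    net = DN42_V4 if ip.version == 4 else DN42_V6
    return ip in net

print(in_dn42_space("172.22.45.1"))    # True  (inside 172.20.0.0/14)
print(in_dn42_space("192.168.1.1"))    # False (private, but not dn42's main range)
print(in_dn42_space("fd42:1234::1"))   # True  (IPv6 ULA)
```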
https://en.wikipedia.org/wiki/Microlithography
Microlithography is a general name for any manufacturing process that can create a minutely patterned thin film of protective materials over a substrate, such as a silicon wafer, in order to protect selected areas of it during subsequent etching, deposition, or implantation operations. The term is normally used for processes that can reliably produce features of microscopic size, such as 10 micrometres or less. The term nanolithography may be used to designate processes that can produce nanoscale features, such as less than 100 nanometres. Microlithography is a microfabrication process that is extensively used in the semiconductor industry and also to manufacture microelectromechanical systems. Processes Specific microlithography processes include: Photolithography, using light projected onto a photosensitive material film (photoresist). Electron beam lithography, using a steerable electron beam. Nanoimprinting Interference lithography Magnetolithography Scanning probe lithography Surface-charge lithography Diffraction lithography These processes differ in speed and cost, as well as in the materials they can be applied to and the range of feature sizes they can produce. For instance, while the size of features achievable with photolithography is limited by the wavelength of the light used, the technique is considerably faster and simpler than electron beam lithography, which can achieve much smaller features. Applications The main application for microlithography is fabrication of integrated circuits ("electronic chips"), such as solid-state memories and microprocessors. These processes can also be used to create diffraction gratings, microscope calibration grids, and other flat structures with microscopic details. See also Printed circuit board
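The remark that photolithography's feature size is limited by the wavelength of the light is usually quantified with the Rayleigh criterion CD = k1·λ/NA; the formula and the example numbers below are standard rules of thumb, not taken from the text.

```python
def min_feature_size_nm(wavelength_nm: float, numerical_aperture: float, k1: float = 0.4) -> float:
    """Rayleigh resolution criterion CD = k1 * lambda / NA for the smallest printable feature
    (a conventional rule of thumb; k1 depends on the process)."""
    return k1 * wavelength_nm / numerical_aperture

print(min_feature_size_nm(193.0, 1.35))   # ~57 nm for a 193 nm immersion scanner (illustrative)
```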
https://en.wikipedia.org/wiki/Process%20gain
In a spread-spectrum system, the process gain (or "processing gain") is the ratio of the spread (or RF) bandwidth to the unspread (or baseband) bandwidth. It is usually expressed in decibels (dB). For example, if a 1 kHz signal is spread to 100 kHz, the process gain expressed as a numerical ratio would be 100 kHz / 1 kHz = 100. Or in decibels, 10 log10(100) = 20 dB. Note that process gain does not reduce the effects of wideband thermal noise. It can be shown that a direct-sequence spread-spectrum (DSSS) system has exactly the same bit error behavior as a non-spread-spectrum system with the same modulation format. Thus, on an additive white Gaussian noise (AWGN) channel without interference, a spread system requires the same transmitter power as an unspread system, all other things being equal. Unlike a conventional communication system, however, a DSSS system does have a certain resistance against narrowband interference, as the interference is not subject to the process gain of the DSSS signal, and hence the signal-to-interference ratio is improved. In frequency modulation (FM), the processing gain Gp can be expressed in terms of the noise bandwidth Bn, the peak frequency deviation Δf, and the sinusoidal modulating frequency W. Signal processing
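The bandwidth-ratio definition translates directly into code; the sketch below reproduces the 1 kHz to 100 kHz example.

```python
import math

def process_gain_db(rf_bandwidth_hz: float, baseband_bandwidth_hz: float) -> float:
    """Process gain of a spread-spectrum system, expressed in decibels."""
    return 10 * math.log10(rf_bandwidth_hz / baseband_bandwidth_hz)

print(process_gain_db(100e3, 1e3))   # 20.0 dB, matching the 1 kHz -> 100 kHz example
```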
https://en.wikipedia.org/wiki/Pulse%20width
The pulse width is a measure of the elapsed time between the leading and trailing edges of a single pulse of energy. The measure is typically used with electrical signals and is widely used in the fields of radar and power supplies. There are two closely related measures. The pulse repetition interval measures the time between the leading edges of two pulses but is normally expressed as the pulse repetition frequency (PRF), the number of pulses in a given time, typically a second. The duty cycle expresses the pulse width as a fraction or percentage of one complete cycle. Pulse width is an important measure in radar systems. Radars transmit pulses of radio frequency energy out of an antenna and then listen for their reflection off of target objects. The amount of energy that is returned to the radar receiver is a function of the peak energy of the pulse, the pulse width, and the pulse repetition frequency. Increasing the pulse width increases the amount of energy reflected off the target and thereby increases the range at which an object can be detected. Radars measure range based on the time between transmission and reception, and the resolution of that measurement is a function of the length of the received pulse. This leads to the basic outcome that increasing the pulse width allows the radar to detect objects at longer range but at the cost of decreasing the accuracy of that range measurement. This can be addressed by encoding the pulse with additional information, as is the case in pulse compression systems. In modern switched-mode power supplies, the voltage of the output electrical power is controlled by rapidly switching a fixed-voltage source on and off and then smoothing the resulting stepped waveform. Increasing the pulse width increases the output voltage. This allows complex output waveforms to be constructed by rapidly changing the pulse width to produce the desired signal, a concept known as pulse-width modulation.
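A small worked example of the quantities discussed above; the numbers are hypothetical, and the range-resolution line uses the usual c·τ/2 rule of thumb for an uncoded (uncompressed) pulse, which the text alludes to rather than states.

```python
# Relating pulse width, PRF, duty cycle, and coarse radar range resolution.
C = 3.0e8               # speed of light, m/s

pulse_width = 1e-6      # 1 microsecond pulse
prf = 1000.0            # 1000 pulses per second
pri = 1.0 / prf         # pulse repetition interval, 1 ms

duty_cycle = pulse_width / pri            # 0.001, i.e. 0.1 %
range_resolution = C * pulse_width / 2    # ~150 m for an uncompressed pulse

print(duty_cycle, range_resolution)
```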
https://en.wikipedia.org/wiki/Passivation%20%28chemistry%29
In physical chemistry and engineering, passivation is coating a material so that it becomes "passive", that is, less readily affected or corroded by the environment. Passivation involves creation of an outer layer of shield material that is applied as a microcoating, created by chemical reaction with the base material, or allowed to build by spontaneous oxidation in the air. As a technique, passivation is the use of a light coat of a protective material, such as metal oxide, to create a shield against corrosion. Passivation of silicon is used during fabrication of microelectronic devices. Undesired passivation of electrodes, called "fouling", increases the circuit resistance so it interferes with some electrochemical applications such as electrocoagulation for wastewater treatment, amperometric chemical sensing, and electrochemical synthesis. When exposed to air, many metals naturally form a hard, relatively inert surface layer, usually an oxide (termed the "native oxide layer") or a nitride, that serves as a passivation layer. In the case of silver, the dark tarnish is a passivation layer of silver sulfide formed from reaction with environmental hydrogen sulfide. (In contrast, metals such as iron oxidize readily to form a rough porous coating of rust that adheres loosely and sloughs off readily, allowing further oxidation.) The passivation layer of oxide markedly slows further oxidation and corrosion in room-temperature air for aluminium, beryllium, chromium, zinc, titanium, and silicon (a metalloid). The inert surface layer formed by reaction with air has a thickness of about 1.5 nm for silicon, 1–10 nm for beryllium, and 1 nm initially for titanium, growing to 25 nm after several years. Similarly, for aluminium, it grows to about 5 nm after several years. In the context of the semiconductor device fabrication, such as silicon MOSFET transistors and solar cells, surface passivation refers not only to reducing the chemical reactivity of the surface but also to e
https://en.wikipedia.org/wiki/Holborn%209100
The Holborn 9100 was a personal computer introduced in 1981 by a small Dutch company called Holborn, designed by H.A. Polak. Very few of these devices were sold, with Holborn going into bankruptcy on 27 April 1983. The 9100 base module is a server, and the 9120 is a terminal. Peripherals 30 MB hard disk drive Light pen
https://en.wikipedia.org/wiki/Datakit
Datakit is a virtual circuit switch which was developed by Sandy Fraser at Bell Labs for both local-area and wide-area networks, and in widespread deployment by the Regional Bell Operating Companies (RBOCs). Datakit uses a cell relay protocol similar to Asynchronous Transfer Mode. Datakit is a connection-oriented switch, with all packets for a particular call traveling through the network over the same virtual circuit. Datakit networks are still in widespread use by the major telephone companies in the United States. Interfaces to these networks include TCP/IP and UDP, X.25, asynchronous protocols and several synchronous protocols, such as SDLC, HDLC, Bisync and others. These networks support host to terminal traffic and vice versa, host-to-host traffic, file transfers, remote login, remote printing, and remote command execution. At the physical layer, it can operate over multiple media, from slow speed EIA-232 to 500Mbit fiber optic links including 10/100 Megabit ethernet links. Most of Bell Laboratories was trunked together via Datakit networking. On top of Datakit transport service, several operating systems (including UNIX) implemented UUCP for electronic mail and dkcu for remote login. Datakit uses an adaptation protocol called Universal Receiver Protocol (URP) that spreads PDU overhead across multiple cells and performs immediate packet processing. URP assumes that cells arrive in order and may force retransmissions if not. The Information Systems Network (ISN) was the pre-version of Datakit that was supported by the former AT&T Information Systems. The ISN was a packet switching network that was built similar to digital System 75 platform. LAN and WAN applications with the use of what was referred to as a Concentrator that was connected via fiber optics up to 15 miles away from the main ISN. The speeds of these connections were very slow to today's standards, from 1200 to 5600 baud with most connections / end users on dumb terminals. The main support fo
https://en.wikipedia.org/wiki/Upload
Uploading refers to transmitting data from one computer system to another by means of a network. Common methods of uploading include: uploading via web browsers, FTP clients, and terminals (SCP/SFTP). Uploading can be used in the context of (potentially many) clients that send files to a central server. While uploading can also be defined in the context of sending files between distributed clients, such as with a peer-to-peer (P2P) file-sharing protocol like BitTorrent, the term file sharing is more often used in this case. Moving files within a computer system, as opposed to over a network, is called file copying. Uploading directly contrasts with downloading, where data is received over a network. In the case of users uploading files over the internet, uploading is often slower than downloading as many internet service providers (ISPs) offer asymmetric connections, which offer more network bandwidth for downloading than uploading. Definition To transfer something (such as data or files) from a computer or other digital device to the memory of another device (such as a larger or remote computer), especially via the internet. Historical development Remote file sharing first came to fruition in January 1978, when Ward Christensen and Randy Suess, who were members of the Chicago Area Computer Hobbyists' Exchange (CACHE), created the Computerized Bulletin Board System (CBBS). This used an early file transfer protocol (MODEM, later XMODEM) to send binary files via a hardware modem, accessible by another modem via a telephone number. In the following years, new protocols such as Kermit were released, until the File Transfer Protocol (FTP) was standardized in 1985. FTP is based on TCP/IP and gave rise to many FTP clients, which, in turn, gave users all around the world access to the same standard network protocol to transfer data between devices. The transfer of data saw a significant increase in popularity after the release of the World Wide Web in 1991, wh
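As a minimal sketch of "uploading via an FTP client", the following uses Python's standard ftplib; the host, credentials, and file name are placeholders.

```python
from ftplib import FTP

# Upload a local file to a remote FTP server (hypothetical host and credentials).
with FTP("ftp.example.com") as ftp:
    ftp.login("user", "password")
    with open("report.pdf", "rb") as fh:
        ftp.storbinary("STOR report.pdf", fh)   # transmit the local file to the server
```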
https://en.wikipedia.org/wiki/Seven%20Solutions
Seven Solutions is a Spanish hardware technology company headquartered in Granada, Spain, that developed the first White Rabbit element of the White Rabbit Project, the White Rabbit Switch, which uses the Precision Time Protocol (PTP) in a real networking application. Seven Solutions was involved in its design, manufacture, testing and support. The project was financed by the government of Spain and CERN. Through this project, Seven Solutions demonstrated a high-performance enhanced PTP switch with sub-nanosecond accuracy.
https://en.wikipedia.org/wiki/Inductive%20coupling
In electrical engineering, two conductors are said to be inductively coupled or magnetically coupled when they are configured in a way such that change in current through one wire induces a voltage across the ends of the other wire through electromagnetic induction. A changing current through the first wire creates a changing magnetic field around it by Ampere's circuital law. The changing magnetic field induces an electromotive force (EMF) voltage in the second wire by Faraday's law of induction. The amount of inductive coupling between two conductors is measured by their mutual inductance. The coupling between two wires can be increased by winding them into coils and placing them close together on a common axis, so the magnetic field of one coil passes through the other coil. Coupling can also be increased by a magnetic core of a ferromagnetic material like iron or ferrite in the coils, which increases the magnetic flux. The two coils may be physically contained in a single unit, as in the primary and secondary windings of a transformer, or may be separated. Coupling may be intentional or unintentional. Unintentional inductive coupling can cause signals from one circuit to be induced into a nearby circuit, this is called cross-talk, and is a form of electromagnetic interference. An inductively coupled transponder consists of a solid state transceiver chip connected to a large coil that functions as an antenna. When brought within the oscillating magnetic field of a reader unit, the transceiver is powered up by energy inductively coupled into its antenna and transfers data back to the reader unit inductively. Magnetic coupling between two magnets can also be used to mechanically transfer power without contact, as in the magnetic gear. Uses Inductive coupling is widely used throughout electrical technology; examples include: Electric motors and generators Inductive charging products Induction cookers and induction heating systems Induction loop
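The mutual-inductance relationship behind Faraday's-law coupling gives a one-line estimate of the induced voltage; the component values below are hypothetical.

```python
# Induced EMF in a secondary winding from a changing primary current: emf = -M * dI/dt.
M = 5e-3                  # mutual inductance between the two coils, 5 mH (hypothetical)
di_dt = 200.0             # rate of change of the primary current, A/s

emf = -M * di_dt
print(f"induced EMF: {emf} V")   # -1.0 V across the open-circuited secondary
```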
https://en.wikipedia.org/wiki/Gravitational%20contact%20terms
In quantum field theory, a contact term is a radiatively induced point-like interaction. These typically occur when the vertex for the emission of a massless particle such as a photon, a graviton, or a gluon, is proportional to (the invariant momentum of the radiated particle). This factor cancels the of the Feynman propagator, and causes the exchange of the massless particle to produce a point-like -function effective interaction, rather than the usual long-range potential. A notable example occurs in the weak interactions where a W-boson radiative correction to a gluon vertex produces a term, leading to what is known as a "penguin" interaction. The contact term then generates a correction to the full action of the theory. Contact terms occur in gravity when there are non-minimal interactions, , or in Brans-Dicke Theory, . The non-minimal couplings are quantum equivalent to an "Einstein frame," with a pure Einstein-Hilbert action, , owing to gravitational contact terms. These arise classically from graviton exchange interactions. The contact terms are an essential, yet hidden, part of the action and, if they are ignored, the Feynman diagram loops in different frames yield different results. At the leading order in including the contact terms is equivalent to performing a Weyl Transformation to remove the non-minimal couplings and taking the theory to the Einstein-Hilbert form. In this sense, the Einstein-Hilbert form of the action is unique and "frame ambiguities" in loop calculations do not exist.
https://en.wikipedia.org/wiki/Ethernet%20over%20USB
Ethernet over USB is the use of a USB link as a part of an Ethernet network, resulting in an Ethernet connection over USB (instead of e.g. PCI or PCIe). USB over Ethernet (also called USB over Network or USB over IP) is a system to share USB-based devices over Ethernet, Wi-Fi, or the Internet, allowing access to devices over a network. It can be done across multiple network devices by using USB over Ethernet Hubs. Protocols There are numerous protocols for Ethernet-style networking over USB. The use of these protocols is to allow application-independent exchange of data with USB devices, instead of specialized protocols such as video or MTP (Media Transfer Protocol). Even though the USB is not a physical Ethernet, the networking stacks of all major operating systems are set up to transport IEEE 802.3 frames, without needing a particular underlying transport. The main industry protocols are (in chronological order): Remote NDIS (RNDIS, a Microsoft vendor protocol), Ethernet Control Model (ECM), Ethernet Emulation Model (EEM), and Network Control Model (NCM). The latter three are part of the larger Communications Device Class (CDC) group of protocols of the USB Implementers Forum (USB-IF). They are available for download from the USB-IF (see below). The RNDIS specification is available from Microsoft's web site. Regarding de facto standards, some standards, such as ECM, specify use of USB resources that early systems did not have. However, minor modifications of the standard subsets make practical implementations possible on such platforms. Remarkably, even some of the most modern platforms need minor accommodations and therefore support for these subsets is still needed. Of these protocols, ECM could be classified the simplest—frames are simply sent and received without modification one at a time. This was a satisfactory strategy for USB 1.1 systems (current when the protocol was issued) with 64 byte packets but not for USB 2.0 systems which use 512 byte packet
https://en.wikipedia.org/wiki/List%20of%20types%20of%20interferometers
An interferometer is a device for extracting information from the superposition of multiple waves. Field and linear interferometers Air-wedge shearing interferometer Astronomical interferometer / Michelson stellar interferometer Classical interference microscopy Bath interferometer (common path) Cyclic interferometer Diffraction-grating interferometer (white light) Double-slit interferometer Dual-polarization interferometry Fabry–Pérot interferometer Fizeau interferometer Fourier-transform interferometer Fresnel interferometer (e.g. Fresnel biprism, Fresnel mirror or Lloyd's mirror) Fringes of Equal Chromatic Order interferometer (FECO) Gabor hologram Gires–Tournois etalon Heterodyne interferometer (see heterodyne) Holographic interferometer Jamin interferometer Laser Doppler vibrometer Linnik interferometer (microscopy) LUPI variant of Michelson Lummer–Gehrcke interferometer Mach–Zehnder interferometer Martin–Puplett interferometer Michelson interferometer Mirau interferometer (also known as a Mirau objective) (microscopy) Moiré interferometer (see moiré pattern) Multi-beam interferometer (microscopy) Near-field interferometer Newton interferometer (see Newton's rings) Nomarski interferometer Nonlinear Michelson interferometer / Step-phase Michelson interferometer N-slit interferometer Phase-shifting interferometer Planar lightwave circuit interferometer (PLC) Photon Doppler velocimeter interferometer (PDV) Polarization interferometer (see also Babinet–Soleil compensator) Point diffraction interferometer Rayleigh interferometer Sagnac interferometer Schlieren interferometer (phase-shifting) Shearing interferometer (lateral and radial) Twyman–Green interferometer Talbot–Lau interferometer Watson interferometer (microscopy) White-light interferometer (see also Optical coherence tomography, White light interferometry, and Coherence Scanning Interferometry) White-light scatterplate interferometer (white-light) (microscopy) Young's double-slit interferometer Zernik