https://en.wikipedia.org/wiki/Internetowy%20System%20Akt%C3%B3w%20Prawnych
The Internetowy System Aktów Prawnych, abbreviated ISAP, is a database of information about the legislation in force in Poland. It is part of the oldest and one of the best-known Polish legal information systems and is publicly available on the website of the Sejm of the Republic of Poland.
https://en.wikipedia.org/wiki/Two-domain%20system
The two-domain system is a biological classification by which all organisms in the tree of life are classified into two large domains, Bacteria and Archaea. It emerged from the growing knowledge of archaeal diversity and from challenges to the widely accepted three-domain system, which divides life into Bacteria, Archaea, and Eukarya. It was preceded by the eocyte hypothesis of James A. Lake in the 1980s, which was largely superseded by the three-domain system due to the evidence available at the time. Better understanding of archaea, especially of their roles in the origin of eukaryotes through symbiogenesis with bacteria, led to the revival of the eocyte hypothesis in the 2000s. The two-domain system became more widely accepted after the discovery of a large group (superphylum) of archaea called Asgard in 2017, which evidence suggests is the evolutionary root of eukaryotes, implying that eukaryotes are members of the domain Archaea. While the features of Asgard archaea do not directly rule out the three-domain system, the notion that eukaryotes originated from archaea and thus belong to Archaea has been strengthened by genetic and proteomic studies. Under the three-domain system, Eukarya is mainly distinguished by the presence of "eukaryotic signature proteins", which are not found in archaea and bacteria. However, Asgards contain genes that code for multiple such proteins, indicating that "eukaryotic signature proteins" originated in archaea. Background Classification of life into two main divisions is not a new concept, with the first such proposal by French biologist Édouard Chatton in 1938. Chatton divided organisms into Procaryotes (including bacteria) and Eucaryotes (including protozoans). These divisions were later named empires, and Chatton's classification came to be known as the two-empire system. Chatton used the name Eucaryotes only for protozoans, excluded other eukaryotes, and published only in limited circulation, so his work went largely unrecognised. His classification was rediscovered by
https://en.wikipedia.org/wiki/Archaea
Archaea (singular: archaeon) is a domain of single-celled organisms. These microorganisms lack cell nuclei and are therefore prokaryotes. Archaea were initially classified as bacteria, receiving the name archaebacteria (in the Archaebacteria kingdom), but this term has fallen out of use. Archaeal cells have unique properties separating them from the other two domains, Bacteria and Eukaryota. Archaea are further divided into multiple recognized phyla. Classification is difficult because most have not been isolated in a laboratory and have been detected only by their gene sequences in environmental samples. It is unknown if these are able to produce endospores. Archaea and bacteria are generally similar in size and shape, although a few archaea have very different shapes, such as the flat, square cells of Haloquadratum walsbyi. Despite this morphological similarity to bacteria, archaea possess genes and several metabolic pathways that are more closely related to those of eukaryotes, notably for the enzymes involved in transcription and translation. Other aspects of archaeal biochemistry are unique, such as their reliance on ether lipids in their cell membranes, including archaeols. Archaea use more diverse energy sources than eukaryotes, ranging from organic compounds such as sugars, to ammonia, metal ions or even hydrogen gas. The salt-tolerant Haloarchaea use sunlight as an energy source, and other species of archaea fix carbon (autotrophy), but unlike plants and cyanobacteria, no known species of archaea does both. Archaea reproduce asexually by binary fission, fragmentation, or budding; unlike bacteria, no known species of Archaea form endospores. The first observed archaea were extremophiles, living in extreme environments such as hot springs and salt lakes with no other organisms. Improved molecular detection tools led to the discovery of archaea in almost every habitat, including soil, oceans, and marshlands. Archaea are particularly numerous in the oceans, and
https://en.wikipedia.org/wiki/Embedded%20system
An embedded system is a computer system—a combination of a computer processor, computer memory, and input/output peripheral devices—that has a dedicated function within a larger mechanical or electronic system. It is embedded as part of a complete device often including electrical or electronic hardware and mechanical parts. Because an embedded system typically controls physical operations of the machine that it is embedded within, it often has real-time computing constraints. Embedded systems control many devices in common use. It has been estimated that ninety-eight percent of all microprocessors manufactured were used in embedded systems. Modern embedded systems are often based on microcontrollers (i.e. microprocessors with integrated memory and peripheral interfaces), but ordinary microprocessors (using external chips for memory and peripheral interface circuits) are also common, especially in more complex systems. In either case, the processor(s) used may be types ranging from general purpose to those specialized in a certain class of computations, or even custom designed for the application at hand. A common standard class of dedicated processors is the digital signal processor (DSP). Since the embedded system is dedicated to specific tasks, design engineers can optimize it to reduce the size and cost of the product and increase its reliability and performance. Some embedded systems are mass-produced, benefiting from economies of scale. Embedded systems range in size from portable personal devices such as digital watches and MP3 players to bigger machines like home appliances, industrial assembly lines, robots, transport vehicles, traffic light controllers, and medical imaging systems. Often they constitute subsystems of other machines like avionics in aircraft and astrionics in spacecraft. Large installations like factories, pipelines and electrical grids rely on multiple embedded systems networked together. Generalized through software customization, embed
https://en.wikipedia.org/wiki/Directory%20System%20Agent
A Directory System Agent (DSA) is the element of an X.500 directory service that provides User Agents with access to a portion of the directory (usually the portion associated with a single Organizational Unit). X.500 is an international standard developed by the International Organization for Standardization (ISO) and the International Telecommunication Union (ITU-T). The model and function of a directory system agent are specified in ITU-T Recommendation X.501. Active Directory In Microsoft's Active Directory the DSA is a collection of servers and daemon processes that run on Windows Server systems that provide various means for clients to access the Active Directory data store. Clients connect to an Active Directory DSA using various communications protocols: LDAP version 3.0 (used by Windows 2000 and Windows XP clients); LDAP version 2.0; the Security Account Manager (SAM) interface (used by Windows NT clients); the MAPI RPC interface (used by Microsoft Exchange Server and other MAPI clients); and a proprietary RPC interface (used by Active Directory DSAs to communicate with one another and replicate data amongst themselves).
https://en.wikipedia.org/wiki/List%20of%20exponential%20topics
This is a list of exponential topics, by Wikipedia page. See also list of logarithm topics. Accelerating change Approximating natural exponents (log base e) Artin–Hasse exponential Bacterial growth Baker–Campbell–Hausdorff formula Cell growth Barometric formula Beer–Lambert law Characterizations of the exponential function Catenary Compound interest De Moivre's formula Derivative of the exponential map Doléans-Dade exponential Doubling time e-folding Elimination half-life Error exponent Euler's formula Euler's identity e (mathematical constant) Exponent Exponent bias Exponential (disambiguation) Exponential backoff Exponential decay Exponential dichotomy Exponential discounting Exponential diophantine equation Exponential dispersion model Exponential distribution Exponential error Exponential factorial Exponential family Exponential field Exponential formula Exponential function Exponential generating function Exponential-Golomb coding Exponential growth Exponential hierarchy Exponential integral Exponential integrator Exponential map (Lie theory) Exponential map (Riemannian geometry) Exponential map (discrete dynamical systems) Exponential notation Exponential object (category theory) Exponential polynomials—see also Touchard polynomials (combinatorics) Exponential response formula Exponential sheaf sequence Exponential smoothing Exponential stability Exponential sum Exponential time Sub-exponential time Exponential tree Exponential type Exponentially equivalent measures Exponentiating by squaring Exponentiation Fermat's Last Theorem Forgetting curve Gaussian function Gudermannian function Half-exponential function Half-life Hyperbolic function Inflation, inflation rate Interest Lambert W function Lifetime (physics) Limiting factor Lindemann–Weierstrass theorem
https://en.wikipedia.org/wiki/Time-lapse%20microscopy
Time-lapse microscopy is time-lapse photography applied to microscopy. Microscope image sequences are recorded and then viewed at a greater speed to give an accelerated view of the microscopic process. Before the introduction of the video tape recorder in the 1960s, time-lapse microscopy recordings were made on photographic film. During this period, time-lapse microscopy was referred to as microcinematography. With the increasing use of video recorders, the term time-lapse video microscopy was gradually adopted. Today, the term video is increasingly dropped, reflecting that a digital still camera is used to record the individual image frames, instead of a video recorder. Applications Time-lapse microscopy can be used to observe any microscopic object over time. However, its main use is within cell biology to observe artificially cultured cells. Depending on the cell culture, different microscopy techniques can be applied to enhance characteristics of the cells as most cells are transparent. To enhance observations further, cells have therefore traditionally been stained before observation. Unfortunately, the staining process kills the cells. The development of less destructive staining methods and methods to observe unstained cells has led cell biologists to increasingly observe living cells. This is known as live-cell imaging. A few tools have been developed to identify and analyze single cells during live-cell imaging. Time-lapse microscopy is the method that extends live-cell imaging from a single observation in time to the observation of cellular dynamics over long periods of time. Time-lapse microscopy is primarily used in research, but is also used clinically in IVF clinics, as studies have shown it to increase pregnancy rates, lower abortion rates and predict aneuploidy. Modern approaches are further extending time-lapse microscopy observations beyond making movies of cellular dynamics. Traditionally, cells have been observed in a microscope and measured
https://en.wikipedia.org/wiki/Direct%20numerical%20control
Direct numerical control (DNC), also known as distributed numerical control (also DNC), is a common manufacturing term for networking CNC machine tools. On some CNC machine controllers, the available memory is too small to contain the machining program (for example machining complex surfaces), so in this case the program is stored in a separate computer and sent directly to the machine, one block at a time. If the computer is connected to a number of machines it can distribute programs to different machines as required. Usually, the manufacturer of the control provides suitable DNC software. However, if this provision is not possible, some software companies provide DNC applications that fulfill the purpose. DNC networking or DNC communication is always required when CAM programs are to run on some CNC machine control. Wireless DNC is also used in place of hard-wired versions. Controls of this type are very widely used in industries with significant sheet metal fabrication, such as the automotive, appliance, and aerospace industries. History 1950s-1970s Programs had to be walked to NC controls, generally on paper tape. NC controls had paper tape readers precisely for this purpose. Many companies were still punching programs on paper tape well into the 1980s, more than twenty-five years after its elimination in the computer industry. 1980s The focus in the 1980s was mainly on reliably transferring NC programs between a host computer and the control. The Host computers would frequently be Sun Microsystems, HP, Prime, DEC or IBM type computers running a variety of CAD/CAM software. DNC companies offered machine tool links using rugged proprietary terminals and networks. For example, DLog offered an x86 based terminal, and NCPC had one based on the 6809. The host software would be responsible for tracking and authorising NC program modifications. Depending on program size, for the first time operators had the opportunity to modify programs at the DNC terminal. No
https://en.wikipedia.org/wiki/List%20of%20heaviest%20people
This is a list of the heaviest people who have been weighed and verified, living and dead. The list is organised by the peak weight reached by an individual and is limited to those who are over . Heaviest people ever recorded See also Big Pun (1971–2000), American rapper whose weight at death was . Edward Bright (1721–1750) and Daniel Lambert (1770–1809), men from England who were famous in their time for their obesity. Happy Humphrey, the heaviest professional wrestler, weighing in at at his peak. Israel Kamakawiwoʻole (1959–1997), Hawaiian singer whose weight peaked at . Paul Kimelman (born 1947), holder of Guinness World Record for the greatest weight-loss in the shortest amount of time, 1982 Billy and Benny McCrary, holders of Guinness World Records's World's Heaviest Twins. Alayna Morgan (1948–2009), heavy woman from Santa Rosa, California. Ricky Naputi (1973–2012), heaviest man from Guam. Carl Thompson (1982–2015), heaviest man in the United Kingdom whose weight at death was . Renee Williams (1977–2007), woman from Austin, Texas. Yokozuna, the heaviest WWE wrestler, weighing between and at his peak. Barry Austin and Jack Taylor, two obese British men documented in the comedy-drama The Fattest Man in Britain. Yamamotoyama Ryūta, heaviest Japanese-born sumo wrestler; is also thought to be the heaviest Japanese person ever at .
https://en.wikipedia.org/wiki/Matching%20pursuit
Matching pursuit (MP) is a sparse approximation algorithm which finds the "best matching" projections of multidimensional data onto the span of an over-complete (i.e., redundant) dictionary D. The basic idea is to approximately represent a signal f from Hilbert space H as a weighted sum of finitely many functions g_{γ_n} (called atoms) taken from D. An approximation with N atoms has the form f ≈ f_N := Σ_{n=1}^{N} a_n g_{γ_n}, where g_{γ_n} is the γ_n-th column of the matrix D and a_n is the scalar weighting factor (amplitude) for the atom g_{γ_n}. Normally, not every atom in D will be used in this sum. Instead, matching pursuit chooses the atoms one at a time in order to maximally (greedily) reduce the approximation error. This is achieved by finding the atom that has the highest inner product with the signal (assuming the atoms are normalized), subtracting from the signal an approximation that uses only that one atom, and repeating the process until the signal is satisfactorily decomposed, i.e., the norm of the residual is small, where the residual after calculating γ_N and a_N is denoted by R_{N+1} = f − f_N. If R_N converges quickly to zero, then only a few atoms are needed to get a good approximation to f. Such sparse representations are desirable for signal coding and compression. More precisely, the sparsity problem that matching pursuit is intended to approximately solve is min_x ||f − Dx||_2^2 subject to ||x||_0 ≤ N, where ||x||_0 is the L_0 pseudo-norm (i.e. the number of nonzero elements of x). In the previous notation, the nonzero entries of x are the amplitudes a_n. Solving the sparsity problem exactly is NP-hard, which is why approximation methods like MP are used. For comparison, consider the Fourier transform representation of a signal - this can be described using the terms given above, where the dictionary is built from sinusoidal basis functions (the smallest possible complete dictionary). The main disadvantage of Fourier analysis in signal processing is that it extracts only the global features of the signals and does not adapt to the analysed signals. By taking an extremely redundant dictionary, we can look
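To make the greedy selection loop concrete, here is a minimal matching pursuit sketch in Python/NumPy. It is only an illustration of the procedure described above, not reference code from any particular library, and it assumes the dictionary D has unit-norm columns.

```python
# Minimal matching pursuit sketch (illustrative; assumes unit-norm atoms).
import numpy as np

def matching_pursuit(f, D, n_atoms=10, tol=1e-6):
    """Greedily approximate signal f as a sparse combination of columns of D."""
    residual = f.astype(float).copy()
    coeffs = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        inner = D.T @ residual              # correlations with every atom
        k = np.argmax(np.abs(inner))        # best-matching atom
        coeffs[k] += inner[k]               # accumulate its amplitude
        residual -= inner[k] * D[:, k]      # subtract the one-atom approximation
        if np.linalg.norm(residual) < tol:  # stop once the residual is small
            break
    return coeffs, residual

# Example: 64-sample signal, random overcomplete dictionary of 256 unit-norm atoms.
rng = np.random.default_rng(0)
D = rng.standard_normal((64, 256))
D /= np.linalg.norm(D, axis=0)
f = 2.0 * D[:, 5] - 0.5 * D[:, 100]          # sparse ground truth
x, r = matching_pursuit(f, D, n_atoms=5)
print(np.nonzero(x)[0], np.linalg.norm(r))
```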
https://en.wikipedia.org/wiki/Register%20file
A register file is an array of processor registers in a central processing unit (CPU). Register banking is the method of using a single name to access multiple different physical registers depending on the operating mode. Modern integrated circuit-based register files are usually implemented by way of fast static RAMs with multiple ports. Such RAMs are distinguished by having dedicated read and write ports, whereas ordinary multiported SRAMs will usually read and write through the same ports. The instruction set architecture of a CPU will almost always define a set of registers which are used to stage data between memory and the functional units on the chip. In simpler CPUs, these architectural registers correspond one-for-one to the entries in a physical register file (PRF) within the CPU. More complicated CPUs use register renaming, so that the mapping of which physical entry stores a particular architectural register changes dynamically during execution. The register file is part of the architecture and visible to the programmer, as opposed to the concept of transparent caches. Register-bank switching Register files may be grouped together as register banks. A processor may have more than one register bank. ARM processors have both banked and unbanked registers. While all modes always share the same physical registers for the first eight general-purpose registers, R0 to R7, the physical register which the banked registers, R8 to R14, point to depends on the operating mode the processor is in. Notably, Fast Interrupt Request (FIQ) mode has its own bank of registers for R8 to R12, with the architecture also providing a private stack pointer (R13) for every interrupt mode. x86 processors use context switching and fast interrupts to switch between the instruction decoder, GPRs and register files, if there is more than one, before the instruction is issued, but this exists only on processors that support superscalar execution. However, context switching is a totall
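As an illustration of the register-banking idea described above, the following Python sketch models a simplified ARM-like register file in which R0 to R7 are shared and R8 to R14 come from a per-mode bank. It is a simplification: on real ARM hardware only FIQ banks R8 to R12, while the other exception modes bank only R13 and R14.

```python
# Illustrative sketch of register banking (simplified ARM-like modes; not a
# model of any specific silicon - see the caveat in the lead-in above).
class BankedRegisterFile:
    def __init__(self):
        self.shared = [0] * 8                      # R0-R7: shared by all modes
        self.banks = {mode: [0] * 7                # R8-R14 per mode (simplified)
                      for mode in ("usr", "fiq", "irq", "svc")}
        self.mode = "usr"

    def read(self, n):
        # R0-R7 always come from the shared bank; R8-R14 depend on the mode.
        return self.shared[n] if n < 8 else self.banks[self.mode][n - 8]

    def write(self, n, value):
        if n < 8:
            self.shared[n] = value
        else:
            self.banks[self.mode][n - 8] = value

rf = BankedRegisterFile()
rf.write(13, 0x8000)        # user-mode stack pointer (R13)
rf.mode = "fiq"
rf.write(13, 0x4000)        # FIQ mode sees its own private R13
rf.mode = "usr"
print(hex(rf.read(13)))     # -> 0x8000: the user-mode R13 is untouched
```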
https://en.wikipedia.org/wiki/Programmable%20Array%20Logic
Programmable Array Logic (PAL) is a family of programmable logic device semiconductors used to implement logic functions in digital circuits introduced by Monolithic Memories, Inc. (MMI) in March 1978. MMI obtained a registered trademark on the term PAL for use in "Programmable Semiconductor Logic Circuits". The trademark is currently held by Lattice Semiconductor. PAL devices consisted of a small PROM (programmable read-only memory) core and additional output logic used to implement particular desired logic functions with few components. Using specialized machines, PAL devices were "field-programmable". PALs were available in several variants: "One-time programmable" (OTP) devices could not be updated and reused after initial programming (MMI also offered a similar family called HAL, or "hard array logic", which were like PAL devices except that they were mask-programmed at the factory). UV-erasable versions (the PALCxxxxx parts, e.g. the PALC22V10) had a quartz window over the chip die and could be erased for re-use with an ultraviolet light source just like an EPROM. Later versions (the PALCExxx parts, e.g. the PALCE22V10) were flash-erasable devices. In most applications, electrically-erasable GALs are now deployed as pin-compatible direct replacements for one-time programmable PALs. History Before PALs were introduced, designers of digital logic circuits would use small-scale integration (SSI) components, such as those in the 7400 series TTL (transistor-transistor logic) family; the 7400 family included a variety of logic building blocks, such as gates (NOT, NAND, NOR, AND, OR), multiplexers (MUXes) and demultiplexers (DEMUXes), flip flops (D-type, JK, etc.) and others. One PAL device would typically replace dozens of such "discrete" logic packages, so the SSI business declined as the PAL business took off. PALs were used advantageously in many products, such as minicomputers, as documented in Tracy Kidder's best-selling book The Soul of a New Machine. PALs were not the
https://en.wikipedia.org/wiki/Signal%20transfer%20function
The signal transfer function (SiTF) is a measure of the signal output versus the signal input of a system such as an infrared system or sensor. There are many general applications of the SiTF. Specifically, in the field of image analysis, it gives a measure of the noise of an imaging system, and thus yields one assessment of its performance. SiTF evaluation In evaluating the SiTF curve, the signal input and signal output are measured differentially; meaning, the differential of the input signal and differential of the output signal are calculated and plotted against each other. An operator, using computer software, defines an arbitrary area, with a given set of data points, within the signal and background regions of the output image of the infrared sensor, i.e. of the unit under test (UUT) (see "Half Moon" image below). The average signal and background are calculated by averaging the data of each arbitrarily defined region. A second order polynomial curve is fitted to the data of each line. Then, the polynomial is subtracted from the average signal and background data to yield the new signal and background. The difference of the new signal and background data is taken to yield the net signal. Finally, the net signal is plotted versus the signal input. The signal input of the UUT is within its own spectral response (e.g. color-correlated temperature, pixel intensity, etc.). The slope of the linear portion of this curve is then found using the method of least squares. SiTF curve The net signal is calculated from the average signal and background, as in the signal-to-noise ratio calculation for imaging systems. The SiTF curve is then given by the signal output data (net signal data) plotted against the signal input data (see graph of SiTF to the right). All the data points in the linear region of the SiTF curve can be used in the method of least squares to find a linear approximation. Given N data points (x_i, y_i), a best-fit line parameterized as y = mx + b is given by m = (N Σ x_i y_i − Σ x_i Σ y_i) / (N Σ x_i² − (Σ x_i)²) and b = (Σ y_i − m Σ x_i) / N. See also Optica
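A minimal sketch of the final least-squares step, assuming the signal-input and net-signal arrays have already been extracted as described above; the numeric values below are made-up placeholders, not measurement data.

```python
# Illustrative SiTF slope extraction by least squares (placeholder values).
import numpy as np

signal_in = np.array([1.0, 2.0, 3.0, 4.0, 5.0])      # e.g. input delta-T steps
net_signal = np.array([0.9, 2.1, 2.9, 4.2, 5.1])      # average signal minus background

# Fit only the (assumed) linear region; np.polyfit returns [slope, intercept].
slope, intercept = np.polyfit(signal_in, net_signal, 1)
print(f"SiTF slope = {slope:.3f} output counts per input unit")
```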
https://en.wikipedia.org/wiki/Routh%E2%80%93Hurwitz%20stability%20criterion
In control system theory, the Routh–Hurwitz stability criterion is a mathematical test that is a necessary and sufficient condition for the stability of a linear time-invariant (LTI) dynamical system or control system. A stable system is one whose output signal is bounded; the position, velocity or energy do not increase to infinity as time goes on. The Routh test is an efficient recursive algorithm that English mathematician Edward John Routh proposed in 1876 to determine whether all the roots of the characteristic polynomial of a linear system have negative real parts. German mathematician Adolf Hurwitz independently proposed in 1895 to arrange the coefficients of the polynomial into a square matrix, called the Hurwitz matrix, and showed that the polynomial is stable if and only if the sequence of determinants of its principal submatrices are all positive. The two procedures are equivalent, with the Routh test providing a more efficient way to compute the Hurwitz determinants than computing them directly. A polynomial satisfying the Routh–Hurwitz criterion is called a Hurwitz polynomial. The importance of the criterion is that the roots p of the characteristic equation of a linear system with negative real parts represent solutions e^(pt) of the system that are stable (bounded). Thus the criterion provides a way to determine if the equations of motion of a linear system have only stable solutions, without solving the system directly. For discrete systems, the corresponding stability test can be handled by the Schur–Cohn criterion, the Jury test and the Bistritz test. With the advent of computers, the criterion has become less widely used, as an alternative is to solve the polynomial numerically, obtaining approximations to the roots directly. The Routh test can be derived through the use of the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices. Hurwitz derived his conditions differently. Using Euclid's algorithm The criterion is rela
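A small Python sketch of the Routh test described above. It is illustrative only: it omits the special cases of a zero first-column element or an all-zero row that a full implementation must handle.

```python
# Simplified Routh array and stability check (no zero-pivot / zero-row handling).
def routh_array(coeffs):
    """Build the Routh array for polynomial coefficients, highest degree first."""
    width = (len(coeffs) + 1) // 2
    first = list(coeffs[0::2]) + [0.0] * (width - len(coeffs[0::2]))
    second = list(coeffs[1::2]) + [0.0] * (width - len(coeffs[1::2]))
    table = [first, second]
    for i in range(2, len(coeffs)):
        prev, prev2 = table[i - 1], table[i - 2]
        row = [(prev[0] * prev2[j + 1] - prev2[0] * prev[j + 1]) / prev[0]
               for j in range(width - 1)] + [0.0]
        table.append(row)
    return table

def is_hurwitz(coeffs):
    """All roots in the left half-plane iff the first column has no sign changes."""
    col = [row[0] for row in routh_array(coeffs) if row[0] != 0]
    return all(c > 0 for c in col) or all(c < 0 for c in col)

# s^3 + 10s^2 + 31s + 30 = (s+2)(s+3)(s+5): all roots negative, so stable.
print(is_hurwitz([1, 10, 31, 30]))   # True
# s^3 + s^2 + s + 6 has a complex pair with positive real part: unstable.
print(is_hurwitz([1, 1, 1, 6]))      # False
```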
https://en.wikipedia.org/wiki/Integrated%20Geo%20Systems
Integrated Geo Systems (IGS) is a computational architecture system developed for managing geoscientific data through systems and data integration. Geosciences often involve large volumes of diverse data which have to be processed by computer and graphics intensive applications. The processes involved in processing these large datasets are often so complex that no single applications software can perform all the required tasks. Specialized applications have emerged for specific tasks. To get the required results, it is necessary that all applications software involved in various stages of data processing, analysis and interpretation effectively communicate with each other by sharing data. IGS provides a framework for maintaining an electronic workflow between various geoscience software applications through data connectivity. The main components of IGS are: Geographic information systems as a front end. Format engine for data connectivity link between various geoscience software applications. The format engine uses Output Input Language (OIL), an interpreted language, to define various data formats. An array of geoscience relational databases for data integration. Data highways as internal data formats for each data type. Specialized geoscience applications software as processing modules. Geoscientific processing libraries
https://en.wikipedia.org/wiki/Principal%20%28computer%20security%29
A principal in computer security is an entity that can be authenticated by a computer system or network. It is referred to as a security principal in Java and Microsoft literature. Principals can be individual people, computers, services, computational entities such as processes and threads, or any group of such things. They need to be identified and authenticated before they can be assigned rights and privileges over resources in the network. A principal typically has an associated identifier (such as a security identifier) that allows it to be referenced for identification or assignment of properties and permissions.
https://en.wikipedia.org/wiki/Service-oriented%20software%20engineering
Service-oriented Software Engineering (SOSE), also referred to as service engineering, is a software engineering methodology focused on the development of software systems by composition of reusable services (service-orientation) often provided by other service providers. Since it involves composition, it shares many characteristics of component-based software engineering, the composition of software systems from reusable components, but it adds the ability to dynamically locate necessary services at run-time. These services may be provided by others as web services, but the essential element is the dynamic nature of the connection between the service users and the service providers. Service-oriented interaction pattern There are three types of actors in a service-oriented interaction: service providers, service users and service registries. They participate in a dynamic collaboration which can vary from time to time. Service providers are software services that publish their capabilities and availability with service registries. Service users are software systems (which may be services themselves) that accomplish some task through the use of services provided by service providers. Service users use service registries to discover and locate the service providers they can use. This discovery and location occurs dynamically when the service user requests them from a service registry. See also Service-oriented architecture (SOA) Service-oriented analysis and design Separation of concerns Component-based software engineering Web services
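A toy Python sketch of the publish/discover/use cycle described above. It is illustrative only; the class and method names are assumptions, not part of any SOA standard or framework.

```python
# Toy service registry illustrating provider/registry/user roles (assumed names).
class ServiceRegistry:
    def __init__(self):
        self._services = {}

    def publish(self, capability, endpoint):
        # A service provider advertises a capability and how to reach it.
        self._services.setdefault(capability, []).append(endpoint)

    def discover(self, capability):
        # A service user looks up providers dynamically, at request time.
        return self._services.get(capability, [])

registry = ServiceRegistry()

# Provider side: publish a capability (here just a callable standing in for a
# remote endpoint such as a web service).
registry.publish("currency-conversion", lambda amount: amount * 0.92)

# User side: discover and invoke whatever provider is currently registered.
providers = registry.discover("currency-conversion")
if providers:
    convert = providers[0]
    print(convert(100))   # 92.0
```

The point of the pattern is that the binding happens at the moment of discovery, so the provider behind a capability can change without the service user being rewritten.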
https://en.wikipedia.org/wiki/Mathematical%20proof
A mathematical proof is a deductive argument for a mathematical statement, showing that the stated assumptions logically guarantee the conclusion. The argument may use other previously established statements, such as theorems; but every proof can, in principle, be constructed using only certain basic or original assumptions known as axioms, along with the accepted rules of inference. Proofs are examples of exhaustive deductive reasoning which establish logical certainty, to be distinguished from empirical arguments or non-exhaustive inductive reasoning which establish "reasonable expectation". Presenting many cases in which the statement holds is not enough for a proof, which must demonstrate that the statement is true in all possible cases. A proposition that has not been proved but is believed to be true is known as a conjecture, or a hypothesis if frequently used as an assumption for further mathematical work. Proofs employ logic expressed in mathematical symbols, along with natural language which usually admits some ambiguity. In most mathematical literature, proofs are written in terms of rigorous informal logic. Purely formal proofs, written fully in symbolic language without the involvement of natural language, are considered in proof theory. The distinction between formal and informal proofs has led to much examination of current and historical mathematical practice, quasi-empiricism in mathematics, and so-called folk mathematics, oral traditions in the mainstream mathematical community or in other cultures. The philosophy of mathematics is concerned with the role of language and logic in proofs, and mathematics as a language. History and etymology The word "proof" comes from the Latin probare (to test). Related modern words are English "probe", "probation", and "probability", Spanish probar (to smell or taste, or sometimes touch or test), Italian provare (to try), and German probieren (to try). The legal term "probity" means authority or credibility, th
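As a small illustration (not drawn from the article), here is a short direct proof in the usual informal-but-rigorous style, establishing a statement for all cases rather than by listing examples:

```latex
\textbf{Claim.} The sum of two even integers is even.

\textbf{Proof.} Let $a$ and $b$ be even integers. By definition there exist
integers $m$ and $n$ with $a = 2m$ and $b = 2n$. Then
\[
  a + b = 2m + 2n = 2(m + n),
\]
and since $m + n$ is an integer, $a + b$ is even. $\blacksquare$
```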
https://en.wikipedia.org/wiki/Path%20computation%20element
In computer networks, a path computation element (PCE) is a system component, application, or network node that is capable of determining and finding a suitable route for conveying data between a source and a destination. Description Routing can be subject to a set of constraints, such as quality of service (QoS), policy, or price. Constraint-based path computation is a strategic component of traffic engineering in MPLS, GMPLS and Segment Routing networks. It is used to determine the path through the network that traffic should follow, and provides the route for each label-switched path (LSP) that is set up. Path computation has previously been performed either in a management system or at the head end of each LSP. But path computation in large, multi-domain networks may be very complex and may require more computational power and network information than is typically available at a network element, yet may still need to be more dynamic than can be provided by a management system. Thus, a PCE is an entity capable of computing paths for a single or set of services. A PCE might be a network node, network management station, or dedicated computational platform that is resource-aware and has the ability to consider multiple constraints for sophisticated path computation. PCE applications compute label-switched paths for MPLS and GMPLS traffic engineering. The various components of the PCE architecture are in the process of being standardized by the IETF's PCE Working Group. PCE represents a vision of networks that separates route computations from the signaling of end-to-end connections and from actual packet forwarding. There is a basic tutorial on PCE as presented at ISOCORE's MPLS2008 conference and a tutorial on advanced PCE as presented at ISOCORE's SDN/MPLS 2014 conference. Since the early days, the PCE architecture has evolved considerably to encompass more sophisticated concepts and allow application to more complicated network scenarios. This evolution inc
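As a toy illustration of constraint-based path computation of the kind a PCE performs, the following Python sketch prunes links that violate a delay constraint and then runs a shortest-cost search. This is not PCEP or any standardized algorithm; the topology and the constraint are invented for the example.

```python
# Toy constrained path computation: prune non-compliant links, then Dijkstra.
import heapq

def compute_path(graph, src, dst, max_link_delay):
    """graph: {node: [(neighbor, cost, delay), ...]}."""
    dist, prev = {src: 0}, {}
    heap = [(0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dst:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v, cost, delay in graph.get(u, []):
            if delay > max_link_delay:        # constraint: skip this link
                continue
            nd = d + cost
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    if dst not in dist:
        return None
    path, node = [dst], dst
    while node != src:
        node = prev[node]
        path.append(node)
    return list(reversed(path))

topology = {
    "A": [("B", 1, 5), ("C", 4, 1)],
    "B": [("D", 1, 20)],          # cheapest link, but violates the delay bound
    "C": [("D", 1, 2)],
}
print(compute_path(topology, "A", "D", max_link_delay=10))   # ['A', 'C', 'D']
```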
https://en.wikipedia.org/wiki/Reflectometry
Reflectometry is a general term for the use of the reflection of waves or pulses at surfaces and interfaces to detect or characterize objects, sometimes to detect anomalies as in fault detection and medical diagnosis. There are many different forms of reflectometry. They can be classified in several ways: by the used radiation (electromagnetic, ultrasound, particle beams), by the geometry of wave propagation (unguided versus wave guides or cables), by the involved length scales (wavelength and penetration depth in relation to size of the investigated object), by the method of measurement (continuous versus pulsed, polarization resolved, ...), and by the application domain. Radiation sources Electromagnetic radiation of widely varying wavelength is used in many different forms of reflectometry: Radar: Reflections of radiofrequency pulses are used to detect the presence and to measure the location and speed of objects such as aircraft, missiles, ships, vehicles. Lidar: Reflections of light pulses are used typically to penetrate ground cover by vegetation in aerial archaeological surveys. Characterization of semiconductor and dielectric thin films: Analysis of reflectance data utilizing the Forouhi Bloomer dispersion equations can determine the thickness, refractive index, and extinction coefficient of thin films utilized in the semiconductor industry. X-ray reflectometry: is a surface-sensitive analytical technique used in chemistry, physics, and materials science to characterize surfaces, thin films and multilayers. Propagation of electric pulses and reflection at discontinuities in cables is used in time domain reflectometry (TDR) to detect and localize defects in electric wiring. Skin reflectance: In anthropology, reflectometry devices are often used to gauge human skin color through the measurement of skin reflectance. These devices are typically pointed at the upper arm or forehead, with the emitted waves then interpreted at various percentages. Lower fr
https://en.wikipedia.org/wiki/Safety-critical%20system
A safety-critical system or life-critical system is a system whose failure or malfunction may result in one (or more) of the following outcomes: death or serious injury to people; loss or severe damage to equipment or property; or environmental harm. A safety-related system (or sometimes safety-involved system) comprises everything (hardware, software, and human aspects) needed to perform one or more safety functions, in which failure would cause a significant increase in the safety risk for the people or environment involved. Safety-related systems are those that do not have full responsibility for controlling hazards such as loss of life, severe injury or severe environmental damage. The malfunction of a safety-involved system would only be that hazardous in conjunction with the failure of other systems or human error. Some safety organizations provide guidance on safety-related systems, for example the Health and Safety Executive in the United Kingdom. Risks of this sort are usually managed with the methods and tools of safety engineering. A safety-critical system is designed to lose less than one life per billion (10⁹) hours of operation. Typical design methods include probabilistic risk assessment, a method that combines failure mode and effects analysis (FMEA) with fault tree analysis. Safety-critical systems are increasingly computer-based. Safety-critical systems are a concept often used together with the Swiss cheese model to represent (usually in a bow-tie diagram) how a threat can escalate to a major accident through the failure of multiple critical barriers. This use has become common especially in the domain of process safety, in particular when applied to oil and gas drilling and production both for illustrative purposes and to support other processes, such as asset integrity management and incident investigation. Reliability regimes Several reliability regimes for safety-critical systems exist: Fail-operational systems continue to operate when their
https://en.wikipedia.org/wiki/Graduate%20Texts%20in%20Mathematics
Graduate Texts in Mathematics (GTM) () is a series of graduate-level textbooks in mathematics published by Springer-Verlag. The books in this series, like the other Springer-Verlag mathematics series, are yellow books of a standard size (with variable numbers of pages). The GTM series is easily identified by a white band at the top of the book. The books in this series tend to be written at a more advanced level than the similar Undergraduate Texts in Mathematics series, although there is a fair amount of overlap between the two series in terms of material covered and difficulty level. List of books Introduction to Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring (1982, 2nd ed., ) Measure and Category – A Survey of the Analogies between Topological and Measure Spaces, John C. Oxtoby (1980, 2nd ed., ) Topological Vector Spaces, H. H. Schaefer, M. P. Wolff (1999, 2nd ed., ) A Course in Homological Algebra, Peter Hilton, Urs Stammbach (1997, 2nd ed., ) Categories for the Working Mathematician, Saunders Mac Lane (1998, 2nd ed., ) Projective Planes, Daniel R. Hughes, Fred C. Piper, (1982, ) A Course in Arithmetic, Jean-Pierre Serre (1996, ) Axiomatic Set Theory, Gaisi Takeuti, Wilson M. Zaring, (1973, ) Introduction to Lie Algebras and Representation Theory, James E. Humphreys (1997, ) A Course in Simple-Homotopy Theory, Marshall. M. Cohen, (1973, ) Functions of One Complex Variable I, John B. Conway (1978, 2nd ed., ) Advanced Mathematical Analysis, Richard Beals (1973, ) Rings and Categories of Modules, Frank W. Anderson, Kent R. Fuller (1992, 2nd ed., ) Stable Mappings and Their Singularities, Martin Golubitsky, Victor Guillemin, (1974, ) Lectures in Functional Analysis and Operator Theory, Sterling K. Berberian, (1974, ) The Structure of Fields, David J. Winter, (1974, ) Random Processes, Murray Rosenblatt, (1974, ) Measure Theory, Paul R. Halmos (1974, ) A Hilbert Space Problem Book, Paul R. Halmos (1982, 2nd ed., ) Fibre Bundles, Dale Husemoller (1994,
https://en.wikipedia.org/wiki/Antibody%20testing
Antibody testing may refer to: Serological testing, tests that detect specific antibodies in the blood; Immunoassay, tests that use antibodies to detect substances; or Antibody titer, tests that measure the amount of a specific antibody in a sample.
https://en.wikipedia.org/wiki/Johnson%E2%80%93Nyquist%20noise
Johnson–Nyquist noise (thermal noise, Johnson noise, or Nyquist noise) is the electronic noise generated by the thermal agitation of the charge carriers (usually the electrons) inside an electrical conductor at equilibrium, which happens regardless of any applied voltage. Thermal noise is present in all electrical circuits, and in sensitive electronic equipment (such as radio receivers) can drown out weak signals, and can be the limiting factor on sensitivity of electrical measuring instruments. Thermal noise increases with temperature. Some sensitive electronic equipment such as radio telescope receivers are cooled to cryogenic temperatures to reduce thermal noise in their circuits. The generic, statistical physical derivation of this noise is called the fluctuation-dissipation theorem, where generalized impedance or generalized susceptibility is used to characterize the medium. Thermal noise in an ideal resistor is approximately white, meaning that the power spectral density is nearly constant throughout the frequency spectrum, but does decay to zero at extremely high frequencies (terahertz for room temperature). When limited to a finite bandwidth, thermal noise has a nearly Gaussian amplitude distribution. History This type of noise was discovered and first measured by John B. Johnson at Bell Labs in 1926. He described his findings to Harry Nyquist, also at Bell Labs, who was able to explain the results. Derivation As Nyquist stated in his 1928 paper, the sum of the energy in the normal modes of electrical oscillation would determine the amplitude of the noise. Nyquist used the equipartition law of Boltzmann and Maxwell. Using the concept of potential energy and harmonic oscillators of the equipartition law, one obtains P = k_B T, where P is the noise power density in W/Hz, k_B is the Boltzmann constant and T is the temperature. Multiplying the equation by bandwidth gives the result as noise power, N = k_B T Δf, where N is the noise power and Δf is the bandwidth. Noise voltage and power The
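A quick numeric check of the formulas above. The resistance, temperature, and bandwidth values are arbitrary examples, and the sqrt(4·k_B·T·R·Δf) expression used for the voltage is the standard open-circuit RMS noise voltage of a resistor, not a quantity derived elsewhere in this excerpt.

```python
# Johnson-Nyquist noise: example numbers only.
import math

k_B = 1.380649e-23          # Boltzmann constant, J/K

def thermal_noise_power(T, bandwidth):
    """N = k_B * T * delta_f, in watts."""
    return k_B * T * bandwidth

def thermal_noise_voltage_rms(T, R, bandwidth):
    """Open-circuit RMS noise voltage of a resistor R: sqrt(4 k_B T R delta_f)."""
    return math.sqrt(4 * k_B * T * R * bandwidth)

# A 10 kilo-ohm resistor at 300 K over a 10 kHz bandwidth:
print(thermal_noise_power(300, 10e3))                 # ~4.1e-17 W
print(thermal_noise_voltage_rms(300, 10e3, 10e3))     # ~1.3 microvolts RMS
```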
https://en.wikipedia.org/wiki/Clipping%20%28signal%20processing%29
Clipping is a form of distortion that limits a signal once it exceeds a threshold. Clipping may occur when a signal is recorded by a sensor that has constraints on the range of data it can measure, it can occur when a signal is digitized, or it can occur any other time an analog or digital signal is transformed, particularly in the presence of gain or overshoot and undershoot. Clipping may be described as hard, in cases where the signal is strictly limited at the threshold, producing a flat cutoff; or it may be described as soft, in cases where the clipped signal continues to follow the original at a reduced gain. Hard clipping results in many high-frequency harmonics; soft clipping results in fewer higher-order harmonics and intermodulation distortion components. Audio In the frequency domain, clipping produces strong harmonics in the high-frequency range (as the clipped waveform comes closer to a squarewave). The extra high-frequency weighting of the signal could make tweeter damage more likely than if the signal was not clipped. Many electric guitar players intentionally overdrive their amplifiers (or insert a "fuzz box") to cause clipping in order to get a desired sound (see guitar distortion). In general, the distortion associated with clipping is unwanted, and is visible on an oscilloscope even if it is inaudible. Images In the image domain, clipping is seen as desaturated (washed-out) bright areas that turn to pure white if all color components clip. In digital colour photography, it is also possible for individual colour channels to clip, which results in inaccurate colour reproduction. Causes Analog circuitry A circuit designer may intentionally use a clipper or clamper to keep a signal within a desired range. When an amplifier is pushed to create a signal with more power than it can support, it will amplify the signal only up to its maximum capacity, at which point the signal will be amplified no further. An integrated circuit or discrete
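For illustration, a short NumPy sketch contrasting hard clipping (a flat cutoff at the threshold) with one common form of soft clipping (a tanh curve; the article does not prescribe any particular soft-clipping function).

```python
# Hard vs. soft clipping of an over-driven sine wave (illustrative).
import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
x = 1.5 * np.sin(2 * np.pi * 5 * t)           # drive the signal past the threshold

threshold = 1.0
hard = np.clip(x, -threshold, threshold)       # flat tops: many high-order harmonics
soft = np.tanh(x)                              # rounded tops: gentler harmonic content

print(hard.max(), soft.max())                  # 1.0, ~0.905
```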
https://en.wikipedia.org/wiki/Social%20Bonding%20and%20Nurture%20Kinship
Social Bonding and Nurture Kinship: Compatibility between Cultural and Biological Approaches is a book on human kinship and social behavior by Maximilian Holland, published in 2012. The work synthesizes the perspectives of evolutionary biology, psychology and sociocultural anthropology towards understanding human social bonding and cooperative behavior. It presents a theoretical treatment that many consider to have resolved longstanding questions about the proper place of genetic (or 'blood') connections in human kinship and social relations, and a synthesis that "should inspire more nuanced ventures in applying Darwinian approaches to sociocultural anthropology". The book has been called "A landmark in the field of evolutionary biology" which "gets to the heart of the matter concerning the contentious relationship between kinship categories, genetic relatedness and the prediction of behavior", "places genetic determinism in the correct perspective" and serves as "a shining example of what can be achieved when excellent scholars engage fully across disciplinary boundaries." The aim of the book is to show that "properly interpreted, cultural anthropology approaches (and ethnographic data) and biological approaches are perfectly compatible regarding processes of social bonding in humans." Holland's position is based on demonstrating that the dominant biological theory of social behavior (inclusive fitness theory) is typically misunderstood to predict that genetic ties are necessary for the expression of social behaviors, whereas in fact the theory only implicates genetic associations as necessary for the evolution of social behaviors. Whilst rigorous evolutionary biologists have long understood the distinction between these levels of analysis (see Tinbergen's four questions), past attempts to apply inclusive fitness theory to humans have often overlooked the distinction between evolution and expression. Beyond its central argument, the broader philosophical implic
https://en.wikipedia.org/wiki/Universality%E2%80%93diversity%20paradigm
The universality–diversity paradigm is the analysis of biological materials based on the universality and diversity of their fundamental structural elements and functional mechanisms. The analysis of biological systems based on this classification has been a cornerstone of modern biology. For example, proteins constitute the elementary building blocks of a vast variety of biological materials such as cells, spider silk or bone, where they create extremely robust, multi-functional materials by self-organization of structures over many length- and time scales, from nano to macro. Some of the structural features are commonly found in many different tissues, that is, they are highly conserved. Examples of such universal building blocks include alpha-helices, beta-sheets or tropocollagen molecules. In contrast, other features are highly specific to tissue types, such as particular filament assemblies, beta-sheet nanocrystals in spider silk or tendon fascicles. This coexistence of universality and diversity—referred to as the universality–diversity paradigm (UDP)—is an overarching feature in biological materials and a crucial component of materiomics. It might provide guidelines for bioinspired and biomimetic material development, where this concept is translated into the use of inorganic or hybrid organic-inorganic building blocks. See also Materiomics Phylogenetics
https://en.wikipedia.org/wiki/List%20of%20Lie%20groups%20topics
This is a list of Lie group topics, by Wikipedia page. Examples See Table of Lie groups for a list General linear group, special linear group SL2(R) SL2(C) Unitary group, special unitary group SU(2) SU(3) Orthogonal group, special orthogonal group Rotation group SO(3) SO(8) Generalized orthogonal group, generalized special orthogonal group The special unitary group SU(1,1) is the unit sphere in the ring of coquaternions. It is the group of hyperbolic motions of the Poincaré disk model of the Hyperbolic plane. Lorentz group Spinor group Symplectic group Exceptional groups G2 F4 E6 E7 E8 Affine group Euclidean group Poincaré group Heisenberg group Lie algebras Commutator Jacobi identity Universal enveloping algebra Baker-Campbell-Hausdorff formula Casimir invariant Killing form Kac–Moody algebra Affine Lie algebra Loop algebra Graded Lie algebra Foundational results One-parameter group, One-parameter subgroup Matrix exponential Infinitesimal transformation Lie's third theorem Maurer–Cartan form Cartan's theorem Cartan's criterion Local Lie group Formal group law Hilbert's fifth problem Hilbert-Smith conjecture Lie group decompositions Real form (Lie theory) Complex Lie group Complexification (Lie group) Semisimple theory Simple Lie group Compact Lie group, Compact real form Semisimple Lie algebra Root system Simply laced group ADE classification Maximal torus Weyl group Dynkin diagram Weyl character formula Representation theory Representation of a Lie group Representation of a Lie algebra Adjoint representation of a Lie group Adjoint representation of a Lie algebra Unitary representation Weight (representation theory) Peter–Weyl theorem Borel–Weil theorem Kirillov character formula Representation theory of SU(2) Representation theory of SL2(R) Applications Physical theories Pauli matrices Gell-Mann matrices Poisson bracket Noether's theorem Wigner's classification Gauge theory Grand unification theory Supergroup Lie superalgebra Twistor theory Anyon Witt
https://en.wikipedia.org/wiki/Reproductive%20biology
Reproductive biology includes both sexual and asexual reproduction. Reproductive biology includes a wide range of fields: Reproductive systems Endocrinology Sexual development (Puberty) Sexual maturity Reproduction Fertility Human reproductive biology Endocrinology Human reproductive biology is primarily controlled through hormones, which send signals to the human reproductive structures to influence growth and maturation. These hormones are secreted by endocrine glands, and spread to different tissues in the human body. In humans, the pituitary gland synthesizes hormones used to control the activity of endocrine glands. Reproductive systems Internal and external organs are included in the reproductive system. There are two reproductive systems including the male and female, which contain different organs from one another. These systems work together in order to produce offspring. Female reproductive system The female reproductive system includes the structures involved in ovulation, fertilization, development of an embryo, and birth. These structures include: Ovaries Oviducts Uterus Vagina Mammary Glands Estrogen is one of the sexual reproductive hormones that aid in the sexual reproductive system of the female. Male reproductive system The male reproductive system includes testes, rete testis, efferent ductules, epididymis, sex accessory glands, sex accessory ducts and external genitalia. Testosterone, an androgen, although present in both males and females, is relatively more abundant in males. Testosterone serves as one of the major sexual reproductive hormones in the male reproductive system. However, the enzyme aromatase is present in testes and capable of synthesizing estrogens from androgens. Estrogens are present in high concentrations in luminal fluids of the male reproductive tract. Androgen and estrogen receptors are abundant in epithelial cells of the male reproductive tract. Animal Reproductive Biology Animal reproduction oc
https://en.wikipedia.org/wiki/Packet%20switching
In telecommunications, packet switching is a method of grouping data into packets that are transmitted over a digital network. Packets are made of a header and a payload. Data in the header is used by networking hardware to direct the packet to its destination, where the payload is extracted and used by an operating system, application software, or higher layer protocols. Packet switching is the primary basis for data communications in computer networks worldwide. During the early 1960s, Polish-American engineer Paul Baran developed a concept he called "distributed adaptive message block switching", with the goal of providing a fault-tolerant, efficient routing method for telecommunication messages as part of a research program at the RAND Corporation, funded by the United States Department of Defense. His ideas contradicted then-established principles of pre-allocation of network bandwidth, exemplified by the development of telecommunications in the Bell System. The new concept found little resonance among network implementers until the independent work of British computer scientist Donald Davies at the National Physical Laboratory in 1965. Davies coined the modern term packet switching and inspired numerous packet switching networks in the decade following, including the incorporation of the concept into the design of the ARPANET in the United States and the CYCLADES network in France. The ARPANET and CYCLADES were the primary precursor networks of the modern Internet. Concept A simple definition of packet switching is: Packet switching allows delivery of variable bit rate data streams, realized as sequences of packets, over a computer network which allocates transmission resources as needed using statistical multiplexing or dynamic bandwidth allocation techniques. As they traverse networking hardware, such as switches and routers, packets are received, buffered, queued, and retransmitted (stored and forwarded), resulting in variable latency and throughput de
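A minimal Python sketch of the idea described above: packets carrying a header that nodes read for forwarding plus an opaque payload, buffered and retransmitted store-and-forward style. The field names and the routing table are invented for illustration and do not correspond to any specific protocol.

```python
# Toy packet and store-and-forward node (illustrative field names only).
from dataclasses import dataclass
from collections import deque

@dataclass
class Packet:
    src: str          # header fields used by the network for forwarding
    dst: str
    payload: bytes    # carried unchanged; interpreted only by the endpoints

class StoreAndForwardNode:
    """Receives packets into a buffer (queue), then forwards them one by one."""
    def __init__(self, routes):
        self.routes = routes           # dst -> next hop
        self.queue = deque()

    def receive(self, packet):
        self.queue.append(packet)      # buffering/queueing: a source of variable latency

    def forward_all(self):
        while self.queue:
            pkt = self.queue.popleft()
            next_hop = self.routes.get(pkt.dst, "drop")
            print(f"{pkt.src} -> {pkt.dst} via {next_hop}: {pkt.payload!r}")

node = StoreAndForwardNode(routes={"hostB": "router2"})
node.receive(Packet("hostA", "hostB", b"hello"))
node.forward_all()
```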
https://en.wikipedia.org/wiki/List%20of%20irreducible%20Tits%20indices
In the mathematical theory of linear algebraic groups, a Tits index (or index) is an object used to classify semisimple algebraic groups defined over a base field k, not assumed to be algebraically closed. The possible irreducible indices were classified by Jacques Tits, and this classification is reproduced below. (Because every index is a direct sum of irreducible indices, classifying all indices amounts to classifying irreducible indices.) Organization of the list An index can be represented as a Dynkin diagram with certain vertices drawn close to each other (the orbit of the vertices under the *-action of the Galois group of k) and with certain sets of vertices circled (the orbits of the non-distinguished vertices under the *-action). This representation captures the full information of the index except when the underlying Dynkin diagram is D4, in which case one must distinguish between an action by the cyclic group C3 or the permutation group S3. Alternatively, an index can be represented using the name of the underlying Dynkin diagram together with additional superscripts and subscripts, to be explained momentarily. This representation, together with the labeled Dynkin diagram described in the previous paragraph, captures the full information of the index. The notation for an index is of the form gXtn,r, written with g as a left superscript, t as a right superscript, and n and r as subscripts of the letter X, where X is the letter of the underlying Dynkin diagram (A, B, C, D, E, F, or G), n is the number of vertices of the Dynkin diagram, r is the relative rank of the corresponding algebraic group, g is the order of the quotient of the absolute Galois group that acts faithfully on the Dynkin diagram (so g = 1, 2, 3, or 6), and t is either the degree of a certain division algebra (that is, the square root of its dimension) arising in the construction of the algebraic group when the group is of classical type (A, B, C, or D), in which case t is written in parentheses, or the dimension of the anisotropic kernel of the algebraic group when the group is of excepti
https://en.wikipedia.org/wiki/Landrace
A landrace is a domesticated, locally adapted, often traditional variety of a species of animal or plant that has developed over time, through adaptation to its natural and cultural environment of agriculture and pastoralism, and due to isolation from other populations of the species. Landraces are distinct from cultivars and from standard breeds. A significant proportion of farmers around the world grow landrace crops, and most plant landraces are associated with traditional agricultural systems. Landraces of many crops have probably been grown for millennia. Increasing reliance upon modern plant cultivars that are bred to be uniform has led to a reduction in biodiversity, because most of the genetic diversity of domesticated plant species lies in landraces and other traditionally used varieties. Some farmers using scientifically improved varieties also continue to raise landraces for agronomic reasons that include better adaptation to the local environment, lower fertilizer requirements, lower cost, and better disease resistance. Cultural and market preferences for landraces include culinary uses and product attributes such as texture, color, or ease of use. Plant landraces have been the subject of more academic research, and the majority of academic literature about landraces is focused on botany in agriculture, not animal husbandry. Animal landraces are distinct from ancestral wild species of modern animal stock, and are also distinct from separate species or subspecies derived from the same ancestor as modern domestic stock. Not all landraces derive from wild or ancient animal stock; in some cases, notably dogs and horses, domestic animals have escaped in sufficient numbers in an area to breed feral populations that form new landraces through evolutionary pressure. Characteristics There are differences between authoritative sources on the specific criteria which describe landraces, although there is broad consensus about the existence and utility of the cla
https://en.wikipedia.org/wiki/IP%20connectivity%20access%20network
IP-CAN (or IP connectivity access network) is an access network that provides Internet Protocol (IP) connectivity. The term is usually used in a cellular context and typically refers to 3GPP access networks such as GPRS or EDGE, but it can also be used to describe wireless LAN (WLAN) or DSL networks. It was introduced in the 3GPP IP Multimedia Subsystem (IMS) standards as a generic term for any kind of IP-based access network, since IMS puts much emphasis on the separation of access and service networks. See also IP multimedia subsystem Radio access network
https://en.wikipedia.org/wiki/Projection%20%28mathematics%29
In mathematics, a projection is an idempotent mapping of a set (or other mathematical structure) into a subset (or sub-structure). In this case, idempotent means that projecting twice is the same as projecting once. The restriction to a subspace of a projection is also called a projection, even if the idempotence property is lost. An everyday example of a projection is the casting of shadows onto a plane (sheet of paper): the projection of a point is its shadow on the sheet of paper, and the projection (shadow) of a point on the sheet of paper is that point itself (idempotency). The shadow of a three-dimensional sphere is a closed disk. Originally, the notion of projection was introduced in Euclidean geometry to denote the projection of the three-dimensional Euclidean space onto a plane in it, like the shadow example. The two main projections of this kind are: The projection from a point onto a plane or central projection: If C is a point, called the center of projection, then the projection of a point P different from C onto a plane that does not contain C is the intersection of the line CP with the plane. The points P such that the line CP is parallel to the plane do not have any image under the projection, but one often says that they project to a point at infinity of the plane (see Projective geometry for a formalization of this terminology). The projection of the point C itself is not defined. The projection parallel to a direction D, onto a plane or parallel projection: The image of a point P is the intersection with the plane of the line parallel to D passing through P. See for an accurate definition, generalized to any dimension. The concept of projection in mathematics is a very old one, and most likely has its roots in the phenomenon of the shadows cast by real-world objects on the ground. This rudimentary idea was refined and abstracted, first in a geometric context and later in other branches of mathematics. Over time different versions of the con
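As a small numerical illustration of idempotence, the following sketch (assuming NumPy; the matrices and the test point are arbitrary choices, not taken from the text above) builds an orthogonal projection onto the xy-plane and a projection onto a line, and checks that applying either one twice gives the same result as applying it once.

```python
import numpy as np

# Orthogonal projection of R^3 onto the xy-plane (the "shadow on a sheet of
# paper" lit from directly above): simply drop the z-coordinate.
P_plane = np.array([[1.0, 0.0, 0.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 0.0]])

# Orthogonal projection onto the line spanned by a direction d (illustrative).
d = np.array([1.0, 2.0, 2.0])
P_line = np.outer(d, d) / d.dot(d)

for P in (P_plane, P_line):
    # Idempotence: projecting twice is the same as projecting once.
    assert np.allclose(P @ P, P)

point = np.array([3.0, 4.0, 5.0])
print(P_plane @ point)   # [3. 4. 0.] -- the shadow of the point on the plane
print(P_line @ point)    # the component of the point along d
```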
https://en.wikipedia.org/wiki/Baking
Baking is a method of preparing food that uses dry heat, typically in an oven, but can also be done in hot ashes, or on hot stones. The most common baked item is bread, but many other types of foods can be baked. Heat is gradually transferred "from the surface of cakes, cookies, and pieces of bread to their center. As heat travels through, it transforms batters and doughs into baked goods and more with a firm dry crust and a softer center". Baking can be combined with grilling to produce a hybrid barbecue variant by using both methods simultaneously, or one after the other. Baking is related to barbecuing because the concept of the masonry oven is similar to that of a smoke pit. Baking has traditionally been performed at home for day-to-day meals and in bakeries and restaurants for local consumption. When production was industrialized, baking was automated by machines in large factories. The art of baking remains a fundamental skill and is important for nutrition, as baked goods, especially bread, are a common and important food, both from an economic and cultural point of view. A person who prepares baked goods as a profession is called a baker. On a related note, a pastry chef is someone who is trained in the art of making pastries, cakes, desserts, bread, and other baked goods. Foods and techniques All types of food can be baked, but some require special care and protection from direct heat. Various techniques have been developed to provide this protection. In addition to bread, baking is used to prepare cakes, pastries, pies, tarts, quiches, cookies, scones, crackers, pretzels, and more. These popular items are known collectively as "baked goods," and are often sold at a bakery, which is a store that carries only baked goods, or at markets, grocery stores, farmers markets or through other venues. Meat, including cured meats, such as ham can also be baked, but baking is usually reserved for meatloaf, smaller cuts of whole meats, or whole meats that contain s
https://en.wikipedia.org/wiki/Curie%27s%20principle
Curie's principle, or Curie's symmetry principle, is a maxim about cause and effect formulated by Pierre Curie in 1894, stating that the symmetries of the causes are to be found in their effects. The idea was based on the ideas of Franz Ernst Neumann and Bernhard Minnigerode; thus, it is sometimes known as the Neumann–Minnigerode–Curie principle.
https://en.wikipedia.org/wiki/Table%20of%20mathematical%20symbols%20by%20introduction%20date
The following table lists many specialized symbols commonly used in modern mathematics, ordered by their introduction date. See also History of mathematical notation History of the Hindu–Arabic numeral system Glossary of mathematical symbols List of mathematical symbols by subject Mathematical notation Mathematical operators and symbols in Unicode External links RapidTables: Math Symbols List Jeff Miller: Earliest Uses of Various Mathematical Symbols
https://en.wikipedia.org/wiki/Kronecker%20delta
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise: or with use of Iverson brackets: For example, because , whereas because . The Kronecker delta appears naturally in many areas of mathematics, physics, engineering and computer science, as a means of compactly expressing its definition above. In linear algebra, the identity matrix has entries equal to the Kronecker delta: where and take the values , and the inner product of vectors can be written as Here the Euclidean vectors are defined as -tuples: and and the last step is obtained by using the values of the Kronecker delta to reduce the summation over . It is common for and to be restricted to a set of the form or , but the Kronecker delta can be defined on an arbitrary set. Properties The following equations are satisfied: Therefore, the matrix can be considered as an identity matrix. Another useful representation is the following form: This can be derived using the formula for the geometric series. Alternative notation Using the Iverson bracket: Often, a single-argument notation is used, which is equivalent to setting : In linear algebra, it can be thought of as a tensor, and is written . Sometimes the Kronecker delta is called the substitution tensor. Digital signal processing In the study of digital signal processing (DSP), the unit sample function represents a special case of a 2-dimensional Kronecker delta function where the Kronecker indices include the number zero, and where one of the indices is zero. In this case: Or more generally where: However, this is only a special case. In tensor calculus, it is more common to number basis vectors in a particular dimension starting with index 1, rather than index 0. In this case, the relation does not exist, and in fact, the Kronecker delta function and the unit sample function are d
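The defining property and the identity-matrix and inner-product identities above can be checked directly; the sketch below (assuming NumPy, with arbitrary example vectors) is one minimal way to do so, including the single-argument DSP convention.

```python
import numpy as np

def kronecker_delta(i, j):
    """1 if the two indices are equal, 0 otherwise."""
    return 1 if i == j else 0

n = 4
# The identity matrix has entries I[i, j] = delta_{ij}.
I = np.array([[kronecker_delta(i, j) for j in range(n)] for i in range(n)])
assert np.array_equal(I, np.eye(n, dtype=int))

# Inner product written with the delta: a . b = sum_{i,j} a_i delta_{ij} b_j.
a = np.array([1.0, 2.0, 3.0, 4.0])
b = np.array([4.0, 3.0, 2.0, 1.0])
double_sum = sum(a[i] * kronecker_delta(i, j) * b[j]
                 for i in range(n) for j in range(n))
assert np.isclose(double_sum, a @ b)   # the delta collapses the double sum

# Single-argument (DSP) convention: the unit sample is 1 only at index 0.
unit_sample = np.array([kronecker_delta(k, 0) for k in range(-3, 4)])
print(unit_sample)   # [0 0 0 1 0 0 0]
```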
https://en.wikipedia.org/wiki/Elementary%20proof
In mathematics, an elementary proof is a mathematical proof that only uses basic techniques. More specifically, the term is used in number theory to refer to proofs that make no use of complex analysis. Historically, it was once thought that certain theorems, like the prime number theorem, could only be proved by invoking "higher" mathematical theorems or techniques. However, as time progresses, many of these results have also been subsequently reproven using only elementary techniques. While there is generally no consensus as to what counts as elementary, the term is nevertheless a common part of the mathematical jargon. An elementary proof is not necessarily simple, in the sense of being easy to understand or trivial. In fact, some elementary proofs can be quite complicated — and this is especially true when a statement of notable importance is involved. Prime number theorem The distinction between elementary and non-elementary proofs has been considered especially important in regard to the prime number theorem. This theorem was first proved in 1896 by Jacques Hadamard and Charles Jean de la Vallée-Poussin using complex analysis. Many mathematicians then attempted to construct elementary proofs of the theorem, without success. G. H. Hardy expressed strong reservations; he considered that the essential "depth" of the result ruled out elementary proofs: However, in 1948, Atle Selberg produced new methods which led him and Paul Erdős to find elementary proofs of the prime number theorem. A possible formalization of the notion of "elementary" in connection to a proof of a number-theoretical result is the restriction that the proof can be carried out in Peano arithmetic. Also in that sense, these proofs are elementary. Friedman's conjecture Harvey Friedman conjectured, "Every theorem published in the Annals of Mathematics whose statement involves only finitary mathematical objects (i.e., what logicians call an arithmetical statement) can be proved in elementar
https://en.wikipedia.org/wiki/Hilbert%E2%80%93Huang%20transform
The Hilbert–Huang transform (HHT) is a way to decompose a signal into so-called intrinsic mode functions (IMF) along with a trend, and obtain instantaneous frequency data. It is designed to work well for data that is nonstationary and nonlinear. In contrast to other common transforms like the Fourier transform, the HHT is an algorithm that can be applied to a data set, rather than a theoretical tool. The Hilbert–Huang transform (HHT), a NASA designated name, was proposed by Norden E. Huang et al. (1996, 1998, 1999, 2003, 2012). It is the result of the empirical mode decomposition (EMD) and the Hilbert spectral analysis (HSA). The HHT uses the EMD method to decompose a signal into so-called intrinsic mode functions (IMF) with a trend, and applies the HSA method to the IMFs to obtain instantaneous frequency data. Since the signal is decomposed in time domain and the length of the IMFs is the same as the original signal, HHT preserves the characteristics of the varying frequency. This is an important advantage of HHT since a real-world signal usually has multiple causes happening in different time intervals. The HHT provides a new method of analyzing nonstationary and nonlinear time series data. Definition Empirical mode decomposition The fundamental part of the HHT is the empirical mode decomposition (EMD) method. Breaking down signals into various components, EMD can be compared with other analysis methods such as Fourier transform and Wavelet transform. Using the EMD method, any complicated data set can be decomposed into a finite and often small number of components. These components form a complete and nearly orthogonal basis for the original signal. In addition, they can be described as intrinsic mode functions (IMF). Because the first IMF usually carries the most oscillating (high-frequency) components, it can be rejected to remove high-frequency components (e.g., random noise). EMD based smoothing algorithms have been widely used in seismic data processi
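A full EMD implementation (the iterative sifting procedure) is beyond a short example, but the Hilbert spectral analysis step can be sketched compactly. The snippet below assumes SciPy and uses a synthetic chirp as a stand-in for a single IMF; it extracts instantaneous amplitude and instantaneous frequency from the analytic signal.

```python
import numpy as np
from scipy.signal import hilbert

# A toy "IMF": a chirp whose instantaneous frequency rises from ~2 Hz to ~6 Hz.
fs = 1000.0
t = np.arange(0, 4.0, 1.0 / fs)
imf = np.sin(2 * np.pi * (2.0 * t + 0.5 * t**2))

# Hilbert spectral analysis step: build the analytic signal, then read off
# instantaneous amplitude and instantaneous frequency.
analytic = hilbert(imf)
amplitude = np.abs(analytic)
phase = np.unwrap(np.angle(analytic))
inst_freq = np.diff(phase) / (2 * np.pi) * fs   # in Hz

print(inst_freq[100], inst_freq[-100])   # ~2 Hz near the start, ~6 Hz near the end
```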
https://en.wikipedia.org/wiki/Peer%20group%20%28computer%20networking%29
In computer networking, a peer group is a group of functional units in the same layer of a network (see e.g. the OSI model), by analogy with a social peer group. See also peer-to-peer (P2P) networking, a type of networking that relies on essentially equal end hosts rather than on a hierarchy of devices.
https://en.wikipedia.org/wiki/Probabilistically%20checkable%20proof
In computational complexity theory, a probabilistically checkable proof (PCP) is a type of proof that can be checked by a randomized algorithm using a bounded amount of randomness and reading a bounded number of bits of the proof. The algorithm is then required to accept correct proofs and reject incorrect proofs with very high probability. A standard proof (or certificate), as used in the verifier-based definition of the complexity class NP, also satisfies these requirements, since the checking procedure deterministically reads the whole proof, always accepts correct proofs and rejects incorrect proofs. However, what makes them interesting is the existence of probabilistically checkable proofs that can be checked by reading only a few bits of the proof using randomness in an essential way. Probabilistically checkable proofs give rise to many complexity classes depending on the number of queries required and the amount of randomness used. The class PCP[r(n),q(n)] refers to the set of decision problems that have probabilistically checkable proofs that can be verified in polynomial time using at most r(n) random bits and by reading at most q(n) bits of the proof. Unless specified otherwise, correct proofs should always be accepted, and incorrect proofs should be rejected with probability greater than 1/2. The PCP theorem, a major result in computational complexity theory, states that PCP[O(log n),O(1)] = NP. Definition Given a decision problem L (or a language L with its alphabet set Σ), a probabilistically checkable proof system for L with completeness c(n) and soundness s(n), where 0 ≤ s(n) ≤ c(n) ≤ 1, consists of a prover and a verifier. Given a claimed solution x with length n, which might be false, the prover produces a proof π which states x solves L (x ∈ L, the proof is a string ∈ Σ*). And the verifier is a randomized oracle Turing Machine V (the verifier) that checks the proof π for the statement that x solves L(or x ∈ L) and decides whether to accept the
https://en.wikipedia.org/wiki/Western%20Digital%20FD1771
The FD1771, sometimes WD1771, is the first in a line of floppy disk controllers produced by Western Digital. It uses single density FM encoding introduced in the IBM 3740. Later models in the series added support for MFM encoding and increasingly added onboard circuitry that formerly had to be implemented in external components. Originally packaged as 40-pin dual in-line package (DIP) format, later models moved to a 28-pin format that further lowered implementation costs. Derivatives The FD1771 was succeeded by many derivatives that were mostly software-compatible: The FD1781 was designed for double density, but required external modulation and demodulation circuitry, so it could support MFM, M2FM, GCR or other double-density encodings. The FD1791-FD1797 series added internal support for double density (MFM) modulation, compatible with the IBM System/34 disk format. They required an external data separator. The WD1761-WD1767 series were versions of the FD179x series rated for a maximum clock frequency of 1 MHz, resulting in a data rate limit of 125 kbit/s for single density and 250 kbit/s for double density, thus preventing them from being used for 8-in (200 mm) floppy drives or the later "high-density" or 90 mm floppy drives. These were sold at a lower price point and widely used in home computer floppy drives. The WD2791-WD2797 series added an internal data separator using an analog phase-locked loop, with some external passive components required for the VCO. They took a 1 MHz or 2 MHz clock and were intended for and drives. The WD1770, WD1772, and WD1773 added an internal digital data separator and write precompensator, eliminating the need for external passive components but raising the clock rate requirement to 8 MHz. They supported double density, despite the apparent regression of the part number, and were packaged in 28-pin DIP packages. The WD1772PH02-02 was a version of the chip that Atari fitted to the Atari STE which supported high density (500
https://en.wikipedia.org/wiki/Audio%20leveler
An audio leveler performs an audio process similar to compression, which is used to reduce the dynamic range of a signal, so that the quietest portion of the signal is loud enough to hear and the loudest portion is not too loud. Levelers work especially well with vocals, as there are huge dynamic differences in the human voice and levelers work in such a way as to sound very natural, letting the character of the sound change with the different levels but still maintaining a predictable and usable dynamic range. A leveler is different from a compressor in that the ratio and threshold are controlled with a single control. External links TLA-100 Tube Levelling Amplifier by Summit Audio
https://en.wikipedia.org/wiki/Placement%20%28electronic%20design%20automation%29
Placement is an essential step in electronic design automation — the portion of the physical design flow that assigns exact locations for various circuit components within the chip's core area. An inferior placement assignment will not only affect the chip's performance but might also make it non-manufacturable by producing excessive wire-length, which is beyond available routing resources. Consequently, a placer must perform the assignment while optimizing a number of objectives to ensure that a circuit meets its performance demands. Together, the placement and routing steps of IC design are known as place and route. A placer takes a given synthesized circuit netlist together with a technology library and produces a valid placement layout. The layout is optimized according to the aforementioned objectives and ready for cell resizing and buffering — a step essential for timing and signal integrity satisfaction. Clock-tree synthesis and Routing follow, completing the physical design process. In many cases, parts of, or the entire, physical design flow are iterated a number of times until design closure is achieved. In the case of application-specific integrated circuits, or ASICs, the chip's core layout area comprises a number of fixed height rows, with either some or no space between them. Each row consists of a number of sites which can be occupied by the circuit components. A free site is a site that is not occupied by any component. Circuit components are either standard cells, macro blocks, or I/O pads. Standard cells have a fixed height equal to a row's height, but have variable widths. The width of a cell is an integral number of sites. On the other hand, blocks are typically larger than cells and have variable heights that can stretch a multiple number of rows. Some blocks can have preassigned locations — say from a previous floorplanning process — which limit the placer's task to assigning locations for just the cells. In this case, the blocks are typicall
https://en.wikipedia.org/wiki/Bhabha%20scattering
In quantum electrodynamics, Bhabha scattering is the electron-positron scattering process e+ e− → e+ e−. There are two leading-order Feynman diagrams contributing to this interaction: an annihilation process and a scattering process. Bhabha scattering is named after the Indian physicist Homi J. Bhabha. The Bhabha scattering rate is used as a luminosity monitor in electron-positron colliders. Differential cross section To leading order, the spin-averaged differential cross section for this process is dσ/dΩ = (α²/2s) [u²(1/s + 1/t)² + (t/s)² + (s/t)²], where s, t, and u are the Mandelstam variables, α is the fine-structure constant, and θ is the scattering angle. This cross section is calculated neglecting the electron mass relative to the collision energy and including only the contribution from photon exchange. This is a valid approximation at collision energies small compared to the mass scale of the Z boson, about 91 GeV; at higher energies the contribution from Z boson exchange also becomes important. Mandelstam variables In this article, the Mandelstam variables are defined by s = (k + p)² = (k' + p')² ≈ 2k·p ≈ 2k'·p', t = (p − p')² = (k − k')² ≈ −2p·p' ≈ −2k·k', and u = (k − p')² = (p − k')² ≈ −2k·p' ≈ −2p·k', where the approximations are for the high-energy (relativistic) limit. Deriving unpolarized cross section Matrix elements Both the scattering and annihilation diagrams contribute to the transition matrix element. By letting k and k' represent the four-momentum of the positron, while letting p and p' represent the four-momentum of the electron, and by using the Feynman rules, one can write down the matrix elements of the two diagrams: one for the scattering (photon-exchange) channel and one for the annihilation channel, expressed in terms of the gamma matrices and the four-component spinors for fermions and anti-fermions (see Four spinors).
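For illustration only, the leading-order photon-exchange formula quoted above can be evaluated numerically; the sketch below (natural units, electron mass neglected, example energy and angle chosen arbitrarily) is one way to do this. Note that the expression diverges as the scattering angle approaches zero, where t → 0.

```python
import numpy as np

alpha = 1.0 / 137.035999   # fine-structure constant

def bhabha_dsigma_domega(sqrt_s_gev, theta):
    """Leading-order, spin-averaged Bhabha cross section dsigma/dOmega
    (photon exchange only, electron mass neglected), in natural units GeV^-2."""
    s = sqrt_s_gev**2
    t = -0.5 * s * (1.0 - np.cos(theta))
    u = -0.5 * s * (1.0 + np.cos(theta))
    return alpha**2 / (2.0 * s) * (u**2 * (1.0 / s + 1.0 / t)**2
                                   + (t / s)**2 + (s / t)**2)

# Example: 10 GeV centre-of-mass energy at a 90-degree scattering angle,
# converted to millibarn with 1 GeV^-2 ~ 0.3894 mb.
GEV2_TO_MB = 0.3894
value = bhabha_dsigma_domega(10.0, np.pi / 2) * GEV2_TO_MB
print(f"dsigma/dOmega ~ {value:.3e} mb/sr")
```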
https://en.wikipedia.org/wiki/Dottie%20number
In mathematics, the Dottie number is a constant that is the unique real root of the equation cos(x) = x, where the argument of the cosine is in radians. The decimal expansion of the Dottie number begins 0.73908513321516... Since the function cos(x) − x is strictly decreasing, it only crosses zero at one point. This implies that the equation has only one real solution. It is the single real-valued fixed point of the cosine function and is a nontrivial example of a universal attracting fixed point. It is also a transcendental number because of the Lindemann–Weierstrass theorem. The generalised case for a complex variable has infinitely many roots, but unlike the Dottie number, they are not attracting fixed points. Using the Taylor series of the inverse of cos(x) − x (or equivalently, the Lagrange inversion theorem), the Dottie number can be expressed as an infinite series whose coefficients are rational numbers defined for odd n. The name of the constant originates from a professor of French named Dottie who observed the number by repeatedly pressing the cosine button on her calculator. If a calculator is set to take angles in degrees, the sequence of numbers will instead converge to approximately 0.9998477, the root of cos(πx/180) = x. Closed form The Dottie number can be expressed in terms of the inverse regularized Beta function. This value can be obtained using Kepler's equation. In Microsoft Excel and LibreOffice Calc spreadsheets, and in the Mathematica computer algebra system, the Dottie number can be expressed in closed form. Integral representations The Dottie number also admits several integral representations.
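The "repeatedly pressing the cosine button" description translates directly into a fixed-point iteration; the short sketch below (iteration counts chosen arbitrarily) reproduces the radian-mode and degree-mode limits.

```python
import math

# Radian mode: repeatedly pressing the cosine button, i.e. iterate x -> cos(x).
x = 1.0
for _ in range(100):
    x = math.cos(x)
print(x)                   # 0.7390851332151607 (the Dottie number)
print(math.cos(x) - x)     # ~0: x is a fixed point of cos

# Degree mode converges to a different fixed point, the root of cos(pi*x/180) = x.
y = 1.0
for _ in range(100):
    y = math.cos(math.radians(y))
print(y)                   # about 0.9998477
```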
https://en.wikipedia.org/wiki/Probabilistic%20method
In mathematics, the probabilistic method is a nonconstructive method, primarily used in combinatorics and pioneered by Paul Erdős, for proving the existence of a prescribed kind of mathematical object. It works by showing that if one randomly chooses objects from a specified class, the probability that the result is of the prescribed kind is strictly greater than zero. Although the proof uses probability, the final conclusion is determined for certain, without any possible error. This method has now been applied to other areas of mathematics such as number theory, linear algebra, and real analysis, as well as in computer science (e.g. randomized rounding), and information theory. Introduction If every object in a collection of objects fails to have a certain property, then the probability that a random object chosen from the collection has that property is zero. Similarly, showing that the probability is (strictly) less than 1 can be used to prove the existence of an object that does not satisfy the prescribed properties. Another way to use the probabilistic method is by calculating the expected value of some random variable. If it can be shown that the random variable can take on a value less than the expected value, this proves that the random variable can also take on some value greater than the expected value. Alternatively, the probabilistic method can also be used to guarantee the existence of a desired element in a sample space with a value that is greater than or equal to the calculated expected value, since the non-existence of such element would imply every element in the sample space is less than the expected value, a contradiction. Common tools used in the probabilistic method include Markov's inequality, the Chernoff bound, and the Lovász local lemma. Two examples due to Erdős Although others before him proved theorems via the probabilistic method (for example, Szele's 1943 result that there exist tournaments containing a large number of Hamilton
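As an illustration of the expected-value argument, the sketch below reproduces Erdős's classical lower bound for the diagonal Ramsey numbers: if the expected number of monochromatic K_k in a random 2-colouring of K_n is below 1, some colouring has none, so R(k, k) > n. The chosen values of k are arbitrary, and the resulting bound is valid but far from tight.

```python
from math import comb

def ramsey_lower_bound(k):
    """Largest n for which the first-moment argument guarantees a 2-colouring
    of K_n with no monochromatic K_k (so R(k, k) > n)."""
    n = k
    # Expected number of monochromatic K_k: C(n, k) * 2^(1 - C(k, 2)).
    while comb(n + 1, k) * 2 ** (1 - comb(k, 2)) < 1:
        n += 1
    return n

for k in (4, 5, 6, 10):
    n = ramsey_lower_bound(k)
    expected = comb(n, k) * 2 ** (1 - comb(k, 2))
    print(f"k={k}: expected monochromatic K_{k} count in a random colouring "
          f"of K_{n} is {expected:.3f} < 1, hence R({k},{k}) > {n}")
```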
https://en.wikipedia.org/wiki/Motorola%2068881
The Motorola 68881 and Motorola 68882 are floating-point units (FPUs) used in some computer systems in conjunction with Motorola's 32-bit 68020 or 68030 microprocessors. These coprocessors are external chips, designed before floating point math became standard on CPUs. The Motorola 68881 was introduced in 1984. The 68882 is a higher performance version produced later. Overview The 68020 and 68030 CPUs were designed with the separate 68881 chip in mind. Their instruction sets reserved the "F-line" instructions – that is, all opcodes beginning with the hexadecimal digit "F" could either be forwarded to an external coprocessor or be used as "traps" which would throw an exception, handing control to the computer's operating system. If an FPU is not present in the system, the OS would then either call an FPU emulator to execute the instruction's equivalent using 68020 integer-based software code, return an error to the program, terminate the program, or crash and require a reboot. Architecture The 68881 has eight 80-bit data registers (a 64-bit mantissa plus a sign bit, and a 15-bit signed exponent). It allows seven different modes of numeric representation, including single-precision floating point, double-precision floating point, extended-precision floating point, integers as 8-, 16- and 32-bit quantities and a floating-point Binary-coded decimal format. The binary floating point formats are as defined by the IEEE 754 floating-point standard. It was designed specifically for floating-point math and is not a general-purpose CPU. For example, when an instruction requires any address calculations, the main CPU handles them before the 68881 takes control. The CPU/FPU pair are designed such that both can run at the same time. When the CPU encounters a 68881 instruction, it hands the FPU all operands needed for that instruction, and then the FPU releases the CPU to go on and execute the next instruction. 68882 The 68882 is an improved version of the 68881, with b
https://en.wikipedia.org/wiki/Nonlinear%20system
In mathematics and science, a nonlinear system (or a non-linear system) is a system in which the change of the output is not proportional to the change of the input. Nonlinear problems are of interest to engineers, biologists, physicists, mathematicians, and many other scientists since most systems are inherently nonlinear in nature. Nonlinear dynamical systems, describing changes in variables over time, may appear chaotic, unpredictable, or counterintuitive, contrasting with much simpler linear systems. Typically, the behavior of a nonlinear system is described in mathematics by a nonlinear system of equations, which is a set of simultaneous equations in which the unknowns (or the unknown functions in the case of differential equations) appear as variables of a polynomial of degree higher than one or in the argument of a function which is not a polynomial of degree one. In other words, in a nonlinear system of equations, the equation(s) to be solved cannot be written as a linear combination of the unknown variables or functions that appear in them. Systems can be defined as nonlinear, regardless of whether known linear functions appear in the equations. In particular, a differential equation is linear if it is linear in terms of the unknown function and its derivatives, even if nonlinear in terms of the other variables appearing in it. As nonlinear dynamical equations are difficult to solve, nonlinear systems are commonly approximated by linear equations (linearization). This works well up to some accuracy and some range for the input values, but some interesting phenomena such as solitons, chaos, and singularities are hidden by linearization. It follows that some aspects of the dynamic behavior of a nonlinear system can appear to be counterintuitive, unpredictable or even chaotic. Although such chaotic behavior may resemble random behavior, it is in fact not random. For example, some aspects of the weather are seen to be chaotic, where simple changes in one part
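A standard toy example of such behaviour is the logistic map; the sketch below (parameter values chosen arbitrarily) shows two nearly identical initial conditions separating completely, something no linearization can reproduce.

```python
# The logistic map x -> r*x*(1 - x) is a classic nonlinear system: for r = 4,
# two trajectories that start almost identically end up completely different.
r = 4.0
x, y = 0.2, 0.2 + 1e-10     # nearly identical initial conditions

for n in range(60):
    x = r * x * (1.0 - x)
    y = r * y * (1.0 - y)
    if n % 10 == 9:
        print(f"step {n + 1:2d}:  |x - y| = {abs(x - y):.3e}")

# By contrast, the linear map x -> 0.5*x keeps nearby trajectories close forever.
```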
https://en.wikipedia.org/wiki/LGM-30%20Minuteman
The LGM-30 Minuteman is an American land-based intercontinental ballistic missile (ICBM) in service with the Air Force Global Strike Command. The LGM-30G Minuteman III version is the only land-based ICBM in service in the United States and represents the land leg of the U.S. nuclear triad, along with the Trident II submarine-launched ballistic missile (SLBM) and nuclear weapons carried by long-range strategic bombers. Development of the Minuteman began in the mid-1950s when basic research indicated that a solid-fuel rocket motor could stand ready to launch for long periods of time, in contrast to liquid-fueled rockets that required fueling before launch and so might be destroyed in a surprise attack. The missile was named for the colonial minutemen of the American Revolutionary War, who could be ready to fight on short notice. The Minuteman entered service in 1962 as a deterrence weapon that could hit Soviet cities with a second strike and countervalue counterattack if the U.S. was attacked. However, the development of the United States Navy (USN) UGM-27 Polaris, which addressed the same role, allowed the Air Force to modify the Minuteman, boosting its accuracy enough to attack hardened military targets, including Soviet missile silos. The Minuteman II entered service in 1965 with a host of upgrades to improve its accuracy and survivability in the face of an anti-ballistic missile (ABM) system the Soviets were known to be developing. In 1970, the Minuteman III became the first deployed ICBM with multiple independently targetable reentry vehicles (MIRV): three smaller warheads that improved the missile's ability to strike targets defended by ABMs. They were initially armed with the W62 warhead with a yield of 170 kilotons. By the 1970s, 1,000 Minuteman missiles were deployed. This force has shrunk to 400 Minuteman III missiles, deployed in missile silos around Malmstrom AFB, Montana; Minot AFB, North Dakota; and Francis E. Warren AFB, Wyoming. The Minuteman III
https://en.wikipedia.org/wiki/Network%20Protocol%20Virtualization
Network Protocol Virtualization or Network Protocol Stack Virtualization is a concept of providing network connections as a service, without requiring the application developer to decide the exact composition of the communication stack. Concept Network Protocol Virtualization (NPV) was first proposed by Heuschkel et al. in 2015 as a rough sketch, as part of a transition concept for network protocol stacks. The concept evolved and was published in a deployable state in 2018. The key idea is to decouple applications from their communication stacks. Today the socket API requires the application developer to compose the communication stack by hand, by choosing between IPv4/IPv6 and UDP/TCP. NPV proposes that the network protocol stack should instead be tailored to the observed network environment (e.g. link-layer technology or current network performance). Thus, the network stack should not be composed at development time but at runtime, and it must be possible to adapt it when needed. Additionally, the decoupling relaxes the constraints of the ISO OSI network layer model and thus enables alternative concepts for communication stacks. Heuschkel et al. propose the concept of application-layer middleboxes as an example of adding further layers to the communication stack to enrich the communication with useful services (e.g. HTTP optimizations). The figure illustrates the dataflow: applications interface to the NPV software through some kind of API. Heuschkel et al. proposed socket-API-equivalent replacements but envision more sophisticated interfaces for future applications. The application payload is assigned by a scheduler to one (of potentially many) communication stacks, where it is processed into network packets that are sent using the networking hardware. A management component decides how communication stacks are composed and what the scheduling scheme should be. To support these decisions, a management interface is provided to integrate the management system into software-defined networking contexts. NPV has bee
https://en.wikipedia.org/wiki/Information-centric%20networking%20caching%20policies
In computing, cache algorithms (also frequently called cache replacement algorithms or cache replacement policies) are optimizing instructions, or algorithms, that a computer program or a hardware-maintained structure can follow in order to manage a cache of information stored on the computer. When the cache is full, the algorithm must choose which items to discard to make room for the new ones. Due to the inherent caching capability of nodes in information-centric networking (ICN), the ICN can be viewed as a loosely connected network of caches, which has unique requirements for caching policies. Unlike proxy servers, in information-centric networking the cache is a network-level solution. Therefore, it has rapidly changing cache states and higher request arrival rates; moreover, smaller cache sizes further impose different kinds of requirements on the content eviction policies. In particular, eviction policies for information-centric networking should be fast and lightweight. Various cache replication and eviction schemes for different information-centric networking architectures and applications have been proposed. Policies Time aware least recently used (TLRU) The Time-aware Least Recently Used (TLRU) policy is a variant of LRU designed for the situation where the stored contents in the cache have a valid lifetime. The algorithm is suitable for network cache applications, such as information-centric networking (ICN), content delivery networks (CDNs) and distributed networks in general. TLRU introduces a new term: TTU (Time to Use). TTU is a time stamp of a content/page which stipulates the usability time for the content based on the locality of the content and the content publisher announcement. Owing to this locality-based time stamp, TTU provides more control to the local administrator to regulate in-network storage. In the TLRU algorithm, when a piece of content arrives, a cache node calculates the local TTU value based on the TTU value assigned by the content publisher. The local TTU
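The following sketch is a simplified reading of a TLRU-style cache, not the exact published algorithm: entries carry a locally capped time-to-use, expired entries are preferred eviction victims, and otherwise the least recently used entry is evicted. All class and parameter names are illustrative.

```python
import time
from collections import OrderedDict

class TLRUCache:
    """Simplified sketch of Time-aware LRU: each entry carries a local
    time-to-use (TTU); stale entries are preferred victims over the plain
    LRU victim. Illustrative only."""

    def __init__(self, capacity, default_ttu=30.0):
        self.capacity = capacity
        self.default_ttu = default_ttu      # local cap on any publisher TTU
        self.store = OrderedDict()          # key -> (value, expiry_time)

    def get(self, key):
        item = self.store.get(key)
        if item is None or item[1] < time.time():
            self.store.pop(key, None)       # missing or expired
            return None
        self.store.move_to_end(key)         # mark as recently used
        return item[0]

    def put(self, key, value, publisher_ttu=None):
        # Local TTU: here simply the publisher's value capped by a local maximum.
        ttu = min(publisher_ttu or self.default_ttu, self.default_ttu)
        if key in self.store:
            self.store.move_to_end(key)
        elif len(self.store) >= self.capacity:
            now = time.time()
            expired = [k for k, (_, exp) in self.store.items() if exp < now]
            victim = expired[0] if expired else next(iter(self.store))  # else LRU
            del self.store[victim]
        self.store[key] = (value, time.time() + ttu)

cache = TLRUCache(capacity=2)
cache.put("/video/seg1", b"...", publisher_ttu=10.0)
cache.put("/video/seg2", b"...", publisher_ttu=60.0)
print(cache.get("/video/seg1") is not None)   # True
```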
https://en.wikipedia.org/wiki/Software%20construction
Software construction is a software engineering discipline. It is the detailed creation of working meaningful software through a combination of coding, verification, unit testing, integration testing, and debugging. It is linked to all the other software engineering disciplines, most strongly to software design and software testing. Software construction fundamentals Minimizing complexity The need to reduce complexity is mainly driven by limited ability of most people to hold complex structures and information in their working memories. Reduced complexity is achieved through emphasizing the creation of code that is simple and readable rather than clever. Minimizing complexity is accomplished through making use of standards, and through numerous specific techniques in coding. It is also supported by the construction-focused quality techniques. Anticipating change Anticipating change helps software engineers build extensible software, which means they can enhance a software product without disrupting the underlying structure. Research over 25 years showed that the cost of rework can be 10 to 100 times (5 to 10 times for smaller projects) more expensive than getting the requirements right the first time. Given that 25% of the requirements change during development on average project, the need to reduce the cost of rework elucidates the need for anticipating change. Constructing for verification Constructing for verification means building software in such a way that faults can be ferreted out readily by the software engineers writing the software, as well as during independent testing and operational activities. Specific techniques that support constructing for verification include following coding standards to support code reviews, unit testing, organizing code to support automated testing, and restricted use of complex or hard-to-understand language structures, among others. Reuse Systematic reuse can enable significant software productivity, quality, an
https://en.wikipedia.org/wiki/Banana%20peel
A banana peel, called banana skin in British English, is the outer covering of the banana fruit. Banana peels are used as food for animals, an ingredient in cooking, in water purification, for manufacturing of several biochemical products as well as for jokes and comical situations. There are several methods to remove a peel from a banana. Use Bananas are a popular fruit consumed worldwide with a yearly production of over 165 million tonnes in 2011. Once the peel is removed, the fruit can be eaten raw or cooked and the peel is generally discarded. Because of this removal of the banana peel, a significant amount of organic waste is generated. Banana peels are sometimes used as feedstock for cattle, goats, pigs, monkeys, poultry, rabbits, fish, zebras and several other species, typically on small farms in regions where bananas are grown. There are some concerns over the impact of tannins contained in the peels on animals that consume them. The nutritional value of banana peel depends on the stage of maturity and the cultivar; for example plantain peels contain less fibre than dessert banana peels, and lignin content increases with ripening (from 7 to 15% dry matter). On average, banana peels contain 6-9% dry matter of protein and 20-30% fibre (measured as NDF). Green plantain peels contain 40% starch that is transformed into sugars after ripening. Green banana peels contain much less starch (about 15%) when green than plantain peels, while ripe banana peels contain up to 30% free sugars. Banana peels are also used for water purification, to produce ethanol, cellulase, laccase, as fertilizer and in composting. Culinary use Cooking with banana peel is common place in Southeast Asian, Indian and Venezuelan cuisine where the peel of bananas and plantains is used in recipes. In April 2019, a vegan pulled pork recipe using banana peel by food blogger Melissa Copeland aka The Stingy Vegan went viral. In 2020, The Great British Bake Off winner Nadiya Hussain revealed
https://en.wikipedia.org/wiki/Network%20equipment%20provider
Network equipment providers (NEPs) – sometimes called telecommunications equipment manufacturers (TEMs) – sell products and services to communication service providers such as fixed or mobile operators as well as to enterprise customers. NEP technology allows for calls on mobile phones, Internet surfing, joining conference calls, or watching video on demand through IPTV (internet protocol TV). The history of the NEPs goes back to the mid-19th century when the first telegraph networks were set up. Some of these players still exist today. Telecommunications equipment manufacturers The terminology of the traditional telecommunications industry has rapidly evolved during the Information Age. The terms "Network" and "Telecoms" are often used interchangeably. The same is true for "provider" and "manufacturer". Historically, NEPs have sold integrated hardware/software systems to carriers such as NTT-DoCoMo, AT&T, Sprint, and so on. They purchase hardware from TEMs (telecom equipment manufacturers), such as Vertiv, Kontron, and NEC, to name a few. TEMs are responsible for manufacturing the hardware, devices, and equipment the telecommunications industry requires. The distinction between NEP and TEM is sometimes blurred, because all the following phrases may imply NEP: Telecommunications equipment provider Telecommunications equipment industry Telecommunications equipment company Telecommunications equipment manufacturer (TEM) Telecommunications equipment technology Network equipment provider (NEP) Network equipment industry Network equipment companies Network equipment manufacturer Network equipment technology Services This is a highly competitive industry that includes telephone, cable, and data services segments. Products and services include: Mobile networks like GSM (Global System for Mobile Communication), Enhanced Data Rates for GSM Evolution (EDGE) or GPRS (General Packet Radio Service). Networks of this kind are typically also known as 2G and 2.5G net
https://en.wikipedia.org/wiki/Ansatz
In physics and mathematics, an ansatz (German, meaning "initial placement of a tool at a work piece"; plural ansätze) is an educated guess or an additional assumption made to help solve a problem, and which may later be verified to be part of the solution by its results. Use An ansatz is the establishment of the starting equation(s), the theorem(s), or the value(s) describing a mathematical or physical problem or solution. It typically provides an initial estimate or framework for the solution of a mathematical problem, and can also take into consideration the boundary conditions (in fact, an ansatz is sometimes thought of as a "trial answer" and an important technique in solving differential equations). After an ansatz, which constitutes nothing more than an assumption, has been established, the equations are solved more precisely for the general function of interest, which then constitutes a confirmation of the assumption. In essence, an ansatz makes assumptions about the form of the solution to a problem so as to make the solution easier to find. It has been demonstrated that machine learning techniques can be applied to provide initial estimates similar to those invented by humans and to discover new ones in case no ansatz is available. Examples Given a set of experimental data that looks to be clustered about a line, a linear ansatz could be made to find the parameters of the line by a least squares curve fit. Variational approximation methods use ansätze and then fit the parameters. Another example could be the mass, energy, and entropy balance equations that, considered simultaneously for purposes of the elementary operations of linear algebra, are the ansatz to most basic problems of thermodynamics. Another example of an ansatz is to suppose the solution of a homogeneous linear differential equation to take an exponential form, or a power form in the case of a difference equation. More generally, one can guess a particular solution of a system of equatio
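As a concrete instance of the exponential ansatz for a homogeneous linear differential equation, the SymPy sketch below (the equation and symbol names are chosen for illustration) substitutes y = e^(r t) into y'' − 3y' + 2y = 0, solves the resulting characteristic equation for the free parameter, and verifies the general solution.

```python
import sympy as sp

t, r = sp.symbols("t r")
y = sp.exp(r * t)               # ansatz: assume a solution of the form e^(r t)

# Substituting the ansatz into y'' - 3 y' + 2 y = 0 collapses the ODE to an
# algebraic (characteristic) equation for r.
ode_lhs = sp.diff(y, t, 2) - 3 * sp.diff(y, t) + 2 * y
characteristic = sp.simplify(ode_lhs / y)       # r**2 - 3*r + 2
roots = sp.solve(sp.Eq(characteristic, 0), r)
print(roots)                                    # [1, 2]

# The general solution is a linear combination of the confirmed guesses.
C1, C2 = sp.symbols("C1 C2")
general = C1 * sp.exp(roots[0] * t) + C2 * sp.exp(roots[1] * t)
residual = sp.diff(general, t, 2) - 3 * sp.diff(general, t) + 2 * general
print(sp.simplify(residual))                    # 0, confirming the ansatz
```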
https://en.wikipedia.org/wiki/Parts-per%20notation
In science and engineering, the parts-per notation is a set of pseudo-units to describe small values of miscellaneous dimensionless quantities, e.g. mole fraction or mass fraction. Since these fractions are quantity-per-quantity measures, they are pure numbers with no associated units of measurement. Commonly used are parts-per-million (ppm, 10⁻⁶), parts-per-billion (ppb, 10⁻⁹), parts-per-trillion (ppt, 10⁻¹²) and parts-per-quadrillion (ppq, 10⁻¹⁵). This notation is not part of the International System of Units (SI), and its meaning is ambiguous. Applications Parts-per notation is often used to describe dilute solutions in chemistry, for instance, the relative abundance of dissolved minerals or pollutants in water. The quantity "1 ppm" can be used for a mass fraction if a water-borne pollutant is present at one-millionth of a gram per gram of sample solution. When working with aqueous solutions, it is common to assume that the density of water is 1.00 g/mL. Therefore, it is common to equate 1 kilogram of water with 1 L of water. Consequently, 1 ppm corresponds to 1 mg/L and 1 ppb corresponds to 1 μg/L. Similarly, parts-per notation is also used in physics and engineering to express the value of various proportional phenomena. For instance, a special metal alloy might expand 1.2 micrometers per meter of length for every degree Celsius, and this would be expressed as 1.2 ppm/°C. Parts-per notation is also employed to denote the change, stability, or uncertainty in measurements. For instance, the accuracy of land-survey distance measurements when using a laser rangefinder might be 1 millimeter per kilometer of distance; this could be expressed as "Accuracy = 1 ppm." Parts-per notations are all dimensionless quantities: in mathematical expressions, the units of measurement always cancel. In fractions like "2 nanometers per meter", the length units cancel, so the quotients are pure-number coefficients with positive values less than or equal to 1. When parts-per notations, including the percent symbol (%), are used i
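The mass-fraction arithmetic described above is easy to mechanize; the sketch below (function names are illustrative, and the usual density assumption of 1.00 g/mL is made explicit) converts parts-per values of dilute aqueous solutions to mg/L and restates the thermal-expansion example in ppm per degree.

```python
# Parts-per arithmetic for dilute aqueous solutions, assuming the density of
# the solution is 1.00 g/mL (so 1 L of solution weighs 1 kg).

PPM = 1e-6
PPB = 1e-9

def mass_fraction_to_mg_per_l(fraction, density_g_per_ml=1.00):
    """Convert a mass fraction (a pure number) to mg of solute per litre."""
    grams_solution_per_l = density_g_per_ml * 1000.0
    return fraction * grams_solution_per_l * 1000.0       # g -> mg

print(mass_fraction_to_mg_per_l(1 * PPM))   # 1.0   -> 1 ppm corresponds to 1 mg/L
print(mass_fraction_to_mg_per_l(1 * PPB))   # 0.001 -> 1 ppb corresponds to 1 ug/L

# Proportional phenomena: an alloy that expands 1.2 micrometres per metre per
# degree Celsius has a thermal expansion coefficient of 1.2 ppm/degC.
expansion_per_degree = 1.2e-6 / 1.0          # (change in m) / (m of length) per degC
print(expansion_per_degree / PPM, "ppm/degC")   # 1.2
```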
https://en.wikipedia.org/wiki/Beat%20detection
In signal analysis, beat detection is the use of computer software or computer hardware to detect the beat of a musical score. There are many methods available, and beat detection is always a tradeoff between accuracy and speed. Beat detectors are common in music visualization software such as some media player plugins. The algorithms used may utilize simple statistical models based on sound energy or may involve sophisticated comb filter networks or other means. They may be fast enough to run in real time or may be so slow as to only be able to analyze short sections of songs. See also Pitch detection External links Beat This > Beat Detection Algorithm Audio Analysis using the Discrete Wavelet Transform
https://en.wikipedia.org/wiki/PHY-Level%20Collision%20Avoidance
PHY-Level Collision Avoidance (PLCA) is a component of the Ethernet reconciliation sublayer (between the PHY and the MAC) defined within IEEE 802.3 clause 148. The purpose of PLCA is to avoid collisions on the shared medium and the associated retransmission overhead. PLCA is used in 802.3cg (10BASE-T1), which focuses on bringing Ethernet connectivity to short-haul embedded internet of things and low-throughput, noise-tolerant, industrial deployment use cases. In order for a multidrop 10BASE-T1S standard to successfully compete with CAN XL, some kind of arbitration was necessary. The linear arbitration scheme of PLCA somewhat resembles that of Byteflight, but PLCA was designed from scratch to accommodate the existing shared-medium Ethernet MACs with their busy-sensing mechanisms. Operation Under a PLCA scheme, all nodes are assigned unique sequential numbers (IDs) in the range from 0 to N. ID zero corresponds to a special "master" node that during the idle intervals transmits the synchronization beacon (a special heartbeat frame). After the beacon (within the PLCA cycle) each node gets its transmission opportunity (TO). Each opportunity interval is very short (typically 20 bits), so the overhead for nodes that do not have anything to transmit is low. If the PLCA circuitry discovers that the node's TO cannot be used (another node with a lower ID has started its transmission and the medium is busy at the beginning of the TO for this node), it asserts the "local collision" input of the MAC, thus delaying the transmission. The condition is cleared once the node gets its TO. A standard MAC reacts to the local collision with a backoff; however, since this is the first and only backoff for this frame, the backoff interval is equal to the smallest possible frame, and the backoff timer will definitely expire by the time the TO is granted, so there is no additional loss of performance. See also Internet of things (IOT)
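The round-robin flavour of PLCA can be conveyed with a toy simulation. The sketch below models only the ordering of the beacon and the transmit opportunities within one cycle and ignores timing, collision signalling and MAC backoff, so it is an illustration rather than the IEEE 802.3 clause 148 state machine; all names are illustrative.

```python
NUM_NODES = 5   # node 0 is the "master" that sends the beacon

def plca_cycle(pending):
    """pending: set of node IDs that currently have a frame queued.
    Returns the order in which the medium is used during one PLCA cycle."""
    schedule = ["BEACON (node 0)"]
    for node_id in range(NUM_NODES):
        if node_id in pending:
            # The node uses its transmit opportunity to send one frame.
            schedule.append(f"DATA  (node {node_id})")
            pending.discard(node_id)
        else:
            # Unused opportunities cost only a few bit times of silence.
            schedule.append(f"TO {node_id}: silent (yielded after ~20 bit times)")
    return schedule

queued = {1, 3}
for step in plca_cycle(queued):
    print(step)
```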
https://en.wikipedia.org/wiki/Mathematical%20instrument
A mathematical instrument is a tool or device used in the study or practice of mathematics. In geometry, construction of various proofs was done using only a compass and straightedge; arguments in these proofs relied only on idealized properties of these instruments and literal construction was regarded as only an approximation. In applied mathematics, mathematical instruments were used for measuring angles and distances, in astronomy, navigation, surveying and in the measurement of time. Overview Instruments such as the astrolabe, the quadrant, and others were used to measure and accurately record the relative positions and movements of planets and other celestial objects. The sextant and other related instruments were essential for navigation at sea. Most instruments are used within the field of geometry, including the ruler, dividers, protractor, set square, compass, ellipsograph, T-square and opisometer. Others are used in arithmetic (for example the abacus, slide rule and calculator) or in algebra (the integraph). In astronomy, many have said the pyramids (along with Stonehenge) were actually instruments used for tracking the stars over long periods or for the annual planting seasons. In schools The Oxford Set of Mathematical Instruments is a set of instruments used by generations of school children in the United Kingdom and around the world in mathematics and geometry lessons. It includes two set squares, a 180° protractor, a 15 cm ruler, a metal compass, a 9 cm pencil, a pencil sharpener, an eraser and a 10mm stencil. See also The Construction and Principal Uses of Mathematical Instruments Dividing engine Measuring instrument Planimeter Integraph
https://en.wikipedia.org/wiki/Federal%20Networking%20Council
Informally established in the early 1990s, the Federal Networking Council (FNC) was later chartered by the US National Science and Technology Council's Committee on Computing, Information and Communications (CCIC) to continue to act as a forum for networking collaborations among US federal agencies to meet their research, education, and operational mission goals and to bridge the gap between the advanced networking technologies being developed by research FNC agencies and the ultimate acquisition of mature version of these technologies from the commercial sector. The FNC consisted of a group made up of representatives from the United States Department of Defense (DoD), the National Science Foundation, the Department of Energy, and the National Aeronautics and Space Administration (NASA), among others. By October 1997, the FNC advisory committee was de-chartered and many of the FNC activities were transferred to the Large Scale Networking group of the Computing, Information, and Communications (CIC) R&D subcommittee of the Networking and Information Technology Research and Development program, or the Applications Council. On October 24, 1995, the Federal Networking Council passed a resolution defining the term Internet: Resolution: The Federal Networking Council (FNC) agrees that the following language reflects our definition of the term ``Internet. ``Internet'' refers to the global information system that - (i) is logically linked together by a globally unique address space based on the Internet Protocol (IP) or its subsequent extensions/follow-ons; (ii) is able to support communications using the Transmission Control Protocol/Internet Protocol (TCP/IP) suite or its subsequent extensions/follow-ons, and/or other IP-compatible protocols; and (iii) provides, uses or makes accessible, either publicly or privately, high level services layered on the communications and related infrastructure described herein.' Some notable members of the council advisory committee i
https://en.wikipedia.org/wiki/Edge%20states
In physics, edge states are topologically protected electronic states that exist at the boundary of a material and cannot be removed without breaking the system's symmetry. Background In solid-state physics, quantum mechanics, materials science, physical chemistry and several other disciplines, the electronic band structure of materials is studied primarily in terms of the extent of the band gap, the gap between the highest occupied valence bands and the lowest unoccupied conduction bands. The possible energy levels of the material, which give the discrete energy values of all possible states in the energy profile diagram, are obtained by solving the Hamiltonian of the system; this solution provides the corresponding energy eigenvalues and eigenvectors. Based on the energy eigenvalues, conduction bands are the high-energy states (E > 0) while valence bands are the low-energy states (E < 0). In some materials, for example graphene and zigzag graphene quantum dots, there exist energy states with eigenvalues exactly equal to zero (E = 0) besides the conduction and valence bands. These states are called edge states, and they modify the electronic and optical properties of the material significantly.
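Zero-energy boundary states can be demonstrated with a minimal tight-binding calculation. The NumPy sketch below uses the one-dimensional Su–Schrieffer–Heeger (SSH) chain, chosen here purely as the simplest example of topologically protected edge states rather than the graphene systems discussed above; it finds two near-zero eigenvalues whose weight concentrates at the ends of the chain.

```python
import numpy as np

def ssh_hamiltonian(n_cells, v, w):
    """Open SSH chain: alternating intracell hopping v and intercell hopping w."""
    n = 2 * n_cells
    H = np.zeros((n, n))
    for i in range(n - 1):
        hop = v if i % 2 == 0 else w      # alternate weak/strong bonds
        H[i, i + 1] = H[i + 1, i] = hop
    return H

H = ssh_hamiltonian(n_cells=40, v=0.5, w=1.0)   # topological phase: v < w
energies, states = np.linalg.eigh(H)

# Two states sit (nearly) at E = 0, inside the bulk gap ...
zero_modes = np.argsort(np.abs(energies))[:2]
print("near-zero energies:", energies[zero_modes])

# ... and their weight is concentrated on the two ends of the chain.
for idx in zero_modes:
    prob = states[:, idx] ** 2
    print("weight on outermost 4 sites:", prob[:4].sum() + prob[-4:].sum())
```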
https://en.wikipedia.org/wiki/Test%20data
Test data plays a crucial role in software development by providing inputs that are used to verify the correctness, performance, and reliability of software systems. Test data encompasses various types, such as positive and negative scenarios, edge cases, and realistic user scenarios, and it aims to exercise different aspects of the software to uncover bugs and validate its behavior. By designing and executing test cases with appropriate test data, developers can identify and rectify defects, improve the quality of the software, and ensure it meets the specified requirements. Moreover, test data can also be used for regression testing to validate that new code changes or enhancements do not introduce any unintended side effects or break existing functionalities. Overall, the effective utilization of test data in software development significantly contributes to the production of reliable and robust software systems. Background Some data may be used in a confirmatory way, typically to verify that a given set of inputs to a given function produces some expected result. Other data may be used in order to challenge the ability of the program to respond to unusual, extreme, exceptional, or unexpected input. Test data may be produced in a focused or systematic way (as is typically the case in domain testing), or by using other, less-focused approaches (as is typically the case in high-volume randomized automated tests). Test data may be produced by the tester, or by a program or function that aids the tester. Test data may be recorded for reuse or used only once. Test data can be created manually, by using data generation tools (often based on randomness), or be retrieved from an existing production environment. The data set can consist of synthetic (fake) data, but preferably it consists of representative (real) data. Limitations Due to privacy rules and regulations like GDPR, PCI and HIPAA it is not allowed to use privacy sensitive personal data for testing. But an
https://en.wikipedia.org/wiki/Electroadhesion
Electroadhesion is the electrostatic effect of astriction between two surfaces subjected to an electrical field. Applications include the retention of paper on plotter surfaces, astrictive robotic prehension (electrostatic grippers) etc. Clamping pressures in the range of 0.5 to 1.5 N/cm2 (0.8 to 2.3 psi) have been claimed. An electroadhesive pad consists of conductive electrodes placed upon a polymer substrate. When alternate positive and negative charges are induced on adjacent electrodes, the resulting electric field sets up opposite charges on the surface that the pad touches, and thus causes electrostatic adhesion between the electrodes and the induced charges in the touched surface material. Electroadhesion can be loosely divided into two basic forms: that which concerns the prehension of electrically conducting materials where the general laws of capacitance hold (D = E ε) and that used with electrically insulating subjects where the more advanced theory of electrostatics (D = E ε + P) applies.
https://en.wikipedia.org/wiki/Bond-out%20processor
A bond-out processor is an emulation processor that takes the place of the microcontroller or microprocessor in the target board while an application is being developed and/or debugged. Bond-out processors have internal signals and buses brought out to external pins. The term bond-out derives from connecting (or bonding) the emulation circuitry to these external pins. These devices are designed to be used within an in-circuit emulator and are not typically used in any other kind of system. Bond-out pins were marked as no-connects in the first devices produced by Intel, and were usually not connected to anything on the ordinary production silicon. Later bond-out versions of the microprocessor were produced in a bigger package to provide more signals and functionality. Bond-out processors provide capabilities far beyond those of a simple ROM monitor. A ROM monitor is a firmware program that runs instead of the application code and provides a connection to a host computer to carry out debugging functions. In general, the ROM monitor uses part of the processor resources and shares the memory with the user code. Bond-out processors can handle complex breakpoints (even in ROM) and real-time traces of processor activity, while using no target resources. But this extra functionality comes at a high cost, as bond-outs have to be produced for in-circuit emulators only. Therefore, sometimes solutions similar to bond-outs are implemented with an ASIC or FPGA or a faster RISC processor that imitates the core processor code execution and peripherals.
https://en.wikipedia.org/wiki/Opal%20Storage%20Specification
The Opal Storage Specification is a set of specifications for features of data storage devices (such as hard disk drives and solid state drives) that enhance their security. For example, it defines a way of encrypting the stored data so that an unauthorized person who gains possession of the device cannot see the data. That is, it is a specification for self-encrypting drives (SED). The specification is published by the Trusted Computing Group Storage Workgroup. Overview The Opal SSC (Security Subsystem Class) is an implementation profile for storage devices built to: Protect the confidentiality of stored user data against unauthorized access once it leaves the owner's control (involving a power cycle and subsequent deauthentication). Enable interoperability between multiple SD vendors. Functions The Opal SSC encompasses these functions: Security provider support Interface communication protocol Cryptographic features Authentication Table management Access control and personalization Issuance SSC discovery Features Security Protocol 1 support Security Protocol 2 support Communications Protocol stack reset commands Security Radboud University researchers indicated in November 2018 that some hardware-encrypted SSDs, including some Opal implementations, had security vulnerabilities. Implementers of SSC Device companies Hitachi Intel Corporation Kingston Technology Lenovo Micron Technology Samsung SanDisk Seagate Technology as "Seagate Secure" Toshiba Storage controller companies Marvell Avago/LSI SandForce flash controllers Software companies Absolute Software Check Point Software Technologies Dell Data Protection Cryptomill McAfee Secude Softex Incorporated Sophos Symantec (Symantec supports OPAL drives, but does not support hardware-based encryption.) Trend Micro WinMagic OpalLock (supports self-encrypting-drive-capable SSDs and HDDs; developed by Fidelity Height LLC) Computer OEMs Dell HP Lenovo Fujitsu
https://en.wikipedia.org/wiki/IOIO
IOIO (pronounced yo-yo) is a series of open source PIC microcontroller-based boards that allow Android mobile applications to interact with external electronics. The device was invented by Ytai Ben-Tsvi in 2011, and was first manufactured by SparkFun Electronics. The name "IOIO" is inspired by the function of the device, which enables applications to receive external input ("I") and produce external output ("O"). Features The IOIO board contains a single PIC MCU that acts as a USB host/USB slave and communicates with an Android app running on a connected Android device. The board provides connectivity via USB, USB-OTG or Bluetooth, and is controllable from within an Android application using the Java API. In addition to basic digital input/output and analog input, the IOIO library also handles PWM, I2C, SPI, UART, input capture, capacitive sensing and advanced motor control. To connect to older Android devices that use USB 2.0 in slave mode, newer IOIO models use USB On-The-Go to act as a host for such devices. Some models also support the Google Open Accessory USB protocol. The IOIO motor control API can drive up to 9 motors and any number of binary actuators with synchronization and cycle-accurate precision. Developers may send a sequence of high-level commands to the IOIO, which performs the low-level waveform generation on-chip. The IOIO firmware supports three different kinds of motors: stepper motors, DC motors and servo motors. Device firmware may be updated on-site by the user. For first-generation devices, updating is performed using an Android device and the IOIO Manager application available on Google Play. Second-generation IOIO-OTG devices must be updated using a desktop computer running the IOIODude application. The IOIO supports both computers and Android devices as first-class hosts, and provides the exact same API on both types of devices. First-generation devices can only communicate with PCs over Bluetooth, while IOIO-OTG devices can use either Bluetooth
https://en.wikipedia.org/wiki/Sensor%20hub
A sensor hub is a microcontroller unit/coprocessor/DSP set that helps to integrate data from different sensors and process it. This technology can help off-load these jobs from a product's main central processing unit, thus reducing battery consumption and providing a performance improvement. Intel has the Intel Integrated Sensor Hub. Starting with Cherrytrail and Haswell, many Intel processors offer an on-package sensor hub. The Samsung Galaxy Note II, launched in 2012, was the first smartphone with a sensor hub. Examples Some devices with Snapdragon 800 series chips, including the HTC One (M8), Sony Xperia Z1, LG G2, etc., have a sensor hub, the Qualcomm Snapdragon Sensor Core, and all HiSilicon Kirin 920 devices have a sensor hub embedded in the chipset; its successor, the Kirin 925, integrates an i3 chip with the same function. Some other devices that do not use these chips but have an integrated sensor hub are listed below:
https://en.wikipedia.org/wiki/Fibre%20multi-object%20spectrograph
Fibre multi-object spectrograph (FMOS) is a facility instrument for the Subaru telescope on Mauna Kea in Hawaii. The instrument consists of a complex fibre-optic positioning system mounted at the prime focus of the telescope. Fibres are then fed to a pair of large spectrographs, each weighing nearly 3000 kg. The instrument will be used to look at the light from up to 400 stars or galaxies simultaneously over a field of view of 30 arcminutes (about the size of the full moon on the sky). The instrument will be used for a number of key programmes, including galaxy formation and evolution, and dark energy via a measurement of the rate at which the universe is expanding. Design, construction, operation It is currently being built by a consortium of institutes led by Kyoto University and Oxford University, with parts also being manufactured by the Rutherford Appleton Laboratory, Durham University and the Anglo-Australian Observatory. The instrument is scheduled for engineering first light in late 2008. OH-suppression The spectrographs use a technique called OH-suppression to increase the sensitivity of the observations: the incoming light from the fibres is dispersed to a relatively high resolution, and this spectrum forms an image on a pair of spherical mirrors which have been etched at the positions corresponding to the bright OH lines. This spectrum is then re-imaged through a second diffraction grating to allow the full spectrum (without the OH lines) to be imaged onto a single infrared detector.
https://en.wikipedia.org/wiki/Arcadia%20%28play%29
Arcadia (1993), written by English playwright Tom Stoppard, explores the relationship between past and present, order and disorder, certainty and uncertainty. It has been praised by many critics as the finest play from "one of the most significant contemporary playwrights" in the English language. In 2006, the Royal Institution of Great Britain named it one of the best science-related works ever written. Synopsis In 1809, Thomasina Coverly, the daughter of the house, is a precocious teenager with ideas about mathematics, nature, and physics well ahead of her time. She studies with her tutor Septimus Hodge, a friend of Lord Byron (an unseen guest in the house). In the present, writer Hannah Jarvis and literature professor Bernard Nightingale converge on the house: she is investigating a hermit who once lived on the grounds; he is researching a mysterious chapter in the life of Byron. As their studies unfold – with the help of Valentine Coverly, a post-graduate student in mathematical biology – the truth about what happened in Thomasina's time is gradually revealed. Scene 1 (Act 1) The play opens on 10 April 1809, in a garden-front room of the house. Septimus Hodge is trying to distract 13-year-old Thomasina from her curiosity about "carnal embrace" by challenging her to prove Fermat's Last Theorem; he also wants to focus on reading the poem "The Couch of Eros" by Ezra Chater, who with his wife is a guest at the house. Thomasina starts asking why jam mixed in rice pudding can never be unstirred, which leads her to the topic of determinism and to a beginning theory about chaotic shapes in nature. This is interrupted by Chater himself, who is angry that his wife was caught in the aforementioned "carnal embrace" with Septimus; he has come to demand a duel. Septimus tries to defuse the situation by heaping praise on Chater's "The Couch of Eros". The tactic works, because Chater does not know it was Septimus who had savaged an earlier work of his, "The Maid of Turkey".
https://en.wikipedia.org/wiki/Wave%20function%20collapse
In quantum mechanics, wave function collapse occurs when a wave function—initially in a superposition of several eigenstates—reduces to a single eigenstate due to interaction with the external world. This interaction is called an observation, and is the essence of a measurement in quantum mechanics, which connects the wave function with classical observables such as position and momentum. Collapse is one of the two processes by which quantum systems evolve in time; the other is the continuous evolution governed by the Schrödinger equation. Collapse is a black box for a thermodynamically irreversible interaction with a classical environment. Calculations of quantum decoherence show that when a quantum system interacts with the environment, the superpositions apparently reduce to mixtures of classical alternatives. Significantly, the combined wave function of the system and environment continues to obey the Schrödinger equation throughout this apparent collapse. However, decoherence alone is not enough to explain actual wave function collapse, as it does not reduce the superposition to a single eigenstate. Historically, Werner Heisenberg was the first to use the idea of wave function reduction to explain quantum measurement. Mathematical description Before collapsing, the wave function may be any square-integrable function, and is therefore associated with the probability density of a quantum-mechanical system. This function is expressible as a linear combination of the eigenstates of any observable. Observables represent classical dynamical variables, and when one is measured by a classical observer, the wave function is projected onto a random eigenstate of that observable. The observer simultaneously measures the classical value of that observable to be the eigenvalue of the final state. Mathematical background The quantum state of a physical system is described by a wave function (in turn, an element of a projective Hilbert space). This can be expressed as a vector using
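The projection rule described above can be written compactly. The following is a standard textbook statement (the notation is chosen here, not taken from the article): the state is expanded in eigenstates of the measured observable, and measurement projects it onto one of them with the Born-rule probability.

```latex
% State expanded in eigenstates of the observable A being measured:
\[
  |\psi\rangle \;=\; \sum_i c_i\,|\phi_i\rangle ,
  \qquad \hat{A}\,|\phi_i\rangle = a_i\,|\phi_i\rangle ,
  \qquad c_i = \langle \phi_i | \psi \rangle .
\]
% On measurement the wave function collapses to a single eigenstate, with
% the Born-rule probability, and the measured value is the eigenvalue:
\[
  |\psi\rangle \;\longrightarrow\; |\phi_i\rangle
  \quad\text{with probability } P(a_i) = |c_i|^2 .
\]
```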
https://en.wikipedia.org/wiki/Duncan%27s%20taxonomy
Duncan's taxonomy is a classification of computer architectures, proposed by Ralph Duncan in 1990. Duncan suggested modifications to Flynn's taxonomy to include pipelined vector processors. Taxonomy The taxonomy was developed during 1988-1990 and was first published in 1990. Its original categories are indicated below. Synchronous architectures This category includes all the parallel architectures that coordinate concurrent execution in lockstep fashion and do so via mechanisms such as global clocks, central control units or vector unit controllers. Further subdivision of this category is made primarily on the basis of the synchronization mechanism. Pipelined vector processors Pipelined vector processors are characterized by pipelined functional units that accept a sequential stream of array or vector elements, such that different stages in a filled pipeline are processing different elements of the vector at a given time. Parallelism is provided both by the pipelining in individual functional units described above, as well as by operating multiple units of this kind in parallel and by chaining the output of one unit into another unit as input. Vector architectures that stream vector elements into functional units from special vector registers are termed register-to-register architectures, while those that feed functional units from special memory buffers are designated as memory-to-memory architectures. Early examples of register-to-register architectures include the Cray-1 and Fujitsu VP-200, while the Control Data Corporation STAR-100, CDC 205 and the Texas Instruments Advanced Scientific Computer are early examples of memory-to-memory vector architectures. The late 1980s and early 1990s saw the introduction of vector architectures, such as the Cray Y-MP/4 and Nippon Electric Corporation SX-3, that supported 4-10 vector processors with a shared memory (see NEC SX architecture). SIMD This scheme uses the SIMD (single instruction
https://en.wikipedia.org/wiki/Product%20of%20exponentials%20formula
The product of exponentials (POE) method is a robotics convention for mapping the links of a spatial kinematic chain. It is an alternative to Denavit–Hartenberg parameterization. While the latter method uses the minimal number of parameters to represent joint motions, the former method has a number of advantages: uniform treatment of prismatic and revolute joints, definition of only two reference frames, and an easy geometric interpretation from the use of screw axes for each joint. The POE method was introduced by Roger W. Brockett in 1984. Method The following method is used to determine the product of exponentials for a kinematic chain, with the goal of parameterizing an affine transformation matrix between the base and tool frames in terms of the joint angles. Define "zero configuration" The first step is to select a "zero configuration" where all the joint angles are defined as being zero. The 4x4 matrix describes the transformation from the base frame to the tool frame in this configuration. It is an affine transform consisting of the 3x3 rotation matrix R and the 1x3 translation vector p. The matrix is augmented to create a 4x4 square matrix. Calculate matrix exponential for each joint The following steps should be followed for each of N joints to produce an affine transform for each. Define the origin and axis of action For each joint of the kinematic chain, an origin point q and an axis of action are selected for the zero configuration, using the coordinate frame of the base. In the case of a prismatic joint, the axis of action v is the vector along which the joint extends; in the case of a revolute joint, the axis of action ω is the vector normal to the plane of rotation. Find twist for each joint A 1x6 twist vector is composed to describe the movement of each joint. For a revolute joint, For a prismatic joint, The resulting twist has two 1x3 vector components: linear motion along an axis (v) and rotational motion about the same axis (ω). Calculate rotation m
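As a concrete sketch of the overall recipe, the forward-kinematics map T(θ) = exp([S1]θ1) ··· exp([Sn]θn) M can be evaluated numerically with a matrix exponential. The code below is illustrative only: the function names, the v = −ω × q twist convention and the planar two-revolute-joint example are choices made here, not taken from the article.

```python
# Sketch of the product-of-exponentials forward-kinematics map
#   T(theta) = exp([S1] th1) * ... * exp([Sn] thn) * M
# using numpy/scipy. Joint screws are given in the base frame at the
# zero configuration; names and the 2R example are illustrative only.
import numpy as np
from scipy.linalg import expm

def screw_matrix(omega, v):
    """4x4 matrix form [S] of a twist (omega, v), both 3-vectors."""
    wx = np.array([[0, -omega[2], omega[1]],
                   [omega[2], 0, -omega[0]],
                   [-omega[1], omega[0], 0]])
    S = np.zeros((4, 4))
    S[:3, :3] = wx
    S[:3, 3] = v
    return S

def poe_forward(M, screws, thetas):
    """M: 4x4 zero-configuration pose; screws: list of (omega, v)."""
    T = np.eye(4)
    for (omega, v), th in zip(screws, thetas):
        T = T @ expm(screw_matrix(omega, v) * th)
    return T @ M

# Planar 2R arm with unit link lengths, both joints rotating about the base z-axis.
M = np.eye(4); M[0, 3] = 2.0                      # tool frame at x = 2
z = np.array([0.0, 0.0, 1.0])
screws = [(z, -np.cross(z, [0, 0, 0])),           # v = -omega x q, joint 1 at origin
          (z, -np.cross(z, [1, 0, 0]))]           # joint 2 at x = 1
print(poe_forward(M, screws, [np.pi / 2, -np.pi / 2]))  # tool ends up near (1, 1, 0)
```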
https://en.wikipedia.org/wiki/Frubber
Frubber (from "flesh rubber") is a patented elastic form of rubber used in robotics. The spongy elastomer has been used by Hanson Robotics for the face of its android robots, including Einstein 3 and Sophia.
https://en.wikipedia.org/wiki/Direction%20of%20arrival
In signal processing, direction of arrival (DOA) denotes the direction from which a propagating wave arrives at a point where, usually, a set of sensors is located. This set of sensors forms what is called a sensor array. Often the associated technique of beamforming is used, which estimates the signal arriving from a given direction. Various engineering problems addressed in the associated literature are: Find the direction relative to the array in which the sound source is located. Humans locate the direction of different sound sources using a process similar to those used by the algorithms in the literature. Radio telescopes use these techniques to look at a certain location in the sky. Recently beamforming has also been used in radio frequency (RF) applications such as wireless communication. Compared with spatial diversity techniques, beamforming is preferred in terms of complexity. On the other hand, beamforming in general has much lower data rates. In multiple access channels (code-division multiple access (CDMA), frequency-division multiple access (FDMA), time-division multiple access (TDMA)), beamforming is necessary and sufficient. Various techniques exist for calculating the direction of arrival, such as angle of arrival (AoA), time difference of arrival (TDOA), frequency difference of arrival (FDOA), or other similar associated techniques. Limitations on the accuracy of the estimation of direction of arrival in digital antenna arrays are associated with the jitter of the ADCs and DACs. Advanced sophisticated techniques perform joint direction of arrival and time of arrival (ToA) estimation to allow a more accurate localization of a node. This also has the merit of localizing more targets with fewer antenna resources. Indeed, it is well known in the array processing community that, generally speaking, one can resolve targets via antennas. When JADE (joint angle and delay) estimation is employed, one can go beyond this limit. Typical DOA estimation
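A minimal numerical illustration of the beamforming idea mentioned above is a delay-and-sum (Bartlett) scan over candidate angles for a uniform linear array. This is only one of many DOA techniques and is not the JADE or subspace method referred to later; all names, the half-wavelength spacing and the simulated source angle are assumptions made for the sketch.

```python
# Minimal delay-and-sum (Bartlett) DOA scan for a uniform linear array (ULA).
# Illustrative only: a narrowband source at 25 degrees is simulated, then the
# beamformer output power is scanned over candidate angles to find the peak.
import numpy as np

def steering_vector(angle_deg, n_sensors, spacing=0.5):
    """ULA steering vector; spacing in wavelengths (0.5 = half-wavelength)."""
    k = 2 * np.pi * spacing * np.sin(np.deg2rad(angle_deg))
    return np.exp(1j * k * np.arange(n_sensors))

n_sensors, n_snapshots, true_doa = 8, 200, 25.0
rng = np.random.default_rng(0)
s = rng.standard_normal(n_snapshots) + 1j * rng.standard_normal(n_snapshots)
noise = 0.1 * (rng.standard_normal((n_sensors, n_snapshots))
               + 1j * rng.standard_normal((n_sensors, n_snapshots)))
X = np.outer(steering_vector(true_doa, n_sensors), s) + noise  # array snapshots

R = X @ X.conj().T / n_snapshots            # sample covariance matrix
angles = np.arange(-90, 90.5, 0.5)
power = [np.real(steering_vector(a, n_sensors).conj()
                 @ R @ steering_vector(a, n_sensors)) for a in angles]
print("estimated DOA:", angles[int(np.argmax(power))], "degrees")  # about 25
```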
https://en.wikipedia.org/wiki/Linde%E2%80%93Buzo%E2%80%93Gray%20algorithm
The Linde–Buzo–Gray algorithm (introduced by Yoseph Linde, Andrés Buzo and Robert M. Gray in 1980) is a vector quantization algorithm to derive a good codebook. It is similar to the k-means method in data clustering. The algorithm At each iteration, each vector is split into two new vectors. A: initial state: the centroid of the training sequence; B: initial estimation #1: codebook of size 2; C: final estimation after LGA: optimal codebook with 2 vectors; D: initial estimation #2: codebook of size 4; E: final estimation after LGA: optimal codebook with 4 vectors. The final two code vectors are split into four and the process is repeated until the desired number of code vectors is obtained.
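The splitting-and-refinement structure described above can be sketched in a few lines. The following is an illustrative implementation, not a reference one: the perturbation factor, the fixed number of Lloyd iterations and the function names are choices made here.

```python
# Sketch of the LBG splitting procedure described above: start from the
# centroid of the training set, double the codebook by perturbing ("splitting")
# each code vector, then refine with Lloyd/k-means style iterations.
import numpy as np

def lbg(training, codebook_size, eps=1e-3, n_iters=20):
    codebook = [training.mean(axis=0)]                 # size-1 codebook: the centroid
    while len(codebook) < codebook_size:
        # Split: each code vector becomes two slightly perturbed copies.
        codebook = [c * (1 + s) for c in codebook for s in (+eps, -eps)]
        codebook = np.asarray(codebook)
        for _ in range(n_iters):                       # Lloyd refinement
            d = np.linalg.norm(training[:, None, :] - codebook[None], axis=2)
            nearest = d.argmin(axis=1)                 # assign vectors to cells
            for i in range(len(codebook)):             # recompute centroids
                members = training[nearest == i]
                if len(members):
                    codebook[i] = members.mean(axis=0)
        codebook = list(codebook)
    return np.asarray(codebook)

data = np.random.default_rng(1).normal(size=(1000, 2))
print(lbg(data, 4))                                    # 4-vector codebook
```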
https://en.wikipedia.org/wiki/Fast%20folding%20algorithm
In signal processing, the fast folding algorithm (Staelin, 1969) is an efficient algorithm for the detection of approximately-periodic events within time series data. It computes superpositions of the signal modulo various window sizes simultaneously. The FFA is best known for its use in the detection of pulsars, as popularised by SETI@home and Astropulse. It was also used by the Breakthrough Listen Initiative during their 2023 Investigation for Periodic Spectral Signals campaign. See also Pulsar
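The core operation the FFA builds on is folding a time series modulo a trial period so that a periodic signal adds coherently into one phase bin. The sketch below shows only this basic folding step with a brute-force scan over trial periods; the actual fast folding algorithm gains its efficiency by sharing partial sums across neighbouring trial periods, which is not reproduced here.

```python
# Basic "folding" operation underlying the FFA: a time series is summed
# modulo a trial period, so a periodic signal adds coherently into one
# phase bin. The real FFA reuses partial sums across many trial periods;
# this brute-force sketch only illustrates folding itself.
import numpy as np

def fold(series, period_bins):
    """Sum a 1-D series modulo an integer trial period (in samples)."""
    n = len(series) // period_bins * period_bins        # drop the ragged tail
    return series[:n].reshape(-1, period_bins).sum(axis=0)

rng = np.random.default_rng(0)
signal = rng.normal(size=10_000)
signal[::73] += 1.0                                     # weak pulse every 73 samples

# Scan trial periods and pick the one whose folded profile is most peaked.
trials = range(50, 100)
snr = [fold(signal, p).max() / fold(signal, p).std() for p in trials]
print("best trial period:", list(trials)[int(np.argmax(snr))])  # expect 73
```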
https://en.wikipedia.org/wiki/Classical%20limit
The classical limit or correspondence limit is the ability of a physical theory to approximate or "recover" classical mechanics when considered over special values of its parameters. The classical limit is used with physical theories that predict non-classical behavior. Quantum theory A heuristic postulate called the correspondence principle was introduced to quantum theory by Niels Bohr: in effect it states that some kind of continuity argument should apply to the classical limit of quantum systems as the value of the Planck constant normalized by the action of these systems becomes very small. Often, this is approached through "quasi-classical" techniques (cf. WKB approximation). More rigorously, the mathematical operation involved in classical limits is a group contraction, approximating physical systems where the relevant action S is much larger than the reduced Planck constant ℏ, so the "deformation parameter" ℏ/S can be effectively taken to be zero (cf. Weyl quantization). Thus typically, quantum commutators (equivalently, Moyal brackets) reduce to Poisson brackets, in a group contraction. In quantum mechanics, due to Heisenberg's uncertainty principle, an electron can never be at rest; it must always have a non-zero kinetic energy, a result not found in classical mechanics. For example, if we consider something very large relative to an electron, like a baseball, the uncertainty principle predicts that it cannot really have zero kinetic energy, but the uncertainty in kinetic energy is so small that the baseball can effectively appear to be at rest, and hence it appears to obey classical mechanics. In general, if large energies and large objects (relative to the size and energy levels of an electron) are considered in quantum mechanics, the result will appear to obey classical mechanics. The typical occupation numbers involved are huge: a macroscopic harmonic oscillator with ω = 2 Hz, m = 10 g, and maximum amplitude x0 = 10 cm, has an action S ≈ E/ω = mωx0²/2 ≈ 10⁻⁴ kg⋅m²/s = nℏ, so that n ≃ 10³⁰. Further see
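The occupation-number estimate at the end of the excerpt (whose symbols were partly lost in extraction) works out as follows; the reconstruction assumes the usual reading ω = 2 s⁻¹, m = 10⁻² kg, x₀ = 10⁻¹ m.

```latex
% Action and occupation number for the article's macroscopic oscillator:
\[
  S \;\approx\; \frac{E}{\omega} \;=\; \tfrac{1}{2}\,m\,\omega\,x_0^{2}
  \;=\; \tfrac{1}{2}\,(10^{-2}\,\mathrm{kg})\,(2\,\mathrm{s^{-1}})\,(10^{-1}\,\mathrm{m})^{2}
  \;=\; 10^{-4}\ \mathrm{kg\,m^{2}\,s^{-1}},
\]
\[
  n \;\approx\; \frac{S}{\hbar}
  \;=\; \frac{10^{-4}\ \mathrm{kg\,m^{2}\,s^{-1}}}{1.05\times 10^{-34}\ \mathrm{J\,s}}
  \;\approx\; 10^{30}.
\]
```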
https://en.wikipedia.org/wiki/Field%20cancerization
Field cancerization or field effect (also termed field change, field change cancerization, field carcinogenesis, cancer field effect or premalignant field defect) is a biological process in which large areas of cells at a tissue surface or within an organ are affected by carcinogenic alterations. The process arises from exposure to an injurious environment, often over a lengthy period. How it arises The initial step in field cancerization is associated with various molecular lesions such as acquired genetic mutations and epigenetic changes, occurring over a widespread, multi-focal "field". These initial molecular changes may subsequently progress to cytologically recognizable premalignant foci of dysplasia, and eventually to carcinoma in situ (CIS) or cancer. The image of a longitudinally opened colon resection on this page shows an area of a colon resection that likely has a field cancerization or field defect. It has one cancer and four premalignant polyps. Field cancerization can occur in any tissue. Prominent examples of field cancerization include premalignant field defects in head and neck cancer, lung cancer, colorectal cancer, Barrett's esophagus, skin, breast ducts and bladder. Field cancerization has implications for cancer surveillance and treatment. Despite adequate resection and being histologically normal, the remaining locoregional tissue has an increased risk for developing multiple independent cancers, either synchronously or metachronously. Common early carcinogenic alterations A common carcinogenic alteration, found in many cancers and in their adjacent field defects from which the cancers likely arose, is reduced expression of one or more DNA repair enzymes. Since reduced DNA repair expression is often present in a field cancerization or a field defect, it is likely to have been an early step in progression to the cancer. Field defects associated with gastrointestinal tract cancers also commonly displayed reduced apoptosis competence, aberra
https://en.wikipedia.org/wiki/Ground%20bounce
In electronic engineering, ground bounce is a phenomenon associated with transistor switching where the gate voltage can appear to be less than the local ground potential, causing the unstable operation of a logic gate. Description Ground bounce is usually seen on high density VLSI where insufficient precautions have been taken to supply a logic gate with a sufficiently low impedance connection (or sufficiently high capacitance) to ground. In this phenomenon, when the base of an NPN transistor is turned on, enough current flows through the emitter-collector circuit that the silicon in the immediate vicinity of the emitter-ground connection is pulled partially high, sometimes by several volts, thus raising the local ground, as perceived at the gate, to a value significantly above true ground. Relative to this local ground, the base voltage can go negative, thus shutting off the transistor. As the excess local charge dissipates, the transistor turns back on, possibly causing a repeat of the phenomenon, sometimes up to a half-dozen bounces. Ground bounce is one of the leading causes of "hung" or metastable gates in modern digital circuit design. This happens because the ground bounce puts the input of a flip flop effectively at voltage level that is neither a one nor a zero at clock time, or causes untoward effects in the clock itself. A similar voltage sag phenomenon may be seen on the collector side, called supply voltage sag (or VCC sag), where VCC is pulled unnaturally low. As a whole, ground bounce is a major issue in nanometer range technologies in VLSI. Ground bounce can also occur when the circuit board has poorly designed ground paths. Improper ground or VCC can lead to local variations in the ground level between various components. This is most commonly seen in circuit boards that have ground and VCC paths on the surfaces of the board. Reduction Ground bounce may be reduced by placing a 10–30-ohm resistor in series to each of the switching outputs
https://en.wikipedia.org/wiki/Hatch%20mark
Hatch marks (also called hash marks or tick marks) are a form of mathematical notation. They are used in three ways as: Unit and value marks — as on a ruler or number line Congruence notation in geometry — as on a geometric figure Graphed points — as on a graph Hatch marks are frequently used as an abbreviation of some common units of measurement. In regard to distance, a single hatch mark indicates feet, and two hatch marks indicate inches. In regard to time, a single hatch mark indicates minutes, and two hatch marks indicate seconds. In geometry and trigonometry, such marks are used following an elevated circle to indicate degrees, minutes, and seconds — Hatch marks can probably be traced to hatching in art works, where the pattern of the hatch marks represents a unique tone or hue. Different patterns indicate different tones. Unit and value marks Unit-and-value hatch marks are short vertical line segments which mark distances. They are seen on rulers and number lines. The marks are parallel to each other in an evenly-spaced manner. The distance between adjacent marks is one unit. Longer line segments are used for integers and natural numbers. Shorter line segments are used for fractions. Hatch marks provide a visual clue as to the value of specific points on the number line, even if some hatch marks are not labeled with a number. Hatch marks are typically seen in number theory and geometry. <----|----|----|----|----|----|----|----|----|----|----|----|----|----|----> -3 -2 -1 0 1 2 3 Congruency notation In geometry, hatch marks are used to denote equal measures of angles, arcs, line segments, or other elements. Hatch marks for congruence notation are in the style of tally marks or of Roman numerals – with some qualifications. These marks are without serifs, and some patterns are not used. For example, the numbers I, II, III, V, and X are used, but IV and VI are not used, since a rotation of
https://en.wikipedia.org/wiki/Software-defined%20perimeter
A software-defined perimeter (SDP), also called a "black cloud", is an approach to computer security which evolved from the work done at the Defense Information Systems Agency (DISA) under the Global Information Grid (GIG) Black Core Network initiative around 2007. Software-defined perimeter (SDP) framework was developed by the Cloud Security Alliance (CSA) to control access to resources based on identity. Connectivity in a Software Defined Perimeter is based on a need-to-know model, in which device posture and identity are verified before access to application infrastructure is granted. Application infrastructure is effectively “black” (a DoD term meaning the infrastructure cannot be detected), without visible DNS information or IP addresses. The inventors of these systems claim that a Software Defined Perimeter mitigates the most common network-based attacks, including: server scanning, denial of service, SQL injection, operating system and application vulnerability exploits, man-in-the-middle, pass-the-hash, pass-the-ticket, and other attacks by unauthorized users. Background The premise of the traditional enterprise network architecture is to create an internal network separated from the outside world by a fixed perimeter that consists of a series of firewall functions that block external users from coming in, but allows internal users to get out. Traditional fixed perimeters help protect internal services from external threats via simple techniques for blocking visibility and accessibility from outside the perimeter to internal applications and infrastructure. But the weaknesses of this traditional fixed perimeter model are becoming ever more problematic because of the popularity of user-managed devices and phishing attacks, providing untrusted access inside the perimeter, and SaaS and IaaS extending the perimeter into the internet. Software defined perimeters address these issues by giving application owners the ability to deploy perimeters that retain the
https://en.wikipedia.org/wiki/List%20of%20exceptional%20set%20concepts
This is a list of exceptional set concepts. In mathematics, and in particular in mathematical analysis, it is very useful to be able to characterise subsets of a given set X as 'small', in some definite sense, or 'large' if their complement in X is small. There are numerous concepts that have been introduced to study 'small' or 'exceptional' subsets. In the case of sets of natural numbers, it is possible to define more than one concept of 'density', for example. See also list of properties of sets of reals. Almost all Almost always Almost everywhere Almost never Almost surely Analytic capacity Closed unbounded set Cofinal (mathematics) Cofinite Dense set IP set 2-large Large set (Ramsey theory) Meagre set Measure zero Natural density Negligible set Nowhere dense set Null set, conull set Partition regular Piecewise syndetic set Schnirelmann density Small set (combinatorics) Stationary set Syndetic set Thick set Thin set (Serre)
https://en.wikipedia.org/wiki/Versatile%20Service%20Engine
Versatile Service Engine is a second-generation IP Multimedia Subsystem developed by Nortel Networks that is compliant with Advanced Telecommunications Computing Architecture specifications. Nortel's Versatile Service Engine provides the capability for telecommunication service providers to offer Global System for Mobile Communications and code-division multiple access services in both wireline and wireless modes. History The Versatile Service Engine is a joint effort of Nortel and Motorola. The aim of the collaboration was to develop an Advanced Telecommunications Computing Architecture compliant platform for Nortel IP Multimedia Subsystem applications. Nortel joined the PCI Industrial Computer Manufacturers Group in 2002 and work on the Versatile Service Engine started in 2004. Architecture A single Versatile Service Engine frame consists of three shelves, each shelf having three slots. A single slot can have many sub-slots, each staging a blade. Advanced Telecommunications Computing Architecture blades can be processors, switches, AMC carriers, etc. A typical shelf will contain one or more switch blades and several processor blades. The power supply and cooling fans are located at the back of the Versatile Service Engine. Ericsson ownership After Nortel Networks filed for bankruptcy protection in January 2009, Ericsson acquired the code-division multiple access and LTE based assets of what was then Canada's largest telecom equipment maker, thereby taking ownership of the Versatile Service Engine.
https://en.wikipedia.org/wiki/Square%20root%20of%206
The square root of 6 is the positive real number that, when multiplied by itself, gives the natural number 6. It is more precisely called the principal square root of 6, to distinguish it from the negative number with the same property. This number appears in numerous geometric and number-theoretic contexts. It can be denoted in surd form as: and in exponent form as: It is an irrational algebraic number. The first sixty significant digits of its decimal expansion are: . which can be rounded up to 2.45 to within about 99.98% accuracy (about 1 part in 4800); that is, it differs from the correct value by about . It takes two more digits (2.4495) to reduce the error by about half. The approximation (≈ 2.449438...) is nearly ten times better: despite having a denominator of only 89, it differs from the correct value by less than , or less than one part in 47,000. Since 6 is the product of 2 and 3, the square root of 6 is the geometric mean of 2 and 3, and is the product of the square root of 2 and the square root of 3, both of which are irrational algebraic numbers. NASA has published more than a million decimal digits of the square root of six. Rational approximations The square root of 6 can be expressed as the continued fraction The successive partial evaluations of the continued fraction, which are called its convergents, approach : Their numerators are 2, 5, 22, 49, 218, 485, 2158, 4801, 21362, 47525, 211462, …, and their denominators are 1, 2, 9, 20, 89, 198, 881, 1960, 8721, 19402, 86329, …. Each convergent is a best rational approximation of ; in other words, it is closer to than any rational with a smaller denominator. Decimal equivalents improve linearly, at a rate of nearly one digit per convergent: The convergents, expressed as , satisfy alternately the Pell's equations When is approximated with the Babylonian method, starting with and using , the th approximant is equal to the th convergent of the continued fraction: The Babylonian
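The connection claimed above between the Babylonian method and the continued-fraction convergents can be checked numerically. The sketch below uses exact rational arithmetic; the starting value x₀ = 2 is an assumption, since the article's starting value and iteration formula did not survive extraction.

```python
# Babylonian (Heron) iteration for sqrt(6), done in exact rational arithmetic
# so the iterates can be compared with the continued-fraction convergents
# listed above. The starting value x0 = 2 is an assumption made here.
from fractions import Fraction

x = Fraction(2)
for _ in range(4):
    x = (x + 6 / x) / 2          # x_{k+1} = (x_k + 6/x_k) / 2
    print(x, float(x))
# Prints 5/2, 49/20, 4801/1960, 46099201/18819920 -- each of these p/q is a
# convergent of the continued fraction of sqrt(6) and satisfies p^2 - 6q^2 = 1.
```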
https://en.wikipedia.org/wiki/Biology
Biology is the scientific study of life. It is a natural science with a broad scope but has several unifying themes that tie it together as a single, coherent field. For instance, all organisms are made up of cells that process hereditary information encoded in genes, which can be transmitted to future generations. Another major theme is evolution, which explains the unity and diversity of life. Energy processing is also important to life as it allows organisms to move, grow, and reproduce. Finally, all organisms are able to regulate their own internal environments. Biologists are able to study life at multiple levels of organization, from the molecular biology of a cell to the anatomy and physiology of plants and animals, and evolution of populations. Hence, there are multiple subdisciplines within biology, each defined by the nature of their research questions and the tools that they use. Like other scientists, biologists use the scientific method to make observations, pose questions, generate hypotheses, perform experiments, and form conclusions about the world around them. Life on Earth, which emerged more than 3.7 billion years ago, is immensely diverse. Biologists have sought to study and classify the various forms of life, from prokaryotic organisms such as archaea and bacteria to eukaryotic organisms such as protists, fungi, plants, and animals. These various organisms contribute to the biodiversity of an ecosystem, where they play specialized roles in the cycling of nutrients and energy through their biophysical environment. History The earliest roots of science, which included medicine, can be traced to ancient Egypt and Mesopotamia in around 3000 to 1200 BCE. Their contributions shaped ancient Greek natural philosophy. Ancient Greek philosophers such as Aristotle (384–322 BCE) contributed extensively to the development of biological knowledge. He explored biological causation and the diversity of life. His successor, Theophrastus, began the scienti
https://en.wikipedia.org/wiki/Capillary%20electrochromatography
In chemical analysis, capillary electrochromatography (CEC) is a chromatographic technique in which the mobile phase is driven through the chromatographic bed by electro-osmosis. Capillary electrochromatography is a combination of two analytical techniques, high-performance liquid chromatography and capillary electrophoresis. Capillary electrophoresis aims to separate analytes on the basis of their mass-to-charge ratio by passing a high voltage across ends of a capillary tube, which is filled with the analyte. High-performance liquid chromatography separates analytes by passing them, under high pressure, through a column filled with stationary phase. The interactions between the analytes and the stationary phase and mobile phase lead to the separation of the analytes. In capillary electrochromatography capillaries, packed with HPLC stationary phase, are subjected to a high voltage. Separation is achieved by electrophoretic migration of solutes and differential partitioning. Principle Capillary electrochromatography (CEC) combines the principles used in HPLC and CE. The mobile phase is driven across the chromatographic bed using electroosmosis instead of pressure (as in HPLC). Electroosmosis is the motion of liquid induced by an applied potential across a porous material, capillary tube, membrane or any other fluid conduit. Electroosmotic flow is caused by the Coulomb force induced by an electric field on net mobile electric charge in a solution. Under alkaline conditions, the surface silanol groups of the fused silica will become ionised leading to a negatively charged surface. This surface will have a layer of positively charged ions in close proximity which are relatively immobilised. This layer of ions is called the Stern layer. The thickness of the double layer is given by the formula: where εr is the relative permittivity of the medium, εo is the permittivity of vacuum, R is the universal gas constant, T is the absolute temperature, c is the molar concentrati
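The double-layer thickness formula referred to above did not survive extraction. For orientation, the standard Debye-length expression for a symmetric 1:1 electrolyte uses exactly the quantities listed (εr, εo, R, T and the molar concentration c) together with the Faraday constant F; the short computation below uses that textbook form, which should be checked against the article's own formula.

```python
# Textbook Debye-length estimate for a 1:1 electrolyte (assumed form, not
# necessarily the article's exact expression):
#     delta = sqrt(eps_r * eps_0 * R * T / (2 * F**2 * c))
from math import sqrt

eps_0 = 8.854e-12    # vacuum permittivity, F/m
eps_r = 78.5         # relative permittivity of water near 25 C
R, T, F = 8.314, 298.15, 96485.0

def debye_length(c_molar):
    c = c_molar * 1000.0                      # mol/L -> mol/m^3
    return sqrt(eps_r * eps_0 * R * T / (2 * F**2 * c))

print(debye_length(0.001) * 1e9, "nm")        # roughly 9.6 nm at 1 mM
```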
https://en.wikipedia.org/wiki/RapidIO
The RapidIO architecture is a high-performance packet-switched electrical connection technology. RapidIO supports messaging, read/write and cache coherency semantics. Based on industry-standard electrical specifications such as those for Ethernet, RapidIO can be used as a chip-to-chip, board-to-board, and chassis-to-chassis interconnect. History The RapidIO protocol was originally designed by Mercury Computer Systems and Motorola (Freescale) as a replacement for Mercury's RACEway proprietary bus and Freescale's PowerPC bus. The RapidIO Trade Association was formed in February 2000, and included telecommunications and storage OEMs as well as FPGA, processor, and switch companies. Releases The RapidIO specification revision 1.1 (3xN Gen1), released in March 2001, defined a wide, parallel bus. This specification did not achieve extensive commercial adoption. The RapidIO specification revision 1.2, released in June 2002, defined a serial interconnect based on the XAUI physical layer. Devices based on this specification achieved significant commercial success within wireless baseband, imaging and military compute. The RapidIO specification revision 1.3 was released in June 2005. The RapidIO specification revision 2.0 (6xN Gen2), was released in March 2008, added more port widths (2×, 8×, and 16×) and increased the maximum lane speed to 6.25 GBd / 5 Gbit/s. Revision 2.1 has repeated and expanded the commercial success of the 1.2 specification. The RapidIO specification revision 2.1 was released in September 2009. The RapidIO specification revision 2.2 was released in May 2011. The RapidIO specification revision 3.0 (10xN Gen3), was released in October 2013, has the following changes and improvements compared to the 2.x specifications: Based on industry-standard Ethernet 10GBASE-KR electrical specifications for short (20 cm + connector) and long (1 m + 2 connector) reach applications Directly leverages the Ethernet 10GBASE-KR DME training scheme for long-reach
https://en.wikipedia.org/wiki/Application-specific%20instruction%20set%20processor
An application-specific instruction set processor (ASIP) is a component used in system on a chip design. The instruction set architecture of an ASIP is tailored to benefit a specific application. This specialization of the core provides a tradeoff between the flexibility of a general purpose central processing unit (CPU) and the performance of an application-specific integrated circuit (ASIC). Some ASIPs have a configurable instruction set. Usually, these cores are divided into two parts: static logic which defines a minimum ISA (instruction-set architecture) and configurable logic which can be used to design new instructions. The configurable logic can be programmed either in the field in a similar fashion to a field-programmable gate array (FPGA) or during the chip synthesis. ASIPs have two ways of generating code: either through a retargetable code generator or through a retargetable compiler generator. The retargetable code generator uses the application, ISA, and Architecture Template to create the code generator for the object code. The retargetable compiler generator uses only the ISA and Architecture Template as the basis for creating the compiler. The application code will then be used by the compiler to create the object code. ASIPs can be used as an alternative to hardware accelerators for baseband signal processing or video coding. Traditional hardware accelerators for these applications suffer from inflexibility. It is very difficult to reuse the hardware datapath with handwritten finite-state machines (FSMs). The retargetable compilers of ASIPs help the designer to update the program and reuse the datapath. Typically, the ASIP design is more or less dependent on the tool flow because designing a processor from scratch can be very complicated. One approach is to describe the processor using a high-level language and then to automatically generate the ASIP's software toolset. Examples RISC-V Instruction Set Architecture (ISA) provides minimum base ins
https://en.wikipedia.org/wiki/Globoid%20%28botany%29
A globoid is a spherical crystalline inclusion in a protein body found in seed tissues that contains phytate and other nutrients for plant growth. These are found in several plants, including wheat and the genus Cucurbita. These nutrients are eventually completely depleted during seedling growth. In Cucurbita maxima, globoids form as early as the 3rd day of seedling growth. They are located in conjunction with a larger crystalloid. They are electron–dense and vary widely in size.
https://en.wikipedia.org/wiki/Multiresolution%20Fourier%20transform
Multiresolution Fourier Transform is an integral Fourier transform that represents a specific wavelet-like transform with a fully scalable modulated window, but not all possible translations. Comparison of Fourier transform and wavelet transform The Fourier transform is one of the most common approaches when it comes to digital signal processing and signal analysis. It represents a signal through sine and cosine functions, thus transforming the time domain into the frequency domain. A disadvantage of the Fourier transform is that both sine and cosine functions are defined over the whole time axis, meaning that there is no time resolution. Certain variants of the Fourier transform, such as the short-time Fourier transform (STFT), utilize a window for sampling, but the window length is fixed, meaning that the results will be satisfactory only for either low or high frequency components. The fast Fourier transform (FFT) is often used because of its computational speed, but it shows better results for stationary signals. On the other hand, the wavelet transform can address all the aforementioned downsides. It preserves both time and frequency information and it uses a window of variable length, meaning that both low and high frequency components will be derived with higher accuracy than with the Fourier transform. The wavelet transform also shows better results in transient states. The Multiresolution Fourier Transform leverages the advantageous properties of the wavelet transform and applies them to the Fourier transform. Definition Let be a function that has its Fourier transform defined as    The time line can be split by intervals of length π/ω with centers at integer multiples of π/ω    Then, new transforms of the function can be introduced       and       where , when n is an integer. Functions and can be used in order to define the complex Fourier transform    Then, a set of points in the frequency-time plane is defined for the computation of the introduced transforms
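The fixed-window limitation of the STFT mentioned in the comparison can be seen numerically. The sketch below is not the multiresolution Fourier transform itself (whose defining formulas did not survive extraction above); it simply analyses the same two-tone signal with a short and a long window to show the time-frequency resolution trade-off. Signal parameters and window lengths are arbitrary choices.

```python
# Fixed-window STFT trade-off: the same two-tone signal analysed with a short
# and a long window gives good time resolution in one case and good frequency
# resolution in the other. (Illustration only; not the MFT itself.)
import numpy as np
from scipy.signal import stft

fs = 1000.0
t = np.arange(0, 1.0, 1 / fs)
# 50 Hz in the first half, 55 Hz in the second half (close in frequency).
x = np.where(t < 0.5, np.sin(2 * np.pi * 50 * t), np.sin(2 * np.pi * 55 * t))

for nperseg in (64, 512):                       # short vs long analysis window
    f, tt, Z = stft(x, fs=fs, nperseg=nperseg)
    df, dt = f[1] - f[0], tt[1] - tt[0]
    print(f"window={nperseg:4d}: frequency bin {df:5.2f} Hz, hop {dt:5.3f} s")
# The short window cannot separate 50 Hz from 55 Hz (bin ~15.6 Hz), while the
# long window can (bin ~1.95 Hz) at the cost of coarser time localization.
```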
https://en.wikipedia.org/wiki/Cardiac%20Pacemakers%2C%20Inc.
Cardiac Pacemakers, Inc. (CPI), doing business as Guidant Cardiac Rhythm Management, manufactured implantable cardiac rhythm management devices, such as pacemakers and defibrillators. It sold microprocessor-controlled insulin pumps and equipment to regulate heart rhythm. It developed therapies to treat irregular heartbeat. The company was founded in 1971 and is based in St. Paul, Minnesota. Cardiac Pacemakers, Inc. is a subsidiary of Boston Scientific Corporation. Early history CPI was founded in February 1972 in St. Paul, Minnesota. The first $50,000 capitalization for CPI was raised from a phone booth on the Minneapolis skyway system. They began designing and testing their implantable cardiac pacemaker powered with a new longer-life lithium battery in 1971. The first heart patient to receive a CPI pacemaker emerged from surgery in June 1973. Within two years, the upstart company that challenged Medtronic had sold approximately 8,500 pacemakers. Medtronic at the time had 65% of the artificial pacemaker market. CPI was the first spin-off from Medtronic. It competed using the world's first lithium-powered pacemaker. Medtronic's market share plummeted to 35%. Founding partners Anthony Adducci, Manny Villafaña, Jim Baustert, and Art Schwalm were former Medtronic employees. Lawsuits ensued, all settled out of court. Acquisition In early 1978, CPI was concerned about a friendly takeover attempt. Despite impressive sales, the company's stock price had fluctuated wildly the year before, dropping from $33 to $11 per share. Some speculated that the stock was being sold short, while others attributed the price to the natural volatility of high-tech stock. As a one-product company, CPI was susceptible to changing market conditions, and its founders knew they needed to diversify. They considered two options: acquiring other medical device companies or being acquired themselves. They chose the latter. Several companies expressed interest in acquiring CPI, including 3M,
https://en.wikipedia.org/wiki/Magic%20%28software%29
Magic is an electronic design automation (EDA) layout tool for very-large-scale integration (VLSI) integrated circuit (IC) originally written by John Ousterhout and his graduate students at UC Berkeley. Work began on the project in February 1983. A primitive version was operational by April 1983, when Joan Pendleton, Shing Kong and other graduate student chip designers suffered through many fast revisions devised to meet their needs in designing the SOAR CPU chip, a follow-on to Berkeley RISC. Fearing that Ousterhout was going to propose another name that started with "C" to match his previous projects Cm*, Caesar, and Crystal, Gordon Hamachi proposed the name Magic because he liked the idea of being able to say that people used magic to design chips. The rest of the development team enthusiastically agreed to this proposal after he devised the backronym Manhattan Artwork Generator for Integrated Circuits. The Magic software developers called themselves magicians, while the chip designers were Magic users. As free and open-source software, subject to the requirements of the BSD license, Magic continues to be popular because it is easy to use and easy to expand for specialized tasks. Differences The main difference between Magic and other VLSI design tools is its use of "corner-stitched" geometry, in which all layout is represented as a stack of planes, and each plane consists entirely of "tiles" (rectangles). The tiles must cover the entire plane. Each tile consists of an (X, Y) coordinate of its lower left-hand corner, and links to four tiles: the right-most neighbor on the top, the top-most neighbor on the right, the bottom-most neighbor on the left, and the left-most neighbor on the bottom. With the addition of the type of material represented by the tile, the layout geometry in the plane is exactly specified. The corner-stitched geometry representation leads to the concept of layout as "paint" to be applied to, or erased from, a canvas. This is con
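The corner-stitched tile record described above can be sketched as a small data structure. This is an illustrative reconstruction, not Magic's actual C code: field names are invented here, and the point is only that a tile stores its lower-left corner, its material, and four stitches, with its width and height implied by its neighbours.

```python
# Sketch of a corner-stitched tile: a rectangle identified only by its
# lower-left corner and material, plus four "stitches" to neighbouring tiles.
# Field names are illustrative, not Magic's own.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tile:
    x: float                                  # lower-left corner, x coordinate
    y: float                                  # lower-left corner, y coordinate
    material: str                             # layer/material "painted" on this tile
    top_rightmost: Optional["Tile"] = None    # right-most neighbour on the top
    right_topmost: Optional["Tile"] = None    # top-most neighbour on the right
    left_bottommost: Optional["Tile"] = None  # bottom-most neighbour on the left
    bottom_leftmost: Optional["Tile"] = None  # left-most neighbour on the bottom

    def top(self) -> float:
        """The tile's top edge is the lower-left y of its top neighbour."""
        return self.top_rightmost.y if self.top_rightmost else float("inf")

    def right(self) -> float:
        """The tile's right edge is the lower-left x of its right neighbour."""
        return self.right_topmost.x if self.right_topmost else float("inf")

    def contains(self, px: float, py: float) -> bool:
        return self.x <= px < self.right() and self.y <= py < self.top()
```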
https://en.wikipedia.org/wiki/Spectral%20concentration%20problem
The spectral concentration problem in Fourier analysis refers to finding a time sequence of a given length whose discrete Fourier transform is maximally localized on a given frequency interval, as measured by the spectral concentration. Spectral concentration The discrete Fourier transform (DFT) U(f) of a finite series , is defined as In the following, the sampling interval will be taken as Δt = 1, and hence the frequency interval as f ∈ [-½,½]. U(f) is a periodic function with a period 1. For a given frequency W such that 0<W<½, the spectral concentration of U(f) on the interval [-W,W] is defined as the ratio of power of U(f) contained in the frequency band [-W,W] to the power of U(f) contained in the entire frequency band [-½,½]. That is, It can be shown that U(f) has only isolated zeros and hence (see [1]). Thus, the spectral concentration is strictly less than one, and there is no finite sequence for which the DTFT can be confined to a band [-W,W] and made to vanish outside this band. Statement of the problem Among all sequences for a given T and W, is there a sequence for which the spectral concentration is maximum? In other words, is there a sequence for which the sidelobe energy outside a frequency band [-W,W] is minimum? The answer is yes; such a sequence indeed exists and can be found by optimizing . Thus maximising the power subject to the constraint that the total power is fixed, say leads to the following equation satisfied by the optimal sequence : This is an eigenvalue equation for a symmetric matrix given by It can be shown that this matrix is positive-definite, hence all the eigenvalues of this matrix lie between 0 and 1. The largest eigenvalue of the above equation corresponds to the largest possible spectral concentration; the corresponding eigenvector is the required optimal sequence . This sequence is called a 0th–order Slepian sequence (also known as a discrete prolate spheroidal sequence, or DPSS), which is a unique tape
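The eigenvalue problem described above can be solved numerically. The symmetric matrix itself did not survive extraction; the sketch below uses the standard sinc-kernel form A[m, n] = sin(2πW(m − n))/(π(m − n)) with diagonal entries 2W, which is the textbook statement of this problem, and takes the eigenvector of the largest eigenvalue as the 0th-order Slepian sequence.

```python
# Numerical construction of the 0th-order Slepian (DPSS) sequence for given
# T and W, using the standard sinc-kernel matrix (assumed textbook form):
#   A[m, n] = sin(2*pi*W*(m - n)) / (pi*(m - n)),   A[m, m] = 2*W.
import numpy as np

def slepian0(T, W):
    m = np.arange(T)
    diff = m[:, None] - m[None, :]
    with np.errstate(invalid="ignore", divide="ignore"):
        A = np.sin(2 * np.pi * W * diff) / (np.pi * diff)
    np.fill_diagonal(A, 2 * W)                 # limit value on the diagonal
    eigvals, eigvecs = np.linalg.eigh(A)       # eigenvalues ascending, in [0, 1]
    lam, v = eigvals[-1], eigvecs[:, -1]       # largest concentration and its taper
    return lam, v / np.linalg.norm(v)

lam, taper = slepian0(T=64, W=0.05)
print("spectral concentration of the optimal sequence:", lam)  # close to 1
```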
https://en.wikipedia.org/wiki/Proof%20sketch%20for%20G%C3%B6del%27s%20first%20incompleteness%20theorem
This article gives a sketch of a proof of Gödel's first incompleteness theorem. This theorem applies to any formal theory that satisfies certain technical hypotheses, which are discussed as needed during the sketch. We will assume for the remainder of the article that a fixed theory satisfying these hypotheses has been selected. Throughout this article the word "number" refers to a natural number (including 0). The key property these numbers possess is that any natural number can be obtained by starting with the number 0 and adding 1 a finite number of times. Hypotheses of the theory Gödel's theorem applies to any formal theory that satisfies certain properties. Each formal theory has a signature that specifies the nonlogical symbols in the language of the theory. For simplicity, we will assume that the language of the theory is composed from the following collection of 15 (and only 15) symbols: A constant symbol for zero. A unary function symbol for the successor operation and two binary function symbols + and × for addition and multiplication. Three symbols for logical conjunction, , disjunction, , and negation, ¬. Two symbols for universal, , and existential, , quantifiers. Two symbols for binary relations, = and <, for equality and order (less than). Two symbols for left, and right, parentheses for establishing precedence of quantifiers. A single variable symbol, and a distinguishing symbol that can be used to construct additional variables of the form x*, x**, x***, ... This is the language of Peano arithmetic. A well-formed formula is a sequence of these symbols that is formed so as to have a well-defined reading as a mathematical formula. Thus is well formed while is not well formed. A theory is a set of well-formed formulas with no free variables. A theory is consistent if there is no formula such that both and its negation are provable. ω-consistency is a stronger property than consistency. Suppose that is a formula with one fr
https://en.wikipedia.org/wiki/List%20of%20National%20Association%20of%20Biology%20Teachers%20presidents
This is a list of the presidents of the National Association of Biology Teachers, from 1939 to the present. 2020s 2020: Sharon Gusky 2010s 2019: Sherri Annee 2018: Elizabeth Cowles 2017: Susan Finazzo 2016: Bob Melton 2015: Jane Ellis 2014: Stacey Kiser 2013: Mark Little 2012: Don French 2011: Dan Ward 2010: Marion V. "Bunny" Jaskot 2000s 2009-John M. Moore 2008-Todd Carter 2007-Pat Waller 2006-Toby Horn 2005-Rebecca E. Ross 2004-Betsy Ott 2003-Catherine Ueckert 2002-Brad Williamson 2001-Ann S. Lumsden 2000-Phil McCrea 1990s 1999-Richard D. Storey 1998-ViviannLee Ward 1997-Alan McCormack 1996-Elizabeth Carvellas 1995-Gordon E. Uno 1994-Barbara Schulz 1993-Ivo E. Lindauer 1992-Alton L. Biggs 1991-Joseph D. McInerney 1990-Nancy V. Ridenour 1980s 1989-John Penick 1988-Jane Abbott 1987-Donald S. Emmeluth 1986-George S. Zahrobsky 1985-Thromas R. Mertens 1984-Marjorie King 1983-Jane Butler Kahle 1982-Jerry Resnick 1981-Edward J. Komondy 1980-Stanley D. Roth 1970s 1979-Manert Kennedy 1978-Glen E. Peterson 1977-Jack L. Carter 1976-Haven Kolb 1975-Thomas Jesse Cleaver, Sr., PhD (1926–1995) 1974-Barbara K. Hopper 1973-Addison E. Lee 1972-Claude A. Welch 1971-H. Bentley Glass 1970-Robert E. Yager 1960s 1969-Burton E. Voss 1968-Jack Fishleder 1967-William V. Mayer 1966-Arnold B. Grobman 1965-L.S. McClung 1964-Ted F. Andrews 1963-Philip R. Fordyce 1962-Muriel Beuschlein 1961-Paul V. Webster 1960-Howard E. Weaver 1950s 1959-Paul Klinge 1958-Irene Hollenbeck 1957-John Breukelman 1956-John P. Harrold 1955-Brother H. Charles Severin 1954-Arthur J. Baker 1953-Leo F. Hadsall 1952-Harvey E. Stork 1951-Richard L. Weaver 1950-Betty L. Wheeler 1940s 1949-Ruth A. Dodge 1948-Howard A. Michaud 1947-E. Laurence Palmer 1946-Prevo L. Whitaker 1945-Helen Trowbridge 1944-1943-Merle A. Russell 1942-Homer A. Stephens 1941-George W. Jeffers 1940-Malcolm D. Campbell 1930s 1939-Myrl C. Lichtenwalter
https://en.wikipedia.org/wiki/List%20of%20trigonometric%20identities
In trigonometry, trigonometric identities are equalities that involve trigonometric functions and are true for every value of the occurring variables for which both sides of the equality are defined. Geometrically, these are identities involving certain functions of one or more angles. They are distinct from triangle identities, which are identities potentially involving angles but also involving side lengths or other lengths of a triangle. These identities are useful whenever expressions involving trigonometric functions need to be simplified. An important application is the integration of non-trigonometric functions: a common technique involves first using the substitution rule with a trigonometric function, and then simplifying the resulting integral with a trigonometric identity. Pythagorean identities The basic relationship between the sine and cosine is given by the Pythagorean identity: sin²θ + cos²θ = 1, where sin²θ means (sin θ)² and cos²θ means (cos θ)². This can be viewed as a version of the Pythagorean theorem, and follows from the equation x² + y² = 1 for the unit circle. This equation can be solved for either the sine or the cosine: sin θ = ±√(1 − cos²θ) and cos θ = ±√(1 − sin²θ), where the sign depends on the quadrant of θ. Dividing this identity by cos²θ, by sin²θ, or by both yields the identities tan²θ + 1 = sec²θ, 1 + cot²θ = csc²θ, and sec²θ + csc²θ = sec²θ csc²θ. Using these identities, it is possible to express any trigonometric function in terms of any other (up to a plus or minus sign): Reflections, shifts, and periodicity By examining the unit circle, one can establish the following properties of the trigonometric functions. Reflections When the direction of a Euclidean vector is represented by an angle this is the angle determined by the free vector (starting at the origin) and the positive x-unit vector. The same concept may also be applied to lines in a Euclidean space, where the angle is that determined by a parallel to the given line through the origin and the positive x-axis. If a line (vector) with direction is reflected about a line with direction then the direction angle of this reflected line (vec
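As a worked example of the division step mentioned above (a standard textbook manipulation, shown here in LaTeX for clarity):

```latex
% Dividing sin^2(theta) + cos^2(theta) = 1 through by cos^2(theta):
\[
  \frac{\sin^2\theta}{\cos^2\theta} + \frac{\cos^2\theta}{\cos^2\theta}
    = \frac{1}{\cos^2\theta}
  \quad\Longrightarrow\quad
  \tan^2\theta + 1 = \sec^2\theta .
\]
% Dividing by sin^2(theta) instead gives 1 + cot^2(theta) = csc^2(theta).
```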
https://en.wikipedia.org/wiki/Computational%20RAM
Computational RAM (C-RAM) is random-access memory with processing elements integrated on the same chip. This enables C-RAM to be used as a SIMD computer. It also can be used to more efficiently use memory bandwidth within a memory chip. The general technique of doing computations in memory is called Processing-In-Memory (PIM). Overview The most influential implementations of computational RAM came from The Berkeley IRAM Project. Vector IRAM (V-IRAM) combines DRAM with a vector processor integrated on the same chip. Reconfigurable Architecture DRAM (RADram) is DRAM with reconfigurable computing FPGA logic elements integrated on the same chip. SimpleScalar simulations show that RADram (in a system with a conventional processor) can give orders of magnitude better performance on some problems than traditional DRAM (in a system with the same processor). Some embarrassingly parallel computational problems are already limited by the von Neumann bottleneck between the CPU and the DRAM. Some researchers expect that, for the same total cost, a machine built from computational RAM will run orders of magnitude faster than a traditional general-purpose computer on these kinds of problems. As of 2011, the "DRAM process" (few layers; optimized for high capacitance) and the "CPU process" (optimized for high frequency; typically twice as many BEOL layers as DRAM; since each additional layer reduces yield and increases manufacturing cost, such chips are relatively expensive per square millimeter compared to DRAM) are distinct enough that there are three approaches to computational RAM: starting with a CPU-optimized process and a device that uses much embedded SRAM, add an additional process step (making it even more expensive per square millimeter) to allow replacing the embedded SRAM with embedded DRAM (eDRAM), giving ≈3x area savings on the SRAM areas (and so lowering net cost per chip). starting with a system with a separate CPU chip and DRAM chip(s), add small amounts of
https://en.wikipedia.org/wiki/Mathematical%20notation
Mathematical notation consists of using symbols for representing operations, unspecified numbers, relations, and any other mathematical objects and assembling them into expressions and formulas. Mathematical notation is widely used in mathematics, science, and engineering for representing complex concepts and properties in a concise, unambiguous, and accurate way. For example, Albert Einstein's equation E = mc² is the quantitative representation in mathematical notation of the mass–energy equivalence. Mathematical notation was first introduced by François Viète at the end of the 16th century and largely expanded during the 17th and 18th centuries by René Descartes, Isaac Newton, Gottfried Wilhelm Leibniz, and, overall, Leonhard Euler. Symbols The use of many symbols is the basis of mathematical notation. They play a similar role to words in natural languages. They may play different roles in mathematical notation, much as verbs, adjectives and nouns play different roles in a sentence. Letters as symbols Letters are typically used for naming—in mathematical jargon, one says representing—mathematical objects. The Latin and Greek alphabets are typically used, but some letters of the Hebrew alphabet are sometimes used as well. Uppercase and lowercase letters are considered as different symbols. For the Latin alphabet, different typefaces also provide different symbols. For example, and could theoretically appear in the same mathematical text with six different meanings. Normally, the roman upright typeface is not used for symbols, except for symbols that are formed of several letters, such as the symbol "sin" of the sine function. In order to have more symbols, and to allow related mathematical objects to be represented by related symbols, diacritics, subscripts and superscripts are often used. For example, may denote the Fourier transform of the derivative of a function called Other symbols Symbols are not only used for naming mathematical objects. They can be used fo