https://en.wikipedia.org/wiki/Nullator
|
In electronics, a nullator is a theoretical linear, time-invariant one-port defined as having zero current and voltage across its terminals. Nullators are strange in the sense that they simultaneously have properties of both a short (zero voltage) and an open circuit (zero current). They are neither current nor voltage sources, yet both at the same time.
Inserting a nullator in a circuit schematic imposes a mathematical constraint on how that circuit must behave, forcing the circuit itself to adopt whatever arrangements are needed to meet the condition. For example, the inputs of an ideal operational amplifier (with negative feedback) behave like a nullator, as they draw no current and have no voltage across them, and these conditions are used to analyze the circuitry surrounding the operational amplifier.
A nullator is normally paired with a norator to form a nullor.
Two trivial cases are worth noting: a nullator in parallel with a norator is equivalent to a short circuit (zero voltage, any current), and a nullator in series with a norator is an open circuit (zero current, any voltage).
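The nullator conditions can be applied directly as equations when analyzing a circuit. A minimal Python sketch follows (using sympy; the inverting-amplifier topology and the names Vin, R1, R2 are illustrative assumptions, not taken from the text above):

```python
# Solve an inverting op-amp stage using only the nullator conditions at the input pair:
# zero voltage across the inputs and zero current into them.
import sympy as sp

Vin, Vout, Vminus, R1, R2 = sp.symbols('Vin Vout Vminus R1 R2')

# Condition 1: no voltage across the inputs (non-inverting input is grounded).
eq_zero_voltage = sp.Eq(Vminus, 0)

# Condition 2: no current into the inverting input, so the current through R1
# must equal the current through R2 (Kirchhoff's current law at that node).
eq_zero_current = sp.Eq((Vin - Vminus) / R1, (Vminus - Vout) / R2)

solution = sp.solve([eq_zero_voltage, eq_zero_current], [Vminus, Vout])
print(solution[Vout])   # -R2*Vin/R1, the familiar inverting-amplifier result
```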
|
https://en.wikipedia.org/wiki/Hexagonal%20Efficient%20Coordinate%20System
|
The Hexagonal Efficient Coordinate System (HECS), formerly known as Array Set Addressing (ASA), is a coordinate system for hexagonal grids that allows hexagonally sampled images to be efficiently stored and processed on digital systems. HECS represents the hexagonal grid as a set of two interleaved rectangular sub-arrays, which can be addressed by normal integer row and column coordinates and are distinguished with a single binary coordinate. Hexagonal sampling is the optimal approach for isotropically band-limited two-dimensional signals and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling. The HECS system enables the use of hexagonal sampling for digital imaging applications without requiring significant additional processing to address the hexagonal array.
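A minimal Python sketch of HECS-style addressing follows; the Cartesian mapping below is one consistent unit-pitch choice for two interleaved rectangular sub-arrays, assumed here for illustration rather than taken from the HECS literature:

```python
import math

def hecs_to_cartesian(a: int, r: int, c: int) -> tuple[float, float]:
    """Center of the hexagonal sample addressed by (a, r, c), with a in {0, 1}."""
    x = c + 0.5 * a                      # odd sub-array shifted half a pixel right
    y = math.sqrt(3) / 2 * (2 * r + a)   # the two sub-arrays interleave as alternating rows
    return x, y

# The six nearest neighbours of a sample are equidistant, e.g. around (0, 1, 1):
cx, cy = hecs_to_cartesian(0, 1, 1)
for nbr in [(0, 1, 0), (0, 1, 2), (1, 0, 0), (1, 0, 1), (1, 1, 0), (1, 1, 1)]:
    x, y = hecs_to_cartesian(*nbr)
    print(nbr, round(math.hypot(x - cx, y - cy), 3))   # all distances equal 1.0
```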
Introduction
The advantages of sampling on a hexagonal grid instead of the standard rectangular grid for digital imaging applications include: more efficient sampling, consistent connectivity, equidistant neighboring pixels, greater angular resolution, and higher circular symmetry. Sometimes, more than one of these advantages compound together, thereby increasing the efficiency by 50% in terms of computation and storage when compared to rectangular sampling. Researchers have shown that the hexagonal grid is the optimal sampling lattice and its use provides a sampling efficiency improvement of 13.4% over rectangular sampling for isotropically band-limited two-dimensional signals. Despite all of these advantages of hexagonal sampling over rectangular sampling, its application has been limited because of the lack of an efficient coordinate system. However, that limitation has been removed with the recent development of HECS.
Hexagonal Efficient Coordinate System
Description
The Hexagonal Efficient Coordinate System (HECS) is based on the idea of representing the hexagonal grid as a set of two rectangular arrays which can be individually indexed using familiar integer-value
|
https://en.wikipedia.org/wiki/Iodine%20in%20biology
|
Iodine is an essential trace element in biological systems. It has the distinction of being the heaviest element commonly needed by living organisms as well as the second-heaviest known to be used by any form of life (only tungsten, a component of a few bacterial enzymes, has a higher atomic number and atomic weight). It is a component of biochemical pathways in organisms from all biological kingdoms, suggesting its fundamental significance throughout the evolutionary history of life.
Iodine is critical to the proper functioning of the vertebrate endocrine system, and plays smaller roles in numerous other organs, including those of the digestive and reproductive systems. An adequate intake of iodine-containing compounds is important at all stages of development, especially during the fetal and neonatal periods, and diets deficient in iodine can present serious consequences for growth and metabolism.
Vertebrate functions
Thyroid
In vertebrate biology, iodine's primary function is as a constituent of the thyroid hormones, thyroxine (T4) and triiodothyronine (T3). These molecules are made from addition-condensation products of the amino acid tyrosine, and are stored prior to release in an iodine-containing protein called thyroglobulin. T4 and T3 contain four and three atoms of iodine per molecule, respectively; iodine accounts for 65% of the molecular weight of T4 and 59% of T3. The thyroid gland actively absorbs iodine from the blood to produce and release these hormones into the blood, actions which are regulated by a second hormone, called thyroid-stimulating hormone (TSH), which is produced by the pituitary gland. Thyroid hormones are phylogenetically very old molecules which are synthesized by most multicellular organisms, and which even have some effect on unicellular organisms.
Thyroid hormones play a fundamental role in biology, acting upon gene transcription mechanisms to regulate the basal metabolic rate. T3 acts on small intestine cells and adipocytes to
|
https://en.wikipedia.org/wiki/Long-slit%20spectroscopy
|
In astronomy, long-slit spectroscopy involves observing a celestial object using a spectrograph in which the entrance aperture is an elongated, narrow slit. Light entering the slit is dispersed using a prism, diffraction grating, or grism. The dispersed light is typically recorded on a charge-coupled device detector.
Velocity profiles
This technique can be used to observe the rotation curve of a galaxy, as those stars moving towards the observer are blue-shifted, while stars moving away are red-shifted.
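As a small illustration of how a shift measured along the slit maps to a line-of-sight velocity (the rest wavelength and the observed shifts below are assumed example values), a Python sketch using the non-relativistic Doppler formula v = c·(λ_obs − λ_rest)/λ_rest:

```python
C_KM_S = 299_792.458      # speed of light in km/s
LAMBDA_REST = 656.28      # H-alpha rest wavelength in nm (assumed emission line)

def los_velocity(lambda_obs_nm: float) -> float:
    """Positive result = redshifted (receding); negative = blueshifted (approaching)."""
    return C_KM_S * (lambda_obs_nm - LAMBDA_REST) / LAMBDA_REST

# One side of a rotating galaxy might show the line at 656.72 nm, the other at 655.84 nm:
print(round(los_velocity(656.72)))   # about +201 km/s, receding side
print(round(los_velocity(655.84)))   # about -201 km/s, approaching side
```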
Long-slit spectroscopy can also be used to observe the expansion of optically thin nebulae. When the spectrographic slit extends over the diameter of a nebula, the lines of the velocity profile meet at the edges. In the middle of the nebula, the line splits in two, since one component is redshifted and one is blueshifted. The blueshifted component will appear brighter, as it comes from the "near side" of the nebula and is therefore subject to a smaller degree of attenuation than the light coming from the far side of the nebula. The tapered edges of the velocity profile stem from the fact that the material at the edge of the nebula is moving perpendicular to the line of sight, so its line-of-sight velocity will be zero relative to the rest of the nebula.
Several effects can contribute to the transverse broadening of the velocity profile. Individual stars themselves rotate as they orbit, so the side approaching will be blueshifted and the side moving away will be redshifted. Stars also have random (as well as orbital) motion around the galaxy, meaning any individual star may depart significantly from the rest relative to its neighbours in the rotation curve. In spiral galaxies this random motion is small compared to the low-eccentricity orbital motion, but this is not true for an elliptical galaxy. Molecular-scale Doppler broadening will also contribute.
Advantages
Long-slit spectroscopy can ameliorate problems with contrast when observing structures near a very lu
|
https://en.wikipedia.org/wiki/Maxwell%E2%80%93Boltzmann%20statistics
|
In statistical mechanics, Maxwell–Boltzmann statistics describes the distribution of classical material particles over various energy states in thermal equilibrium. It is applicable when the temperature is high enough or the particle density is low enough to render quantum effects negligible.
The expected number of particles with energy ε_i for Maxwell–Boltzmann statistics is
⟨N_i⟩ = g_i / e^((ε_i − μ)/(kT)) = (N/Z) g_i e^(−ε_i/(kT)),
where:
ε_i is the energy of the i-th energy level,
⟨N_i⟩ is the average number of particles in the set of states with energy ε_i,
g_i is the degeneracy of energy level i, that is, the number of states with energy ε_i which may nevertheless be distinguished from each other by some other means,
μ is the chemical potential,
k is the Boltzmann constant,
T is absolute temperature,
N is the total number of particles: N = Σ_i ⟨N_i⟩,
Z is the partition function: Z = Σ_i g_i e^(−ε_i/(kT)),
e is Euler's number.
Equivalently, the number of particles is sometimes expressed as
⟨N_i⟩ = 1 / e^((ε_i − μ)/(kT)) = (N/Z) e^(−ε_i/(kT)),
where the index i now specifies a particular state rather than the set of all states with energy ε_i, and Z = Σ_i e^(−ε_i/(kT)).
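A minimal Python sketch of the formula above follows; the energy levels, degeneracies, particle number and temperature are assumed example values:

```python
import math

K_B = 1.380649e-23          # Boltzmann constant, J/K

def mb_occupations(energies_j, degeneracies, n_total, temperature_k):
    """Expected particle number <N_i> = (N/Z) * g_i * exp(-e_i/(k*T)) for each level."""
    boltzmann_factors = [g * math.exp(-e / (K_B * temperature_k))
                         for e, g in zip(energies_j, degeneracies)]
    z = sum(boltzmann_factors)          # partition function Z
    return [n_total * b / z for b in boltzmann_factors]

# Three illustrative levels spaced by 0.01 eV, occupied by 1000 particles at 300 K:
eV = 1.602176634e-19
levels = [0.0, 0.01 * eV, 0.02 * eV]
print(mb_occupations(levels, degeneracies=[1, 1, 1], n_total=1000, temperature_k=300))
```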
History
Maxwell–Boltzmann statistics grew out of the Maxwell–Boltzmann distribution, most likely as a distillation of the underlying technique. The distribution was first derived by Maxwell in 1860 on heuristic grounds. Boltzmann later, in the 1870s, carried out significant investigations into the physical origins of this distribution. The distribution can be derived on the ground that it maximizes the entropy of the system.
Applicability
Maxwell–Boltzmann statistics is used to derive the Maxwell–Boltzmann distribution of an ideal gas. However, it can also be used to extend that distribution to particles with a different energy–momentum relation, such as relativistic particles (resulting in Maxwell–Jüttner distribution), and to other than three-dimensional spaces.
Maxwell–Boltzmann statistics is often described as the statistics of "distinguishable" classical particles. In other words, the configuration of particle A in state 1 and particle B in state 2 is different from the case in
|
https://en.wikipedia.org/wiki/Automated%20species%20identification
|
Automated species identification is a method of making the expertise of taxonomists available to ecologists, parataxonomists and others via digital technology and artificial intelligence. Today, most automated identification systems rely on images depicting the species for the identification. Based on precisely identified images of a species, a classifier is trained. Once exposed to a sufficient amount of training data, this classifier can then identify the trained species on previously unseen images.
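As a rough sketch of this train-then-identify workflow (the data layout, the flattened pixel-vector features and the random-forest classifier below are illustrative assumptions, not a system described here):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Stand-in for labelled training images: 300 images of 32x32 grey pixels, 3 species.
images = rng.random((300, 32 * 32))
species = rng.integers(0, 3, size=300)        # labels assigned by taxonomists

X_train, X_test, y_train, y_test = train_test_split(images, species, random_state=0)

classifier = RandomForestClassifier(n_estimators=100, random_state=0)
classifier.fit(X_train, y_train)              # training on identified images
print(classifier.predict(X_test[:5]))         # identification of previously unseen images
```

Real systems typically replace the flattened pixel vectors with features learned by convolutional neural networks.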
Introduction
The automated identification of biological objects such as insects (individuals) and/or groups (e.g., species, guilds, characters) has been a dream among systematists for centuries. The goal of some of the first multivariate biometric methods was to address the perennial problem of group discrimination and inter-group characterization. Despite much preliminary work in the 1950s and '60s, progress in designing and implementing practical systems for fully automated biological object identification has proven frustratingly slow. As recently as 2004, Dan Janzen updated the dream for a new audience:
"The spaceship lands. He steps out. He points it around. It says 'friendly–unfriendly–edible–poisonous–safe–dangerous–living–inanimate'. On the next sweep it says 'Quercus oleoides–Homo sapiens–Spondias mombin–Solanum nigrum–Crotalus durissus–Morpho peleides–serpentine'. This has been in my head since reading science fiction in ninth grade half a century ago."
The species identification problem
Janzen's preferred solution to this classic problem involved building machines to identify species from their DNA. However, recent developments in computer architectures, as well as innovations in software design, have placed the tools needed to realize Janzen's vision in the hands of the systematics and computer science communities, not several years hence but now; and not just for creating DNA barcodes, but also for identification based on
|
https://en.wikipedia.org/wiki/Pathology
|
Pathology is the study of disease and injury. The word pathology also refers to the study of disease in general, incorporating a wide range of biology research fields and medical practices. However, when used in the context of modern medical treatment, the term is often used in a narrower fashion to refer to processes and tests that fall within the contemporary medical field of "general pathology", an area which includes a number of distinct but inter-related medical specialties that diagnose disease, mostly through analysis of tissue and human cell samples. Idiomatically, "a pathology" may also refer to the predicted or actual progression of particular diseases (as in the statement "the many different forms of cancer have diverse pathologies", in which case a more proper choice of word would be "pathophysiologies"), and the affix pathy is sometimes used to indicate a state of disease in cases of both physical ailment (as in cardiomyopathy) and psychological conditions (such as psychopathy). A physician practicing pathology is called a pathologist.
As a field of general inquiry and research, pathology addresses components of disease: cause, mechanisms of development (pathogenesis), structural alterations of cells (morphologic changes), and the consequences of changes (clinical manifestations). In common medical practice, general pathology is mostly concerned with analyzing known clinical abnormalities that are markers or precursors for both infectious and non-infectious disease, and is conducted by experts in one of two major specialties, anatomical pathology and clinical pathology. Further divisions in specialty exist on the basis of the involved sample types (comparing, for example, cytopathology, hematopathology, and histopathology), organs (as in renal pathology), and physiological systems (oral pathology), as well as on the basis of the focus of the examination (as with forensic pathology).
Pathology is a significant field in modern medical diagnosis and me
|
https://en.wikipedia.org/wiki/Hardware%20backdoor
|
Hardware backdoors are backdoors in hardware, such as code inside hardware or firmware of computer chips. The backdoors may be directly implemented as hardware Trojans in the integrated circuit.
Hardware backdoors are intended to undermine security in smartcards and other cryptoprocessors unless investment is made in anti-backdoor design methods. They have also been considered for car hacking.
Severity
Hardware backdoors are considered to be highly problematic for several reasons. For instance, they cannot be removed by conventional means such as antivirus software. They can also circumvent other types of security, such as disk encryption. Lastly, they can also be injected during production where the user has no control.
Examples
Around 2008 the FBI reported that 3,500 counterfeit Cisco network components were discovered in the US with some of them having found their way into military and government facilities.
In 2011 Jonathan Brossard demonstrated a proof-of-concept hardware backdoor called "Rakshasa" which can be installed by anyone with physical access to hardware. It uses coreboot to re-flash the BIOS with a SeaBIOS and iPXE benign bootkit built of legitimate, open-source tools and can fetch malware over the web at boot time.
In 2012, Sergei Skorobogatov (from the University of Cambridge computer laboratory) and Woods controversially stated that they had found a backdoor in a military-grade FPGA device which could be exploited to access or modify sensitive information. It has been said that this was later shown to be a software problem rather than a deliberate attempt at sabotage, but the episode still brought to light the need for equipment manufacturers to ensure that microchips operate as intended.
In 2012 two mobile phones developed by Chinese device manufacturer ZTE were found to carry a backdoor to instantly gain root access via a password that had been hard-coded into the software. This was confirmed by security researcher Dmitri Alperovitch.
U.S. sources have pointed the
|
https://en.wikipedia.org/wiki/Noise%20margin
|
In electrical engineering, noise margin is the maximum voltage amplitude of extraneous signal that can be algebraically added to the noise-free worst-case input level without causing the output voltage to deviate from the allowable logic voltage level. It is commonly used in at least two contexts as follows:
In communications system engineering, noise margin is the ratio by which the signal exceeds the minimum acceptable amount. It is normally measured in decibels.
In a digital circuit, the noise margin is the amount by which the signal exceeds the threshold for a proper '0' (logic low) or '1' (logic high). For example, a digital circuit might be designed to swing between 0.0 and 1.2 volts, with anything below 0.2 volts considered a '0', and anything above 1.0 volts considered a '1'. Then the noise margin for a '0' would be the amount that a signal is below 0.2 volts, and the noise margin for a '1' would be the amount by which a signal exceeds 1.0 volt. In this case noise margins are measured as an absolute voltage, not a ratio. Noise margins for CMOS chips are usually much greater than those for TTL because the VOH min is closer to the power supply voltage and VOL max is closer to zero.
Real digital inverters do not switch instantaneously from a logic high (1) to a logic low (0); there is some capacitance. While an inverter is transitioning from a logic high to low, there is an undefined region where the voltage cannot be considered high or low, and the noise margins quantify how far the valid levels sit from that region. There are two noise margins to consider: noise margin high (NMH) and noise margin low (NML). NMH is the margin between the lowest output voltage an inverter produces for a logic high (VOH) and the lowest input voltage a following stage recognizes as high (VIH); NML is the margin between the highest input voltage recognized as low (VIL) and the highest output voltage for a logic low (VOL). The equations are as follows: NMH ≡ VOH - VIH and NML ≡ VIL - VOL. Typically, in a CMOS inverter VOH will equal VDD and VOL will equal the ground potential, as mentioned above.
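A small Python sketch of these two equations, using the example voltage levels given above:

```python
def noise_margins(v_oh: float, v_ol: float, v_ih: float, v_il: float) -> tuple[float, float]:
    """Return (NMH, NML) in volts, i.e. VOH - VIH and VIL - VOL."""
    return v_oh - v_ih, v_il - v_ol

# Outputs swing 0.0-1.2 V; inputs above 1.0 V read as '1' and below 0.2 V as '0':
nmh, nml = noise_margins(v_oh=1.2, v_ol=0.0, v_ih=1.0, v_il=0.2)
print(round(nmh, 2), round(nml, 2))   # 0.2 V of margin for a '1' and 0.2 V for a '0'
```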
VIH is defined as the highest input voltage at which the slope of the voltage transfer
|
https://en.wikipedia.org/wiki/Site%20reliability%20engineering
|
Site reliability engineering (SRE) is a set of principles and practices that applies aspects of software engineering to IT infrastructure and operations. SRE claims to create highly reliable and scalable software systems. Although they are closely related, SRE is slightly different from DevOps.
History
The field of site reliability engineering originated at Google with Ben Treynor Sloss, who founded a site reliability team after joining the company in 2003. In 2016, Google employed more than 1,000 site reliability engineers. After originating at Google in 2003, the concept spread into the broader software development industry, and other companies subsequently began to employ site reliability engineers. The position is more common at larger web companies, as small companies often do not operate at a scale that would require dedicated SREs. Organizations that have adopted the concept include Airbnb, Dropbox, IBM, LinkedIn, Netflix, and Wikimedia. According to a 2021 report by the DevOps Institute, 22% of organizations in a survey of 2,000 respondents had adopted the SRE model.
Definition
Site reliability engineering, as a job role, may be performed by individual contributors or organized in teams, responsible for a combination of the following within a broader engineering organization: System availability, latency, performance, efficiency, change management, monitoring, emergency response, and capacity planning. Site reliability engineers often have backgrounds in software engineering, system engineering, or system administration. Focuses of SRE include automation, system design, and improvements to system resilience.
Site reliability engineering, as a set of principles and practices, can be performed by anyone. SRE is similar to security engineering in that everyone is expected to contribute to good security practices, but a company may decide to eventually hire staff specialists for the job. Conversely, for securing internet systems, companies may hire securit
|
https://en.wikipedia.org/wiki/Certified%20software%20development%20professional
|
Certified Software Development Professional (CSDP) is a vendor-neutral professional certification in software engineering developed by the IEEE Computer Society for experienced software engineering professionals. The certification was offered globally from 2001 through December 2014.
The certification program constituted an element of the Computer Society's major efforts in the area of Software engineering professionalism, along with the IEEE-CS and ACM Software Engineering 2004 (SE2004) Undergraduate Curricula Recommendations, and The Guide to the Software Engineering Body of Knowledge (SWEBOK Guide 2004), completed two years later.
As a further development of these elements, and to facilitate the global portability of software engineering certification, the International Standard ISO/IEC 24773:2008, "Software engineering -- Certification of software engineering professionals -- Comparison framework", was developed from 2005 through 2008.
(See an overview of this ISO/IEC JTC 1 and IEEE standardization effort in the article published by Stephen B. Seidman, CSDP.)
The standard was formulated in such a way that the CSDP certification scheme could be recognized as basically aligned with it soon after the standard's release date, 2008-09-01. Several later revisions of the CSDP certification were undertaken with the aim of making the alignment more complete. In 2019, ISO/IEC 24773:2008 was withdrawn and revised (by ISO/IEC 24773-1:2019).
The certification was initially offered by the IEEE Computer Society to experienced software engineering and software development practitioners globally in 2001, in the course of beta-testing of the certification examination. The CSDP certification program was officially approved in 2002.
After December 2014 the certification program was discontinued; all issued certificates are recognized as remaining valid indefinitely.
A number of new similar certifications were introduced by the IEEE Computer Society, includi
|
https://en.wikipedia.org/wiki/Triviality%20%28mathematics%29
|
In mathematics, the adjective trivial is often used to refer to a claim or a case which can be readily obtained from context, or to an object which possesses a simple structure (e.g., groups, topological spaces). The noun triviality usually refers to a simple technical aspect of some proof or definition. The origin of the term in mathematical language comes from the medieval trivium curriculum, which was distinguished from the more difficult quadrivium curriculum. The opposite of trivial is nontrivial, which is commonly used to indicate that an example or a solution is not simple, or that a statement or a theorem is not easy to prove.
Whether a situation under consideration is judged trivial depends on who considers it: it may be obvious to someone with sufficient knowledge or experience of it, yet hard even to understand for someone who has never encountered it. There can also be disagreement about how quickly and easily a problem should be recognized for it to be treated as trivial. Triviality is therefore not a universally agreed property in mathematics and logic.
Trivial and nontrivial solutions
In mathematics, the term "trivial" is often used to refer to objects (e.g., groups, topological spaces) with a very simple structure. These include, among others:
Empty set: the set containing no members
Trivial group: the mathematical group containing only the identity element
Trivial ring: a ring defined on a singleton set
"Trivial" can also be used to describe solutions to an equation that have a very simple structure, but for the sake of completeness cannot be omitted. These solutions are called the trivial solutions. For example, consider the differential equation
y′ = y,
where y = f(x) is a function whose derivative is y′. The trivial solution is the zero function
y(x) = 0,
while a nontrivial solution is the exponential function
y(x) = e^x.
The differential equation with boundary conditions is important in mathematics and
|
https://en.wikipedia.org/wiki/List%20of%20the%20verified%20shortest%20people
|
This list includes the shortest ever verified people in their lifetime or profession. The entries below are broken down into different categories which range from sex, to age group and occupations. Most of the sourcing is done by Guinness World Records which in the last decade has added new categories for "mobile" and "non-mobile" men and women. The world's shortest verified man is Chandra Bahadur Dangi, while for women Pauline Musters holds the record.
Men
Women
Shortest pairs
Shortest by age group
This was Nisa's baby height; she later grew.
This was Francis Joseph Flynn's shortest height, because he grew in height after age 16; he is not listed as one of the world's shortest men.
Filed under "Shortest woman to give birth".
Shortest by occupation
Actors
Artists and writers
Athletes
Politicians
Others
See also
Dwarfism
Pygmy peoples
Caroline Crachami, a person about tall
Little people (mythology)
List of dwarfism organisations
Dwarfs and pygmies in ancient Egypt
List of tallest people
|
https://en.wikipedia.org/wiki/Psammon
|
Psammon (from Greek "psammos", "sand") is a group of organisms inhabiting moist coastal sand, the biota buried in sandy sediments. Psammon is a part of the water fauna, along with periphyton, plankton, nekton, and benthos. Psammon is also sometimes considered a part of benthos due to its near-bottom distribution. The term psammon is most commonly used in reference to freshwater bodies such as lakes.
|
https://en.wikipedia.org/wiki/Autocorrelation%20technique
|
The autocorrelation technique is a method for estimating the dominating frequency in a complex signal, as well as its variance. Specifically, it calculates the first two moments of the power spectrum, namely the mean and variance. It is also known as the pulse-pair algorithm in radar theory.
The algorithm is both computationally faster and significantly more accurate compared to the Fourier transform, since the resolution is not limited by the number of samples used.
Derivation
The autocorrelation of lag 1 can be expressed using the inverse Fourier transform of the power spectrum S(ω):
R(1) = (1/(2π)) ∫ S(ω) e^(iω) dω, with the integral taken over −π < ω ≤ π.
If we model the power spectrum as a single frequency, S(ω) = δ(ω − ω₀), this becomes:
R(1) = e^(iω₀),
where it is apparent that the phase of R(1) equals the signal frequency.
Implementation
The mean frequency is calculated based on the autocorrelation with lag one, evaluated over a signal consisting of N samples x(n):
ω̂ = arg{R̂(1)},   where   R̂(1) = (1/(N−1)) Σ_{n=1}^{N−1} x(n) x*(n−1).
The spectral variance is calculated as follows:
σ̂² ≈ 2 (1 − |R̂(1)|/R̂(0)),   where   R̂(0) = (1/N) Σ_{n=0}^{N−1} |x(n)|².
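A minimal Python sketch of this pulse-pair estimator applied to a synthetic complex signal (the frequency, sample rate and noise level are assumed for illustration):

```python
import numpy as np

def pulse_pair(x: np.ndarray, sample_rate: float) -> tuple[float, float]:
    """Return (mean frequency in Hz, normalized spectral-width estimate)."""
    r0 = np.mean(np.abs(x) ** 2)                  # lag-0 autocorrelation (power)
    r1 = np.mean(x[1:] * np.conj(x[:-1]))         # lag-1 autocorrelation
    mean_freq = np.angle(r1) * sample_rate / (2 * np.pi)
    width = 2.0 * (1.0 - np.abs(r1) / r0)         # narrow-band width approximation
    return mean_freq, width

fs = 1000.0
t = np.arange(2048) / fs
x = np.exp(2j * np.pi * 123.4 * t) + 0.1 * (np.random.randn(t.size) + 1j * np.random.randn(t.size))
print(pulse_pair(x, fs))    # mean frequency estimate close to 123.4 Hz
```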
Applications
Estimation of blood velocity and turbulence in color flow imaging used in medical ultrasonography.
Estimation of target velocity in pulse-doppler radar
External links
A covariance approach to spectral moment estimation, Miller et al., IEEE Transactions on Information Theory.
Doppler Radar Meteorological Observations Doppler Radar Theory. Autocorrelation technique described on p.2-11
Real-Time Two-Dimensional Blood Flow Imaging Using an Autocorrelation Technique, by Chihiro Kasai, Koroku Namekawa, Akira Koyano, and Ryozo Omoto, IEEE Transactions on Sonics and Ultrasonics, Vol. SU-32, No.3, May 1985.
|
https://en.wikipedia.org/wiki/Gyrator%E2%80%93capacitor%20model
|
The gyrator–capacitor model, sometimes also called the capacitor–permeance model, is a lumped-element model for magnetic circuits that can be used in place of the more common resistance–reluctance model. The model makes permeance elements analogous to electrical capacitance (see magnetic capacitance section) rather than electrical resistance (see magnetic reluctance). Windings are represented as gyrators, interfacing between the electrical circuit and the magnetic model.
The primary advantage of the gyrator–capacitor model compared to the magnetic reluctance model is that the model preserves the correct values of energy flow, storage and dissipation. The gyrator–capacitor model is an example of a group of analogies that preserve energy flow across energy domains by making power conjugate pairs of variables in the various domains analogous. It fills the same role as the impedance analogy for the mechanical domain.
Nomenclature
Magnetic circuit may refer to either the physical magnetic circuit or the model magnetic circuit. Elements and dynamical variables that are part of the model magnetic circuit have names that start with the adjective magnetic, although this convention is not strictly followed. Elements or dynamical variables in the model magnetic circuit may not have a one to one correspondence with components in the physical magnetic circuit. Symbols for elements and variables that are part of the model magnetic circuit may be written with a subscript of M. For example, C_M would be a magnetic capacitor in the model circuit.
Electrical elements in an associated electrical circuit may be brought into the magnetic model for ease of analysis. Model elements in the magnetic circuit that represent electrical elements are typically the electrical dual of the electrical elements. This is because transducers between the electrical and magnetic domains in this model are usually represented by gyrators. A gyrator will transform an element into its dual. For example, a magn
|
https://en.wikipedia.org/wiki/Outline%20of%20discrete%20mathematics
|
Discrete mathematics is the study of mathematical structures that are fundamentally discrete rather than continuous. In contrast to real numbers that have the property of varying "smoothly", the objects studied in discrete mathematics – such as integers, graphs, and statements in logic – do not vary smoothly in this way, but have distinct, separated values. Discrete mathematics, therefore, excludes topics in "continuous mathematics" such as calculus and analysis.
Included below are many of the standard terms used routinely in university-level courses and in research papers. This is not, however, intended as a complete list of mathematical terms; just a selection of typical terms of art that may be encountered.
Subjects in discrete mathematics
Logic – a study of reasoning
Modal logic – a type of logic for the study of necessity and possibility
Set theory – a study of collections of elements
Number theory – study of integers and integer-valued functions
Combinatorics – a study of counting
Finite mathematics – a course title
Graph theory – a study of graphs
Digital geometry and digital topology
Algorithmics – a study of methods of calculation
Information theory – a mathematical representation of the conditions and parameters affecting the transmission and processing of information
Computability and complexity theories – deal with theoretical and practical limitations of algorithms
Elementary probability theory and Markov chains
Linear algebra – a study of related linear equations
Functions – an expression, rule, or law that defines a relationship between one variable (the independent variable) and another variable (the dependent variable)
Partially ordered set – a set equipped with a partial order, under which some pairs of elements are comparable and others are not
Probability – concerns with numerical descriptions of the chances of occurrence of an event
Proofs – logical arguments that establish the truth of mathematical statements
Relation – a collection of ordered pairs containing one object from each set
Discrete mathematical disciplines
For further reading in discrete mathematics, beyond a basic level, see thes
|
https://en.wikipedia.org/wiki/Synchronous%20virtual%20pipe
|
When realizing pipeline forwarding, a predefined schedule for forwarding a pre-allocated amount of bytes during one or more time frames (TFs) along a path of subsequent switches establishes a synchronous virtual pipe (SVP). The SVP capacity is determined by the total number of bits allocated in every time cycle for the SVP. For example, for a 10 ms time cycle, if 20,000 bits are allocated during each of 2 time frames, the SVP capacity is 4 Mbit/s.
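The capacity calculation can be written as a small Python helper (the numbers are the example values from the text):

```python
def svp_capacity_bps(bits_per_time_frame: int, time_frames_per_cycle: int,
                     cycle_seconds: float) -> float:
    """Capacity in bit/s of an SVP with a fixed per-cycle allocation."""
    return bits_per_time_frame * time_frames_per_cycle / cycle_seconds

# 20,000 bits in each of 2 time frames of a 10 ms time cycle:
print(svp_capacity_bps(20_000, 2, 0.010))   # 4,000,000 bit/s = 4 Mbit/s
```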
Pipeline forwarding guarantees that reserved traffic, i.e., traveling on an SVP, experiences:
bounded end-to-end delay,
delay jitter lower than two TFs, and
no congestion and resulting losses.
Two implementations of pipeline forwarding have been proposed: time-driven switching (TDS) and time-driven priority (TDP); both can be used to create a pipeline-forwarding parallel network in the future Internet.
|
https://en.wikipedia.org/wiki/List%20of%20chaotic%20maps
|
In mathematics, a chaotic map is a map (namely, an evolution function) that exhibits some sort of chaotic behavior. Maps may be parameterized by a discrete-time or a continuous-time parameter. Discrete maps usually take the form of iterated functions. Chaotic maps often occur in the study of dynamical systems.
Chaotic maps often generate fractals. Although a fractal may be constructed by an iterative procedure, some fractals are studied in and of themselves, as sets rather than in terms of the map that generates them. This is often because there are several different iterative procedures to generate the same fractal.
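A minimal Python sketch of one such iterated function, the logistic map (a standard textbook example, not specific to this list):

```python
def logistic_orbit(r: float, x0: float, steps: int) -> list[float]:
    """Return the first `steps` iterates of x_{n+1} = r * x_n * (1 - x_n)."""
    orbit, x = [x0], x0
    for _ in range(steps):
        x = r * x * (1 - x)
        orbit.append(x)
    return orbit

# Two nearby starting points diverge quickly at r = 4 (sensitive dependence on initial conditions):
print(logistic_orbit(4.0, 0.2, 10))
print(logistic_orbit(4.0, 0.2000001, 10))
```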
List of chaotic maps
List of fractals
Cantor set
de Rham curve
Gravity set, or Mitchell-Green gravity set
Julia set - derived from complex quadratic map
Koch snowflake - special case of de Rham curve
Lyapunov fractal
Mandelbrot set - derived from complex quadratic map
Menger sponge
Newton fractal
Nova fractal - derived from Newton fractal
Quaternionic fractal - three dimensional complex quadratic map
Sierpinski carpet
Sierpinski triangle
|
https://en.wikipedia.org/wiki/Von%20Neumann%20architecture
|
The von Neumann architecture—also known as the von Neumann model or Princeton architecture—is a computer architecture based on a 1945 description by John von Neumann, and by others, in the First Draft of a Report on the EDVAC. The document describes a design architecture for an electronic digital computer with these components:
A processing unit with both an arithmetic logic unit and processor registers
A control unit that includes an instruction register and a program counter
Memory that stores data and instructions
External mass storage
Input and output mechanisms
The term "von Neumann architecture" has evolved to refer to any stored-program computer in which an instruction fetch and a data operation cannot occur at the same time (since they share a common bus). This is referred to as the von Neumann bottleneck, which often limits the performance of the corresponding system.
The design of a von Neumann architecture machine is simpler than in a Harvard architecture machine—which is also a stored-program system, yet has one dedicated set of address and data buses for reading and writing to memory, and another set of address and data buses to fetch instructions.
A stored-program computer uses the same underlying mechanism to encode both program instructions and data as opposed to designs which use a mechanism such as discrete plugboard wiring or fixed control circuitry for instruction implementation. Stored-program computers were an advancement over the manually reconfigured or fixed function computers of the 1940s, such as the Colossus and the ENIAC. These were programmed by setting switches and inserting patch cables to route data and control signals between various functional units.
The vast majority of modern computers use the same hardware mechanism to encode and store both data and program instructions, but have caches between the CPU and memory, and, for the caches closest to the CPU, have separate caches for instructions and data, so that most instru
|
https://en.wikipedia.org/wiki/Chirp
|
A chirp is a signal in which the frequency increases (up-chirp) or decreases (down-chirp) with time. In some sources, the term chirp is used interchangeably with sweep signal. It is commonly applied to sonar, radar, and laser systems, and to other applications, such as in spread-spectrum communications (see chirp spread spectrum). This signal type is biologically inspired and occurs as a phenomenon due to dispersion (a non-linear dependence between frequency and the propagation speed of the wave components). It is usually compensated for by using a matched filter, which can be part of the propagation channel. Depending on the specific performance measure, however, there are better techniques both for radar and communication. Since it was used in radar and space, it has been adopted also for communication standards. For automotive radar applications, it is usually called linear frequency modulated waveform (LFMW).
In spread-spectrum usage, surface acoustic wave (SAW) devices are often used to generate and demodulate the chirped signals. In optics, ultrashort laser pulses also exhibit chirp, which, in optical transmission systems, interacts with the dispersion properties of the materials, increasing or decreasing total pulse dispersion as the signal propagates. The name is a reference to the chirping sound made by birds; see bird vocalization.
Definitions
The basic definitions here translate as the common physics quantities location (phase), speed (angular velocity), acceleration (chirpyness).
If a waveform is defined as
x(t) = sin(φ(t)),
then the instantaneous angular frequency, ω, is defined as the phase rate as given by the first derivative of phase:
ω(t) = dφ(t)/dt,
with the instantaneous ordinary frequency, f, being its normalized version:
f(t) = ω(t)/(2π).
Finally, the instantaneous angular chirpyness, γ, is defined to be the second derivative of instantaneous phase or the first derivative of instantaneous angular frequency:
γ(t) = d²φ(t)/dt² = dω(t)/dt.
Angular chirpyness has units of radians per second squared (rad/s²); thus, i
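A minimal Python sketch of a linear up-chirp built directly from its instantaneous phase (the frequencies, duration and sample rate are assumed example values):

```python
import numpy as np

def linear_chirp(f0: float, f1: float, duration: float, sample_rate: float) -> np.ndarray:
    """Sweep from f0 to f1 over `duration` seconds; phase is 2*pi*(f0*t + k*t**2/2)."""
    t = np.arange(int(duration * sample_rate)) / sample_rate
    k = (f1 - f0) / duration                        # chirp rate in Hz per second
    phase = 2 * np.pi * (f0 * t + 0.5 * k * t**2)   # instantaneous phase
    return np.sin(phase)

x = linear_chirp(f0=100.0, f1=400.0, duration=1.0, sample_rate=8000.0)
print(x.shape)   # (8000,) samples of an up-chirp from 100 Hz to 400 Hz
```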
|
https://en.wikipedia.org/wiki/Process
|
A process is a series or set of activities that interact to produce a result; it may occur once-only or be recurrent or periodic.
Things called a process include:
Business and management
Business process, activities that produce a specific service or product for customers
Business process modeling, activity of representing processes of an enterprise in order to deliver improvements
Manufacturing process management, a collection of technologies and methods used to define how products are to be manufactured.
Process architecture, structural design of processes, applies to fields such as computers, business processes, logistics, project management
Process area, related processes within an area which together satisfy an important goal for improvements within that area
Process costing, a cost allocation procedure of managerial accounting
Process management (project management), a systematic series of activities directed towards planning, monitoring the performance and causing an end result in engineering activities, business process, manufacturing processes or project management
Process-based management, a management approach that views a business as a collection of processes
Law
Due process, the concept that governments must respect the rule of law
Legal process, the proceedings and records of a legal case
Service of process, the procedure of giving official notice of a legal proceeding
Science and technology
The general concept of the scientific process, see scientific method
Process theory, the scientific study of processes
Industrial processes, consists of the purposeful sequencing of tasks that combine resources to produce a desired output
Biology and psychology
Process (anatomy), a projection or outgrowth of tissue from a larger body
Biological process, a process of a living organism
Cognitive process, such as attention, memory, language use, reasoning, and problem solving
Mental process, a function or processes of the mind
Neuronal process, also neurite
|
https://en.wikipedia.org/wiki/Folk%20biology
|
Folk biology (or folkbiology) is the cognitive study of how people classify and reason about the organic world. Humans everywhere classify animals and plants into obvious species-like groups. The relationship between a folk taxonomy and a scientific classification can assist in understanding how evolutionary theory deals with the apparent constancy of "common species" and the organic processes centering on them. From the vantage of evolutionary psychology, such natural systems are arguably routine "habits of mind", a sort of heuristic used to make sense of the natural world.
|
https://en.wikipedia.org/wiki/Vincent%27s%20theorem
|
In mathematics, Vincent's theorem—named after Alexandre Joseph Hidulphe Vincent—is a theorem that isolates the real roots of polynomials with rational coefficients.
Even though Vincent's theorem is the basis of the fastest method for the isolation of the real roots of polynomials, it was almost totally forgotten, having been overshadowed by Sturm's theorem; consequently, it does not appear in any of the classical books on the theory of equations (of the 20th century), except for Uspensky's book. Two variants of this theorem are presented, along with several (continued fractions and bisection) real root isolation methods derived from them.
Sign variation
Let c_0, c_1, c_2, … be a finite or infinite sequence of real numbers. Suppose l < r and the following conditions hold:
If r = l + 1, the numbers c_l and c_r have opposite signs.
If r ≥ l + 2, the numbers c_{l+1}, …, c_{r−1} are all zero and the numbers c_l and c_r have opposite signs.
This is called a sign variation or sign change between the numbers c_l and c_r.
When dealing with the polynomial p(x) in one variable, one defines the number of sign variations of p(x) as the number of sign variations in the sequence of its coefficients.
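A short Python sketch of this count (the example polynomial is illustrative):

```python
def sign_variations(coefficients: list[float]) -> int:
    """Count sign changes in a coefficient sequence, skipping zero coefficients."""
    nonzero = [c for c in coefficients if c != 0]
    return sum(1 for a, b in zip(nonzero, nonzero[1:]) if a * b < 0)

# p(x) = x^3 - 3x^2 + 1 has coefficients [1, -3, 0, 1] and two sign variations:
print(sign_variations([1, -3, 0, 1]))   # 2
```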
Two versions of this theorem are presented: the continued fractions version due to Vincent, and the bisection version due to Alesina and Galuzzi.
Vincent's theorem: Continued fractions version (1834 and 1836)
If in a polynomial equation with rational coefficients and without multiple roots, one makes successive transformations of the form
x = a_1 + 1/y,   y = a_2 + 1/z,   z = a_3 + 1/w, …
where a_1, a_2, a_3, … are any positive numbers greater than or equal to one, then after a number of such transformations, the resulting transformed equation either has zero sign variations or it has a single sign variation. In the first case there is no root, whereas in the second case there is a single positive real root. Furthermore, the corresponding root of the proposed equation is approximated by the finite continued fraction
a_1 + 1/(a_2 + 1/(a_3 + ⋯)).
Moreover, if infinitely many numb
|
https://en.wikipedia.org/wiki/Heterogeneous%20network
|
In computer networking, a heterogeneous network is a network connecting computers and other devices where the operating systems and protocols have significant differences. For example, local area networks (LANs) that connect Microsoft Windows and Linux based personal computers with Apple Macintosh computers are heterogeneous.
Heterogeneous network also describes wireless networks using different access technologies. For example, a wireless network that provides a service through a wireless LAN and is able to maintain the service when switching to a cellular network is called a wireless heterogeneous network.
HetNet
Reference to a HetNet often indicates the use of multiple types of access nodes in a wireless network. A Wide Area Network can use some combination of macrocells, picocells, and femtocells in order to offer wireless coverage in an environment with a wide variety of wireless coverage zones, ranging from an open outdoor environment to office buildings, homes, and underground areas. Mobile experts define a HetNet as a network with complex interoperation between macrocell, small cell, and in some cases WiFi network elements used together to provide a mosaic of coverage, with handoff capability between network elements. A study from ARCchart estimates that HetNets will help drive the mobile infrastructure market to account for nearly US$57 billion in spending globally by 2017. Small Cell Forum defines the HetNet as ‘multi-x environment – multi-technology, multi-domain, multi-spectrum, multi-operator and multi-vendor. It must be able to automate the reconfiguration of its operation to deliver assured service quality across the entire network, and flexible enough to accommodate changing user needs, business goals and subscriber behaviours.’
HetNet architecture
From an architectural perspective, the HetNet can be viewed as encompassing conventional macro radio access network (RAN) functions, RAN transport capability, small cells, and Wi-Fi functionality,
|
https://en.wikipedia.org/wiki/Hohlraum
|
In radiation thermodynamics, a hohlraum (a non-specific German word for a "hollow space" or "cavity") is a cavity whose walls are in radiative equilibrium with the radiant energy within the cavity. This idealized cavity can be approximated in practice by making a small perforation in the wall of a hollow container of any opaque material. The radiation escaping through such a perforation will be a good approximation to black-body radiation at the temperature of the interior of the container.
Inertial confinement fusion
The indirect drive approach to inertial confinement fusion is as follows: the fusion fuel capsule is held inside a cylindrical hohlraum. The hohlraum body is manufactured using a high-Z (high atomic number) element, usually gold or uranium.
Inside the hohlraum is a fuel capsule containing deuterium and tritium (D-T) fuel. A frozen layer of D-T ice adheres inside the fuel capsule.
The fuel capsule wall is synthesized using light elements such as plastic, beryllium, or high density carbon, i.e. diamond. The outer portion of the fuel capsule explodes outward when ablated by the x-rays produced by the hohlraum wall upon irradiation by lasers. Due to Newton's third law, the inner portion of the fuel capsule implodes, causing the D-T fuel to be supercompressed, activating a fusion reaction.
The radiation source (e.g., laser) is pointed at the interior of the hohlraum rather than at the fuel capsule itself. The hohlraum absorbs and re-radiates the energy as X-rays, a process known as indirect drive. The advantage to this approach, compared to direct drive, is that high mode structures from the laser spot are smoothed out when the energy is re-radiated from the hohlraum walls. The disadvantage to this approach is that low mode asymmetries are harder to control. It is important to be able to control both high mode and low mode asymmetries to achieve a uniform implosion.
The hohlraum walls must have surface roughness less than 1 micron, and hence accurate
|
https://en.wikipedia.org/wiki/MRB%20constant
|
The MRB constant is a mathematical constant, with decimal expansion 0.187859… . The constant is named after its discoverer, Marvin Ray Burns, who published his discovery of the constant in 1999. Burns had initially called the constant "rc" for root constant but, at Simon Plouffe's suggestion, the constant was renamed the 'Marvin Ray Burns's Constant', or "MRB constant".
The MRB constant is defined as the upper limit of the partial sums
s_n = Σ_{k=1}^{n} (−1)^k k^(1/k).
As n grows to infinity, the sums have upper and lower limit points of 0.187859… and −0.812140…, separated by an interval of length 1. The constant can also be explicitly defined by the following infinite sums:
C_MRB = Σ_{k=1}^{∞} (−1)^k (k^(1/k) − 1) = Σ_{k=1}^{∞} ((2k)^(1/(2k)) − (2k−1)^(1/(2k−1))).
The constant relates to the divergent series Σ_{k=1}^{∞} (−1)^k k^(1/k).
There is no known closed-form expression of the MRB constant, nor is it known whether the MRB constant is algebraic, transcendental or even irrational.
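A minimal Python sketch that approximates the constant by the even partial sums described above (convergence is slow; the term count is an arbitrary choice):

```python
def mrb_partial(n_pairs: int) -> float:
    """Even partial sum of sum_k (-1)^k * k**(1/k), grouped into n_pairs consecutive pairs."""
    return sum((2 * k) ** (1 / (2 * k)) - (2 * k - 1) ** (1 / (2 * k - 1))
               for k in range(1, n_pairs + 1))

print(mrb_partial(10_000))   # about 0.1881, decreasing slowly toward 0.187859...
```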
|
https://en.wikipedia.org/wiki/Square%20root%20of%205
|
The square root of 5 is the positive real number that, when multiplied by itself, gives the prime number 5. It is more precisely called the principal square root of 5, to distinguish it from the negative number with the same property. This number appears in the fractional expression for the golden ratio. It can be denoted in surd form as √5.
It is an irrational algebraic number. Its decimal expansion begins
2.2360679774997896…,
which can be rounded down to 2.236 to within 99.99% accuracy. The approximation 161/72 (≈ 2.23611) for the square root of five can be used. Despite having a denominator of only 72, it differs from the correct value by less than 1/10,000 (approx. 4.3 × 10⁻⁵). As of January 2022, its numerical value in decimal has been computed to at least 2,250,000,000,000 digits.
Rational approximations
The square root of 5 can be expressed as the continued fraction
[2; 4, 4, 4, 4, …] = 2 + 1/(4 + 1/(4 + 1/(4 + ⋯))).
The successive partial evaluations of the continued fraction, which are called its convergents, approach √5:
2/1, 9/4, 38/17, 161/72, 682/305, …
Their numerators are 2, 9, 38, 161, … , and their denominators are 1, 4, 17, 72, … .
Each of these is a best rational approximation of √5; in other words, it is closer to √5 than any rational number with a smaller denominator.
The convergents, expressed as x/y, satisfy alternately the Pell's equations
x² − 5y² = −1   and   x² − 5y² = 1.
When √5 is approximated with the Babylonian method, starting with x₀ = 2 and using x_{n+1} = (x_n + 5/x_n)/2, the nth approximant x_n is equal to the 2ⁿth convergent of the continued fraction:
x₀ = 2, x₁ = 9/4 = 2.25, x₂ = 161/72 ≈ 2.23611, x₃ = 51841/23184 ≈ 2.2360679779, …
The Babylonian method is equivalent to Newton's method for root finding applied to the polynomial x² − 5. The Newton's method update, x_{n+1} = x_n − f(x_n)/f′(x_n), is equal to (x_n + 5/x_n)/2 when f(x) = x² − 5. The method therefore converges quadratically.
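A minimal Python sketch of this iteration using exact fractions, so the convergents 9/4, 161/72, 51841/23184 appear directly:

```python
from fractions import Fraction

def babylonian_sqrt5(steps: int) -> Fraction:
    """Iterate x -> (x + 5/x)/2 starting from x = 2."""
    x = Fraction(2)
    for _ in range(steps):
        x = (x + Fraction(5) / x) / 2
    return x

for n in range(4):
    approx = babylonian_sqrt5(n)
    print(n, approx, float(approx))   # 2, 9/4, 161/72, 51841/23184, ...
```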
Relation to the golden ratio and Fibonacci numbers
The golden ratio φ is the arithmetic mean of 1 and √5. The algebraic relationship between √5, the golden ratio φ and the conjugate of the golden ratio (Φ = −1/φ = 1 − φ) is expressed in the following formulae:
√5 = φ − Φ = 2φ − 1 = 1 − 2Φ,
φ = (1 + √5)/2,   Φ = (1 − √5)/2.
(See the section below for their geometrical interpretation as decompositions of a rectangle.)
then naturall
|
https://en.wikipedia.org/wiki/Orthogonal%20signal%20correction
|
Orthogonal Signal Correction (OSC) is a spectral preprocessing technique that removes variation from a data matrix X that is orthogonal to the response matrix Y. OSC was introduced by researchers at the University of Umeå in 1998 and has since found applications in domains including metabolomics.
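A heavily simplified Python sketch of the underlying idea, removing a single Y-orthogonal component from X; this is an illustration under stated assumptions (one component, no iteration), not the published OSC algorithm:

```python
import numpy as np

def osc_one_component(X: np.ndarray, Y: np.ndarray) -> np.ndarray:
    """Return mean-centred X with one Y-orthogonal component removed. X: (n, p), Y: (n, q)."""
    Xc = X - X.mean(axis=0)
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    t = Xc @ vt[0]                                         # score of the first principal component
    t_orth = t - Y @ np.linalg.lstsq(Y, t, rcond=None)[0]  # part of the score uncorrelated with Y
    p = Xc.T @ t_orth / (t_orth @ t_orth)                  # corresponding loading
    return Xc - np.outer(t_orth, p)                        # deflate X by the orthogonal component

# Toy usage with random data standing in for spectra (X) and responses (Y):
rng = np.random.default_rng(0)
X, Y = rng.random((20, 50)), rng.random((20, 1))
print(osc_one_component(X, Y).shape)   # (20, 50): same shape, Y-orthogonal variation removed
```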
|
https://en.wikipedia.org/wiki/Transdifferentiation
|
Transdifferentiation, also known as lineage reprogramming, is the process in which one mature somatic cell is transformed into another mature somatic cell without undergoing an intermediate pluripotent state or progenitor cell type. It is a type of metaplasia, which includes all cell fate switches, including the interconversion of stem cells. Current uses of transdifferentiation include disease modeling and drug discovery and in the future may include gene therapy and regenerative medicine. The term 'transdifferentiation' was originally coined by Selman and Kafatos in 1974 to describe a change in cell properties as cuticle producing cells became salt-secreting cells in silk moths undergoing metamorphosis.
Discovery
Davis et al. 1987 reported the first instance of transdifferentiation, in which a cell changed from one adult cell type to another. Forcing mouse embryonic fibroblasts to express MyoD was found to be sufficient to turn those cells into myoblasts.
Natural examples
The only known instances in which adult cells change directly from one lineage to another occur in the species Turritopsis dohrnii (also known as the immortal jellyfish) and Turritopsis nutricula.
In newts, when the eye lens is removed, pigmented epithelial cells de-differentiate and then redifferentiate into the lens cells. Vincenzo Colucci described this phenomenon in 1891 and Gustav Wolff described the same thing in 1894; the priority issue is examined in Holland (2021).
In humans and mice, it has been demonstrated that alpha cells in the pancreas can spontaneously switch fate and transdifferentiate into beta cells. This has been demonstrated for both healthy and diabetic human and mouse pancreatic islets. While it was previously believed that oesophageal cells were developed from the transdifferentiation of smooth muscle cells, that has been shown to be false.
Induced and therapeutic examples
The first example of functional transdifferentiation has been provided by Ferber et al. by i
|
https://en.wikipedia.org/wiki/Fluctuation%20loss
|
Fluctuation loss is an effect seen in radar systems as the target object moves or changes its orientation relative to the radar system. It was extensively studied during the 1950s by Peter Swerling, who introduced the Swerling models to allow the effect to be simulated. For this reason, it is sometimes known as Swerling loss or similar names.
The effect occurs when the target's physical size is within a key range of values relative to the wavelength of the radar signal. As the signal reflects off various parts of the target, they may interfere as they return to the radar receiver. At any single distance from the station, this will cause the signal to be amplified or diminished compared to the baseline signal one calculates from the radar equation. As the target moves, these patterns change. This causes the signal to fluctuate in strength and may cause it to disappear entirely at certain times.
The effect can be reduced or eliminated by operating on more than one frequency or using modulation techniques like pulse compression that change the frequency over the period of a pulse. In these cases, it is unlikely that the pattern of reflections from the target causes the same destructive interference at two different frequencies.
Swerling modeled these effects in a famous 1954 paper written while working at RAND Corporation. Swerling's models considered the contribution of multiple small reflectors, or many small reflectors and a single large one. This offered the ability to model real-world objects like aircraft to understand the expected fluctuation-loss effects.
Fluctuation loss
For basic considerations of the strength of a signal returned by a given target, the radar equation models the target as a single point in space with a given radar cross-section (RCS). The RCS is difficult to estimate except for the most basic cases, like a perpendicular surface or a sphere. Before the introduction of detailed computer modeling, the RCS for real-world objects was gener
|
https://en.wikipedia.org/wiki/Quantum%20state%20space
|
In physics, a quantum state space is an abstract space in which different "positions" represent, not literal locations, but rather quantum states of some physical system. It is the quantum analog of the phase space of classical mechanics.
Relative to Hilbert space
In quantum mechanics a state space is a complex Hilbert space in which each unit vector represents a different state that could come out of a measurement. The number of dimensions in this Hilbert space depends on the system we choose to describe. Any state vector in this space can be written as a linear combination of unit vectors. Having a nonzero component along multiple dimensions is called a superposition. In the formalism of quantum mechanics these state vectors are often written using Dirac's compact bra–ket notation.
Examples
The spin state of a silver atom in the Stern–Gerlach experiment can be represented in a two-state space. The spin can be aligned with a measuring apparatus (arbitrarily called 'up') or oppositely ('down'). In Dirac's notation these two states can be written as |up⟩ and |down⟩. The space of a two-spin system has four states: |up, up⟩, |up, down⟩, |down, up⟩, and |down, down⟩.
The spin state is a discrete degree of freedom; quantum state spaces can have continuous degrees of freedom. For example, a particle in one space dimension has one degree of freedom ranging from −∞ to +∞. In Dirac notation, the states in this space might be written as |x⟩ or |p⟩.
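A minimal Python sketch of the two-state example, using a standard column-vector representation assumed here for illustration:

```python
import numpy as np

up = np.array([1.0 + 0j, 0.0 + 0j])       # |up>
down = np.array([0.0 + 0j, 1.0 + 0j])     # |down>

psi = (up + down) / np.sqrt(2)            # an equal superposition, still a unit vector
print(np.vdot(psi, psi).real)             # 1.0 -> normalized
print(abs(np.vdot(up, psi)) ** 2)         # 0.5 -> probability of measuring 'up'
```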
Relative to 3D space
Even in the early days of quantum mechanics, the state space (or configurations as they were called at first) was understood to be essential for understanding simple QM problems. In 1929, Nevill Mott showed that the "tendency to picture the wave as existing in ordinary three dimensional space, whereas we are really dealing with wave functions in multispace" makes analysis of simple interaction problems more difficult. Mott analyzes α-particle emission in a cloud chamber. The emission process is isotropic, a spherical wave in QM, but the tracks observed are linear.
|
https://en.wikipedia.org/wiki/MAX232
|
The MAX232 is an integrated circuit by Maxim Integrated Products, now a subsidiary of Analog Devices, that converts signals from a TIA-232 (RS-232) serial port to signals suitable for use in TTL-compatible digital logic circuits. The MAX232 is a dual transmitter / dual receiver that typically is used to convert the RX, TX, CTS, RTS signals.
The drivers provide TIA-232 voltage level outputs (about ±7.5 volts) from a single 5-volt supply via on-chip charge pumps and external capacitors. This makes it useful for implementing TIA-232 in devices that otherwise do not need any other voltages. The receivers translate the TIA-232 input voltages (up to ±25 volts, though the MAX232 supports up to ±30 volts) down to standard 5 volt TTL levels. These receivers have a typical threshold of 1.3 volts and a typical hysteresis of 0.5 volts.
The MAX232 replaced an older pair of chips, the MC1488 and MC1489, that performed similar RS-232 translation. The MC1488 quad transmitter chip required +12 volt and −12 volt power, and the MC1489 quad receiver chip required 5 volt power. The main disadvantages of this older solution were the ±12 volt power requirement, support for only 5 volt digital logic, and the need for two chips instead of one.
History
The MAX232 was proposed by Charlie Allen and designed by Dave Bingham. Maxim Integrated Products announced the MAX232 no later than 1986.
Versions
The later MAX232A is backward compatible with the original MAX232 but may operate at higher baud rates and can use smaller external capacitors, 0.1 μF in place of the 1.0 μF capacitors used with the original device. The newer MAX3232 and MAX3232E are also backwards compatible, but operate over a broader supply voltage range, from 3 to 5.5 V.
Pin-to-pin compatible versions from other manufacturers are ICL232, SP232, ST232, ADM232 and HIN232. Texas Instruments makes compatible chips, using MAX232 as the part number.
Voltage levels
The MAX232 translates a TTL logic 0 input to between +3 and +15 V, and a TTL logic 1 input to between −3 and −15 V.
|
https://en.wikipedia.org/wiki/Spontaneous%20absolute%20asymmetric%20synthesis
|
Spontaneous absolute asymmetric synthesis is a chemical phenomenon that stochastically generates chirality based on autocatalysis and small fluctuations in the ratio of enantiomers present in a racemic mixture. In certain reactions which initially contain no chiral information, a stochastically distributed enantiomeric excess can be observed; hence, when the experiment is repeated many times, the average enantiomeric excess approaches 0%. The phenomenon differs from chiral amplification, where an enantiomeric excess is present from the beginning and is not stochastically distributed. The phenomenon has important implications concerning the origin of homochirality in nature.
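The stochastic character of the phenomenon can be illustrated with a toy autocatalytic model (a deliberately simplified sketch, not a model of any particular reaction; the rate constant and step counts are arbitrary): each enantiomer catalyses its own formation, so a small random early imbalance grows, yet the sign of the final excess varies from run to run and the enantiomeric excess averages out to roughly zero over many repeats.

import random

def run_once(steps=10000, k_auto=5.0):
    # Start from a racemic seed; each step forms one new product molecule.
    r, s = 1, 1
    for _ in range(steps):
        # Autocatalysis: each enantiomer's formation rate grows with its own amount.
        rate_r = 1.0 + k_auto * r
        rate_s = 1.0 + k_auto * s
        if random.random() < rate_r / (rate_r + rate_s):
            r += 1
        else:
            s += 1
    return (r - s) / (r + s)    # enantiomeric excess, between -1 and 1

ees = [run_once() for _ in range(200)]
print("mean ee over runs:  %.3f" % (sum(ees) / len(ees)))              # close to 0
print("mean |ee| per run:  %.3f" % (sum(map(abs, ees)) / len(ees)))    # clearly nonzero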
|
https://en.wikipedia.org/wiki/Microscopic%20scale
|
The microscopic scale () is the scale of objects and events smaller than those that can easily be seen by the naked eye, requiring a lens or microscope to see them clearly. In physics, the microscopic scale is sometimes regarded as the scale between the macroscopic scale and the quantum scale. Microscopic units and measurements are used to classify and describe very small objects. One common microscopic length scale unit is the micrometre (also called a micron) (symbol: μm), which is one millionth of a metre.
History
Whilst compound microscopes were first developed in the 1590s, the significance of the microscopic scale was only truly established in the 1600s when Marcello Malpighi and Antonie van Leeuwenhoek microscopically observed frog lungs and microorganisms. As microbiology was established, the significance of making scientific observations at a microscopic level increased.
Published in 1665, Robert Hooke’s book Micrographia details his microscopic observations, including fossils, insects, sponges, and plants, made possible through his development of the compound microscope. During his studies of cork, he discovered plant cells and coined the term ‘cell’.
Prior to the use of the micro- prefix, other terms were originally incorporated into the International metric system in 1795, such as centi-, which represented a factor of 10^-2, and milli-, which represented a factor of 10^-3.
Over time the importance of measurements made at the microscopic scale grew, and an instrument named the Millionometre was developed by watch-making company owner Antoine LeCoultre in 1844. This instrument had the ability to precisely measure objects to the nearest micrometre.
The British Association for the Advancement of Science committee incorporated the micro- prefix into the newly established CGS system in 1873.
The micro- prefix was finally added to the official SI system in 1960, acknowledging measurements that were made at an even smaller level, denoting a factor of 10^-6.
|
https://en.wikipedia.org/wiki/Electrophoretic%20color%20marker
|
An electrophoretic color marker is a chemical used to monitor the progress of agarose gel electrophoresis and polyacrylamide gel electrophoresis (PAGE) since DNA, RNA, and most proteins are colourless. The color markers are made up of a mixture of dyes that migrate through the gel matrix alongside the sample of interest. They are typically designed to have different mobilities from the sample components and to generate colored bands that can be used to assess the migration and separation of sample components.
Color markers are often used as molecular weight standards, loading dyes, tracking dyes, or staining solutions. Molecular weight ladders are used to estimate the size of DNA and protein fragments by comparing their migration distance to that of the colored bands. DNA and protein standards are available commercially in a wide range of sizes, and are often provided with pre-stained or color-coded bands for easy identification. Loading dyes are usually added to the sample buffer before loading the sample onto the gel, and they migrate through the gel along with the sample to help track its progress during electrophoresis. Tracking dyes are added to the electrophoresis buffer rather than to the sample, and provide a visual marker of the buffer front. Staining solutions are applied after electrophoresis to visualize the sample bands, and are available in a range of colors.
Different types of electrophoretic color markers are available commercially, with varying numbers and types of dyes or pigments used in the mixture. Some markers generate a series of colored bands with known mobilities, while others produce a single band of a specific color that can be used as a reference point. They are widely used in research, clinical diagnostics, and forensic science.
Progress markers
Loading buffers often contain anionic dyes that are visible under visible light, and are added to the gel before the nucleic acid. Tracking dyes should not be reactive so as not to alter the sample,
|
https://en.wikipedia.org/wiki/Free%20particle
|
In physics, a free particle is a particle that, in some sense, is not bound by an external force, or equivalently not in a region where its potential energy varies. In classical physics, this means the particle is present in a "field-free" space. In quantum mechanics, it means the particle is in a region of uniform potential, usually set to zero in the region of interest since the potential can be arbitrarily set to zero at any point in space.
Classical free particle
The classical free particle is characterized by a fixed velocity v. The momentum is given by p = mv,
and the kinetic energy (equal to the total energy) by E = (1/2)mv^2 = p^2/(2m),
where m is the mass of the particle and v is the vector velocity of the particle.
Quantum free particle
Mathematical description
A free particle with mass m in non-relativistic quantum mechanics is described by the free Schrödinger equation: iħ ∂ψ(r, t)/∂t = −(ħ^2/(2m)) ∇^2 ψ(r, t),
where ψ(r, t) is the wavefunction of the particle at position r and time t. The solution for a particle with momentum p or wave vector k, at angular frequency ω or energy E, is given by the complex plane wave ψ(r, t) = A e^{i(k·r − ωt)},
with amplitude A; the dispersion relation takes two different forms according to the particle's mass:
if the particle has mass m: ω = ħk^2/(2m) (or equivalently E = p^2/(2m));
if the particle is a massless particle: ω = kc.
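A quick numeric check of the massive-particle dispersion relation above (a sketch only; the electron is used as an example particle and the wave number is an arbitrary illustrative value):

import numpy as np

hbar = 1.054571817e-34      # reduced Planck constant, J*s
m_e = 9.1093837015e-31      # electron mass, kg (example particle)

k = 1.0e10                  # wave number in 1/m (roughly atomic-scale wavelength)
omega = hbar * k**2 / (2 * m_e)    # dispersion relation for a massive free particle
E = hbar * omega                   # energy E = hbar * omega
p = hbar * k                       # de Broglie relation p = hbar * k

print("omega = %.3e rad/s" % omega)
print("E     = %.3e J (= %.3f eV)" % (E, E / 1.602176634e-19))
print("check E = p^2/(2m):", np.isclose(E, p**2 / (2 * m_e)))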
The eigenvalue spectrum is infinitely degenerate since for each eigenvalue E > 0, there corresponds an infinite number of eigenfunctions corresponding to different directions of the wave vector k.
The de Broglie relations E = ħω and p = ħk apply. Since the potential energy is (stated to be) zero, the total energy E is equal to the kinetic energy, which has the same form as in classical physics: E = p^2/(2m).
As for all quantum particles, free or bound, the Heisenberg uncertainty principles apply. Since the plane wave has definite momentum (and definite energy), the probability of finding the particle at any particular location is uniform and vanishingly small over all of space. In other words, the wave function is not normalizable in a Euclidean space; these stationary states cannot correspond to physically realizable states.
|
https://en.wikipedia.org/wiki/Babel%20function
|
The Babel function (also known as cumulative coherence) measures the maximum total coherence between a fixed atom and a collection of other atoms in a dictionary. The Babel function was conceived of in the context of signals for which there exists a sparse representation consisting of atoms or columns of a redundant dictionary matrix, A.
Definition and formulation
The Babel function of a dictionary A with normalized columns is a real-valued function defined as
μ1(p) = max_{|Λ| = p} max_{j ∉ Λ} Σ_{i ∈ Λ} |⟨a_j, a_i⟩|,
where the a_i are the columns (atoms) of the dictionary A and Λ ranges over index sets of p atoms.
Special case
When p = 1, the Babel function equals the mutual coherence.
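The definition can be computed directly for a small dictionary (a sketch assuming NumPy; the random dictionary below is purely illustrative). For each atom, the p largest absolute inner products with the other atoms are summed, and the worst case over atoms gives μ1(p):

import numpy as np

def babel(D, p):
    # Cumulative coherence mu_1(p) of a dictionary D with unit-norm columns.
    G = np.abs(D.T @ D)            # absolute inner products between atoms
    np.fill_diagonal(G, 0.0)       # exclude each atom's inner product with itself
    G.sort(axis=1)                 # ascending sort of each row
    # For each atom, sum its p largest correlations with other atoms,
    # then take the worst case over atoms.
    return G[:, -p:].sum(axis=1).max()

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)     # normalize the columns (atoms)

print(babel(D, 1))                                    # equals the mutual coherence
print([round(babel(D, p), 3) for p in range(1, 6)])   # nondecreasing in p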
Practical Applications
Li and Lin have used the Babel function to aid in creating effective dictionaries for machine learning applications.
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20logic%20topics
|
This is a list of mathematical logic topics.
For traditional syllogistic logic, see the list of topics in logic. See also the list of computability and complexity topics for more theory of algorithms.
Working foundations
Peano axioms
Giuseppe Peano
Mathematical induction
Structural induction
Recursive definition
Naive set theory
Element (mathematics)
Ur-element
Singleton (mathematics)
Simple theorems in the algebra of sets
Algebra of sets
Power set
Empty set
Non-empty set
Empty function
Universe (mathematics)
Axiomatization
Axiomatic system
Axiom schema
Axiomatic method
Formal system
Mathematical proof
Direct proof
Reductio ad absurdum
Proof by exhaustion
Constructive proof
Nonconstructive proof
Tautology
Consistency proof
Arithmetization of analysis
Foundations of mathematics
Formal language
Principia Mathematica
Hilbert's program
Impredicative
Definable real number
Algebraic logic
Boolean algebra (logic)
Dialectica space
Categorical logic
Model theory
Finite model theory
Descriptive complexity theory
Model checking
Trakhtenbrot's theorem
Computable model theory
Tarski's exponential function problem
Undecidable problem
Institutional model theory
Institution (computer science)
Non-standard analysis
Non-standard calculus
Hyperinteger
Hyperreal number
Transfer principle
Overspill
Elementary Calculus: An Infinitesimal Approach
Criticism of non-standard analysis
Standard part function
Set theory
Forcing (mathematics)
Boolean-valued model
Kripke semantics
General frame
Predicate logic
First-order logic
Infinitary logic
Many-sorted logic
Higher-order logic
Lindström quantifier
Second-order logic
Soundness theorem
Gödel's completeness theorem
Original proof of Gödel's completeness theorem
Compactness theorem
Löwenheim–Skolem theorem
Skolem's paradox
Gödel's incompleteness theorems
Structure (mathematical logic)
Interpretation (logic)
Substructure (mathematics)
Elementary substructure
Skolem hull
Non-standard model
Atomic model (mathematical logic)
Prime model
Saturate
|
https://en.wikipedia.org/wiki/Pointwise
|
In mathematics, the qualifier pointwise is used to indicate that a certain property is defined by considering each value f(x) of some function f. An important class of pointwise concepts are the pointwise operations, that is, operations defined on functions by applying the operations to function values separately for each point in the domain of definition. Important relations can also be defined pointwise.
Pointwise operations
Formal definition
A binary operation o : Y × Y → Y can be lifted pointwise to an operation O : Y^X × Y^X → Y^X on the set Y^X of all functions from X to Y as follows: given two functions f1 : X → Y and f2 : X → Y, define the function O(f1, f2) : X → Y by
(O(f1, f2))(x) = o(f1(x), f2(x)) for all x in X.
Commonly, o and O are denoted by the same symbol. A similar definition is used for unary operations o, and for operations of other arity.
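A small sketch of this lifting in code (the function names are illustrative):

import operator

def pointwise(o):
    # Lift a binary operation o on Y to an operation O on functions X -> Y.
    def O(f1, f2):
        return lambda x: o(f1(x), f2(x))
    return O

add = pointwise(operator.add)      # pointwise addition of functions
mul = pointwise(operator.mul)      # pointwise multiplication of functions

f = lambda x: x + 1
g = lambda x: 2 * x

h = add(f, g)                      # h(x) = f(x) + g(x)
print(h(3))                        # (3 + 1) + (2 * 3) = 10
print(mul(f, g)(3))                # (3 + 1) * (2 * 3) = 24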
Examples
For example, pointwise addition, multiplication and scalar multiplication of functions f, g : X → R are defined by (f + g)(x) = f(x) + g(x), (f · g)(x) = f(x) · g(x) and (λ · f)(x) = λ · f(x), where x ranges over X and λ is a scalar.
See also pointwise product and scalar.
An example of an operation on functions which is not pointwise is convolution.
Properties
Pointwise operations inherit such properties as associativity, commutativity and distributivity from corresponding operations on the codomain.
If is some algebraic structure, the set of all functions to the carrier set of can be turned into an algebraic structure of the same type in an analogous way.
Componentwise operations
Componentwise operations are usually defined on vectors, where vectors are elements of the set K^n for some natural number n and some field K. If we denote the i-th component of any vector v as v_i, then componentwise addition is (u + v)_i = u_i + v_i.
Componentwise operations can also be defined on matrices. Matrix addition, where (A + B)_{ij} = A_{ij} + B_{ij}, is a componentwise operation, while matrix multiplication is not.
A tuple can be regarded as a function, and a vector is a tuple. Therefore, any vector v corresponds to the function f : {1, …, n} → K such that f(i) = v_i, and any componentwise operation on vectors is the pointwise operation on the functions corresponding to those vectors.
Pointwise relations
In order theory it is common to define a pointwise partial order on functions. With A, B posets, the set of functions A → B can be ordered by f ≤ g if and only if f(x) ≤ g(x) for every x in A.
|
https://en.wikipedia.org/wiki/Fourth%20dimension%20in%20art
|
New possibilities opened up by the concept of four-dimensional space (and difficulties involved in trying to visualize it) helped inspire many modern artists in the first half of the twentieth century. Early Cubists, Surrealists, Futurists, and abstract artists took ideas from higher-dimensional mathematics and used them to radically advance their work.
Early influence
French mathematician Maurice Princet was known as "le mathématicien du cubisme" ("the mathematician of cubism"). An associate of the School of Paris—a group of avant-gardists including Pablo Picasso, Guillaume Apollinaire, Max Jacob, Jean Metzinger, and Marcel Duchamp—Princet is credited with introducing the work of Henri Poincaré and the concept of the "fourth dimension" to the cubists at the Bateau-Lavoir during the first decade of the 20th century.
Princet introduced Picasso to Esprit Jouffret's Traité élémentaire de géométrie à quatre dimensions (Elementary Treatise on the Geometry of Four Dimensions, 1903), a popularization of Poincaré's Science and Hypothesis in which Jouffret described hypercubes and other complex polyhedra in four dimensions and projected them onto the two-dimensional page. Picasso's Portrait of Daniel-Henry Kahnweiler in 1910 was an important work for the artist, who spent many months shaping it. The portrait bears similarities to Jouffret's work and shows a distinct movement away from the Proto-Cubist fauvism displayed in Les Demoiselles d'Avignon, to a more considered analysis of space and form.
Early cubist Max Weber wrote an article entitled "In The Fourth Dimension from a Plastic Point of View", for Alfred Stieglitz's July 1910 issue of Camera Work. In the piece, Weber states, "In plastic art, I believe, there is a fourth dimension which may be described as the consciousness of a great and overwhelming sense of space-magnitude in all directions at one time, and is brought into existence through the three known measurements."
Another influence on the School of Paris
|
https://en.wikipedia.org/wiki/Real-time%20path%20planning
|
Real-time path planning is a term used in robotics for motion planning methods that can adapt to real-time changes in the environment. These range from primitive algorithms that stop a robot when it approaches an obstacle to more complex algorithms that continuously take in information from the surroundings and create a plan to avoid obstacles.
These methods are different from something like a Roomba robot vacuum, which may be able to adapt to dynamic obstacles but does not have a set target. A better example is an Embark self-driving semi-truck, which has a set target location and can also adapt to changing environments.
The targets of path planning algorithms are not limited to locations alone. Path planning methods can also create plans for stationary robots to change their poses. An example of this can be seen in various robotic arms, where path planning allows the robotic system to change its pose without colliding with itself.
As a subset of motion planning, it is an important part of robotics as it allows robots to find the optimal path to a target. This ability to find an optimal path also plays an important role in other fields such as video games and gene sequencing.
Concepts
In order to create a path from a start point to a goal point, the various areas within the simulated environment must be classified. This allows a path to be created in a 2D or 3D space in which the robot can avoid obstacles.
Work Space
The work space is an environment that contains the robot and various obstacles. This environment can be either 2-dimensional or 3-dimensional.
Configuration Space
The configuration of a robot is determined by its current position and pose. The configuration space is the set of all configurations of the robot. By containing all the possible configurations of the robot, it also represents all transformations that can be applied to the robot.
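A deliberately simplified sketch of these ideas (not any particular published planner): a point robot on a grid, an occupancy map standing in for the configuration space, and a breadth-first search that can simply be re-run whenever the obstacle map changes:

from collections import deque

def plan(occupied, start, goal):
    # Breadth-first search over a grid standing in for the configuration space.
    # occupied[r][c] is True where a configuration collides with an obstacle.
    # Re-running this whenever the obstacle map changes is a (very naive)
    # form of replanning in a changing environment.
    rows, cols = len(occupied), len(occupied[0])
    parents = {start: None}
    queue = deque([start])
    while queue:
        node = queue.popleft()
        if node == goal:
            path = []
            while node is not None:
                path.append(node)
                node = parents[node]
            return path[::-1]
        r, c = node
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and not occupied[nr][nc] and nxt not in parents:
                parents[nxt] = node
                queue.append(nxt)
    return None    # goal unreachable in the current configuration space

occupied = [[False] * 5 for _ in range(5)]
occupied[2][1] = occupied[2][2] = occupied[2][3] = True    # an obstacle appears
print(plan(occupied, (0, 0), (4, 4)))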
Within the configuration sets there are additiona
|
https://en.wikipedia.org/wiki/Real-time%20clock
|
A real-time clock (RTC) is an electronic device (most often in the form of an integrated circuit) that measures the passage of time.
Although the term often refers to the devices in personal computers, servers and embedded systems, RTCs are present in almost any electronic device which needs to keep accurate time of day.
Terminology
The term real-time clock is used to avoid confusion with ordinary hardware clocks which are only signals that govern digital electronics, and do not count time in human units. RTC should not be confused with real-time computing, which shares its three-letter acronym but does not directly relate to time of day.
Purpose
Although keeping time can be done without an RTC, using one has benefits:
Low power consumption (important when running from alternate power)
Frees the main system for time-critical tasks
Sometimes more accurate than other methods
A GPS receiver can shorten its startup time by comparing the current time, according to its RTC, with the time at which it last had a valid signal. If it has been less than a few hours, then the previous ephemeris is still usable.
Some motherboards are made without real-time clocks, the real-time clock typically being omitted out of the desire to save money.
Power source
RTCs often have an alternate source of power, so they can continue to keep time while the primary source of power is off or unavailable. This alternate source of power is normally a lithium battery in older systems, but some newer systems use a supercapacitor, because they are rechargeable and can be soldered. The alternate power source can also supply power to battery backed RAM.
Timing
Most RTCs use a crystal oscillator, but some have the option of using the power line frequency. The crystal frequency is usually 32.768 kHz, the same frequency used in quartz clocks and watches. Being exactly 2^15 cycles per second, it is a convenient rate to use with simple binary counter circuits. The low frequency saves power, while remain
|
https://en.wikipedia.org/wiki/Food%20processing
|
Food processing is the transformation of agricultural products into food, or of one form of food into other forms. Food processing takes many forms, from grinding grain into raw flour, home cooking, and complex industrial methods used in the making of convenience foods. Some food processing methods play important roles in reducing food waste and improving food preservation, thus reducing the total environmental impact of agriculture and improving food security.
The Nova classification groups food according to different food processing techniques.
Primary food processing is necessary to make most foods edible, while secondary food processing turns ingredients into familiar foods, such as bread. Tertiary food processing results in ultra-processed foods and has been widely criticized for promoting overnutrition and obesity, containing too much sugar and salt, too little fiber, and otherwise being unhealthful with respect to the dietary needs of humans and farm animals.
Processing levels
Primary food processing
Primary food processing turns agricultural products, such as raw wheat kernels or livestock, into something that can eventually be eaten. This category includes ingredients that are produced by ancient processes such as drying, threshing, winnowing and milling grain, shelling nuts, and butchering animals for meat. It also includes deboning and cutting meat, freezing and smoking fish and meat, extracting and filtering oils, canning food, preserving food through food irradiation, and candling eggs, as well as homogenizing and pasteurizing milk.
Contamination and spoilage problems in primary food processing can lead to significant public health threats, as the resulting foods are used so widely. However, many forms of processing contribute to improved food safety and longer shelf life before the food spoils. Commercial food processing uses control systems such as hazard analysis and critical control points (HACCP) and failure mode and effects analysis (FMEA) to
|
https://en.wikipedia.org/wiki/Type%20%28biology%29
|
In biology, a type is a particular specimen (or in some cases a group of specimens) of an organism to which the scientific name of that organism is formally associated. In other words, a type is an example that serves to anchor or centralize the defining features of that particular taxon. In older usage (pre-1900 in botany), a type was a taxon rather than a specimen.
A taxon is a scientifically named grouping of organisms with other like organisms, a set that includes some organisms and excludes others, based on a detailed published description (for example a species description) and on the provision of type material, which is usually available to scientists for examination in a major museum research collection, or similar institution.
Type specimen
According to a precise set of rules laid down in the International Code of Zoological Nomenclature (ICZN) and the International Code of Nomenclature for algae, fungi, and plants (ICN), the scientific name of every taxon is almost always based on one particular specimen, or in some cases specimens. Types are of great significance to biologists, especially to taxonomists. Types are usually physical specimens that are kept in a museum or herbarium research collection, but failing that, an image of an individual of that taxon has sometimes been designated as a type. Describing species and appointing type specimens is part of scientific nomenclature and alpha taxonomy.
When identifying material, a scientist attempts to apply a taxon name to a specimen or group of specimens based on their understanding of the relevant taxa, based on (at least) having read the type description(s), preferably also based on an examination of all the type material of all of the relevant taxa. If there is more than one named type that all appear to be the same taxon, then the oldest name takes precedence and is considered to be the correct name of the material in hand. If on the other hand, the taxon appears never to have been named at all, th
|
https://en.wikipedia.org/wiki/Perceived%20performance
|
Perceived performance, in computer engineering, refers to how quickly a software feature appears to perform its task. The concept applies mainly to user acceptance aspects.
The amount of time an application takes to start up, or a file to download, is not reduced by showing a startup screen (see Splash screen) or a file progress dialog box. However, these displays satisfy some human needs: the task appears faster to the user, and a visual cue lets them know the system is handling their request.
In most cases, increasing real performance increases perceived performance, but when real performance cannot be increased due to physical limitations, techniques can be used to increase perceived performance at the cost of marginally decreasing real performance. For example, drawing and refreshing a progress bar while loading a file satisfies the user who is watching, even though it steals time from the process that is actually loading the file; usually this is only a very small amount of time. All such techniques must exploit the inability of the user to accurately judge real performance, or they would be considered detrimental to performance.
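A sketch of this compromise (illustrative only; the work and timing values are placeholders): the progress display is refreshed at a throttled rate, frequent enough to reassure the user while stealing only a small amount of time from the real work.

import time

def do_work(item):
    time.sleep(0.001)                      # stand-in for the real task

def process_items(items, report_every=0.2):
    # Do the real work, but refresh the progress display at most every
    # `report_every` seconds so the display costs only a tiny fraction
    # of the total run time.
    last_report = 0.0
    for i, item in enumerate(items, 1):
        do_work(item)
        now = time.monotonic()
        if now - last_report >= report_every or i == len(items):
            print("\rprocessed %d/%d" % (i, len(items)), end="", flush=True)
            last_report = now
    print()

process_items(list(range(500)))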
Techniques for improving perceived performance may include more than just decreasing the delay between the user's request and visual feedback. Sometimes an increase in delay can be perceived as a performance improvement, such as when a variable controlled by the user is set to a running average of the user's input. This can give the impression of smoother motion, but the controlled variable always reaches the desired value a bit late. Since it smooths out high-frequency jitter, when the user is attempting to hold the value constant, they may feel like they are succeeding more readily. This kind of compromise would be appropriate for control of a sniper rifle in a video game. Another example may be doing trivial computation ahead of time rather than after a user triggers an action, such as pre-sorting a large list of data before a user w
|
https://en.wikipedia.org/wiki/Food%20science
|
Food science is the basic science and applied science of food; its scope starts at overlap with agricultural science and nutritional science and leads through the scientific aspects of food safety and food processing, informing the development of food technology.
Food science brings together multiple scientific disciplines. It incorporates concepts from fields such as chemistry, physics, physiology, microbiology, and biochemistry. Food technology incorporates concepts from chemical engineering, for example.
Activities of food scientists include the development of new food products, design of processes to produce these foods, choice of packaging materials, shelf-life studies, sensory evaluation of products using survey panels or potential consumers, as well as microbiological and chemical testing. Food scientists may also study more fundamental phenomena that are directly linked to the production of food products and their properties.
Definition
The Institute of Food Technologists defines food science as "the discipline in which the engineering, biological, and physical sciences are used to study the nature of foods, the causes of deterioration, the principles underlying food processing, and the improvement of foods for the consuming public". The textbook Food Science defines food science in simpler terms as "the application of basic sciences and engineering to study the physical, chemical, and biochemical nature of foods and the principles of food processing".
Disciplines
Some of the subdisciplines of food science are described below.
Food chemistry
Food chemistry is the study of chemical processes and interactions of all biological and non-biological components of foods. The biological substances include such items as meat, poultry, lettuce, beer, and milk.
It is similar to biochemistry in its main components such as carbohydrates, lipids, and protein, but it also includes areas such as water, vitamins, minerals, enzymes, food additives, flavors, and colors. This
|
https://en.wikipedia.org/wiki/Conserved%20name
|
A conserved name or nomen conservandum (plural nomina conservanda, abbreviated as nom. cons.) is a scientific name that has specific nomenclatural protection. That is, the name is retained, even though it violates one or more rules which would otherwise prevent it from being legitimate. Nomen conservandum is a Latin term, meaning "a name to be conserved". The terms are often used interchangeably, such as by the International Code of Nomenclature for Algae, Fungi, and Plants (ICN), while the International Code of Zoological Nomenclature favours the term "conserved name".
The process for conserving botanical names is different from that for zoological names. Under the botanical code, names may also be "suppressed", nomen rejiciendum (plural nomina rejicienda or nomina utique rejicienda, abbreviated as nom. rej.), or rejected in favour of a particular conserved name, and combinations based on a suppressed name are also listed as “nom. rej.”.
Botany
Conservation
In botanical nomenclature, conservation is a nomenclatural procedure governed by Article 14 of the ICN. Its purpose is
"to avoid disadvantageous nomenclatural changes entailed by the strict application of the rules, and especially of the principle of priority [...]" (Art. 14.1).
Conservation is possible only for names at the rank of family, genus or species.
It may effect a change in original spelling, type, or (most commonly) priority.
Conserved spelling (orthographia conservanda, orth. cons.) allows spelling usage to be preserved even if the name was published with another spelling: Euonymus (not Evonymus), Guaiacum (not Guajacum), etc. (see orthographical variant).
Conserved types (typus conservandus, typ. cons.) are often made when it is found that a type in fact belongs to a different taxon from the description, when a name has subsequently been generally misapplied to a different taxon, or when the type belongs to a small group separate from the monophyletic bulk of a taxon.
Conservation of a nam
|
https://en.wikipedia.org/wiki/List%20of%20algebraic%20number%20theory%20topics
|
This is a list of algebraic number theory topics.
Basic topics
These topics are basic to the field, either as prototypical examples, or as basic objects of study.
Algebraic number field
Gaussian integer, Gaussian rational
Quadratic field
Cyclotomic field
Cubic field
Biquadratic field
Quadratic reciprocity
Ideal class group
Dirichlet's unit theorem
Discriminant of an algebraic number field
Ramification (mathematics)
Root of unity
Gaussian period
Important problems
Fermat's Last Theorem
Class number problem for imaginary quadratic fields
Stark–Heegner theorem
Heegner number
Langlands program
General aspects
Different ideal
Dedekind domain
Splitting of prime ideals in Galois extensions
Decomposition group
Inertia group
Frobenius automorphism
Chebotarev's density theorem
Totally real field
Local field
p-adic number
p-adic analysis
Adele ring
Idele group
Idele class group
Adelic algebraic group
Global field
Hasse principle
Hasse–Minkowski theorem
Galois module
Galois cohomology
Brauer group
Class field theory
Class field theory
Abelian extension
Kronecker–Weber theorem
Hilbert class field
Takagi existence theorem
Hasse norm theorem
Artin reciprocity
Local class field theory
Iwasawa theory
Iwasawa theory
Herbrand–Ribet theorem
Vandiver's conjecture
Stickelberger's theorem
Euler system
p-adic L-function
Arithmetic geometry
Arithmetic geometry
Complex multiplication
Abelian variety of CM-type
Chowla–Selberg formula
Hasse–Weil zeta function
|
https://en.wikipedia.org/wiki/Cactus%20graph
|
In graph theory, a cactus (sometimes called a cactus tree) is a connected graph in which any two simple cycles have at most one vertex in common. Equivalently, it is a connected graph in which every edge belongs to at most one simple cycle, or (for nontrivial cacti) in which every block (maximal subgraph without a cut-vertex) is an edge or a cycle.
Properties
Cacti are outerplanar graphs. Every pseudotree is a cactus. A nontrivial graph is a cactus if and only if every block is either a simple cycle or a single edge.
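The block characterisation above lends itself to a direct test (a sketch assuming the NetworkX library): each biconnected component is accepted only if it is a single edge or a simple cycle, that is, has exactly as many edges as distinct vertices.

import networkx as nx

def is_cactus(G):
    # A connected graph is a cactus iff every block (biconnected component)
    # is a single edge or a simple cycle.
    if not nx.is_connected(G):
        return False
    for block_edges in nx.biconnected_component_edges(G):
        edges = list(block_edges)
        nodes = {v for e in edges for v in e}
        # A cycle block has exactly as many edges as vertices;
        # a bridge block is a single edge.
        if len(edges) > 1 and len(edges) != len(nodes):
            return False
    return True

triangle_chain = nx.Graph([(0, 1), (1, 2), (2, 0), (2, 3), (3, 4), (4, 2)])
print(is_cactus(triangle_chain))          # True: two triangles sharing vertex 2
print(is_cactus(nx.complete_graph(4)))    # False: K4 contains the diamond minor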
The family of graphs in which each component is a cactus is downwardly closed under graph minor operations. This graph family may be characterized by a single forbidden minor, the four-vertex diamond graph formed by removing an edge from the complete graph K4.
Triangular cactus
A triangular cactus is a special type of cactus graph such that each cycle has length three and each edge belongs to a cycle. For instance, the friendship graphs, graphs formed from a collection of triangles joined together at a single shared vertex, are triangular cacti. As well as being cactus graphs the triangular cacti are also block graphs and locally linear graphs.
Triangular cacti have the property that they remain connected if any matching is removed from them; for a given number of vertices, they have the fewest possible edges with this property. Every tree with an odd number of vertices may be augmented to a triangular cactus by adding edges to it,
giving a minimal augmentation with the property of remaining connected after the removal of a matching.
The largest triangular cactus in any graph may be found in polynomial time using an algorithm for the matroid parity problem. Since triangular cactus graphs are planar graphs, the largest triangular cactus can be used as an approximation to the largest planar subgraph, an important subproblem in planarization. As an approximation algorithm, this method has approximation ratio 4/9, the best known for the maximum planar subgraph problem.
|
https://en.wikipedia.org/wiki/Hardware-based%20encryption
|
Hardware-based encryption is the use of computer hardware to assist software, or sometimes replace software, in the process of data encryption. Typically, this is implemented as part of the processor's instruction set. For example, the AES encryption algorithm (a modern cipher) can be implemented using the AES instruction set on the ubiquitous x86 architecture. Such instructions also exist on the ARM architecture. However, more unusual systems exist where the cryptography module is separate from the central processor, instead being implemented as a coprocessor, in particular a secure cryptoprocessor or cryptographic accelerator, of which an example is the IBM 4758, or its successor, the IBM 4764. Hardware implementations can be faster and less prone to exploitation than traditional software implementations, and furthermore can be protected against tampering.
History
Prior to the use of computer hardware, cryptography could be performed through various mechanical or electro-mechanical means. An early example is the Scytale used by the Spartans. The Enigma machine was an electro-mechanical system cipher machine notably used by the Germans in World War II. After World War II, purely electronic systems were developed. In 1987 the ABYSS (A Basic Yorktown Security System) project was initiated. The aim of this project was to protect against software piracy. However, the application of computers to cryptography in general dates back to the 1940s and Bletchley Park, where the Colossus computer was used to break the encryption used by German High Command during World War II. The use of computers to encrypt, however, came later. In particular, until the development of the integrated circuit, of which the first was produced in 1960, computers were impractical for encryption, since, in comparison to the portable form factor of the Enigma machine, computers of the era took the space of an entire building. It was only with the development of the microcomputer that computer encr
|
https://en.wikipedia.org/wiki/Index%20of%20accounting%20articles
|
This page is an index of accounting topics.
A
Accounting ethics - Accounting information system - Accounting research - Activity-Based Costing - Assets
B
Balance sheet
- Big Four auditors
- Bond
- Bookkeeping
- Book value
C
Cash-basis accounting
- Cash-basis versus accrual-basis accounting
- Cash flow statement
- Certified General Accountant
- Certified Management Accountants
- Certified Public Accountant
- Chartered accountant
- Chart of accounts
- Common stock
- Comprehensive income
- Construction accounting
- Convention of conservatism
- Convention of disclosure
- Cost accounting
- Cost of capital
- Cost of goods sold
- Creative accounting
- Credit
- Credit note
- Current asset
- Current liability
D
Debit - Capital reserve
- Debit note
- Debt
- Deficit (disambiguation)
- Depreciation
- Diluted earnings per share
- Dividend
- Double-entry bookkeeping system
- Dual aspect
E
E-accounting
- EBIT
- EBITDA
- Earnings per share
- Engagement Letter
- Entity concept
- Environmental accounting
- Expense
- Equity
- Equivalent Annual Cost
F
Financial Accounting Standards Board
- Financial accountancy
- Financial audit
- Financial reports
- Financial statements
- Fixed assets
- Fixed assets management
- Forensic accounting
- Fraud deterrence
- Free cash flow
- Fund accounting
G
Gain
- General ledger
- Generally Accepted Accounting Principles
- Going concern
- Goodwill
- Governmental Accounting Standards Board
H
Historical cost - History of accounting
I
Income
- Income statement
- Institute of Chartered Accountants in England and Wales
- Institute of Chartered Accountants of Scotland
- Institute of Management Accountants
- Intangible asset
- Interest
- Internal audit
- International Accounting Standards Board
- International Accounting Standards Committee
- International Accounting Standards
- International Federation of Accountants
- International Financial Reporting Standards
- Inventory
- Investment
- Invoices
- Indian Accounting Standards
J
Job costing
- Journal
L
|
https://en.wikipedia.org/wiki/Scaffolding%20%28bioinformatics%29
|
Scaffolding is a technique used in bioinformatics. It is defined as follows:
Link together a non-contiguous series of genomic sequences into a scaffold, consisting of sequences separated by gaps of known length. The sequences that are linked are typically contiguous sequences corresponding to read overlaps. When creating a draft genome, individual reads of DNA are first assembled into contigs, which, by the nature of their assembly, have gaps between them. The next step is then to bridge the gaps between these contigs to create a scaffold. This can be done using either optical mapping or mate-pair sequencing.
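The idea of a scaffold as contigs separated by gaps of known length can be sketched as a simple data structure (illustrative only, not any particular assembler's file format; the sequences and gap sizes are placeholders):

# A scaffold as an ordered series of contigs separated by gaps of known
# (estimated) length. Gap sizes would in practice be estimated from
# mate-pair insert sizes or optical maps; these numbers are placeholders.
scaffold = [
    {"name": "contig_1", "seq": "ACGTACGTAGGC", "gap_after": 250},
    {"name": "contig_2", "seq": "TTGACCATTG",   "gap_after": 120},
    {"name": "contig_3", "seq": "GGCATTACA",    "gap_after": None},   # last contig
]

def scaffold_sequence(scaffold):
    # Render the scaffold with gaps written as runs of 'N', the usual
    # convention for unknown bases between linked contigs.
    parts = []
    for contig in scaffold:
        parts.append(contig["seq"])
        if contig["gap_after"] is not None:
            parts.append("N" * contig["gap_after"])
    return "".join(parts)

print(len(scaffold_sequence(scaffold)))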
Assembly software
The sequencing of the Haemophilus influenzae genome marked the advent of scaffolding. That project generated a total of 140 contigs, which were oriented and linked using paired end reads. The success of this strategy prompted the creation of the software, Grouper, which was included in genome assemblers. Until 2001, this was the only scaffolding software. After the Human Genome Project and Celera proved that it was possible to create a large draft genome, several other similar programs were created. Bambus was created in 2003 and was a rewrite of the original grouper software, but afforded researchers the ability to adjust scaffolding parameters. This software also allowed for optional use of other linking data, such as contig order in a reference genome.
Algorithms used by assembly software are very diverse, and can be classified as based on iterative marker ordering, or graph based. Graph-based applications can order and orient over 10,000 markers, compared with the roughly 3,000 markers that iterative marker applications can handle. Algorithms can be further classified as greedy, non-greedy, conservative, or non-conservative. Bambus uses a greedy algorithm, defined as such because it joins together contigs with the most links first. The algorithm used by Bambus 2 removes repetitive contigs before orienting and ordering them in
|
https://en.wikipedia.org/wiki/Table%20of%20Newtonian%20series
|
In mathematics, a Newtonian series, named after Isaac Newton, is a sum over a sequence a_n written in the form
f(s) = Σ_{n=0}^{∞} (−1)^n C(s, n) a_n,
where C(s, n) is the binomial coefficient and (s)_n = s(s − 1)⋯(s − n + 1) is the falling factorial. Newtonian series often appear in relations of the form seen in umbral calculus.
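The closely related Newton forward-difference form, f(s) = Σ_n C(s, n) Δ^n f(0), where Δ^n f(0) denotes the nth forward difference at 0, can be checked numerically for a case where the series terminates (a sketch; for a polynomial only finitely many forward differences are nonzero, so the series reproduces the function exactly at every node):

from math import comb

def forward_differences(values):
    # Iterated forward differences Δ^n f(0) of a sequence f(0), f(1), ...
    diffs, row = [values[0]], list(values)
    while len(row) > 1:
        row = [row[i + 1] - row[i] for i in range(len(row) - 1)]
        diffs.append(row[0])
    return diffs

def newton_series(diffs, s):
    # Evaluate sum_n C(s, n) * Δ^n f(0); exact for a polynomial f when enough
    # differences are supplied (the higher ones vanish).
    return sum(comb(s, n) * d for n, d in enumerate(diffs))

f = lambda x: x**3 - 2 * x + 5               # a cubic polynomial
samples = [f(k) for k in range(5)]           # values at the equidistant nodes 0..4
diffs = forward_differences(samples)

print(all(newton_series(diffs, s) == f(s) for s in range(10)))   # True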
List
The generalized binomial theorem gives (1 + z)^s = Σ_{n=0}^{∞} C(s, n) z^n.
A proof for this identity can be obtained by showing that it satisfies the differential equation
The digamma function:
The Stirling numbers of the second kind are given by the finite sum S(n, k) = (1/k!) Σ_{j=0}^{k} (−1)^{k−j} C(k, j) j^n.
This formula is a special case of the kth forward difference of the monomial xn evaluated at x = 0:
A related identity forms the basis of the Nörlund–Rice integral:
where is the Gamma function and is the Beta function.
The trigonometric functions have umbral identities:
and
The umbral nature of these identities is a bit more clear by writing them in terms of the falling factorial . The first few terms of the sin series are
which can be recognized as resembling the Taylor series for sin x, with (s)n standing in the place of xn.
In analytic number theory it is of interest to sum
where B are the Bernoulli numbers. Employing the generating function its Borel sum can be evaluated as
The general relation gives the Newton series
where is the Hurwitz zeta function and the Bernoulli polynomial. The series does not converge, the identity holds formally.
Another identity is
which converges for . This follows from the general form of a Newton series for equidistant nodes (when it exists, i.e. is convergent)
See also
Binomial transform
List of factorial and binomial topics
Nörlund–Rice integral
Carlson's theorem
|
https://en.wikipedia.org/wiki/Tuple
|
In mathematics, a tuple is a finite sequence or ordered list of numbers or, more generally, mathematical objects, which are called the elements of the tuple. An n-tuple is a tuple of n elements, where n is a non-negative integer. There is only one 0-tuple, called the empty tuple. A 1-tuple and a 2-tuple are commonly called a singleton and an ordered pair, respectively.
Tuples may be formally defined from ordered pairs by recurrence: an n-tuple can be identified with the ordered pair of its first n − 1 elements and its nth element.
Tuples are usually written by listing the elements within parentheses "( )", separated by a comma and a space; for example, (2, 7, 4, 1, 7) denotes a 5-tuple. Sometimes other symbols are used to surround the elements, such as square brackets "[ ]" or angle brackets "⟨ ⟩". Braces "{ }" are used to specify arrays in some programming languages but not in mathematical expressions, as they are the standard notation for sets. The term tuple can often occur when discussing other mathematical objects, such as vectors.
In computer science, tuples come in many forms. Most typed functional programming languages implement tuples directly as product types, tightly associated with algebraic data types, pattern matching, and destructuring assignment. Many programming languages offer an alternative to tuples, known as record types, featuring unordered elements accessed by label. A few programming languages combine ordered tuple product types and unordered record types into a single construct, as in C structs and Haskell records. Relational databases may formally identify their rows (records) as tuples.
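A short illustration of these forms (Python is used here purely as an example language; the values are arbitrary):

from collections import namedtuple

# An ordered tuple: a value of a product type, accessed by position.
point = (3, 4)
x, y = point                      # destructuring / unpacking by position
print(x * x + y * y)              # 25

# A record type: elements accessed by label rather than by position.
Point = namedtuple("Point", ["x", "y"])
p = Point(x=3, y=4)
print(p.x * p.x + p.y * p.y)      # 25

# The empty tuple and a singleton (1-tuple).
empty = ()
single = (42,)                    # the trailing comma distinguishes it from (42)
print(len(empty), len(single))    # 0 1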
Tuples also occur in relational algebra; when programming the semantic web with the Resource Description Framework (RDF); in linguistics; and in philosophy.
Etymology
The term originated as an abstraction of the sequence: single, couple/double, triple, quadruple, quintuple, sextuple, septuple, octuple, ..., ‑tuple, ..., where the prefixes are
|
https://en.wikipedia.org/wiki/List%20of%20wireless%20sensor%20nodes
|
A sensor node, also known as a mote (chiefly in North America), is a node in a sensor network that is capable of performing some processing, gathering sensory information and communicating with other connected nodes in the network. A mote is a node but a node is not always a mote.
List of Wireless Sensor Nodes
See also
Wireless sensor network
Sensor node
Mesh networking
Sun SPOT
Embedded computer
Embedded system
Mobile ad hoc network (MANETS)
Smartdust
Sensor Web
|
https://en.wikipedia.org/wiki/Quantum%20biology
|
Quantum biology is the study of applications of quantum mechanics and theoretical chemistry to aspects of biology that cannot be accurately described by the classical laws of physics. An understanding of fundamental quantum interactions is important because they determine the properties of the next level of organization in biological systems.
Many biological processes involve the conversion of energy into forms that are usable for chemical transformations, and are quantum mechanical in nature. Such processes involve chemical reactions, light absorption, formation of excited electronic states, transfer of excitation energy, and the transfer of electrons and protons (hydrogen ions) in chemical processes, such as photosynthesis, olfaction and cellular respiration. Quantum biology may use computations to model biological interactions in light of quantum mechanical effects. Quantum biology is concerned with the influence of non-trivial quantum phenomena, which can be explained by reducing the biological process to fundamental physics, although these effects are difficult to study and can be speculative.
History
Quantum biology is an emerging field, in the sense that most current research is theoretical and subject to questions that require further experimentation. Though the field has only recently received an influx of attention, it has been conceptualized by physicists throughout the 20th century. It has been suggested that quantum biology might play a critical role in the future of the medical world. Early pioneers of quantum physics saw applications of quantum mechanics in biological problems. Erwin Schrödinger's 1944 book What Is Life? discussed applications of quantum mechanics in biology. Schrödinger introduced the idea of an "aperiodic crystal" that contained genetic information in its configuration of covalent chemical bonds. He further suggested that mutations are introduced by "quantum leaps". Other pioneers Niels Bohr, Pascual Jordan, and Max Delbrück argu
|
https://en.wikipedia.org/wiki/Shared%20memory
|
In computer science, shared memory is memory that may be simultaneously accessed by multiple programs with an intent to provide communication among them or avoid redundant copies. Shared memory is an efficient means of passing data between programs. Depending on context, programs may run on a single processor or on multiple separate processors.
Using memory for communication inside a single program, e.g. among its multiple threads, is also referred to as shared memory.
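A minimal sketch of shared memory between two processes (using Python's multiprocessing.shared_memory module, available from Python 3.8; the block size and message are illustrative):

from multiprocessing import Process, shared_memory

def writer(name):
    # Attach to the existing block by name and write into it.
    shm = shared_memory.SharedMemory(name=name)
    shm.buf[:5] = b"hello"
    shm.close()

if __name__ == "__main__":
    # Create a shared block that both processes can attach to by name.
    shm = shared_memory.SharedMemory(create=True, size=16)
    p = Process(target=writer, args=(shm.name,))
    p.start()
    p.join()
    print(bytes(shm.buf[:5]))     # b'hello', written by the other process
    shm.close()
    shm.unlink()                  # free the block once no longer needed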
In hardware
In computer hardware, shared memory refers to a (typically large) block of random access memory (RAM) that can be accessed by several different central processing units (CPUs) in a multiprocessor computer system.
Shared memory systems may use:
uniform memory access (UMA): all the processors share the physical memory uniformly;
non-uniform memory access (NUMA): memory access time depends on the memory location relative to a processor;
cache-only memory architecture (COMA): the local memories for the processors at each node are used as cache instead of as actual main memory.
A shared memory system is relatively easy to program since all processors share a single view of data and the communication between processors can be as fast as memory accesses to the same location. The issue with shared memory systems is that many CPUs need fast access to memory and will likely cache memory, which has two complications:
access time degradation: when several processors try to access the same memory location it causes contention. Trying to access nearby memory locations may cause false sharing. Shared memory computers cannot scale very well. Most of them have ten or fewer processors;
lack of data coherence: whenever one cache is updated with information that may be used by other processors, the change needs to be reflected to the other processors, otherwise the different processors will be working with incoherent data. Such cache coherence protocols can, when they work well, provide extremely hig
|
https://en.wikipedia.org/wiki/Thermotolerance
|
Thermotolerance is the ability of an organism to survive high temperatures. An organism's natural tolerance of heat is their basal thermotolerance. Meanwhile, acquired thermotolerance is defined as an enhanced level of thermotolerance after exposure to a heat stress.
In plants
Multiple factors contribute to thermotolerance including signaling molecules like abscisic acid, salicylic acid, and pathways like the ethylene signaling pathway and heat stress response pathway.
The various heat stress response pathways enhance thermotolerance. The heat stress response in plants is mediated by heat shock transcription factors (HSF) and is well conserved across eukaryotes. HSFs are essential in plants’ ability to both sense and respond to stress. The HSFs, which are divided into three families (A, B, and C), encode the expression of heat shock proteins (HSP). Past studies have found that transcriptional activators HsfA1 and HsfB1 are the main positive regulators of heat stress response genes in Arabidopsis thaliana. The general pathway to thermotolerance is characterized by sensing of heat stress, activation of HSFs, upregulation of heat response, and return to the non-stressed state.
In 2011, while studying heat stress in A. thaliana, Ikeda et al. concluded that the early response is regulated by HsfA1 and the extended response is regulated by HsfA2. They used RT-PCR to analyze the expression of HS-inducible genes of mutant (ectopic and nonfunctional HsfB1) and wild-type plants. Plants with mutant HsfB1 had lower acquired thermotolerance, based on both lower expression of heat stress genes and visibly altered phenotypes. With these results they concluded that class A HSFs positively regulated the heat stress response while class B HSFs repressed the expression of HSF genes. Therefore, both were necessary for plants to return to non-stressed conditions and for acquired thermotolerance.
In animals
|
https://en.wikipedia.org/wiki/Eyeball%20network
|
Eyeball network is a slang term used by network engineers and architects for an access network whose primary users use the network to “look at things” (browse the Internet, read email, etc.) and consume content, as opposed to networks used primarily to generate their own data, known as “content networks” or “content providers”.
The term “eyeball network” is often overheard in conversations and seen in articles that discuss peering relationships between other networks, as well as net neutrality issues.
An example of an eyeball network would be any given ISP that provides Internet connectivity to end users. The ISP may peer with Google (which is a content provider), and the end users consume content served by Google; in this case the ISP is just an “eyeball network”, providing a means for the end user to reach the content provided by Google.
However, not all ISPs are eyeball networks; some are pure transit providers. Tier 2 and lower networks can serve as both an eyeball network and a transit provider, depending on their business model. In the modern ecosystem, where peering is given priority, the lines between the different types of networks are blurred, as ultimately any given network must be able to reach every other network on the Internet at large.
|
https://en.wikipedia.org/wiki/Generalized%20signal%20averaging
|
Within signal processing, in many cases only one image with noise is available, and averaging is then realized in a local neighbourhood. Results are acceptable if the noise is smaller in size than the smallest objects of interest in the image, but blurring of edges is a serious disadvantage. In the case of smoothing within a single image, one has to assume that there are no changes in the gray levels of the underlying image data. This assumption is clearly violated at locations of image edges, and edge blurring is a direct consequence of violating the assumption.
Description
Averaging is a special case of discrete convolution. For a 3 by 3 neighbourhood, the convolution mask M assigns the weight 1/9 to each of the nine pixels:
M = (1/9) · [[1, 1, 1], [1, 1, 1], [1, 1, 1]]
The significance of the central pixel may be increased to better approximate the properties of noise with a Gaussian probability distribution, for example
M = (1/10) · [[1, 1, 1], [1, 2, 1], [1, 1, 1]] or M = (1/16) · [[1, 2, 1], [2, 4, 2], [1, 2, 1]].
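A sketch of local averaging as discrete convolution (assuming NumPy and SciPy; the noisy test image is synthetic):

import numpy as np
from scipy.ndimage import convolve

rng = np.random.default_rng(1)
image = np.zeros((64, 64))
image[16:48, 16:48] = 1.0                        # a bright square
noisy = image + 0.2 * rng.standard_normal(image.shape)

# 3x3 averaging mask: every neighbour contributes equally.
M_avg = np.full((3, 3), 1.0 / 9.0)

# Mask giving extra weight to the central pixel (closer to a Gaussian shape).
M_weighted = np.array([[1, 1, 1],
                       [1, 2, 1],
                       [1, 1, 1]]) / 10.0

smoothed = convolve(noisy, M_avg)
print(noisy.std(), smoothed.std())               # the noise level drops...
# ...but the edges of the square are blurred, the disadvantage noted above.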
A suitable page for beginners about matrices is at:
https://web.archive.org/web/20060819141930/http://www.gamedev.net/reference/programming/features/imageproc/page2.asp
The whole article starts on page: https://web.archive.org/web/20061019072001/http://www.gamedev.net/reference/programming/features/imageproc/
|
https://en.wikipedia.org/wiki/Molluscivore
|
A molluscivore is a carnivorous animal that specialises in feeding on molluscs such as gastropods, bivalves, brachiopods and cephalopods. Known molluscivores include numerous predatory (and often cannibalistic) molluscs (e.g. octopuses, murexes, decollate snails and oyster drills), arthropods such as crabs and firefly larvae, and vertebrates such as fish, birds and mammals. Molluscivory is performed in a variety of ways, with some animals highly adapted to this method of feeding behaviour. A similar behaviour, durophagy, describes the feeding of animals that consume hard-shelled or exoskeleton-bearing organisms, such as corals, shelled molluscs, or crabs.
Description
Molluscivory can be performed in several ways:
In some cases, the mollusc prey are simply swallowed entire, including the shell, whereupon the prey is killed through suffocation and/or exposure to digestive enzymes. Only cannibalistic sea slugs, snail-eating cone shells of the taxon Coninae, and some sea anemones use this method.
One method, used especially by vertebrate molluscivores, is to break the shell, either by exerting force on the shell until it breaks, often by biting the shell, like with oyster crackers, mosasaurs, and placodonts, or hammering at the shell, e.g. oystercatchers and crabs, or by simply dashing the mollusc on a rock (e.g. song thrushes, gulls, and sea otters).
Another method is to remove the shell from the prey. Molluscs are attached to their shell by strong muscular ligaments, making the shell's removal difficult. Molluscivorous birds, such as oystercatchers and the Everglades snail kite, insert their elongate beak into the shell to sever these attachment ligaments, facilitating removal of the prey. The carnivorous terrestrial pulmonate snail known as the "decollate snail" ("decollate" being a synonym for "decapitate") uses a similar method: it reaches into the opening of the prey's shell and bites through the muscles in the prey's neck, whereupon it immediately begins d
|
https://en.wikipedia.org/wiki/Digital%20room%20correction
|
Digital room correction (or DRC) is a process in the field of acoustics where digital filters designed to ameliorate unfavorable effects of a room's acoustics are applied to the input of a sound reproduction system. Modern room correction systems produce substantial improvements in the time domain and frequency domain response of the sound reproduction system.
History
The use of analog filters, such as equalizers, to normalize the frequency response of a playback system has a long history; however, analog filters are very limited in their ability to correct the distortion found in many rooms. Although digital implementations of the equalizers have been available for some time, digital room correction is usually used to refer to the construction of filters which attempt to invert the impulse response of the room and playback system, at least in part. Digital correction systems are able to use acausal filters, and are able to operate with optimal time resolution, optimal frequency resolution, or any desired compromise along the Gabor limit. Digital room correction is a fairly new area of study which has only recently been made possible by the computational power of modern CPUs and DSPs.
Operation
The configuration of a digital room correction system begins with measuring the impulse response of the room at a reference listening position, and sometimes at additional locations, for each of the loudspeakers. Then, computer software is used to compute an FIR filter, which reverses the effects of the room and linear distortion in the loudspeakers. In low-performance conditions, a few IIR peaking filters are used instead of FIR filters, which require convolution, a relatively computation-heavy operation. Finally, the calculated filter is loaded into a computer or other room correction device which applies the filter in real time. Because most room correction filters are acausal, there is some delay. Most DRC systems allow the operator to control the added delay through
|
https://en.wikipedia.org/wiki/Code
|
In communications and information processing, code is a system of rules to convert information—such as a letter, word, sound, image, or gesture—into another form, sometimes shortened or secret, for communication through a communication channel or storage in a storage medium. An early example is an invention of language, which enabled a person, through speech, to communicate what they thought, saw, heard, or felt to others. But speech limits the range of communication to the distance a voice can carry and limits the audience to those present when the speech is uttered. The invention of writing, which converted spoken language into visual symbols, extended the range of communication across space and time.
The process of encoding converts information from a source into symbols for communication or storage. Decoding is the reverse process, converting code symbols back into a form that the recipient understands, such as English or Spanish.
One reason for coding is to enable communication in places where ordinary plain language, spoken or written, is difficult or impossible. For example, semaphore, where the configuration of flags held by a signaler or the arms of a semaphore tower encodes parts of the message, typically individual letters, and numbers. Another person standing a great distance away can interpret the flags and reproduce the words sent.
Theory
In information theory and computer science, a code is usually considered as an algorithm that uniquely represents symbols from some source alphabet, by encoded strings, which may be in some other target alphabet. An extension of the code for representing sequences of symbols over the source alphabet is obtained by concatenating the encoded strings.
Before giving a mathematically precise definition, here is a brief example. The mapping
C = {a ↦ 0, b ↦ 01, c ↦ 011}
is a code, whose source alphabet is the set {a, b, c} and whose target alphabet is the set {0, 1}. Using the extension of the code, the encoded string 0011001 can be grouped into codewords a
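A small sketch of the extension-by-concatenation idea, using the mapping from the example above (the helper functions and the splitting rule are my own illustration):

```python
CODE = {"a": "0", "b": "01", "c": "011"}
DECODE = {v: k for k, v in CODE.items()}

def encode(message: str) -> str:
    """Extension of the code: concatenate the codewords of the individual symbols."""
    return "".join(CODE[symbol] for symbol in message)

def decode(encoded: str) -> str:
    """In this particular code every codeword starts with '0' and contains no
    further '0', so splitting before each '0' recovers the codewords unambiguously."""
    groups, current = [], ""
    for bit in encoded:
        if bit == "0" and current:
            groups.append(current)
            current = ""
        current += bit
    groups.append(current)
    return "".join(DECODE[group] for group in groups)

print(encode("acab"))     # 0011001
print(decode("0011001"))  # acab
```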
|
https://en.wikipedia.org/wiki/DeWitt%20notation
|
Physics often deals with classical models where the dynamical variables are a collection of functions
{φα}α over a d-dimensional space/spacetime manifold M where α is the "flavor" index. This involves functionals over the φs, functional derivatives, functional integrals, etc. From a functional point of view this is equivalent to working with an infinite-dimensional smooth manifold where its points are an assignment of a function for each α, and the procedure is in analogy with differential geometry where the coordinates for a point x of the manifold M are φα(x).
In the DeWitt notation (named after theoretical physicist Bryce DeWitt), φα(x) is written as φi where i is now understood as an index covering both α and x.
So, given a smooth functional A, A,i stands for the functional derivative δA[φ]/δφi = δA[φ]/δφα(x), as a functional of φ. In other words, a "1-form" field over the infinite-dimensional "functional manifold".
In integrals, the Einstein summation convention is used. Alternatively,
|
https://en.wikipedia.org/wiki/Food%20Valley
|
Food Valley is a region in the Netherlands where international food companies, research institutes, and Wageningen University and Research Centre are concentrated. The Food Valley area is the home of a large number of food multinationals and within the Food Valley about 15,000 professionals are active in food related sciences and technological development. Far more are involved in the manufacturing of food products. Food Valley, with the city of Wageningen as its center, is intended to form a dynamic heart of knowledge for the international food industry.
Within this region, Foodvalley NL is intended to create conditions so that food manufacturers and knowledge institutes can work together in developing new and innovating food concepts.
Current research about the Food Valley
The Food Valley as a region has been the subject of study by several human geographers. Even before the Food Valley was established as an organisation in 2004 and as a region in 2011, Frank Kraak and Frits Oevering made a SWOT analysis of the region using an Evolutionary economics framework and compared it with similar regions in Canada, Denmark, Italy and Sweden. A similar study was done by Floris Wieberdink. The study utilised Geomarketing concepts in the WERV, the predecessor of the Regio Food Valley. Geijer and Van der Velden studied the economic development of the Regio Food Valley using statistical data.
Discussion
The research performed in the Food Valley has generated some discussion about the influence of culture on economic growth. Wieberdink argued that culture and habitat are not spatially but historically bounded. More recently a study about the Food Valley argued that culture and habitat are in fact spatially bounded. Both studies, however, recommend that the Regio Food Valley promote its distinct culture.
See also
|
https://en.wikipedia.org/wiki/Lemniscate
|
In algebraic geometry, a lemniscate is any of several figure-eight or ∞-shaped curves. The word comes from a Latin term meaning "decorated with ribbons", derived from a Greek word meaning "ribbon", which may alternatively refer to the wool from which the ribbons were made.
Curves that have been called a lemniscate include three quartic plane curves: the hippopede or lemniscate of Booth, the lemniscate of Bernoulli, and the lemniscate of Gerono. The study of lemniscates (and in particular the hippopede) dates to ancient Greek mathematics, but the term "lemniscate" for curves of this type comes from the work of Jacob Bernoulli in the late 17th century.
History and examples
Lemniscate of Booth
The consideration of curves with a figure-eight shape can be traced back to Proclus, a Greek Neoplatonist philosopher and mathematician who lived in the 5th century AD. Proclus considered the cross-sections of a torus by a plane parallel to the axis of the torus. As he observed, for most such sections the cross section consists of either one or two ovals; however, when the plane is tangent to the inner surface of the torus, the cross-section takes on a figure-eight shape, which Proclus called a horse fetter (a device for holding two feet of a horse together), or "hippopede" in Greek. The name "lemniscate of Booth" for this curve dates to its study by the 19th-century mathematician James Booth.
The lemniscate may be defined as an algebraic curve, the zero set of the quartic polynomial (x² + y²)² − cx² − dy² when the parameter d is negative (or zero for the special case where the lemniscate becomes a pair of externally tangent circles). For positive values of d one instead obtains the oval of Booth.
Lemniscate of Bernoulli
In 1680, Cassini studied a family of curves, now called the Cassini oval, defined as follows: the locus of all points, the product of whose distances from two fixed points, the curves' foci, is a constant. Under very particular circumstances (when the half-distance between the points is
|
https://en.wikipedia.org/wiki/Facultative
|
Facultative means "optional" or "discretionary" (antonym obligate), used mainly in biology in phrases such as:
Facultative (FAC), facultative wetland (FACW), or facultative upland (FACU): wetland indicator statuses for plants
Facultative anaerobe, an organism that can use oxygen but also has anaerobic methods of energy production. It can survive in either environment
Facultative biotroph, an organism, often a fungus, that can live as a saprotroph but also form mutualisms with other organisms at different times of its life cycle.
Facultative biped, an animal that is capable of walking or running on two legs as well as walking or running on four limbs or more, as appropriate
Facultative carnivore, a carnivore that does not depend solely on animal flesh for food but also can subsist on non-animal food. Compare this with the term omnivore
Facultative heterochromatin, tightly packed but non-repetitive DNA in the form of heterochromatin, but which can lose its condensed structure and become transcriptionally active
Facultative lagoon, a type of stabilization pond used in biological treatment of industrial and domestic wastewater
Facultative parasite, a parasite that can complete its life cycle without depending on a host
Facultative photoperiodic plant, a plant that will eventually flower regardless of night length but is more likely to flower under appropriate light conditions.
Facultative saprophyte, lives on dying, rather than dead, plant material
Facultative virus
See also
(antonym) Obligate
Opportunism (Biology)
Biology terminology
|
https://en.wikipedia.org/wiki/Trust%20on%20first%20use
|
Trust on first use (TOFU), or trust upon first use (TUFU), is an authentication scheme used by client software which needs to establish a trust relationship with an unknown or not-yet-trusted endpoint. In a TOFU model, the client will try to look up the endpoint's identifier, usually either the public identity key of the endpoint, or the fingerprint of said identity key, in its local trust database. If no identifier exists yet for the endpoint, the client software will either prompt the user to confirm they have verified the purported identifier is authentic, or if manual verification is not assumed to be possible in the protocol, the client will simply trust the identifier which was given and record the trust relationship into its trust database. If in a subsequent connection a different identifier is received from the opposing endpoint, the client software will consider it to be untrusted.
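The lookup-then-pin behaviour described above can be sketched as follows (a minimal illustration only, not any particular client's implementation; the JSON storage format and the function names are assumptions):

```python
import json
from pathlib import Path

TRUST_DB = Path("trust_db.json")  # hypothetical local trust database

def _load() -> dict:
    return json.loads(TRUST_DB.read_text()) if TRUST_DB.exists() else {}

def check_endpoint(endpoint: str, presented_fingerprint: str) -> str:
    """Return 'first-use' (identifier pinned now), 'trusted', or 'mismatch'."""
    db = _load()
    pinned = db.get(endpoint)
    if pinned is None:
        # Trust on first use: record the identifier presented by the endpoint.
        db[endpoint] = presented_fingerprint
        TRUST_DB.write_text(json.dumps(db))
        return "first-use"
    return "trusted" if pinned == presented_fingerprint else "mismatch"

print(check_endpoint("example.org:22", "SHA256:abc"))  # first-use
print(check_endpoint("example.org:22", "SHA256:abc"))  # trusted
print(check_endpoint("example.org:22", "SHA256:xyz"))  # mismatch -> warn or block
```

Whether a mismatch produces a blocking error (as in SSH) or a non-blocking warning (as in Signal) is a policy choice of the client, as described below.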
TOFU implementations
In the SSH protocol, most client software (though not all) will, upon connecting to a not-yet-trusted server, display the server's public key fingerprint, and prompt the user to verify they have indeed authenticated it using an authenticated channel. The client will then record the trust relationship into its trust database. A new identifier will cause a blocking warning that requires manual removal of the currently stored identifier.
The XMPP client Conversations uses Blind Trust Before Verification, where all identifiers are blindly trusted until the user demonstrates will and ability to authenticate endpoints by scanning the QR-code representation of the identifier. After the first identifier has been scanned, the client will display a shield symbol for messages from authenticated endpoints, and red background for others.
In Signal the endpoints initially blindly trust the identifier and display non-blocking warnings when it changes. The identifier can be verified either by scanning a QR-code, or by exchanging the decimal representation of the identifie
|
https://en.wikipedia.org/wiki/Square%20root%20of%202
|
The square root of 2 (approximately 1.4142) is a positive real number that, when multiplied by itself, equals the number 2. It may be written in mathematics as √2 or 2^(1/2). It is an algebraic number, and therefore not a transcendental number. Technically, it should be called the principal square root of 2, to distinguish it from the negative number with the same property.
Geometrically, the square root of 2 is the length of a diagonal across a square with sides of one unit of length; this follows from the Pythagorean theorem. It was probably the first number known to be irrational. The fraction 99/70 (≈ 1.4142857) is sometimes used as a good rational approximation with a reasonably small denominator.
Sequence A002193 in the On-Line Encyclopedia of Integer Sequences consists of the digits in the decimal expansion of the square root of 2, here truncated to 65 decimal places:
History
The Babylonian clay tablet YBC 7289 (c. 1800–1600 BC) gives an approximation of √2 in four sexagesimal figures, 1;24,51,10, which is accurate to about six decimal digits, and is the closest possible three-place sexagesimal representation of √2: 1 + 24/60 + 51/60² + 10/60³ = 1.41421296…
Another early approximation is given in ancient Indian mathematical texts, the Sulbasutras (c. 800–200 BC), as follows: Increase the length [of the side] by its third and this third by its own fourth less the thirty-fourth part of that fourth. That is, √2 ≈ 1 + 1/3 + 1/(3·4) − 1/(3·4·34) = 577/408 = 1.41421568…
This approximation is the seventh in a sequence of increasingly accurate approximations based on the sequence of Pell numbers, which can be derived from the continued fraction expansion of √2. Despite having a smaller denominator, it is only slightly less accurate than the Babylonian approximation.
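These approximations are easy to check numerically; the short sketch below (my own, not part of the article) evaluates the Babylonian sexagesimal value, the Sulbasutra fraction, and the first few convergents of the continued fraction [1; 2, 2, 2, …]:

```python
import math
from fractions import Fraction

sqrt2 = math.sqrt(2)

# Babylonian value from YBC 7289: 1;24,51,10 in sexagesimal.
babylonian = 1 + 24/60 + 51/60**2 + 10/60**3
print(f"Babylonian  {babylonian:.9f}   error {abs(babylonian - sqrt2):.1e}")

# Sulbasutra value: 1 + 1/3 + 1/(3*4) - 1/(3*4*34) = 577/408.
sulba = Fraction(1) + Fraction(1, 3) + Fraction(1, 12) - Fraction(1, 408)
print(f"Sulbasutra  {float(sulba):.9f}   error {abs(float(sulba) - sqrt2):.1e}   ({sulba})")

# Convergents p/q of sqrt(2): each step maps p/q -> (p + 2q)/(p + q).
p, q = 1, 1
for _ in range(7):
    p, q = p + 2 * q, p + q
    print(f"{p}/{q} = {p/q:.9f}")
```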
Pythagoreans discovered that the diagonal of a square is incommensurable with its side, or in modern language, that the square root of two is irrational. Little is known with certainty about the time or circumstances of this discovery, but the name of Hippasus of Metapontum is often mentioned. For a while, the Pythagoreans treated as an official s
|
https://en.wikipedia.org/wiki/Minimal%20counterexample
|
In mathematics, a minimal counterexample is the smallest example which falsifies a claim, and a proof by minimal counterexample is a method of proof which combines the use of a minimal counterexample with the ideas of proof by induction and proof by contradiction. More specifically, in trying to prove a proposition P, one first assumes by contradiction that it is false, and that therefore there must be at least one counterexample. With respect to some idea of size (which may need to be chosen carefully), one then concludes that there is such a counterexample C that is minimal. In regard to the argument, C is generally something quite hypothetical (since the truth of P excludes the possibility of C), but it may be possible to argue that if C existed, then it would have some definite properties which, after applying some reasoning similar to that in an inductive proof, would lead to a contradiction, thereby showing that the proposition P is indeed true.
If the form of the contradiction is that we can derive a further counterexample D that is smaller than C in the sense of the working hypothesis of minimality, then this technique is traditionally called proof by infinite descent, in which case there may be multiple and more complex ways to structure the argument of the proof.
The assumption that if there is a counterexample, there is a minimal counterexample, is based on a well-ordering of some kind. The usual ordering on the natural numbers is clearly possible, by the most usual formulation of mathematical induction; but the scope of the method can include well-ordered induction of any kind.
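As a small worked illustration of the method (my own example, not from the article), here is the standard argument that every integer greater than 1 has a prime divisor:

```latex
\textbf{Claim.} Every integer $n > 1$ has a prime divisor.

\textbf{Proof by minimal counterexample.} Suppose not, and let $C > 1$ be a
minimal counterexample; one exists because the natural numbers are well-ordered.
Now $C$ cannot be prime, since a prime divides itself, so $C = ab$ with
$1 < a, b < C$. By the minimality of $C$, the smaller integer $a$ has a prime
divisor $p$. Then $p \mid a$ and $a \mid C$, hence $p \mid C$, contradicting the
choice of $C$. Therefore no counterexample exists and the claim holds.
```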
Examples
The minimal counterexample method has been much used in the classification of finite simple groups. The Feit–Thompson theorem, that finite simple groups that are not cyclic groups have even order, was based on the hypothesis of some, and therefore some minimal, simple group G of odd order. Every proper subgroup of G can be assumed a solvable group, meaning that m
|
https://en.wikipedia.org/wiki/Set-builder%20notation
|
In set theory and its applications to logic, mathematics, and computer science, set-builder notation is a mathematical notation for describing a set by enumerating its elements, or stating the properties that its members must satisfy.
Defining sets by properties is also known as set comprehension, set abstraction or as defining a set's intension.
Sets defined by enumeration
A set can be described directly by enumerating all of its elements between curly brackets, as in the following two examples:
{3, 7, 15, 31} is the set containing the four numbers 3, 7, 15, and 31, and nothing else.
is the set containing , , and , and nothing else (there is no order among the elements of a set).
This is sometimes called the "roster method" for specifying a set.
When it is desired to denote a set that contains elements from a regular sequence, an ellipsis notation may be employed, as shown in the next examples:
{1, 2, 3, …, 100} is the set of integers between 1 and 100 inclusive.
{1, 2, 3, …} is the set of natural numbers.
{…, −2, −1, 0, 1, 2, …} is the set of all integers.
There is no order among the elements of a set (this explains and validates the equality of the last example), but with the ellipses notation, we use an ordered sequence before (or after) the ellipsis as a convenient notational vehicle for explaining which elements are in a set. The first few elements of the sequence are shown, then the ellipses indicate that the simplest interpretation should be applied for continuing the sequence. Should no terminating value appear to the right of the ellipses, then the sequence is considered to be unbounded.
In general, {1, …, n} denotes the set of all natural numbers i such that 1 ≤ i ≤ n. Another notation for {1, …, n} is the bracket notation [n]. A subtle special case is n = 0, in which [n] = {1, …, 0} is equal to the empty set ∅. Similarly, {a₁, …, aₙ} denotes the set of all aᵢ for 1 ≤ i ≤ n.
In each preceding example, each set is described by enumerating its elements. Not all sets can be described in this way, or if they can, their enumeration may be too long or too complicated to be useful.
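The enumeration and ellipsis conventions above have direct analogues in programming languages; the snippet below (purely illustrative) builds the finite examples in Python, whose set displays and comprehensions mirror the roster method and set-builder notation respectively:

```python
# Roster method: enumerate the elements explicitly.
small_set = {3, 7, 15, 31}

# Ellipsis-style regular sequences, written with ranges instead of "...".
one_to_hundred = set(range(1, 101))      # {1, 2, 3, ..., 100}

# Set-builder style ("the set of all n*n such that n is even") as a comprehension.
even_squares = {n * n for n in range(1, 101) if n % 2 == 0}

print(len(one_to_hundred), min(even_squares), max(even_squares))  # 100 4 10000
```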
|
https://en.wikipedia.org/wiki/Intel%20HEX
|
Intel hexadecimal object file format, Intel hex format or Intellec Hex is a file format that conveys binary information in ASCII text form, making it possible to store on non-binary media such as paper tape, punch cards, etc., to display on text terminals or be printed on line-oriented printers. The format is commonly used for programming microcontrollers, EPROMs, and other types of programmable logic devices and hardware emulators. In a typical application, a compiler or assembler converts a program's source code (such as in C or assembly language) to machine code and outputs it into a HEX file. Some also use it as a container format holding packets of stream data. Common file extensions used for the resulting files are .HEX or .H86. The HEX file is then read by a programmer to write the machine code into a PROM or is transferred to the target system for loading and execution.
History
The Intel hex format was originally designed for Intel's Intellec Microcomputer Development Systems (MDS) in 1973 in order to load and execute programs from paper tape. It was also used to specify memory contents to Intel for ROM production, which previously had to be encoded in the much less efficient BNPF (Begin-Negative-Positive-Finish) format. In 1973, Intel's "software group" consisted only of Bill Byerly and Ken Burget, and Gary Kildall as an external consultant doing business as Microcomputer Applications Associates (MAA) and founding Digital Research in 1974. Beginning in 1975, the format was utilized by Intellec Series II ISIS-II systems supporting diskette drives, with files using the file extension HEX. Many PROM and EPROM programming devices accept this format.
Format
Intel HEX consists of lines of ASCII text that are separated by line feed or carriage return characters or both. Each text line contains uppercase hexadecimal characters that encode multiple binary numbers. The binary numbers may represent data, memory addresses, or other values, depending on their position
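As an illustration of the line structure just described, the sketch below parses a single data record and verifies its checksum (a minimal sketch, not a full implementation of every record type; the record shown is a generic 16-byte data record):

```python
def parse_record(line: str) -> dict:
    """Parse one Intel HEX record: ':' + byte count + address + record type
    + data + checksum, all as uppercase hexadecimal characters."""
    assert line.startswith(":")
    raw = bytes.fromhex(line[1:].strip())
    count = raw[0]
    address = int.from_bytes(raw[1:3], "big")
    rtype = raw[3]
    data, checksum = raw[4:4 + count], raw[4 + count]
    # The checksum is chosen so that all record bytes sum to 0 modulo 256.
    assert sum(raw) % 256 == 0, "checksum mismatch"
    return {"count": count, "address": address, "type": rtype,
            "data": data.hex(), "checksum": checksum}

print(parse_record(":10010000214601360121470136007EFE09D2190140"))
```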
|
https://en.wikipedia.org/wiki/Substrate%20coupling
|
In an integrated circuit, a signal can couple from one node to another via the substrate. This phenomenon is referred to as substrate coupling or substrate noise coupling.
The push for reduced cost, more compact circuit boards, and added customer features has provided incentives for the inclusion of analog functions on primarily digital MOS integrated circuits (ICs), forming mixed-signal ICs. In these systems, the speed of digital circuits is constantly increasing, chips are becoming more densely packed, interconnect layers are added, and analog resolution is increased. In addition, the recent increase in wireless applications and their growing market is introducing a new set of aggressive design goals for realizing mixed-signal systems. Here, the designer integrates radio frequency (RF) analog and baseband digital circuitry on a single chip.
The goal is to make single-chip radio frequency integrated circuits (RFICs) on silicon, where all the blocks are fabricated on the same chip. One of the advantages of this integration is low power dissipation for portability due to a reduction in the number of package pins and associated bond wire capacitance. Another reason that an integrated solution offers lower power consumption is that routing high-frequency signals off-chip often requires a 50 Ω impedance match, which can result in higher power dissipation. Other advantages include improved high-frequency performance due to reduced package interconnect parasitics, higher system reliability, smaller package count, and higher integration of RF components with VLSI-compatible digital circuits. In fact, the single-chip transceiver is now a reality.
The design of such systems, however, is a complicated task. There are two main challenges in realizing mixed-signal ICs. The first challenging task, specific to RFICs, is to fabricate good on-chip passive elements such as high-Q inductors. The second challenging task, applicable to any mixed-signal IC and the subject of this chap
|
https://en.wikipedia.org/wiki/Background%20debug%20mode%20interface
|
Background debug mode (BDM) interface is an electronic interface that allows debugging of embedded systems. Specifically, it provides in-circuit debugging functionality in microcontrollers. It requires a single wire and specialized electronics in the system being debugged. It appears in many Freescale Semiconductor products.
The interface allows a Host to manage and query a target. Specialized hardware is required in the target device. No special hardware is required in the host; a simple bidirectional I/O pin is sufficient.
I/O signals
The signals used by BDM to communicate data to and from the target are initiated by the host processor. The host negates the transmission line, and then either
Asserts the line sooner, to output a 1,
Asserts the line later, to output a 0,
Tri-states its output, allowing the target to drive the line. The host can sense a 1 or 0 as an input value.
At the start of the next bit time, the host negates the transmission line, and the process repeats. Each bit is communicated in this manner.
The increasing complexity of today's software and hardware designs is leading to some fresh approaches to debugging. Silicon manufacturers offer more and more on-chip debugging features for emulation of new processors.
This capability, implemented in various processors under such names as background debug mode (BDM), JTAG and on-chip in-circuit emulation, puts basic debugging functions on the chip itself. With a BDM (1 wire interface) or JTAG (standard JTAG) debug port, you control and monitor the microcontroller solely through the stable on-chip debugging services.
This debugging mode runs even when the target system crashes and enables developers to continue investigating the cause of the crash.
Microcontroller application development
A good development tool environment is important to reduce total development time and cost. Users want to debug their application program under conditions that imitate the actual setup of the
|
https://en.wikipedia.org/wiki/List%20of%20vector%20spaces%20in%20mathematics
|
This is a list of vector spaces in abstract mathematics, by Wikipedia page.
Banach space
Besov space
Bochner space
Dual space
Euclidean space
Fock space
Fréchet space
Hardy space
Hilbert space
Hölder space
LF-space
Lp space
Minkowski space
Montel space
Morrey–Campanato space
Orlicz space
Riesz space
Schwartz space
Sobolev space
Tsirelson space
Linear algebra
Mathematics-related lists
|
https://en.wikipedia.org/wiki/QED%20vacuum
|
The QED vacuum or quantum electrodynamic vacuum is the field-theoretic vacuum of quantum electrodynamics. It is the lowest energy state (the ground state) of the electromagnetic field when the fields are quantized. When Planck's constant is hypothetically allowed to approach zero, QED vacuum is converted to classical vacuum, which is to say, the vacuum of classical electromagnetism.
Another field-theoretic vacuum is the QCD vacuum of the Standard Model.
Fluctuations
The QED vacuum is subject to fluctuations about a dormant zero average-field condition.
Virtual particles
It is sometimes attempted to provide an intuitive picture of virtual particles based upon the Heisenberg energy-time uncertainty principle:
ΔE Δt ≥ ħ/2
(where ΔE and Δt are energy and time variations, and ħ is the reduced Planck constant, the Planck constant divided by 2π), arguing along the lines that the short lifetime of virtual particles allows the "borrowing" of large energies from the vacuum and thus permits particle generation for short times.
This interpretation of the energy-time uncertainty relation is not universally accepted, however. One issue is the use of an uncertainty relation limiting measurement accuracy as though a time uncertainty Δt determines a "budget" for borrowing energy ΔE. Another issue is the meaning of "time" in this relation, because energy and time (unlike position x and momentum p, for example) do not satisfy a canonical commutation relation (such as [x, p] = iħ). Various schemes have been advanced to construct an observable that has some kind of time interpretation, and yet does satisfy a canonical commutation relation with energy. The many approaches to the energy-time uncertainty principle are a continuing subject of study.
Quantization of the fields
The Heisenberg uncertainty principle does not allow a particle to exist in a state in which the particle is simultaneously at a fixed location, say the origin of coordinates, and has also zero momentum. Instead the particle has a
|
https://en.wikipedia.org/wiki/ESD%20simulator
|
An ESD simulator, also known as an ESD gun, is a handheld unit used to test the immunity of devices to electrostatic discharge (ESD). These simulators are used in special electromagnetic compatibility (EMC) laboratories. ESD pulses are fast, high-voltage pulses created when two objects with different electrical charges come into close proximity or contact. Recreating them in a test environment helps to verify that the device under test is immune to static electricity discharges.
ESD testing is necessary to receive a CE mark, and for most suppliers of components for motor vehicles as part of required electromagnetic compatibility testing. It is often useful to automate these tests to eliminate the human factor.
There are three distinct test models for electrostatic discharge: human-body, machine, and charged-devices models. The human-body model emulates the action of a human body discharging static electricity, the machine model simulates static discharge from a machine, and the charged-device model simulates the charging and discharging events that occur in production processes and equipment.
Many ESD guns have interchangeable modules containing different discharge networks or RC modules (specific resistance and capacitance values) to simulate different discharges. These modules typically slide into the handle of the pistol portion of the ESD simulator, much like loading some handguns. They change the characteristics of the waveshape discharged from the pistol and are called out in general standards like IEC 61000-4-2, SAE J113 and industry-specific standards like ISO 10605. Resistance is referred to in ohms (Ω), capacitance is referred to in picofarads (pF or "puff"). The most commonly used discharge network is for IEC 61000-4-2 and ISO 10605, expressed as 150 pF/330 Ω. There are over 50 combinations of resistance and capacitance depending on the standards and the applicable electronics.
Test standards
Standards that require ESD testing include:
ISO 10605
Ford
|
https://en.wikipedia.org/wiki/Beyond%20CMOS
|
Beyond CMOS refers to possible future digital logic technologies beyond the CMOS scaling limits, which constrain device density and switching speed due to heating effects.
Beyond CMOS is the name of one of the 7 focus groups in ITRS 2.0 (2013) and in its successor, the International Roadmap for Devices and Systems.
CPUs using CMOS were released from 1986 (e.g. 12 MHz Intel 80386). As CMOS transistor dimensions were shrunk the clock speeds also increased. Since about 2004 CMOS CPU clock speeds have leveled off at about 3.5 GHz.
CMOS device sizes continue to shrink – see Intel tick–tock and ITRS:
22 nanometer Ivy Bridge in 2012
first 14 nanometer processors shipped in Q4 2014.
In May 2015, Samsung Electronics showed a 300 mm wafer of 10 nanometer FinFET chips.
It is not yet clear if CMOS transistors will still work below 3 nm. See 3 nanometer.
Comparisons of technology
About 2010 the Nanoelectronic Research Initiative (NRI) studied various circuits in various technologies.
Nikonov benchmarked (theoretically) many technologies in 2012, and updated it in 2014. The 2014 benchmarking included 11 electronic, 8 spintronic, 3 orbitronic, 2 ferroelectric, and 1 straintronics technology.
The 2015 ITRS 2.0 report included a detailed chapter on Beyond CMOS, covering RAM and logic gates.
Some areas of investigation
Magneto-Electric Spin-Orbit logic
tunnel junction devices, eg Tunnel field-effect transistor
indium antimonide transistors
carbon nanotube FET, eg CNT Tunnel field-effect transistor
graphene nanoribbons
molecular electronics
spintronics — many variants
future low-energy electronics technologies, ultra-low dissipation conduction paths, including
topological materials
exciton superfluids
photonics and optical computing
superconducting computing
rapid single-flux quantum (RSFQ)
Superconducting computing and RSFQ
Superconducting computing includes several beyond-CMOS technologies that use superconducting devices, namely Josephson junctions, for electronic
|
https://en.wikipedia.org/wiki/Biologist
|
A biologist is a scientist who conducts research in biology. Biologists are interested in studying life on Earth, whether it is an individual cell, a multicellular organism, or a community of interacting populations. They usually specialize in a particular branch (e.g., molecular biology, zoology, and evolutionary biology) of biology and have a specific research focus (e.g., studying malaria or cancer).
Biologists who are involved in basic research have the aim of advancing knowledge about the natural world. They conduct their research using the scientific method, which is an empirical method for testing hypotheses. Their discoveries may have applications for some specific purpose such as in biotechnology, which has the goal of developing medically useful products for humans.
In modern times, most biologists have one or more academic degrees such as a bachelor's degree plus an advanced degree like a master's degree or a doctorate. Like other scientists, biologists can be found working in different sectors of the economy such as in academia, nonprofits, private industry, or government.
History
Francesco Redi, the founder of biology, is recognized as one of the greatest biologists of all time. Robert Hooke, an English natural philosopher, coined the term cell, suggesting plant structure's resemblance to honeycomb cells.
Charles Darwin and Alfred Wallace independently formulated the theory of evolution by natural selection, which was described in detail in Darwin's book On the Origin of Species, published in 1859. In it, Darwin proposed that the features of all living things, including humans, were shaped by natural processes of descent with accumulated modification leading to divergence over long periods of time. The theory of evolution in its current form affects almost all areas of biology. Separately, Gregor Mendel formulated the principles of inheritance in 1866, which became the basis of modern genetics.
In 1953, James D. Watson and Francis
|
https://en.wikipedia.org/wiki/OpenVNet
|
OpenVNet adds a network virtualization layer on top of the existing physical network and enables data center network administrators to greatly simplify the creation and operation of multi-tenant networks. It is based on an edge overlay network architecture and provides all the necessary components for network virtualization, such as an SDN controller, virtual switch, virtual router, and powerful APIs.
The OpenVNet project started in April 2013. A large part of the implementation had already been done in the Wakame-vdc project by the beginning of 2012.
See also
Open vSwitch
|
https://en.wikipedia.org/wiki/Beetle%20%28ASIC%29
|
The Beetle ASIC is an analog readout chip. It was developed for the LHCb experiment at CERN.
Overview
The chip integrates 128 channels with low-noise charge-sensitive pre-amplifiers and shapers. The pulse shape can be chosen such that it complies with LHCb specifications: a peaking time of 25 ns with a remainder of the peak voltage after 25 ns of less than 30%. A comparator per channel with configurable polarity provides a binary signal. Four adjacent comparator channels are being ORed and brought off chip via LVDS drivers.
Either the shaper or comparator output is sampled with the LHC bunch-crossing frequency of 40 MHz into an analog pipeline. This ring buffer has a programmable latency of a maximum of 160 sampling intervals and an integrated derandomising buffer of 16 stages. For analogue readout, data is multiplexed with up to 40 MHz onto one or four ports. A binary readout mode operates at up to 80 MHz output rate on two ports. Current drivers bring the serialised data off chip.
The chip can accept trigger rates up to 1.1 MHz to perform a dead-timeless readout within 900 ns per trigger. For testability and calibration purposes, a charge injector with adjustable pulse height is implemented. The bias settings and various other parameters can be controlled via a standard I²C-interface. The chip is radiation hardened to an accumulated dose of more than 100 Mrad. Robustness against single event upset is achieved by redundant logic.
External links
Beetle - a readout chip for LHCb
The Large Hadron Collider beauty experiment
Application-specific integrated circuits
CERN
|
https://en.wikipedia.org/wiki/Tiger-BASIC
|
Tiger-BASIC is a high speed multitasking BASIC dialect (List of BASIC dialects) to program microcontrollers of the BASIC-Tiger family. Tiger-BASIC and the integrated development environment which goes with it, were developed by Wilke-Technology (Aachen, Germany).
External links
Wilke-Technology
BASIC programming language
Embedded systems
|
https://en.wikipedia.org/wiki/System%20console
|
One meaning of system console, computer console, root console, operator's console, or simply console is the text entry and display device for system administration messages, particularly those from the BIOS or boot loader, the kernel, from the init system and from the system logger. It is a physical device consisting of a keyboard and a screen, and traditionally is a text terminal, but may also be a graphical terminal. System consoles are generalized to computer terminals, which are abstracted respectively by virtual consoles and terminal emulators. Today communication with system consoles is generally done abstractly, via the standard streams (stdin, stdout, and stderr), but there may be system-specific interfaces, for example those used by the system kernel.
Another, older, meaning of system console, computer console, hardware console, operator's console or simply console is a hardware component used by an operator to control the hardware, typically some combination of front panel, keyboard/printer and keyboard/display.
History
Prior to the development of alphanumeric CRT system consoles, some computers such as the IBM 1620 had console typewriters and front panels while the very first programmable computer, the Manchester Baby, used a combination of electromechanical switches and a CRT to provide console functions—the CRT displaying memory contents in binary by mirroring the machine's Williams-Kilburn tube CRT-based RAM.
Some early operating systems supported either a single keyboard/print or keyboard/display device for controlling the OS. Some also supported a single alternate console, and some supported a hardcopy console for retaining a record of commands, responses and other console messages. However, in the late 1960s it became common for operating systems to support many more consoles than 3, and operating systems began appearing in which the console was simply any terminal with a privileged user logged on.
On early minicomputers, the console was a seri
|
https://en.wikipedia.org/wiki/Syntrophy
|
In biology, syntrophy, synthrophy, or cross-feeding (from Greek syn meaning together, trophe meaning nourishment) is the phenomenon of one species feeding on the metabolic products of another species to cope with energy limitations by electron transfer. In this type of biological interaction, metabolite transfer happens between two or more metabolically diverse microbial species that live in close proximity to each other. The growth of one partner depends on the nutrients, growth factors, or substrates provided by the other partner. Thus, syntrophism can be considered an obligatory interdependency and a mutualistic metabolism between two different bacterial species.
Microbial syntrophy
Syntrophy is often used synonymously with mutualistic symbiosis, especially between at least two different bacterial species. Syntrophy differs from symbiosis in that the syntrophic relationship is primarily based on closely linked metabolic interactions that maintain a thermodynamically favorable lifestyle in a given environment. Syntrophy plays an important role in a large number of microbial processes, especially in oxygen-limited environments, methanogenic environments and anaerobic systems. In anoxic or methanogenic environments such as wetlands, swamps, paddy fields, landfills, the digestive tract of ruminants, and anaerobic digesters, syntrophy is employed to overcome the energy constraints, as the reactions in these environments proceed close to thermodynamic equilibrium.
Mechanism of microbial syntrophy
The main mechanism of syntrophy is removing the metabolic end products of one species so as to create an energetically favorable environment for another species. This obligate metabolic cooperation is required to facilitate the degradation of complex organic substrates under anaerobic conditions. Complex organic compounds such as ethanol, propionate, butyrate, and lactate cannot be directly used as substrates for methanogenesis by methanogens. On the other hand, fermentation
|
https://en.wikipedia.org/wiki/Emery%27s%20rule
|
In 1909, the entomologist Carlo Emery noted that social parasites among insects (e.g., kleptoparasites) tend to be parasites of species or genera to which they are closely related. Over time, this pattern has been recognized in many additional cases, and generalized to what is now known as Emery's rule. The pattern is best known for various taxa of Hymenoptera. For example, the social wasp Dolichovespula adulterina parasitizes other members of its genus such as Dolichovespula norwegica and Dolichovespula arenaria. Emery's rule is also applicable to members of other kingdoms such as fungi, red algae, and mistletoe. The significance and general relevance of this pattern are still a matter of some debate, as a great many exceptions exist, though a common explanation for the phenomenon when it occurs is that the parasites may have started as facultative parasites within the host species itself (such forms of intraspecific parasitism are well-known, even in some species of bees), but later became reproductively isolated and split off from the ancestral species, a form of sympatric speciation.
When a parasitic species is a sister taxon to its host in a phylogenetic sense, the relationship is considered to be in "strict" adherence to Emery's rule. When the parasite is a close relative of the host but not its sister species, the relationship is in "loose" adherence to the rule.
|
https://en.wikipedia.org/wiki/Living%20systems
|
Living systems are open self-organizing life forms that interact with their environment. These systems are maintained by flows of information, energy and matter. Multiple theories of living systems have been proposed. Such theories attempt to map general principles for how all living systems work.
Context
Some scientists have proposed in the last few decades that a general theory of living systems is required to explain the nature of life. Such a general theory would arise out of the ecological and biological sciences and attempt to map general principles for how all living systems work. Instead of examining phenomena by attempting to break things down into components, a general living systems theory explores phenomena in terms of dynamic patterns of the relationships of organisms with their environment.
Theories
Miller's open systems
James Grier Miller's living systems theory is a general theory about the existence of all living systems, their structure, interaction, behavior and development, intended to formalize the concept of life. According to Miller's 1978 book Living Systems, such a system must contain each of twenty "critical subsystems" defined by their functions. Miller considers living systems as a type of system. Below the level of living systems, he defines space and time, matter and energy, information and entropy, levels of organization, and physical and conceptual factors, and above living systems ecological, planetary and solar systems, galaxies, etc. Miller's central thesis is that the multiple levels of living systems (cells, organs, organisms, groups, organizations, societies, supranational systems) are open systems composed of critical and mutually-dependent subsystems that process inputs, throughputs, and outputs of energy and information. Seppänen (1998) says that Miller applied general systems theory on a broad scale to describe all aspects of living systems. Bailey states that Miller's theory is perhaps the "most integrative" social s
|
https://en.wikipedia.org/wiki/Generalized%20pencil-of-function%20method
|
Generalized pencil-of-function method (GPOF), also known as matrix pencil method, is a signal processing technique for estimating a signal or extracting information with complex exponentials. Being similar to Prony and original pencil-of-function methods, it is generally preferred to those for its robustness and computational efficiency.
The method was originally developed by Yingbo Hua and Tapan Sarkar for estimating the behaviour of electromagnetic systems by its transient response, building on Sarkar's past work on the original pencil-of-function method. The method has a plethora of applications in electrical engineering, particularly related to problems in computational electromagnetics, microwave engineering and antenna theory.
Method
Mathematical basis
A transient electromagnetic signal can be represented as:
y(t) = x(t) + n(t) ≈ Σ_{i=1}^{M} R_i e^(s_i t) + n(t),
where
y(t) is the observed time-domain signal,
n(t) is the signal noise,
x(t) is the actual signal,
R_i are the residues,
s_i are the poles of the system, defined as s_i = −α_i + jω_i,
z_i = e^(s_i T_s) are the corresponding poles in the z-domain, by the identities of the Z-transform,
α_i are the damping factors and
ω_i are the angular frequencies.
The same sequence, sampled by a period of T_s, can be written as the following:
y[kT_s] = x[kT_s] + n[kT_s] ≈ Σ_{i=1}^{M} R_i z_i^k + n[kT_s],  k = 0, 1, …, N − 1.
Generalized pencil-of-function estimates the optimal M and the z_i's.
Noise-free analysis
For the noiseless case, two matrices, and , are produced:
where is defined as the pencil parameter. and can be decomposed into the following matrices:
where
Z_0 and R are diagonal matrices with sequentially-placed z_i and R_i values, respectively.
If M ≤ L ≤ N − M, the generalized eigenvalues of the matrix pencil
Y_2 − λY_1 yield the poles of the system, which are the z_i. Then, the generalized eigenvectors can be obtained by the following identities:
where the superscript + denotes the Moore–Penrose inverse, also known as the pseudo-inverse. Singular value decomposition can be employed to compute the pseudo-inverse.
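A compact numerical sketch of the noise-free pencil idea (my own illustration following the standard matrix pencil formulation, so the variable names are not necessarily those used above): build two shifted Hankel matrices from the samples, take the nonzero eigenvalues of the pseudo-inverse product to get the poles z_i, then solve a Vandermonde least-squares system for the residues.

```python
import numpy as np

def matrix_pencil(y, M, L=None):
    """Estimate poles z_i and residues R_i of y[k] ~ sum_i R_i * z_i**k."""
    N = len(y)
    L = L if L is not None else N // 2          # pencil parameter, M <= L <= N - M
    # Hankel data matrix; Y1 drops the last column, Y2 drops the first.
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])
    Y1, Y2 = Y[:, :-1], Y[:, 1:]
    # In the noise-free case the nonzero eigenvalues of pinv(Y1) @ Y2 are the poles.
    eigvals = np.linalg.eigvals(np.linalg.pinv(Y1) @ Y2)
    z = eigvals[np.argsort(-np.abs(eigvals))][:M]
    # Residues from the least-squares fit of y[k] = sum_i R_i z_i**k.
    V = np.vander(z, N, increasing=True).T      # V[k, i] = z_i**k
    R, *_ = np.linalg.lstsq(V, y, rcond=None)
    return z, R

# Two damped complex exponentials as a test signal.
k = np.arange(60)
z_true = np.array([0.90 * np.exp(1j * 0.3), 0.70 * np.exp(1j * 1.1)])
R_true = np.array([1.0 + 0.5j, 0.8 - 0.2j])
y = (R_true[:, None] * z_true[:, None] ** k).sum(axis=0)

z_est, R_est = matrix_pencil(y, M=2)
print(np.sort_complex(z_est))
print(np.sort_complex(z_true))
```

With noise present, the data is first filtered, for example with a truncated singular value decomposition, as outlined in the next subsection.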
Noise filtering
If noise is present in the system, and are combined in a general data matrix, :
where is the noisy data. For efficient fil
|
https://en.wikipedia.org/wiki/Ordinal%20notation
|
In mathematical logic and set theory, an ordinal notation is a partial function mapping the set of all finite sequences of symbols, themselves members of a finite alphabet, to a countable set of ordinals. A Gödel numbering is a function mapping the set of well-formed formulae (a finite sequence of symbols on which the ordinal notation function is defined) of some formal language to the natural numbers. This associates each well-formed formula with a unique natural number, called its Gödel number. If a Gödel numbering is fixed, then the subset relation on the ordinals induces an ordering on well-formed formulae which in turn induces a well-ordering on the subset of natural numbers. A recursive ordinal notation must satisfy the following two additional properties:
the subset of natural numbers is a recursive set
the induced well-ordering on the subset of natural numbers is a recursive relation
There are many such schemes of ordinal notations, including schemes by Wilhelm Ackermann, Heinz Bachmann, Wilfried Buchholz, Georg Cantor, Solomon Feferman, Gerhard Jäger, Isles, Pfeiffer, Wolfram Pohlers, Kurt Schütte, Gaisi Takeuti (called ordinal diagrams), Oswald Veblen. Stephen Cole Kleene has a system of notations, called Kleene's O, which includes ordinal notations but it is not as well behaved as the other systems described here.
Usually one proceeds by defining several functions from ordinals to ordinals and representing each such function by a symbol. In many systems, such as Veblen's well known system, the functions are normal functions, that is, they are strictly increasing and continuous in at least one of their arguments, and increasing in other arguments. Another desirable property for such functions is that the value of the function is greater than each of its arguments, so that an ordinal is always being described in terms of smaller ordinals. There are several such desirable properties. Unfortunately, no one system can have all of them since they contra
|
https://en.wikipedia.org/wiki/List%20of%20refractive%20indices
|
Many materials have a well-characterized refractive index, but these indices often depend strongly upon the frequency of light, causing optical dispersion. Standard refractive index measurements are taken at the "yellow doublet" sodium D line, with a wavelength (λ) of 589 nanometers.
There are also weaker dependencies on temperature, pressure/stress, etc., as well on precise material compositions (presence of dopants, etc.); for many materials and typical conditions, however, these variations are at the percent level or less. Thus, it is especially important to cite the source for an index measurement if precision is required.
In general, an index of refraction is a complex number with both a real and imaginary part, where the latter indicates the strength of absorption loss at a particular wavelength—thus, the imaginary part is sometimes called the extinction coefficient . Such losses become particularly significant, for example, in metals at short (e.g. visible) wavelengths, and must be included in any description of the refractive index.
List
See also
Sellmeier equation
Corrective lens#Ophthalmic material property tables
Optical properties of water and ice
|
https://en.wikipedia.org/wiki/Cellular%20architecture
|
Cellular architecture is a type of computer architecture prominent in parallel computing. Cellular architectures are relatively new, with IBM's Cell microprocessor being the first one to reach the market. Cellular architecture takes multi-core architecture design to its logical conclusion, by giving the programmer the ability to run large numbers of concurrent threads within a single processor. Each 'cell' is a compute node containing thread units, memory, and communication. Speed-up is achieved by exploiting thread-level parallelism inherent in many applications.
Cell, a cellular architecture containing 9 cores, is the processor used in the PlayStation 3. Another prominent cellular architecture is Cyclops64, a massively parallel architecture currently under development by IBM.
Cellular architectures follow the low-level programming paradigm, which exposes the programmer to much of the underlying hardware. This allows the programmer to greatly optimize their code for the platform, but at the same time makes it more difficult to develop software.
See also
Cellular automaton
External links
Cellular architecture builds next generation supercomputers
ORNL, IBM, and the Blue Gene Project
Energy, IBM are partners in biological supercomputing project
Cell-based Architecture
Parallel computing
Computer architecture
Classes of computers
|
https://en.wikipedia.org/wiki/Hardware%20reset
|
A hardware reset or hard reset of a computer system is a hardware operation that re-initializes the core hardware components of the system, thus ending all current software operations in the system. This is typically, but not always, followed by booting of the system into firmware that re-initializes the rest of the system, and restarts the operating system.
Hardware resets are an essential part of the power-on process, but may also be triggered without power cycling the system by direct user intervention via a physical reset button, by watchdog timers, or by software intervention that, as its last action, activates the hardware reset line (e.g., in a fatal error where the computer crashes).
User-initiated hard resets can be used to reset the device if the software hangs, crashes, or is otherwise unresponsive. However, data may become corrupted if this occurs. Generally, a hard reset is initiated by pressing a dedicated reset button, or by holding a combination of buttons on some mobile devices. Devices that lack a dedicated reset button may instead have the user hold the power button to cut power, after which the user can turn the computer back on. On some systems (e.g., the PlayStation 2 video game console), pressing and releasing the power button initiates a hard reset, and holding the button turns the system off.
Hardware reset in 80x86 IBM PC
The 8086 microprocessors provide a RESET pin that is used to perform a hardware reset. When a HIGH is applied to the pin, the CPU immediately stops and sets the major registers to these values: CS = 0xFFFF and IP = 0x0000, with the other segment registers and the flags cleared to zero.
The CPU uses the values of CS and IP registers to find the location of the next instruction to execute. Location of next instruction is calculated using this simple equation:
Location of next instruction = (CS<<4) + (IP)
This implies that after the hardware reset, the CPU will start execution at the physical address 0xFFFF0. In IBM PC compatible computers, this address maps to BIOS ROM. The memory word at 0xFFFF0 usually contains a JMP ins
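The address computation above is easy to check directly; the snippet below (illustrative only) evaluates it for the documented reset values CS = 0xFFFF and IP = 0x0000:

```python
def real_mode_address(cs: int, ip: int) -> int:
    """Physical address in 8086 real mode: segment shifted left 4 bits plus offset."""
    return ((cs << 4) + ip) & 0xFFFFF   # 20-bit address bus

print(hex(real_mode_address(0xFFFF, 0x0000)))  # 0xffff0, the reset vector in BIOS ROM
```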
|
https://en.wikipedia.org/wiki/Feng%27s%20classification
|
Tse-yun Feng suggested the use of degree of parallelism to classify various computer architectures. It is based on sequential and parallel operations at a bit and word level.
About degree of parallelism
Maximum degree of parallelism
The maximum number of binary digits that can be processed within a unit time by a computer system is called the maximum parallelism degree P. If a processor is processing P bits in unit time, then P is called the maximum degree of parallelism.
Average degree of parallelism
Let i = 1, 2, 3, ..., T be the different timing instants and P1, P2, ..., PT be the corresponding bits processed.
Then, the average degree of parallelism Pa is defined as Pa = (P1 + P2 + … + PT) / T.
Processor utilization
Processor utilization is defined as μ = Pa / P, the ratio of the average degree of parallelism to its maximum value.
The maximum degree of parallelism depends on the structure of the arithmetic and logic unit. Higher degree of parallelism indicates a highly parallel ALU or processing element. Average parallelism depends on both the hardware and the software. Higher average parallelism can be achieved through concurrent programs.
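A small numerical sketch of these definitions (my own, with made-up sample values):

```python
def average_parallelism(bits_per_instant):
    """Average degree of parallelism: total bits processed over T timing instants."""
    return sum(bits_per_instant) / len(bits_per_instant)

def utilization(bits_per_instant, max_parallelism):
    """Processor utilization: average degree of parallelism divided by the maximum P."""
    return average_parallelism(bits_per_instant) / max_parallelism

P = 64                            # maximum degree of parallelism (bits per unit time)
samples = [64, 32, 48, 64, 16]    # bits actually processed at instants i = 1..T
print(average_parallelism(samples))   # 44.8
print(utilization(samples, P))        # 0.7
```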
Types of classification
According to Feng's classification, computer architecture can be classified into four categories. The classification is based on the way contents stored in memory are processed. The contents can be either data or instructions.
Word serial bit serial (WSBS)
Word serial bit parallel (WSBP)
Word parallel bit serial (WPBS)
Word parallel bit parallel (WPBP)
Word serial bit serial (WSBS)
One bit of one selected word is processed at a time. This represents serial processing and needs maximum processing time.
Word serial bit parallel (WSBP)
It is found in most existing computers and has been called "word slice" processing because one word of n bits is processed at a time. All bits of a selected word are processed at a time. Bit parallel means all bits of a word.
Word parallel bit serial (WPBS)
It has been called bit slice processing because an m-bit slice is processed at a time. Word parallel signifies selection of all words. It can be considered as one bit
|
https://en.wikipedia.org/wiki/Peripheral%20DMA%20controller
|
A peripheral DMA controller (PDC) is a feature found in modern microcontrollers. This is typically a FIFO with automated control features for driving implicitly included modules in a microcontroller such as UARTs.
This takes a large burden from the operating system and reduces the number of interrupts required to service and control these type of functions.
See also
Direct memory access (DMA)
Autonomous peripheral operation
|
https://en.wikipedia.org/wiki/Flexible%20electronics
|
Flexible electronics, also known as flex circuits, is a technology for assembling electronic circuits by mounting electronic devices on flexible plastic substrates, such as polyimide, PEEK or transparent conductive polyester film. Additionally, flex circuits can be screen printed silver circuits on polyester. Flexible electronic assemblies may be manufactured using identical components used for rigid printed circuit boards, allowing the board to conform to a desired shape, or to flex during its use.
Manufacturing
Flexible printed circuits (FPC) are made with a photolithographic technology. An alternative way of making flexible foil circuits or flexible flat cables (FFCs) is laminating very thin (0.07 mm) copper strips in between two layers of PET. These PET layers, typically 0.05 mm thick, are coated with a thermosetting adhesive which is activated during the lamination process. FPCs and FFCs have several advantages in many applications:
Tightly assembled electronic packages, where electrical connections are required in 3 axes, such as cameras (static application).
Electrical connections where the assembly is required to flex during its normal use, such as folding cell phones (dynamic application).
Electrical connections between sub-assemblies to replace wire harnesses, which are heavier and bulkier, such as in cars, rockets and satellites.
Electrical connections where board thickness or space constraints are driving factors.
Advantage of FPCs
Potential to replace multiple rigid boards or connectors
Single-sided circuits are ideal for dynamic or high-flex applications
Stacked FPCs in various configurations
Disadvantages of FPCs
Cost increase over rigid PCBs
Increased risk of damage during handling or use
More difficult assembly process
Repair and rework is difficult or impossible
Generally worse panel utilization resulting in increased cost
Applications
Flex circuits are often used as connectors in various applications where flexibility
|
https://en.wikipedia.org/wiki/Quantitative%20biology
|
Quantitative biology is an umbrella term encompassing the use of mathematical, statistical or computational techniques to study life and living organisms. The central theme and goal of quantitative biology is the creation of predictive models based on fundamental principles governing living systems.
The subfields of biology that employ quantitative approaches include:
Mathematical and theoretical biology
Computational biology
Bioinformatics
Biostatistics
Systems biology
Population biology
Synthetic biology
Epidemiology
|
https://en.wikipedia.org/wiki/Tiller%20%28botany%29
|
A tiller is a shoot that arises from the base of a grass plant. The term refers to all shoots that grow after the initial parent shoot grows from a seed. Tillers are segmented, each segment possessing its own two-part leaf. They are involved in vegetative propagation and, in some cases, also seed production.
"Tillering" refers to the production of side shoots and is a property possessed by many species in the grass family. This enables them to produce multiple stems (tillers) starting from the initial single seedling. This ensures the formation of dense tufts and multiple seed heads. Tillering rates are heavily influenced by soil water quantity. When soil moisture is low, grasses tend to develop more sparse and deep root systems (as opposed to dense, lateral systems). Thus, in dry soils, tillering is inhibited: the lateral nature of tillering is not supported by lateral root growth.
See also
Crown (botany)
|
https://en.wikipedia.org/wiki/Blackman%27s%20theorem
|
Blackman's theorem is a general procedure for calculating the change in an impedance due to feedback in a circuit. It was published by Ralph Beebe Blackman in 1943, was connected to signal-flow analysis by John Choma, and was made popular in the extra element theorem by R. D. Middlebrook and the asymptotic gain model of Solomon Rosenstark. Blackman's approach leads to the formula for the impedance Z between two selected terminals of a negative feedback amplifier as Blackman's formula: Z = ZD (1 + TSC) / (1 + TOC),
where ZD = impedance with the feedback disabled, TSC = loop transmission with a small-signal short across the selected terminal pair, and TOC = loop transmission with an open circuit across the terminal pair. The loop transmission also is referred to as the return ratio. Blackman's formula can be compared with Middlebrook's result for the input impedance Zin of a circuit based upon the extra-element theorem:
where:
is the impedance of the extra element; is the input impedance with removed (or made infinite); is the impedance seen by the extra element with the input shorted (or made zero); is the impedance seen by the extra element with the input open (or made infinite).
Blackman's formula also can be compared with Choma's signal-flow result:
where is the value of under the condition that a selected parameter P is set to zero, return ratio is evaluated with zero excitation and is for the case of short-circuited source resistance. As with the extra-element result, differences are in the perspective leading to the formula.
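A quick numerical illustration of Blackman's formula (my own example with arbitrary values, not from the article):

```python
def blackman_impedance(z_d: complex, t_sc: complex, t_oc: complex) -> complex:
    """Blackman's formula: Z = Z_D * (1 + T_SC) / (1 + T_OC)."""
    return z_d * (1 + t_sc) / (1 + t_oc)

# Feedback lowers the impedance at a port when the short-circuit loop transmission
# is negligible and the open-circuit loop transmission is large.
print(blackman_impedance(z_d=1000.0, t_sc=0.0, t_oc=99.0))  # 10.0 (ohms)
```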
See also
Mason's gain formula
Further reading
|
https://en.wikipedia.org/wiki/Supergolden%20ratio
|
In mathematics, two quantities are in the supergolden ratio if their quotient equals the unique real solution to the equation x³ = x² + 1. This solution is commonly denoted ψ. The name supergolden ratio results from an analogy with the golden ratio φ, which is the positive root of the equation x² = x + 1.
Using formulas for the cubic equation, one can show that
or, using the hyperbolic cosine,
The decimal expansion of this number begins as 1.465571231876768026656731... ().
Properties
Many properties of the supergolden ratio are closely related to those of the golden ratio φ. For example, while we have 1/φ = φ − 1 for the golden ratio, the inverse square of the supergolden ratio obeys 1/ψ² = ψ − 1. Additionally, the supergolden ratio can be expressed in terms of itself as the infinite geometric series
ψ = 1 + ψ⁻³ + ψ⁻⁶ + ψ⁻⁹ + ⋯,
in comparison to the golden ratio identity
φ = 1 + φ⁻² + φ⁻⁴ + φ⁻⁶ + ⋯
The supergolden ratio is also the fourth smallest Pisot number, which means that its algebraic conjugates are both smaller than 1 in absolute value.
Supergolden sequence
The supergolden sequence, also known as the Narayana's cows sequence, is a sequence where the ratio between consecutive terms approaches the supergolden ratio. The first three terms are each one, and each term after that is calculated by adding the previous term and the term two places before that; that is, aₙ = aₙ₋₁ + aₙ₋₃, with a₁ = a₂ = a₃ = 1. The first values are 1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41, 60, 88, 129, 189, 277, 406, 595… ().
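The convergence of the ratio of consecutive terms to the supergolden ratio is easy to observe numerically (a small sketch of the recurrence stated above):

```python
def narayana(n_terms: int):
    """Narayana's cows sequence: a(n) = a(n-1) + a(n-3), starting 1, 1, 1."""
    seq = [1, 1, 1]
    while len(seq) < n_terms:
        seq.append(seq[-1] + seq[-3])
    return seq

seq = narayana(40)
print(seq[:12])            # [1, 1, 1, 2, 3, 4, 6, 9, 13, 19, 28, 41]
print(seq[-1] / seq[-2])   # ~1.46557..., approaching the supergolden ratio
```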
Supergolden rectangle
A supergolden rectangle is a rectangle whose side lengths are in a ratio. When a square with the same side length as the shorter side of the rectangle is removed from one side of the rectangle, the sides of the resulting rectangle will be in a ratio. This rectangle can be divided into two more supergolden rectangles with opposite orientations and areas in a ratio. The larger rectangle has a diagonal of length times the short side of the original rectangle, and which is perpendicular to the diagonal of the original rectangle.
In addition, if the line segment that separates the
|
https://en.wikipedia.org/wiki/L%C3%A9vy%27s%20constant
|
In mathematics Lévy's constant (sometimes known as the Khinchin–Lévy constant) occurs in an expression for the asymptotic behaviour of the denominators of the convergents of continued fractions.
In 1935, the Soviet mathematician Aleksandr Khinchin showed that the denominators qn of the convergents of the continued fraction expansions of almost all real numbers satisfy
lim(n→∞) qn^(1/n) = γ for some constant γ.
Soon afterward, in 1936, the French mathematician Paul Lévy found the explicit expression for the constant, namely
γ = e^(π²/(12 ln 2)) ≈ 3.275822918721811…
The term "Lévy's constant" is sometimes used to refer to (the logarithm of the above expression), which is approximately equal to 1.1865691104… The value derives from the asymptotic expectation of the logarithm of the ratio of successive denominators, using the Gauss-Kuzmin distribution. In particular, the ratio has the asymptotic density function
for and zero otherwise. This gives Lévy's constant as
.
The base-10 logarithm of Lévy's constant, which is approximately 0.51532041…, is half of the reciprocal of the limit in Lochs' theorem.
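The constant is straightforward to reproduce numerically, and the statement can be spot-checked by computing continued fraction denominators of a concrete number (a sketch; the float value of π is used only as a convenient stand-in for a "typical" real, and exact Fraction arithmetic keeps the partial quotients of that rational correct):

```python
import math
from fractions import Fraction

levy = math.exp(math.pi ** 2 / (12 * math.log(2)))
print(levy)               # 3.2758229187...
print(math.log(levy))     # 1.1865691104...
print(math.log10(levy))   # 0.51532041...

# Spot check: denominators q_n of the continued fraction of (a rational
# approximation of) pi, computed with exact arithmetic.
x = Fraction(math.pi)
x -= math.floor(x)                 # drop a_0; it does not affect the denominators
q_prev, q, n = 0, 1, 0
while x and n < 20:
    a = math.floor(1 / x)          # next partial quotient
    q_prev, q = q, a * q + q_prev  # q_n = a_n * q_(n-1) + q_(n-2)
    x = 1 / x - a
    n += 1
print(q, float(q) ** (1 / n))      # q_n and q_n**(1/n), already in the vicinity of 3
```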
See also
Khinchin's constant
|