source | text
---|---
https://en.wikipedia.org/wiki/Automotive%20navigation%20system
|
An automotive navigation system is part of the automobile controls, or a third-party add-on, used to find direction in an automobile. It typically uses a satellite navigation device to get its position data, which is then correlated to a position on a road. When directions are needed, routing can be calculated. On-the-fly traffic information (road closures, congestion) can be used to adjust the route.
Dead reckoning using distance data from sensors attached to the drivetrain, an accelerometer, a gyroscope, and a magnetometer can be used for greater reliability, as GNSS signal loss and/or multipath can occur due to urban canyons or tunnels.
Mathematically, automotive navigation is based on the shortest path problem, within graph theory, which examines how to identify the path that best meets some criteria (shortest, cheapest, fastest, etc.) between two points in a large network.
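As a concrete illustration of that shortest-path formulation, the following C sketch runs Dijkstra's algorithm on a tiny adjacency-matrix road graph. The junction count, the edge weights (travel times in minutes), and the choice of plain Dijkstra rather than production techniques such as A* or contraction hierarchies are all illustrative assumptions, not details from the article.

#include <stdio.h>
#include <limits.h>

#define N   5           /* number of road junctions (hypothetical example graph) */
#define INF INT_MAX     /* "no direct road" marker */

/* Dijkstra's algorithm: single-source shortest travel times on a dense graph. */
void dijkstra(const int cost[N][N], int source, int dist[N])
{
    int visited[N] = {0};

    for (int i = 0; i < N; i++)
        dist[i] = INF;
    dist[source] = 0;

    for (int iter = 0; iter < N; iter++) {
        /* pick the unvisited junction with the smallest tentative distance */
        int u = -1;
        for (int i = 0; i < N; i++)
            if (!visited[i] && (u == -1 || dist[i] < dist[u]))
                u = i;
        if (u == -1 || dist[u] == INF)
            break;                      /* remaining junctions are unreachable */
        visited[u] = 1;

        /* relax all edges leaving u */
        for (int v = 0; v < N; v++)
            if (cost[u][v] != INF && dist[u] + cost[u][v] < dist[v])
                dist[v] = dist[u] + cost[u][v];
    }
}

int main(void)
{
    /* made-up travel times in minutes between junctions */
    const int cost[N][N] = {
        {   0,   4, INF, INF,   8 },
        {   4,   0,   3, INF, INF },
        { INF,   3,   0,   2,   5 },
        { INF, INF,   2,   0,   1 },
        {   8, INF,   5,   1,   0 },
    };
    int dist[N];

    dijkstra(cost, 0, dist);
    for (int i = 0; i < N; i++)
        printf("junction %d: %d min\n", i, dist[i]);
    return 0;
}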
Automotive navigation systems are crucial for the development of self-driving cars.
History
Automotive navigation systems represent a convergence of a number of diverse technologies, many of which have been available for many years, but were too costly or inaccessible. Limitations such as batteries, display, and processing power had to be overcome before the product became commercially viable.
1961: Hidetsugu Yagi designed a wireless-based navigation system. This design was still primitive and intended for military use.
1966: General Motors Research (GMR) was working on a non-satellite-based navigation and assistance system called DAIR (Driver Aid, Information & Routing). After initial tests GM found that it was not a scalable or practical way to provide navigation assistance. Decades later, however, the concept would be reborn as OnStar (founded 1996).
1973: Japan's Ministry of International Trade and Industry (MITI) and Fuji Heavy Industries sponsored CATC (Comprehensive Automobile Traffic Control), a Japanese research project on automobile navigation systems.
1979: MITI established JSK (A
|
https://en.wikipedia.org/wiki/Chamfer
|
A chamfer is a transitional edge between two faces of an object. Sometimes defined as a form of bevel, it is often created at a 45° angle between two adjoining right-angled faces.
Chamfers are frequently used in machining, carpentry, furniture, concrete formwork, mirrors, and to facilitate assembly of many mechanical engineering designs.
Terminology
In machining the word bevel is not used to refer to a chamfer. Machinists use chamfers to "ease" otherwise sharp edges, both for safety and to prevent damage to the edges.
A chamfer may sometimes be regarded as a type of bevel, and the terms are often used interchangeably.
In furniture-making, a lark's tongue is a chamfer that ends short of the end of a piece in a gradual outward curve, leaving the remainder of the edge as a right angle. Chamfers may be formed on either inside or outside adjoining faces of an object or room.
By comparison, a fillet is the rounding-off of an interior corner, and a round (or radius) the rounding of an outside one.
Carpentry and furniture
Chamfers are used in furniture such as counters and table tops to ease their edges to keep people from bruising themselves in the otherwise sharp corner. When the edges are rounded instead, they are called bullnosed. Special tools such as chamfer mills and chamfer planes are sometimes used.
Architecture
Chamfers are commonly used in architecture, both for functional and aesthetic reasons. For example, the base of the Taj Mahal is a cube with chamfered corners, thereby creating an octagonal architectural footprint. Its great gate is formed of chamfered base stones and chamfered corbels for a balcony or equivalent cornice towards the roof.
Urban planning
Many city blocks in Barcelona, Valencia and various other cities in Spain, and street corners (curbs) in Ponce, Puerto Rico, are chamfered. The chamfering was designed as an embellishment and a modernization of urban space in Barcelona's mid-19th century Eixample or Expansion District, where the bui
|
https://en.wikipedia.org/wiki/Host%E2%80%93pathogen%20interaction
|
The host–pathogen interaction is defined as how microbes or viruses sustain themselves within host organisms on a molecular, cellular, organismal or population level. This term is most commonly used to refer to disease-causing microorganisms although they may not cause illness in all hosts. Because of this, the definition has been expanded to how known pathogens survive within their host, whether they cause disease or not.
On the molecular and cellular level, microbes can infect the host and divide rapidly, causing disease by their very presence and by producing a homeostatic imbalance in the body, or by secreting toxins that cause symptoms to appear. Viruses can also infect the host with virulent DNA, which can affect normal cell processes (transcription, translation, etc.) and protein folding, or help the virus evade the immune response.
Pathogenicity
Pathogen history
One of the first pathogens observed by scientists was Vibrio cholerae, described in detail by Filippo Pacini in 1854. His initial findings were just drawings of the bacteria but, up until 1880, he published many other papers concerning the bacteria. He described how it causes diarrhea as well as developed effective treatments against it. Most of these findings went unnoticed until Robert Koch rediscovered the organism in 1884 and linked it to the disease.
Another early pathogen, a parasite, was discovered by Leeuwenhoek in the 1600s but was not found to be pathogenic until the 1970s, when an EPA-sponsored symposium was held following a large outbreak in Oregon involving the parasite. Since then, many other organisms have been identified as pathogens, such as H. pylori and E. coli, which has allowed scientists to develop antibiotics to combat these harmful microorganisms.
Types of pathogens
Pathogens include bacteria, fungi, protozoa, helminths, and viruses.
Each of these different types of organisms can then be further classified as a pathogen based on its mode of transmission. This includes the following: food borne, airborne, waterborne, blood-bor
|
https://en.wikipedia.org/wiki/Front%20%28physics%29
|
In physics, a front can be understood as an interface between two different possible states (either stable or unstable) in a physical system. For example, a weather front is the interface between two air masses of different density; in combustion, the flame is the interface between burned and unburned material; and in population dynamics, the front is the interface between populated and unpopulated places. Fronts can be static or mobile depending on the conditions of the system, and their motion can be caused by the variation of a free energy, where the most energetically favorable state invades the less favorable one (according to Pomeau), or by shape-induced motion due to non-variational dynamics in the system (according to Alvarez-Socorro, Clerc, González-Cortés and Wilson).
From a mathematical point of view, fronts are solutions of spatially extended systems connecting two steady states, and from dynamical systems point of view, a front corresponds to a heteroclinic orbit of the system in the co-mobile frame (or proper frame).
Fronts connecting stable - unstable homogeneous states
The simplest example of a front solution connecting a homogeneous stable state with a homogeneous unstable state appears in the one-dimensional Fisher–Kolmogorov equation:
\partial_t u = D\,\partial_x^2 u + r\,u(1 - u),
which describes a simple model for the density u(x, t) of a population. This equation has two steady states, u = 0 and u = 1, corresponding to extinction and saturation of the population. Observe that this model is spatially extended, because it includes a diffusion term given by the second derivative. A simple linear analysis shows that the state u = 1 is stable and the state u = 0 is unstable. There exists a family of front solutions connecting u = 1 with u = 0, and such solutions are propagative. In particular, there is a solution of the form u(x, t) = U(x - vt), where v is a velocity that depends only on D and r.
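For reference, a standard result for this model (stated here as the well-known pulled-front speed, not quoted from the truncated passage above) is that fronts invading the unstable state u = 0 cannot travel slower than the linear-spreading speed:
v \ge v_{\min} = 2\sqrt{D\,r},
so in the dimensionless form (D = r = 1) the minimal front speed is v_min = 2.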
|
https://en.wikipedia.org/wiki/List%20of%20statistics%20articles
|
0–9
1.96
2SLS (two-stage least squares) redirects to instrumental variable
3SLS – see three-stage least squares
68–95–99.7 rule
100-year flood
A
A priori probability
Abductive reasoning
Absolute deviation
Absolute risk reduction
Absorbing Markov chain
ABX test
Accelerated failure time model
Acceptable quality limit
Acceptance sampling
Accidental sampling
Accuracy and precision
Accuracy paradox
Acquiescence bias
Actuarial science
Adapted process
Adaptive estimator
Additive Markov chain
Additive model
Additive smoothing
Additive white Gaussian noise
Adjusted Rand index – see Rand index (subsection)
ADMB software
Admissible decision rule
Age adjustment
Age-standardized mortality rate
Age stratification
Aggregate data
Aggregate pattern
Akaike information criterion
Algebra of random variables
Algebraic statistics
Algorithmic inference
Algorithms for calculating variance
All models are wrong
All-pairs testing
Allan variance
Alignments of random points
Almost surely
Alpha beta filter
Alternative hypothesis
Analyse-it – software
Analysis of categorical data
Analysis of covariance
Analysis of molecular variance
Analysis of rhythmic variance
Analysis of variance
Analytic and enumerative statistical studies
Ancestral graph
Anchor test
Ancillary statistic
ANCOVA redirects to Analysis of covariance
Anderson–Darling test
ANOVA
ANOVA on ranks
ANOVA–simultaneous component analysis
Anomaly detection
Anomaly time series
Anscombe transform
Anscombe's quartet
Antecedent variable
Antithetic variates
Approximate Bayesian computation
Approximate entropy
Arcsine distribution
Area chart
Area compatibility factor
ARGUS distribution
Arithmetic mean
Armitage–Doll multistage model of carcinogenesis
Arrival theorem
Artificial neural network
Ascertainment bias
ASReml software
Association (statistics)
Association mapping
Association scheme
Assumed mean
Astrostatistics
Asymptotic distribution
Asymptotic equipartition property (information theory)
Asymptotic normality redirects to Asymptotic dis
|
https://en.wikipedia.org/wiki/In-phase%20and%20quadrature%20components
|
A sinusoid with modulation can be decomposed into, or synthesized from, two amplitude-modulated sinusoids that are in quadrature phase, i.e., with a phase offset of one-quarter cycle (90 degrees or π/2 radians). All three sinusoids have the same center frequency. The two amplitude-modulated sinusoids are known as the in-phase (I) and quadrature (Q) components, which describe their relationship with the amplitude- and phase-modulated carrier.
Or in other words, it is possible to create an arbitrarily phase-shifted sine wave, by mixing together two sine waves that are 90° out of phase in different proportions.
The implication is that the modulations in some signal can be treated separately from the carrier wave of the signal. This has extensive use in many radio and signal processing applications. I/Q data is used to represent the modulations of some carrier, independent of that carrier's frequency.
Orthogonality
In vector analysis, a vector with polar coordinates (A, φ) and Cartesian coordinates x = A cos(φ), y = A sin(φ) can be represented as the sum of orthogonal components: [x, 0] + [0, y]. Similarly, in trigonometry the angle-sum identity expresses:
A\sin(t + \varphi) = A\cos(\varphi)\sin(t) + A\sin(\varphi)\cos(t).
And in functional analysis, when t is a linear function of some variable, such as time, these components are sinusoids, and they are orthogonal functions. A phase shift of t → t + π/2 changes the identity to:
A\cos(t + \varphi) = A\cos(\varphi)\cos(t) - A\sin(\varphi)\sin(t),
in which case A cos(φ) cos(t) is the in-phase component. In both conventions A cos(φ) is the in-phase amplitude modulation, which explains why some authors refer to it as the actual in-phase component.
Narrowband signal model
In an angle modulation application, with carrier frequency f, φ is also a time-variant function, giving:
\sin\bigl(2\pi f t + \varphi(t)\bigr) = \sin(2\pi f t)\cos\bigl(\varphi(t)\bigr) + \cos(2\pi f t)\sin\bigl(\varphi(t)\bigr).
When all three terms above are multiplied by an optional amplitude function A(t) > 0, the left-hand side of the equality is known as the amplitude/phase form, and the right-hand side is the quadrature-carrier or IQ form.
Because of the modulation, the components are no longer completely orthogonal functions. But when A(t) and φ(t) are slowly varying functions compared to the carrier frequency f, the assumption of orthogonality is a common and well-justified approximation.
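In this convention the two components can be written out explicitly; the following is a standard trigonometric expansion rather than a quotation of the passage above:
A(t)\,\sin\bigl(2\pi f t + \varphi(t)\bigr) = I(t)\,\sin(2\pi f t) + Q(t)\,\cos(2\pi f t),
\qquad I(t) = A(t)\cos\varphi(t), \qquad Q(t) = A(t)\sin\varphi(t),
where I(t) is the in-phase component and Q(t) is the quadrature component.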
|
https://en.wikipedia.org/wiki/Census%20of%20Diversity%20of%20Abyssal%20Marine%20Life
|
The Census of Diversity of Abyssal Marine Life (CeDAMar) is a field project of the Census of Marine Life that studies the species diversity of one of the largest and most inaccessible environments on the planet, the abyssal plain. CeDAMar uses its data to estimate global species diversity and to provide a better understanding of the history of deep-sea fauna, including its present diversity and dependence on environmental parameters. CeDAMar initiatives aim to identify centers of high biodiversity useful for planning both commercial and conservation efforts, and the results can be used in future studies on the effects of climate change on the deep sea.
As of May 2009, participation by upwards of 56 institutions in 17 countries has resulted in the publication of nearly 300 papers. Results of CeDAMar-related research were also published in a 2010 textbook on deep-sea biodiversity by Michael Rex and Ron Etter, members of CeDAMar's Scientific Steering Committee.
CeDAMar is led by Dr. Pedro Martinez Arbizu of Germany and Dr. Craig Smith, USA.
External links
Census of Diversity of Abyssal Marine Life
Census of Antarctic Marine Life official web site
|
https://en.wikipedia.org/wiki/Autocorrelator
|
A real-time interferometric autocorrelator is an electronic tool used to examine the autocorrelation of, among other things, optical beam intensity and spectral components through examination of variable beam path differences. See Optical autocorrelation.
Description
In an interferometric autocorrelator, the input beam is split into a fixed path beam and a variable path beam using a standard beamsplitter. The fixed path beam travels a known and constant distance, whereas the variable path beam has its path length changed via rotating mirrors or other path changing mechanisms. At the end of the two paths, the beams are ideally parallel, but slightly separated, and using a correctly positioned lens, the two beams are crossed inside a second-harmonic generating (SHG) crystal. The autocorrelation term of the output is then passed into a photomultiplying tube (PMT) and measured.
Details
Considering the input beam as a single pulse with envelope E(t), the constant fixed path distance as b, and the variable path distance as a function of time u(t), the input to the SHG can be viewed as
E\left(t - \tfrac{b}{c}\right) + E\left(t - \tfrac{u(t)}{c}\right).
This comes from c being the speed of light and b/c or u(t)/c being the time for the beam to travel the given path. In general, SHG produces output proportional to the square of the input, which in this case is
E^2\left(t - \tfrac{b}{c}\right) + E^2\left(t - \tfrac{u(t)}{c}\right) + 2\,E\left(t - \tfrac{b}{c}\right)E\left(t - \tfrac{u(t)}{c}\right).
The first two terms are based only on the fixed and variable paths respectively, but the third term is based on the difference between them, as is evident in
2\,E\left(t - \tfrac{b}{c}\right)E\left(t - \tfrac{u(t)}{c}\right).
The PMT used is assumed to be much slower than the envelope function E(t), so it effectively integrates the incoming signal.
Since both the fixed path and variable path terms are not dependent on each other, they would constitute a background "noise" in examination of the autocorrelation term and would ideally be removed first. This can be accomplished by examining the momentum vectors
If the fixed and variable momentum vectors are assumed to be of approximately equal magnitude, the second harmonic momentum vector will fall geometrically between
|
https://en.wikipedia.org/wiki/Champernowne%20constant
|
In mathematics, the Champernowne constant is a transcendental real constant whose decimal expansion has important properties. It is named after economist and mathematician D. G. Champernowne, who published it as an undergraduate in 1933.
For base 10, the number is defined by concatenating the decimal representations of successive integers:
C10 = 0.12345678910111213141516…
Champernowne constants can also be constructed in other bases similarly; for example, in base 2:
C2 = 0.110111001011101111000…
The Champernowne word or Barbier word is the sequence of digits of C10 obtained by writing it in base 10 and juxtaposing the digits: 1 2 3 4 5 6 7 8 9 1 0 1 1 1 2 1 3 …
More generally, a Champernowne sequence (sometimes also called a Champernowne word) is any sequence of digits obtained by concatenating all finite digit-strings (in any given base) in some recursive order.
For instance, the binary Champernowne sequence in shortlex order is
0 1 00 01 10 11 000 001 010 011 100 101 110 111 …,
where spaces (otherwise to be ignored) have been inserted just to show the strings being concatenated.
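As a concrete illustration, the following C sketch prints an initial segment of the base-10 Champernowne word simply by concatenating the decimal representations of successive integers; the 40-digit cutoff is an arbitrary choice made for this example.

#include <stdio.h>

int main(void)
{
    int printed = 0;          /* digits emitted so far */
    const int limit = 40;     /* arbitrary cutoff for this illustration */

    printf("0.");
    for (int n = 1; printed < limit; n++) {
        char buf[16];
        int len = sprintf(buf, "%d", n);   /* decimal representation of n */
        for (int i = 0; i < len && printed < limit; i++, printed++)
            putchar(buf[i]);               /* append its digits one by one */
    }
    putchar('\n');   /* prints 0.1234567891011121314151617181920212223... */
    return 0;
}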
Properties
A real number x is said to be normal if its digits in every base follow a uniform distribution: all digits being equally likely, all pairs of digits equally likely, all triplets of digits equally likely, etc. x is said to be normal in base b if its digits in base b follow a uniform distribution.
If we denote a digit string as [a0, a1, …], then, in base 10, we would expect strings [0], [1], [2], …, [9] to occur 1/10 of the time, strings [0,0], [0,1], …, [9,8], [9,9] to occur 1/100 of the time, and so on, in a normal number.
Champernowne proved that C10 is normal in base 10, while Nakai and Shiokawa proved a more general theorem, a corollary of which is that Cb is normal in base b for any base b. It is an open problem whether Cb is normal in bases other than b.
Kurt Mahler showed that the constant is transcendental.
The irrationality measure of C10 is 10, and more generally the irrationality measure of Cb is b for any base b ≥ 2.
The Champernowne word is a disjunctive sequence.
Series
The definition of the Champernowne constant immediately gives rise to an infinite series representation invol
|
https://en.wikipedia.org/wiki/7-Chlorokynurenic%20acid
|
7-Chlorokynurenic acid (7-CKA) is a tool compound that acts as a potent and selective competitive antagonist of the glycine site of the NMDA receptor. It produces ketamine-like rapid antidepressant effects in animal models of depression. However, 7-CKA is unable to cross the blood–brain barrier and, for this reason, is unsuitable for clinical use. As a result, a centrally penetrant prodrug of 7-CKA, 4-chlorokynurenine (AV-101), has been developed for use in humans and is being studied in clinical trials as a potential treatment for major depressive disorder and for anti-nociception. In addition to antagonizing the NMDA receptor, 7-CKA also acts as a potent inhibitor of the reuptake of glutamate into synaptic vesicles (i.e., as a vesicular glutamate reuptake inhibitor), an action that it mediates via competitive blockade of vesicular glutamate transporters (Ki = 0.59 mM).
See also
5,7-Dichlorokynurenic acid
Evans blue
Kynurenic acid
Xanthurenic acid
|
https://en.wikipedia.org/wiki/Karp%27s%2021%20NP-complete%20problems
|
In computational complexity theory, Karp's 21 NP-complete problems are a set of computational problems which are NP-complete. In his 1972 paper, "Reducibility Among Combinatorial Problems", Richard Karp used Stephen Cook's 1971 theorem that the boolean satisfiability problem is NP-complete (also called the Cook-Levin theorem) to show that there is a polynomial time many-one reduction from the boolean satisfiability problem to each of 21 combinatorial and graph theoretical computational problems, thereby showing that they are all NP-complete. This was one of the first demonstrations that many natural computational problems occurring throughout computer science are computationally intractable, and it drove interest in the study of NP-completeness and the P versus NP problem.
The problems
Karp's 21 problems are shown below, many with their original names. The nesting indicates the direction of the reductions used. For example, Knapsack was shown to be NP-complete by reducing Exact cover to Knapsack.
Satisfiability: the boolean satisfiability problem for formulas in conjunctive normal form (often referred to as SAT)
0–1 integer programming (A variation in which only the restrictions must be satisfied, with no optimization)
Clique (see also independent set problem)
Set packing
Vertex cover
Set covering
Feedback node set
Feedback arc set
Directed Hamilton circuit (Karp's name, now usually called Directed Hamiltonian cycle)
Undirected Hamilton circuit (Karp's name, now usually called Undirected Hamiltonian cycle)
Satisfiability with at most 3 literals per clause (equivalent to 3-SAT)
Chromatic number (also called the Graph Coloring Problem)
Clique cover
Exact cover
Hitting set
Steiner tree
3-dimensional matching
Knapsack (Karp's definition of Knapsack is closer to Subset sum)
Job sequencing
Partition
Max cut
Approximations
As time went on it was discovered that many of the problems can be solved efficiently if restricted to special cases, or can
|
https://en.wikipedia.org/wiki/Memory%20controller
|
A memory controller is a digital circuit that manages the flow of data going to and from a computer's main memory. A memory controller can be a separate chip or integrated into another chip, such as being placed on the same die or as an integral part of a microprocessor; in the latter case, it is usually called an integrated memory controller (IMC). A memory controller is sometimes also called a memory chip controller (MCC) or a memory controller unit (MCU).
Memory controllers contain the logic necessary to read and write to DRAM, and to "refresh" the DRAM. Without constant refreshes, DRAM will lose the data written to it as the capacitors leak their charge within a fraction of a second. Some memory controllers include error detection and correction hardware.
A common form of memory controller is the memory management unit (MMU) which in many operating systems implements virtual addressing.
History
Most modern desktop or workstation microprocessors use an integrated memory controller (IMC), including microprocessors from Intel, AMD, and those built around the ARM architecture.
Prior to K8 (circa 2003), AMD microprocessors had a memory controller implemented on their motherboard's northbridge. In K8 and later, AMD employed an integrated memory controller. Likewise, until Nehalem (circa 2008), Intel microprocessors used memory controllers implemented on the motherboard's northbridge. Nehalem and later switched to an integrated memory controller.
Other examples of microprocessors that use integrated memory controllers include NVIDIA's Fermi, IBM's POWER5, and Sun Microsystems's UltraSPARC T1.
While an integrated memory controller has the potential to increase the system's performance, such as by reducing memory latency, it locks the microprocessor to a specific type (or types) of memory, forcing a redesign in order to support newer memory technologies. When DDR2 SDRAM was introduced, AMD released new Athlon 64 CPUs. These new models, with a DDR2 controller, us
|
https://en.wikipedia.org/wiki/Biocommunication%20%28science%29
|
In the study of the biological sciences, biocommunication is any specific type of communication within (intraspecific) or between (interspecific) species of plants, animals, fungi, protozoa and microorganisms. Communication basically means sign-mediated interactions following three levels of rules (syntactic, pragmatic and semantic). Signs are in most cases chemical molecules (semiochemicals), but they can also be tactile or, as in animals, visual and auditory. Biocommunication of animals may include vocalizations (as between competing bird species), or pheromone production (as between various species of insects), chemical signals between plants and animals (as in tannin production used by vascular plants to warn away insects), and chemically mediated communication between plants and within plants.
Biocommunication of fungi demonstrates that mycelial communication integrates interspecific sign-mediated interactions between fungal organisms, soil bacteria and plant root cells, without which plant nutrition could not be organized. Biocommunication of Ciliates identifies the various levels and motifs of communication in these unicellular eukaryotes. Biocommunication of Archaea represents key levels of sign-mediated interactions in the evolutionarily oldest akaryotes. Biocommunication of Phages demonstrates that the most abundant living agents on this planet coordinate and organize by sign-mediated interactions. Biocommunication is also the essential tool for coordinating the behavior of the various cell types of immune systems.
Biocommunication, biosemiotics and linguistics
Biocommunication theory may be considered to be a branch of biosemiotics. Whereas Biosemiotics studies the production and interpretation of signs and codes, biocommunication theory investigates concrete interactions mediated by signs. Accordingly, syntactic, semantic, and pragmatic aspects of biocommunication processes are distinguished. Biocommunication specific to animals (animal communication) is considered a branch of
|
https://en.wikipedia.org/wiki/Load%E2%80%93store%20unit
|
In computer engineering, a load–store unit (LSU) is a specialized execution unit responsible for executing all load and store instructions, generating virtual addresses of load and store operations and loading data from memory or storing it back to memory from registers.
The load–store unit usually includes a queue which acts as a waiting area for memory instructions, and the unit itself operates independently of other processor units.
Load–store units may also be used in vector processing, and in such cases the term "load–store vector" may be used.
Some load–store units are also capable of executing simple fixed-point and/or integer operations.
See also
Address-generation unit
Arithmetic–logic unit
Floating-point unit
Load–store architecture
|
https://en.wikipedia.org/wiki/Sodium%20in%20biology
|
Sodium ions (Na+) are necessary in small amounts for some types of plants, but sodium as a nutrient is more generally needed in larger amounts by animals, due to their use of it for generation of nerve impulses and for maintenance of electrolyte balance and fluid balance. In animals, sodium ions are necessary for the aforementioned functions and for heart activity and certain metabolic functions. The health effects of salt reflect what happens when the body has too much or too little sodium.
Characteristic concentrations of sodium in model organisms are: 10 mM in E. coli, 30 mM in budding yeast, 10 mM in mammalian cell and 100 mM in blood plasma.
Sodium distribution in species
Humans
The minimum physiological requirement for sodium is between 115 and 500 mg per day, depending on sweating due to physical activity and whether the person is adapted to the climate. Sodium chloride is the principal source of sodium in the diet, and is used as a seasoning and preservative, such as for pickling and jerky; most of it comes from processed foods. The Adequate Intake for sodium is 1.2 to 1.5 g per day, but on average people in the United States consume 3.4 g per day, the minimum amount that promotes hypertension. Note that salt contains about 39.3% sodium by mass, the rest being chlorine and other trace chemicals; thus the Tolerable Upper Intake Level of 2.3 g sodium would be about 5.9 g of salt (about 1 teaspoon). The average daily excretion of sodium is between 40 and 220 mEq.
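The salt-to-sodium conversion quoted above follows directly from the 39.3% mass fraction:
2.3\ \text{g sodium} \div 0.393 \approx 5.9\ \text{g salt (about one teaspoon)}.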
Normal serum sodium levels are between approximately 135 and 145 mEq/L (135 to 145 mmol/L). A serum sodium level of less than 135 mEq/L qualifies as hyponatremia, which is considered severe when the serum sodium level is below 125 mEq/L.
The renin–angiotensin system and the atrial natriuretic peptide indirectly regulate the amount of signal transduction in the human central nervous system, which depends on sodium ion motion across the nerve cell membrane, in all nerves. Sodium is thus important in neuron
|
https://en.wikipedia.org/wiki/Proof%20without%20words
|
In mathematics, a proof without words (or visual proof) is an illustration of an identity or mathematical statement which can be demonstrated as self-evident by a diagram without any accompanying explanatory text. Such proofs can be considered more elegant than formal or mathematically rigorous proofs due to their self-evident nature. When the diagram demonstrates a particular case of a general statement, to be a proof, it must be generalisable.
A proof without words is not the same as a mathematical proof, because it omits the details of the logical argument it illustrates. However, it can provide valuable intuitions to the viewer that can help them formulate or better understand a true proof.
Examples
Sum of odd numbers
The statement that the sum of all positive odd numbers up to 2n − 1 is a perfect square—more specifically, the perfect square n2—can be demonstrated by a proof without words.
In one corner of a grid, a single block represents 1, the first square. That can be wrapped on two sides by a strip of three blocks (the next odd number) to make a 2 × 2 block: 4, the second square. Adding a further five blocks makes a 3 × 3 block: 9, the third square. This process can be continued indefinitely.
Pythagorean theorem
The Pythagorean theorem can be proven without words.
One method of doing so is to visualise a larger square of side a + b, with four right-angled triangles of sides a, b and hypotenuse c in its corners, such that the space in the middle is a diagonal square with an area of c². The four triangles can be rearranged within the larger square to split its unused space into two squares of area a² and b².
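The same rearrangement can be written as one line of algebra, which is exactly what the picture encodes:
(a + b)^2 = 4\cdot\tfrac{1}{2}ab + c^2 \;\Longrightarrow\; a^2 + 2ab + b^2 = 2ab + c^2 \;\Longrightarrow\; a^2 + b^2 = c^2.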
Jensen's inequality
Jensen's inequality can also be proven graphically. A dashed curve along the X axis is the hypothetical distribution of X, while a dashed curve along the Y axis is the corresponding distribution of Y values. The convex mapping Y(X) increasingly "stretches" the distribution for increasing values of X.
Usage
Mathematics Magazine and the College Math
|
https://en.wikipedia.org/wiki/Integrated%20circuit%20layout
|
In integrated circuit design, integrated circuit (IC) layout, also known as IC mask layout or mask design, is the representation of an integrated circuit in terms of planar geometric shapes which correspond to the patterns of metal, oxide, or semiconductor layers that make up the components of the integrated circuit. Originally the overall process was called tapeout, as historically early ICs used graphical black crepe tape on mylar media for photo imaging (erroneously believed to reference magnetic data; the photo process greatly predated magnetic media).
When using a standard process—where the interaction of the many chemical, thermal, and photographic variables is known and carefully controlled—the behaviour of the final integrated circuit depends largely on the positions and interconnections of the geometric shapes. Using a computer-aided layout tool, the layout engineer—or layout technician—places and connects all of the components that make up the chip such that they meet certain criteria—typically: performance, size, density, and manufacturability. This practice is often subdivided between two primary layout disciplines: analog and digital.
The generated layout must pass a series of checks in a process known as physical verification. The most common checks in this verification process are
Design rule checking (DRC),
Layout versus schematic (LVS),
parasitic extraction,
antenna rule checking, and
electrical rule checking (ERC).
When all verification is complete, layout post processing is applied where the data is also translated into an industry-standard format, typically GDSII, and sent to a semiconductor foundry. The milestone completion of the layout process of sending this data to the foundry is now colloquially called "tapeout". The foundry converts the data into mask data and uses it to generate the photomasks used in a photolithographic process of semiconductor device fabrication.
In the earlier, simpler, days of IC design, layout was done by
|
https://en.wikipedia.org/wiki/Replicate%20%28biology%29
|
In the biological sciences, replicates are experimental units that are treated identically. Replicates are an essential component of experimental design because they provide an estimate of between-sample error. Without replicates, scientists are unable to assess whether observed treatment effects are due to the experimental manipulation or to random error. There are also analytical replicates, in which an exact copy of a sample (such as a cell, organism or molecule) is analyzed using exactly the same procedure. This is done in order to check for analytical error; in the absence of this type of error, analytical replicates should yield the same result. However, analytical replicates are not independent and cannot be used in hypothesis tests because they are still the same sample.
See also
Self-replication
Fold change
|
https://en.wikipedia.org/wiki/Plate%20notation
|
In Bayesian inference, plate notation is a method of representing variables that repeat in a graphical model. Instead of drawing each repeated variable individually, a plate or rectangle is used to group variables into a subgraph that repeat together, and a number is drawn on the plate to represent the number of repetitions of the subgraph in the plate. The assumptions are that the subgraph is duplicated that many times, the variables in the subgraph are indexed by the repetition number, and any links that cross a plate boundary are replicated once for each subgraph repetition.
Example
In this example, we consider Latent Dirichlet allocation, a Bayesian network that models how documents in a corpus are topically related. There are two variables not in any plate; α is the parameter of the uniform Dirichlet prior on the per-document topic distributions, and β is the parameter of the uniform Dirichlet prior on the per-topic word distribution.
The outermost plate represents all the variables related to a specific document, including θ_i, the topic distribution for document i. The M in the corner of the plate indicates that the variables inside are repeated M times, once for each document. The inner plate represents the variables associated with each of the words in document i: z_ij is the topic for the jth word in document i, and w_ij is the actual word used.
The N in the corner represents the repetition of the variables in the inner plate N times, once for each word in document i. The circle representing the individual words is shaded, indicating that each w_ij is observable, and the other circles are empty, indicating that the other variables are latent variables. The directed edges between variables indicate dependencies between the variables: for example, each w_ij depends on z_ij and β.
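Written out, the plates encode how the joint distribution factorizes; using the standard LDA symbols assumed in the description above (θ_i, z_ij, w_ij), it reads:
p(\theta, z, w \mid \alpha, \beta) \;=\; \prod_{i=1}^{M} p(\theta_i \mid \alpha) \prod_{j=1}^{N} p(z_{ij} \mid \theta_i)\, p(w_{ij} \mid z_{ij}, \beta).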
Extensions
A number of extensions have been created by various authors to express more information than simply the conditional relationships. However, few of these have become s
|
https://en.wikipedia.org/wiki/Commutative%20diagram
|
In mathematics, and especially in category theory, a commutative diagram is a diagram such that all directed paths in the diagram with the same start and endpoints lead to the same result. It is said that commutative diagrams play the role in category theory that equations play in algebra.
Description
A commutative diagram often consists of three parts:
objects (also known as vertices)
morphisms (also known as arrows or edges)
paths or composites
Arrow symbols
In algebra texts, the type of morphism can be denoted with different arrow usages:
A monomorphism may be labeled with a hooked arrow (↪) or an arrow with a tail (↣).
An epimorphism may be labeled with a two-headed arrow (↠).
An isomorphism may be labeled with the symbol ≅ (or with an arrow bearing a tilde).
The dashed arrow typically represents the claim that the indicated morphism exists (whenever the rest of the diagram holds); the arrow may be optionally labeled as ∃.
If the morphism is in addition unique, then the dashed arrow may be labeled ! or ∃!.
The meanings of different arrows are not entirely standardized: the arrows used for monomorphisms, epimorphisms, and isomorphisms are also used for injections, surjections, and bijections, as well as the cofibrations, fibrations, and weak equivalences in a model category.
Verifying commutativity
Commutativity makes sense for a polygon of any finite number of sides (including just 1 or 2), and a diagram is commutative if every polygonal subdiagram is commutative.
Note that a diagram may be non-commutative, i.e., the composition of different paths in the diagram may not give the same result.
Examples
Example 1
In the left diagram, which expresses the first isomorphism theorem, commutativity of the triangle means that f = f̃ ∘ π. In the right diagram, commutativity of the square means h ∘ f = k ∘ g.
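As a sketch of how such a square is typically typeset, the following LaTeX fragment uses the tikz-cd package; the object names A, B, C, D and the morphism labels are placeholders rather than the article's figures:
\documentclass{article}
\usepackage{tikz-cd}
\begin{document}
\begin{tikzcd}
A \arrow[r, "f"] \arrow[d, "g"'] & B \arrow[d, "h"] \\
C \arrow[r, "k"'] & D
\end{tikzcd}
\end{document}
Commutativity of this square asserts exactly h ∘ f = k ∘ g.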
Example 2
In order for the diagram below to commute, three equalities must be satisfied:
Here, since the first equality follows from the last two, it suffices to show that (2) and (3) are true in order for the diagram to commute. However, since equality (3) generally d
|
https://en.wikipedia.org/wiki/Interposer
|
An interposer is an electrical interface routing between one socket or connection to another. The purpose of an interposer is to spread a connection to a wider pitch or to reroute a connection to a different connection.
Interposer comes from the Latin word "interpōnere", meaning "to put between". They are often used in BGA packages, multi-chip modules and high bandwidth memory.
A common example of an interposer is an integrated circuit die to BGA, such as in the Pentium II. This is done through various substrates, both rigid and flexible, most commonly FR4 for rigid, and polyimide for flexible. Silicon and glass are also being evaluated as integration methods. Interposer stacks are also a widely accepted, cost-effective alternative to 3D ICs. There are already several products with interposer technology in the market, notably the AMD Fiji/Fury GPU, and the Xilinx Virtex-7 FPGA. In 2016, CEA Leti demonstrated their second generation 3D-NoC technology which combines small dies ("chiplets"), fabricated at the FDSOI 28 nm node, on a 65 nm CMOS interposer.
Another example of an interposer would be the adapter used to plug a SATA drive into a SAS backplane with redundant ports. While SAS drives have two ports that can be used to connect to redundant paths or storage controllers, SATA drives only have a single port. Directly, they can only connect to a single controller or path. SATA drives can be connected to nearly all SAS backplanes without adapters, but using an interposer with a port switching logic allows providing path redundancy.
See also
Die preparation
Integrated circuit
Semiconductor fabrication
|
https://en.wikipedia.org/wiki/Log%20analysis
|
In computer log management and intelligence, log analysis (or system and network log analysis) is an art and science seeking to make sense of computer-generated records (also called log or audit trail records). The process of creating such records is called data logging.
Typical reasons why people perform log analysis are:
Compliance with security policies
Compliance with audit or regulation
System troubleshooting
Forensics (during investigations or in response to a subpoena)
Security incident response
Understanding online user behavior
Logs are emitted by network devices, operating systems, applications and all manner of intelligent or programmable devices. A stream of messages in time sequence often comprises a log. Logs may be directed to files and stored on disk or directed as a network stream to a log collector.
Log messages must usually be interpreted with respect to the internal state of their source (e.g., the application), and they announce security-relevant or operations-relevant events (e.g., a user login or a system error).
Logs are often created by software developers to aid in the debugging of the operation of an application or understanding how users are interacting with a system, such as a search engine. The syntax and semantics of data within log messages are usually application or vendor-specific. The terminology may also vary; for example, the authentication of a user to an application may be described as a log in, a logon, a user connection or an authentication event. Hence, log analysis must interpret messages within the context of an application, vendor, system or configuration to make useful comparisons to messages from different log sources.
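As a toy illustration of that kind of terminology mapping (the event wordings and the canonical category names below are invented for this example), a small lookup table can normalize vendor-specific phrasing into one event type before comparison:

#include <stdio.h>
#include <string.h>

/* map vendor-specific event wording to a canonical event type (illustrative only) */
static const char *normalize_event(const char *msg)
{
    static const struct { const char *needle; const char *canonical; } table[] = {
        { "log in",          "user-authentication" },
        { "logon",           "user-authentication" },
        { "user connection", "user-authentication" },
        { "authentication",  "user-authentication" },
        { "logout",          "user-session-end"    },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strstr(msg, table[i].needle))       /* naive case-sensitive substring match */
            return table[i].canonical;
    return "unclassified";
}

int main(void)
{
    /* prints "user-authentication" for this sample message */
    printf("%s\n", normalize_event("2024-05-01 10:32 sshd: user logon succeeded"));
    return 0;
}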
Log message format or content may not always be fully documented. A task of the log analyst is to induce the system to emit the full range of messages to understand the complete domain from which the messages must be interpreted.
A log analyst may map varying terminology from different log sources into a uni
|
https://en.wikipedia.org/wiki/Oxford%20Dictionary%20of%20Biology
|
Oxford Dictionary of Biology (often abbreviated to ODB) is a dictionary published in multiple editions by Oxford University Press. With more than 5,500 entries, it contains comprehensive information in English on topics relating to biology, biophysics, and biochemistry. The first edition was published in 1985 as A Concise Dictionary of Biology. The seventh edition, A Dictionary of Biology, was published in 2015 and was edited by Robert Hine and Elizabeth Martin.
Robert Hine studied at King's College London and University of Aberdeen and since 1984 he has contributed to numerous journals and books.
Digital and on-line availability
The sixth and seventh editions of the ODB are available online for members of subscribed institutions and for subscribed individuals via Oxford Reference.
Editions
The first edition of Oxford Dictionary of Biology was first published in 1985 and the seventh edition in 2015.
|
https://en.wikipedia.org/wiki/Almost%20all
|
In mathematics, the term "almost all" means "all but a negligible quantity". More precisely, if X is a set, "almost all elements of X" means "all elements of X but those in a negligible subset of X". The meaning of "negligible" depends on the mathematical context; for instance, it can mean finite, countable, or null.
In contrast, "almost no" means "a negligible quantity"; that is, "almost no elements of X" means "a negligible quantity of elements of X".
Meanings in different areas of mathematics
Prevalent meaning
Throughout mathematics, "almost all" is sometimes used to mean "all (elements of an infinite set) except for finitely many". This use occurs in philosophy as well. Similarly, "almost all" can mean "all (elements of an uncountable set) except for countably many".
Examples:
Almost all positive integers are greater than 10^12.
Almost all prime numbers are odd (2 is the only exception).
Almost all polyhedra are irregular (as there are only nine exceptions: the five Platonic solids and the four Kepler–Poinsot polyhedra).
If P is a nonzero polynomial, then P(x) ≠ 0 for almost all x (if not all x).
Meaning in measure theory
When speaking about the reals, sometimes "almost all" can mean "all reals except for a null set". Similarly, if S is some set of reals, "almost all numbers in S" can mean "all numbers in S except for those in a null set". The real line can be thought of as a one-dimensional Euclidean space. In the more general case of an n-dimensional space (where n is a positive integer), these definitions can be generalised to "all points except for those in a null set" or "all points in S except for those in a null set" (this time, S is a set of points in the space). Even more generally, "almost all" is sometimes used in the sense of "almost everywhere" in measure theory, or in the closely related sense of "almost surely" in probability theory.
Examples:
In a measure space, such as the real line, countable sets are null. The set of rational numbers is
|
https://en.wikipedia.org/wiki/Interface%20logic%20model
|
In electronics, the interface logic model (ILM) is a technique to model blocks in a hierarchical VLSI implementation flow. It is a gate-level model of a physical block in which only the connections from the inputs to the first stage of flip-flops, and the connections from the last stage of flip-flops to the outputs, are kept in the model, including those flip-flops and the clock tree driving them. All other internal flip-flop to flip-flop paths are stripped out of the ILM.
The advantage of an ILM is that the entire path (clock-to-clock path) is visible at the top level for interface nets, unlike in a traditional block-based hierarchical implementation flow. This gives better accuracy in analysis for interface nets at negligible additional memory and runtime overhead.
|
https://en.wikipedia.org/wiki/Waveform%20shaping
|
Waveform shaping in electronics is the modification of the shape of an electronic waveform. It is closely connected with waveform diversity and waveform design, which are extensively studied in signal processing. Shaping the waveform is of particular interest in active sensing (radar, sonar) for better detection performance, as well as in communication schemes (CDMA, frequency hopping) and in biology (for the design of animal stimuli).
See also Modulation, Pulse compression, Spread spectrum, Transmit diversity, Ambiguity function, Autocorrelation, and Cross-correlation.
Further reading
Hao He, Jian Li, and Petre Stoica. Waveform design for active sensing systems: a computational approach. Cambridge University Press, 2012.
Solomon W. Golomb, and Guang Gong. Signal design for good correlation: for wireless communication, cryptography, and radar. Cambridge University Press, 2005.
M. Soltanalian. Signal Design for Active Sensing and Communications. Uppsala Dissertations from the Faculty of Science and Technology (printed by Elanders Sverige AB), 2014.
Nadav Levanon and Eli Mozeson. Radar Signals. Wiley, 2004.
Jian Li, and Petre Stoica, eds. Robust adaptive beamforming. New Jersey: John Wiley, 2006.
Fulvio Gini, Antonio De Maio, and Lee Patton, eds. Waveform design and diversity for advanced radar systems. Institution of engineering and technology, 2012.
Mark R. Bell, "Information theory and radar waveform design." IEEE Transactions on Information Theory, 39.5 (1993): 1578–1597.
Robert Calderbank, S. Howard, and Bill Moran. "Waveform diversity in radar signal processing." IEEE Signal Processing Magazine, 26.1 (2009): 32–41.
Augusto Aubry, Antonio De Maio, Bo Jiang, and Shuzhong Zhang. "Ambiguity function shaping for cognitive radar via complex quartic optimization." IEEE Transactions on Signal Processing 61 (2013): 5603–5619.
John J. Benedetto, Ioannis Konstantinidis, and Muralidhar Rangaswamy. "Phase-coded waveforms and their design." IEEE Signal Processing
|
https://en.wikipedia.org/wiki/Nomen%20novum
|
In biological nomenclature, a nomen novum (Latin for "new name"), new replacement name (or replacement name, new substitute name, substitute name) is a scientific name that is created specifically to replace another scientific name, but only when this other name cannot be used for technical, nomenclatural reasons (for example because it is a homonym: it is spelled the same as an existing, older name). It does not apply when a name is changed for taxonomic reasons (representing a change in scientific insight). It is frequently abbreviated, e.g. as nomen nov. or nom. nov.
Zoology
In zoology establishing a new replacement name is a nomenclatural act and it must be expressly proposed to substitute a previously established and available name.
Often, the older name cannot be used because another animal was described earlier with exactly the same name. For example, Lindholm discovered in 1913 that a generic name Jelskia established by Bourguignat in 1877 for a European freshwater snail could not be used because another author, Taczanowski, had proposed the same name in 1871 for a spider. So Lindholm proposed a new replacement name, Borysthenia. This is an objective synonym of Jelskia Bourguignat, 1877, because it has the same type species, and it is used today as Borysthenia.
New replacement names are also often necessary for species names, and they have been proposed for more than 100 years. In 1859 Bourguignat saw that the name Bulimus cinereus Mortillet, 1851 for an Italian snail could not be used because Reeve had proposed exactly the same name in 1848 for a completely different Bolivian snail. Since it was understood even then that the older name always has priority, Bourguignat proposed a new replacement name, Bulimus psarolenus, and also added a note why this was necessary. The Italian snail is known to this day under the name Solatopupa psarolena (Bourguignat, 1859).
A new replacement name must obey certain rules; not all of these are well known.
|
https://en.wikipedia.org/wiki/List%20of%20textbooks%20in%20thermodynamics%20and%20statistical%20mechanics
|
A list of notable textbooks in thermodynamics and statistical mechanics, arranged by category and date.
Only or mainly thermodynamics
Both thermodynamics and statistical mechanics
Thermal Physics. 2e Kittel, Charles; and Kroemer, Herbert (1980) New York: W.H. Freeman
2e (1988) Chichester: Wiley , .
(1990) New York: Dover
Statistical mechanics
. 2e (1936) Cambridge: University Press; (1980) Cambridge University Press.
; (1979) New York: Dover
Landau and Lifshitz, Statistical Physics. Vol. 5 of the Course of Theoretical Physics. 3e (1976) Translated by J.B. Sykes and M.J. Kearsley (1980) Oxford: Pergamon Press.
. 3e (1995) Oxford: Butterworth-Heinemann
. 2e (1987) New York: Wiley
. 2e (1988) Amsterdam: North-Holland . 2e (1991) Berlin: Springer Verlag ,
; (2005) New York: Dover
2e (2000) Sausalito, Calif.: University Science
2e (1998) Chichester: Wiley
Specialized topics
Kinetic theory
Lifshitz and Pitaevskii, Physical Kinetics. Vol. 10 of the Course of Theoretical Physics (3rd Ed). Translated by J.B. Sykes and R.N. Franklin (1981) London: Pergamon.
Quantum statistical mechanics
Mathematics of statistical mechanics
Translated by G. Gamow (1949) New York: Dover
. Reissued (1974), (1989); (1999) Singapore: World Scientific
; (1984) Cambridge: University Press . 2e (2004) Cambridge: University Press
Miscellaneous
(available online here)
Historical
(1896, 1898) Translated by Stephen G. Brush (1964) Berkeley: University of California Press; (1995) New York: Dover
Translated by J. Kestin (1956) New York: Academic Press.
German Encyclopedia of Mathematical Sciences. Translated by Michael J. Moravcsik (1959) Ithaca: Cornell University Press; (1990) New York: Dover
See also
List of textbooks on classical mechanics and quantum mechanics
List of textbooks in electromagnetism
List of books on general relativity
Further reading
|
https://en.wikipedia.org/wiki/Bit%20banging
|
In computer engineering and electrical engineering, bit banging is a "term of art" for any method of data transmission that employs software as a substitute for dedicated hardware to generate transmitted signals or process received signals. Software directly sets and samples the states of GPIOs (e.g., pins on a microcontroller), and is responsible for meeting all timing requirements and protocol sequencing of the signals. In contrast to bit banging, dedicated hardware (e.g., UART, SPI, I²C) satisfies these requirements and, if necessary, provides a data buffer to relax software timing requirements. Bit banging can be implemented at very low cost, and is commonly used in some embedded systems.
Bit banging allows a device to implement different protocols with minimal or no hardware changes. In some cases, bit banging is made feasible by newer, faster processors because more recent hardware operates much more quickly than hardware did when standard communications protocols were created.
C code example
The following C language code example transmits a byte of data on an SPI bus.
// transmit byte serially, MSB first
void send_8bit_serial_data(unsigned char data)
{
    int i;

    // select device (active low)
    output_low(SD_CS);

    // send bits 7..0
    for (i = 0; i < 8; i++)
    {
        // consider leftmost bit
        // set line high if bit is 1, low if bit is 0
        if (data & 0x80)
            output_high(SD_DI);
        else
            output_low(SD_DI);

        // pulse the clock state to indicate that bit value should be read
        output_low(SD_CLK);
        delay();
        output_high(SD_CLK);

        // shift byte left so next bit will be leftmost
        data <<= 1;
    }

    // deselect device
    output_high(SD_CS);
}
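A receiving routine can be bit-banged in the same way by sampling an input pin once per clock pulse. The sketch below is a hypothetical counterpart, not part of the original example: it assumes the same platform-specific helpers and pin macros as above, plus an input_state() read helper and an SD_DO data-out pin that are introduced purely for illustration.

// receive byte serially, MSB first (hypothetical counterpart to the example above)
unsigned char receive_8bit_serial_data(void)
{
    int i;
    unsigned char data = 0;

    // select device (active low)
    output_low(SD_CS);

    // read bits 7..0
    for (i = 0; i < 8; i++)
    {
        // generate one clock pulse so the device shifts out its next bit
        output_low(SD_CLK);
        delay();
        output_high(SD_CLK);

        // make room for the incoming bit, then sample the data-out line
        data <<= 1;
        if (input_state(SD_DO))   // input_state() and SD_DO are assumed helpers
            data |= 0x01;
    }

    // deselect device
    output_high(SD_CS);
    return data;
}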
Considerations
The question whether to deploy bit banging or not is a trade-off between load, performance and reliability on one hand, and the availability of a hardware alternative on the other. The software emulation process consumes more
|
https://en.wikipedia.org/wiki/Runtime%20application%20self-protection
|
Runtime application self-protection (RASP) is a security technology that uses runtime instrumentation to detect and block computer attacks by taking advantage of information from inside the running software. The technology differs from perimeter-based protections such as firewalls, that can only detect and block attacks by using network information without contextual awareness. RASP technology is said to improve the security of software by monitoring its inputs, and blocking those that could allow attacks, while protecting the runtime environment from unwanted changes and tampering. RASP-protected applications rely less on external devices like firewalls to provide runtime security protection. When a threat is detected RASP can prevent exploitation and possibly take other actions, including terminating a user's session, shutting the application down, alerting security personnel and sending a warning to the user. RASP aims to close the gap left by application security testing and network perimeter controls, neither of which have enough insight into real-time data and event flows to either prevent vulnerabilities slipping through the review process or block new threats that were unforeseen during development.
Implementation
RASP can be integrated as a framework or module that runs in conjunction with a program's code, libraries and system calls. The technology can also be implemented as a virtualization. RASP is similar to interactive application security testing (IAST); the key difference is that IAST is focused on identifying vulnerabilities within the applications, while RASP is focused on protecting against cybersecurity attacks that may take advantage of those vulnerabilities or other attack vectors.
Deployment options
RASP solutions can be deployed in two different ways: monitor or protection mode. In monitor mode, the RASP solution reports on web application attacks but does not block any attack. In protection mode, the RASP solution reports and blocks web a
|
https://en.wikipedia.org/wiki/Packet%20processing
|
In digital communications networks, packet processing refers to the wide variety of algorithms that are applied to a packet of data or information as it moves through the various network elements of a communications network. With the increased performance of network interfaces, there is a corresponding need for faster packet processing.
There are two broad classes of packet processing algorithms that align with the standardized network subdivision of control plane and data plane. The algorithms are applied to either:
Control information contained in a packet which is used to transfer the packet safely and efficiently from origin to destination
or
The data content (frequently called the payload) of the packet which is used to provide some content-specific transformation or take a content-driven action.
Within any network enabled device (e.g. router, switch, network element or terminal such as a computer or smartphone) it is the packet processing subsystem that manages the traversal of the multi-layered network or protocol stack from the lower, physical and network layers all the way through to the application layer.
History
The history of packet processing is the history of the Internet and packet switching. Packet processing milestones include:
1962–1968: Early research into packet switching
1969: 1st two nodes of ARPANET connected; 15 sites connected by end of 1971 with email as a new application
1973: Packet switched voice connections over ARPANET with Network Voice Protocol. File Transfer Protocol (FTP) specified
1974: Transmission Control Protocol (TCP) specified
1979: VoIP – NVP running on early versions of IP
1981: IP and TCP standardized
1982: TCP/IP standardized
1991: World Wide Web (WWW) released by CERN, authored by Tim Berners-Lee
1998: IPv6 first published
Historical references and timeline can be found in the External Resources section below.
Communications models
For networks to succeed it is necessary to have a unifying standard
|
https://en.wikipedia.org/wiki/Classification%20of%20Fatou%20components
|
In mathematics, Fatou components are components of the Fatou set. They were named after Pierre Fatou.
Rational case
If f is a rational function defined on the extended complex plane, and if it is a nonlinear function (degree > 1), then for a periodic component U of the Fatou set, exactly one of the following holds:
U contains an attracting periodic point
U is parabolic
U is a Siegel disc: a simply connected Fatou component on which f(z) is analytically conjugate to a Euclidean rotation of the unit disc onto itself by an irrational rotation angle.
U is a Herman ring: a doubly connected Fatou component (an annulus) on which f(z) is analytically conjugate to a Euclidean rotation of a round annulus, again by an irrational rotation angle.
Attracting periodic point
The components of the Newton map z ↦ z − f(z)/f′(z) contain the attracting points that are the solutions to f(z) = 0. This is because that map is the one used for finding solutions to the equation f(z) = 0 by the Newton–Raphson formula. The solutions must naturally be attracting fixed points.
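A hedged numerical sketch of this behaviour, using the cubic z³ − 1 purely as an illustrative choice: iterating the Newton map from a few starting points shows each orbit settling onto one of the roots, i.e. onto an attracting fixed point of the map.

def newton_map(z):
    # One Newton-Raphson step for the illustrative polynomial p(z) = z**3 - 1.
    return z - (z**3 - 1) / (3 * z**2)

for z0 in (1.5 + 0.2j, -1 + 1j, -1 - 0.5j):
    z = z0
    for _ in range(50):
        z = newton_map(z)
    print(z0, "->", complex(round(z.real, 6), round(z.imag, 6)))
# Each orbit converges to one of the cube roots of unity, an attracting fixed point.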
Herman ring
The map
and t = 0.6151732... will produce a Herman ring. It is shown by Shishikura that the degree of such a map must be at least 3, as in this example.
More than one type of component
If the degree d is greater than 2, then there is more than one critical point, and there can be more than one type of component.
Transcendental case
Baker domain
In the case of transcendental functions there is another type of periodic Fatou component, called a Baker domain: these are "domains on which the iterates tend to an essential singularity (not possible for polynomials and rational functions)"; one example of such a function is:
Wandering domain
Transcendental maps may have wandering domains: these are Fatou components that are not eventually periodic.
See also
No-wandering-domain theorem
Montel's theorem
John Domains
Basins of attraction
|
https://en.wikipedia.org/wiki/Square%20root%20of%203
|
The square root of 3 is the positive real number that, when multiplied by itself, gives the number 3. It is denoted mathematically as or . It is more precisely called the principal square root of 3 to distinguish it from the negative number with the same property. The square root of 3 is an irrational number. It is also known as Theodorus' constant, after Theodorus of Cyrene, who proved its irrationality.
, its numerical value in decimal notation had been computed to at least ten billion digits. Its decimal expansion, written here to 65 decimal places, is given by :
The fraction (...) can be used as a good approximation. Despite having a denominator of only 56, it differs from the correct value by less than (approximately , with a relative error of ). The rounded value of is correct to within 0.01% of the actual value.
The fraction (...) is accurate to .
Archimedes reported a range for its value: .
The lower limit is an accurate approximation for to (six decimal places, relative error ) and the upper limit to (four decimal places, relative error ).
Expressions
It can be expressed as the continued fraction .
So it is true to say:
then when :
It can also be expressed by generalized continued fractions such as
which is evaluated at every second term.
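The approximations quoted above can be reproduced numerically; the sketch below assumes the standard periodic continued fraction [1; 1, 2, 1, 2, ...] for the square root of 3 and lists successive convergents with their errors (the well-known fraction with denominator 56 appears among them):

from fractions import Fraction
from math import sqrt

def sqrt3_convergents(pairs):
    # Partial quotients of sqrt(3), assumed to be [1; 1, 2, 1, 2, ...].
    terms = [1] + [1, 2] * pairs
    convergents = []
    for k in range(1, len(terms) + 1):
        value = Fraction(terms[k - 1])
        for a in reversed(terms[:k - 1]):
            value = a + 1 / value       # fold the continued fraction from the inside out
        convergents.append(value)
    return convergents

for c in sqrt3_convergents(5):
    print(c, float(c) - sqrt(3))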
Geometry and trigonometry
The square root of 3 can be found as the leg length of an equilateral triangle that encompasses a circle with a diameter of 1.
If an equilateral triangle with sides of length 1 is cut into two equal halves, by bisecting an internal angle across to make a right angle with one side, the resulting right triangle's hypotenuse has length one, and its legs have lengths 1/2 and √3/2. From this, trigonometric values such as tan(60°) = √3 and sin(60°) = cos(30°) = √3/2 follow.
The square root of 3 also appears in algebraic expressions for various other trigonometric constants, including the sines of 3°, 12°, 15°, 21°, 24°, 33°, 39°, 48°, 51°, 57°, 66°, 69°, 75°, 78°, 84°, and 87°.
It is the distance between parallel sides of a regular hexagon with sides of
|
https://en.wikipedia.org/wiki/Microbivory
|
Microbivory (adj. microbivorous, microbivore) is a feeding behavior consisting of eating microbes (especially bacteria) practiced by animals of the mesofauna, microfauna and meiofauna.
Microbivorous animals include some soil nematodes, springtails or flies such as Drosophila sharpi. A well known example of microbivorous nematodes is the model roundworm Caenorhabditis elegans which is maintained in culture in labs on agar plates, fed with the 'OP50' Escherichia coli strain of bacteria.
In food webs of ecosystems, microbivores can be distinguished from detritivores, which are generally thought of as playing the role of decomposers, as microbivores do not consume decaying dead matter but only living microorganisms.
Use of term in robotics
The term 'microbivore' is also used for the concept of robots that autonomously obtain their energy from bacteria. Robert Freitas has also proposed microbivore robots that would attack pathogens in the manner of white blood cells.
See also
Bacterivore
|
https://en.wikipedia.org/wiki/Biological%20constraints
|
Biological constraints are factors which make populations resistant to evolutionary change. One proposed definition of constraint is "A property of a trait that, although possibly adaptive in the environment in which it originally evolved, acts to place limits on the production of new phenotypic variants." Constraint has played an important role in the development of such ideas as homology and body plans.
Types of constraint
Any aspect of an organism that has not changed over a certain period of time could be considered to provide evidence for "constraint" of some sort. To make the concept more useful, it is therefore necessary to divide it into smaller units. First, one can consider the pattern of constraint as evidenced by phylogenetic analysis and the use of phylogenetic comparative methods; this is often termed phylogenetic inertia, or phylogenetic constraint. It refers to the tendency of related taxa to share traits based on phylogeny. Charles Darwin spoke of this concept in his 1859 book "On the Origin of Species" as "Unity of Type", and went on to explain the phenomenon as existing because organisms do not start over from scratch, but have characteristics that are built upon already existing ones inherited from their ancestors; these inherited characteristics likely limit the amount of evolution seen in new taxa.
If one sees particular features of organisms that have not changed over rather long periods of time (many generations), then this could suggest some constraint on their ability to change (evolve). However, it is not clear that mere documentation of lack of change in a particular character is good evidence for constraint in the sense of the character being unable to change. For example, long-term stabilizing selection related to stable environments might cause stasis. It has often been considered more fruitful to consider constraint in its causal sense: what are the causes of lack of change?
Stabilizing s
|
https://en.wikipedia.org/wiki/List%20of%20sums%20of%20reciprocals
|
In mathematics and especially number theory, the sum of reciprocals generally is computed for the reciprocals of some or all of the positive integers (counting numbers)—that is, it is generally the sum of unit fractions. If infinitely many numbers have their reciprocals summed, generally the terms are given in a certain sequence and the first n of them are summed, then one more is included to give the sum of the first n+1 of them, etc.
If only finitely many numbers are included, the key issue is usually to find a simple expression for the value of the sum, or to require the sum to be less than a certain value, or to determine whether the sum is ever an integer.
For an infinite series of reciprocals, the issues are twofold: First, does the sequence of sums diverge—that is, does it eventually exceed any given number—or does it converge, meaning there is some number that it gets arbitrarily close to without ever exceeding it? (A set of positive integers is said to be large if the sum of its reciprocals diverges, and small if it converges.) Second, if it converges, what is a simple expression for the value it converges to, is that value rational or irrational, and is that value algebraic or transcendental?
Finitely many terms
The harmonic mean of a set of positive integers is the number of numbers times the reciprocal of the sum of their reciprocals.
The optic equation requires the sum of the reciprocals of two positive integers a and b to equal the reciprocal of a third positive integer c. All solutions are given by a = mn + m², b = mn + n², c = mn. This equation appears in various contexts in elementary geometry.
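Both of the statements above are easy to check directly; the following sketch computes a harmonic mean with exact rational arithmetic and verifies the stated parametrization of the optic equation for small m and n:

from fractions import Fraction

def harmonic_mean(numbers):
    # The number of numbers times the reciprocal of the sum of their reciprocals.
    return len(numbers) / sum(Fraction(1, x) for x in numbers)

print(harmonic_mean([2, 3, 6]))                  # 3

# Optic equation 1/a + 1/b = 1/c with a = mn + m^2, b = mn + n^2, c = mn.
for m in range(1, 6):
    for n in range(1, 6):
        a, b, c = m*n + m*m, m*n + n*n, m*n
        assert Fraction(1, a) + Fraction(1, b) == Fraction(1, c)
print("optic-equation parametrization verified for m, n up to 5")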
The Fermat–Catalan conjecture concerns a certain Diophantine equation, equating the sum of two terms, each a positive integer raised to a positive integer power, to a third term that is also a positive integer raised to a positive integer power (with the base integers having no prime factor in common). The conjecture asks whether the equation has an infi
|
https://en.wikipedia.org/wiki/Perturbation%20theory
|
In mathematics and applied mathematics, perturbation theory comprises methods for finding an approximate solution to a problem, by starting from the exact solution of a related, simpler problem. A critical feature of the technique is a middle step that breaks the problem into "solvable" and "perturbative" parts. In perturbation theory, the solution is expressed as a power series in a small parameter. The first term is the known solution to the solvable problem. Successive terms in the series, at higher powers of the small parameter, usually become smaller. An approximate 'perturbation solution' is obtained by truncating the series, usually by keeping only the first two terms: the solution to the known problem and the 'first order' perturbation correction.
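As a concrete toy example (the quadratic below is chosen here for illustration and is not from the article): to find a root of x² = 1 + εx for small ε, write x = x₀ + εx₁ + ε²x₂ + …, match powers of ε to get x₀ = 1, x₁ = 1/2, x₂ = 1/8, and truncate. The sketch compares this truncated series with the exact root:

from math import sqrt

def exact_root(eps):
    # Positive root of x**2 - eps*x - 1 = 0.
    return (eps + sqrt(eps**2 + 4)) / 2

def perturbative_root(eps):
    # Zeroth-, first- and second-order terms of the perturbation series.
    return 1 + eps / 2 + eps**2 / 8

for eps in (0.1, 0.01):
    print(eps, exact_root(eps), perturbative_root(eps))
# The disagreement between the two shrinks rapidly as eps decreases.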
Perturbation theory is used in a wide range of fields, and reaches its most sophisticated and advanced forms in quantum field theory. Perturbation theory (quantum mechanics) describes the use of this method in quantum mechanics. The field in general remains actively and heavily researched across multiple disciplines.
Description
Perturbation theory develops an expression for the desired solution in terms of a formal power series known as a perturbation series in some "small" parameter, that quantifies the deviation from the exactly solvable problem. The leading term in this power series is the solution of the exactly solvable problem, while further terms describe the deviation in the solution, due to the deviation from the initial problem. Formally, we have for the approximation to the full solution a series in the small parameter (here called ), like the following:
In this example, would be the known solution to the exactly solvable initial problem, and the terms represent the first-order, second-order, third-order, and higher-order terms, which may be found iteratively by a mechanistic but increasingly difficult procedure. For small these higher-order terms in the series generally (but not always) become successively small
|
https://en.wikipedia.org/wiki/Individuation
|
The principle of individuation, or , describes the manner in which a thing is identified as distinct from other things.
The concept appears in numerous fields and is encountered in works of Leibniz, Carl Jung, Gunther Anders, Gilbert Simondon, Bernard Stiegler, Friedrich Nietzsche, Arthur Schopenhauer, David Bohm, Henri Bergson, Gilles Deleuze, and Manuel DeLanda.
Usage
The word individuation occurs with different meanings and connotations in different fields.
In philosophy
Philosophically, "individuation" expresses the general idea of how a thing is identified as an individual thing that "is not something else". This includes how an individual person is held to be different from other elements in the world and how a person is distinct from other persons. By the seventeenth century, philosophers began to associate the question of individuation or what brings about individuality at any one time with the question of identity or what constitutes sameness at different points in time.
In Jungian psychology
In analytical psychology, individuation is the process by which the individual self develops out of an undifferentiated unconscious – seen as a developmental psychic process during which innate elements of personality, the components of the immature psyche, and the experiences of the person's life become, if the process is more or less successful, integrated over time into a well-functioning whole. Other psychoanalytic theorists describe it as the stage where an individual transcends group attachment and narcissistic self-absorption.
In the news industry
The news industry has begun using the term individuation to denote new printing and on-line technologies that permit mass customization of the contents of a newspaper, a magazine, a broadcast program, or a website so that its contents match each user's unique interests. This differs from the traditional mass-media practice of producing the same contents for all readers, viewers, listeners, or on-line users.
Com
|
https://en.wikipedia.org/wiki/Davydov%20soliton
|
In quantum biology, the Davydov soliton (after the Soviet Ukrainian physicist Alexander Davydov) is a quasiparticle representing an excitation propagating along the self-trapped amide I groups within the α-helices of proteins. It is a solution of the Davydov Hamiltonian.
The Davydov model describes the interaction of the amide I vibrations with the hydrogen bonds that stabilize the α-helices of proteins. The elementary excitations within the α-helix are given by the phonons which correspond to the deformational oscillations of the lattice, and the excitons which describe the internal amide I excitations of the peptide groups. Referring to the atomic structure of an α-helix region of protein the mechanism that creates the Davydov soliton (polaron, exciton) can be described as follows: vibrational energy of the C=O stretching (or amide I) oscillators that is localized on the α-helix acts through a phonon coupling effect to distort the structure of the α-helix, while the helical distortion reacts again through phonon coupling to trap the amide I oscillation energy and prevent its dispersion. This effect is called self-localization or self-trapping. Solitons in which the energy is distributed in a fashion preserving the helical symmetry are dynamically unstable, and such symmetrical solitons once formed decay rapidly when they propagate. On the other hand, an asymmetric soliton which spontaneously breaks the local translational and helical symmetries possesses the lowest energy and is a robust localized entity.
Davydov Hamiltonian
The Davydov Hamiltonian is formally similar to the Fröhlich-Holstein Hamiltonian for the interaction of electrons with a polarizable lattice. Thus the Hamiltonian of the energy operator is H = H_ex + H_ph + H_int, where H_ex is the exciton Hamiltonian, which describes the motion of the amide I excitations between adjacent sites; H_ph is the phonon Hamiltonian, which describes the vibrations of the lattice; and H_int is the interaction Hamiltonian, which describes the interaction
|
https://en.wikipedia.org/wiki/Pre-charge
|
Pre-charge of the powerline voltages in a high voltage DC application is a preliminary mode which limits the inrush current during the power up procedure.
A high-voltage system with a large capacitive load can be exposed to high electric current during initial turn-on. This current, if not limited, can cause considerable stress or damage to the system components. In some applications, the occasion to activate the system is a rare occurrence, such as in commercial utility power distribution. In other systems such as vehicle applications, pre-charge will occur with each use of the system, multiple times per day. Precharging is implemented to increase the lifespan of electronic components and increase reliability of the high voltage system.
Background: inrush currents into capacitors
Inrush currents into capacitive components are a key concern in power-up stress to components. When DC input power is applied to a capacitive load, the step response of the voltage input will cause the input capacitor to charge. The capacitor charging starts with an inrush current and ends with an exponential decay down to the steady state condition. When the magnitude of the inrush peak is very large compared to the maximum rating of the components, then component stress is to be expected.
The current into a capacitor is known to be i = C·(dV/dt): the peak inrush current will depend upon the capacitance C and the rate of change of the voltage (dV/dt). The inrush current will increase as the capacitance value increases, and the inrush current will increase as the voltage of the power source increases. This second parameter is of primary concern in high voltage power distribution systems. By their nature, high voltage power sources will deliver high voltage into the distribution system. Capacitive loads will then be subject to high inrush currents upon power-up. The stress to the components must be understood and minimized.
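A brief numerical sketch of these relations (the component values below are illustrative only): for a capacitive load charged through a pre-charge resistor, the peak inrush current at the instant of connection is V/R, and the capacitor voltage then approaches the bus voltage exponentially with time constant R·C.

import math

V_bus = 400.0    # illustrative bus voltage, volts
C = 2e-3         # illustrative load capacitance, farads
R_pre = 50.0     # illustrative pre-charge resistance, ohms

tau = R_pre * C                       # time constant of the pre-charge
print("peak inrush current:", V_bus / R_pre, "A")

for t in (0.0, tau, 3 * tau, 5 * tau):
    v_cap = V_bus * (1 - math.exp(-t / tau))
    i = (V_bus - v_cap) / R_pre       # equals C * dV/dt on the RC charging curve
    print(f"t = {t:5.2f} s   Vcap = {v_cap:6.1f} V   I = {i:5.2f} A")
# Without the resistor, dV/dt at turn-on would be far larger and so would i = C * dV/dt.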
The objective of a pre-charge function is to limit the magnitude of the inru
|
https://en.wikipedia.org/wiki/Hand-waving
|
Hand-waving (with various spellings) is a pejorative label for attempting to be seen as effective – in word, reasoning, or deed – while actually doing nothing effective or substantial. It is often applied to debating techniques that involve fallacies, misdirection and the glossing over of details. It is also used academically to indicate unproven claims and skipped steps in proofs (sometimes intentionally, as in lectures and instructional materials), with some specific meanings in particular fields, including literary criticism, speculative fiction, mathematics, logic, science and engineering.
The term can additionally be used in work situations, when attempts are made to display productivity or assure accountability without actually resulting in them. The term can also be used as a self-admission of, and suggestion to defer discussion about, an allegedly unimportant weakness in one's own argument's evidence, to forestall an opponent dwelling on it. In debate competition, certain cases of this form of hand-waving may be explicitly permitted.
Hand-waving is an idiomatic metaphor, derived in part from the use of excessive gesticulation, perceived as unproductive, distracting or nervous, in communication or other effort. The term also evokes the sleight-of-hand distraction techniques of stage magic, and suggests that the speaker or writer seems to believe that if they, figuratively speaking, simply wave their hands, no one will notice or speak up about the holes in the reasoning. This implication of misleading intent has been reinforced by the pop-culture influence of the Star Wars franchise, in which mystically powerful hand-waving is fictionally used for mind control, and some uses of the term in public discourse are explicit Star Wars references.
Actual hand-waving motions may be used either by a speaker to indicate a desire to avoid going into details, or by critics to indicate that they believe the proponent of an argument is engaging in a verbal hand-wave in
|
https://en.wikipedia.org/wiki/Del
|
Del, or nabla, is an operator used in mathematics (particularly in vector calculus) as a vector differential operator, usually represented by the nabla symbol ∇. When applied to a function defined on a one-dimensional domain, it denotes the standard derivative of the function as defined in calculus. When applied to a field (a function defined on a multi-dimensional domain), it may denote any one of three operations depending on the way it is applied: the gradient or (locally) steepest slope of a scalar field (or sometimes of a vector field, as in the Navier–Stokes equations); the divergence of a vector field; or the curl (rotation) of a vector field.
Del is a very convenient mathematical notation for those three operations (gradient, divergence, and curl) that makes many equations easier to write and remember. The del symbol (or nabla) can be formally defined as a three-dimensional vector operator whose three components are the corresponding partial derivative operators. As a vector operator, it can act on scalar and vector fields in three different ways, giving rise to three different differential operations: first, it can act on scalar fields by a "formal" scalar multiplication—to give a vector field called the gradient; second, it can act on vector fields by a "formal" dot product—to give a scalar field called the divergence; and lastly, it can act on vector fields by a "formal" cross product—to give a vector field called the curl. These "formal" products do not necessarily commute with other operators or products. These three uses, detailed below, are summarized as:
Gradient:
Divergence:
Curl:
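A small symbolic sketch of those three operations, using SymPy's vector module (the particular scalar and vector fields are arbitrary illustrations):

from sympy.vector import CoordSys3D, gradient, divergence, curl

N = CoordSys3D('N')                              # Cartesian coordinates N.x, N.y, N.z
f = N.x**2 * N.y + N.z                           # an arbitrary scalar field
F = N.x*N.y*N.i + N.y*N.z*N.j + N.z*N.x*N.k      # an arbitrary vector field

print(gradient(f))      # vector field: del applied to a scalar field
print(divergence(F))    # scalar field: del "dotted" with a vector field
print(curl(F))          # vector field: del "crossed" with a vector field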
Definition
In the Cartesian coordinate system with coordinates (x₁, …, xₙ) and standard basis (e₁, …, eₙ), del is a vector operator whose components are the partial derivative operators ∂/∂x₁, …, ∂/∂xₙ; that is,
where the expression in parentheses is a row vector. In the three-dimensional Cartesian coordinate system with coordinates (x, y, z) and standard basis or unit vectors of the axes (e_x, e_y, e_z), del is written as
As
|
https://en.wikipedia.org/wiki/Hardware-in-the-loop%20simulation
|
Hardware-in-the-loop (HIL) simulation, HWIL, or HITL, is a technique that is used in the development and testing of complex real-time embedded systems. HIL simulation provides an effective testing platform by adding the complexity of the process-actuator system, known as a plant, to the test platform. The complexity of the plant under control is included in testing and development by adding a mathematical representation of all related dynamic systems. These mathematical representations are referred to as the "plant simulation". The embedded system to be tested interacts with this plant simulation.
How HIL works
HIL simulation must include electrical emulation of sensors and actuators. These electrical emulations act as the interface between the plant simulation and the embedded system under test. The value of each electrically emulated sensor is controlled by the plant simulation and is read by the embedded system under test (feedback). Likewise, the embedded system under test implements its control algorithms by outputting actuator control signals. Changes in the control signals result in changes to variable values in the plant simulation.
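A deliberately simplified software-only sketch of that loop (the plant model, sensor, and controller below are invented for illustration; in a real HIL rig the controller would be the physical embedded unit and the sensor/actuator signals would be emulated electrically):

def plant_step(speed, brake_torque, dt=0.01):
    """Toy plant simulation: wheel speed decays with the applied brake torque."""
    return max(0.0, speed - brake_torque * dt)

def controller_under_test(sensed_speed, target_speed=5.0):
    """Stand-in for the embedded controller: simple proportional braking."""
    return max(0.0, 2.0 * (sensed_speed - target_speed))

speed = 30.0
for _ in range(500):
    sensed = speed                               # emulated sensor value fed to the controller
    command = controller_under_test(sensed)      # actuator command read back from it
    speed = plant_step(speed, command)           # plant simulation advances one time step
print(f"final simulated wheel speed: {speed:.2f}")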
For example, a HIL simulation platform for the development of automotive anti-lock braking systems may have mathematical representations for each of the following subsystems in the plant simulation:
Vehicle dynamics, such as suspension, wheels, tires, roll, pitch and yaw;
Dynamics of the brake system's hydraulic components;
Road characteristics.
Uses
In many cases, the most effective way to develop an embedded system is to connect the embedded system to the real plant. In other cases, HIL simulation is more efficient. The metric of development and testing efficiency is typically a formula that includes the following factors:
1. Cost
2. Duration
3. Safety
4. Feasibility
The cost of the approach should be a measure of the cost of all tools and effort. The duration of development and testing affects the time-to-market for
|
https://en.wikipedia.org/wiki/Electrochromatography
|
Electrochromatography is a chemical separation technique in analytical chemistry, biochemistry and molecular biology used to resolve and separate mostly large biomolecules such as proteins. It is a combination of size exclusion chromatography (gel filtration chromatography) and gel electrophoresis. These separation mechanisms operate essentially in superposition along the length of a gel filtration column to which an axial electric field gradient has been added. The molecules are separated by size due to the gel filtration mechanism and by electrophoretic mobility due to the gel electrophoresis mechanism. Additionally there are secondary chromatographic solute retention mechanisms.
Capillary electrochromatography
Capillary electrochromatography (CEC) is an electrochromatography technique in which the liquid mobile phase is driven through a capillary containing the chromatographic stationary phase by electroosmosis. It is a combination of high-performance liquid chromatography and capillary electrophoresis. The capillary is packed with an HPLC stationary phase and a high voltage is applied; separation is achieved by electrophoretic migration of the analyte and differential partitioning in the stationary phase.
See also
Chromatography
Protein electrophoresis
Electrofocusing
Two-dimensional gel electrophoresis
Temperature gradient gel electrophoresis
|
https://en.wikipedia.org/wiki/List%20of%20homological%20algebra%20topics
|
This is a list of homological algebra topics, by Wikipedia page.
Basic techniques
Cokernel
Exact sequence
Chain complex
Differential module
Five lemma
Short five lemma
Snake lemma
Nine lemma
Extension (algebra)
Central extension
Splitting lemma
Projective module
Injective module
Projective resolution
Injective resolution
Koszul complex
Exact functor
Derived functor
Ext functor
Tor functor
Filtration (abstract algebra)
Spectral sequence
Abelian category
Triangulated category
Derived category
Applications
Group cohomology
Galois cohomology
Lie algebra cohomology
Sheaf cohomology
Whitehead problem
Homological conjectures in commutative algebra
Homological algebra
|
https://en.wikipedia.org/wiki/Lorenzo%27s%20oil
|
Lorenzo’s oil is a liquid solution, made of 4 parts glycerol trioleate and 1 part glycerol trierucate, which are the triacylglycerol forms of oleic acid and erucic acid. It is prepared from olive oil and rapeseed oil.
It is used in the investigational treatment of asymptomatic patients with adrenoleukodystrophy (ALD), a nervous system disorder.
The development of the oil was led by Augusto and Michaela Odone after their son Lorenzo was diagnosed with the disease in 1984, at the age of five. Lorenzo was predicted to die within a few years. His parents sought experimental treatment options, and the initial formulation of the oil was developed by retired British scientist Don Suddaby (formerly of Croda International). Suddaby and his colleague, Keith Coupland, received U.S. Patent No. 5,331,009 for the oil. The royalties received by Augusto were paid to The Myelin Project which he and Michaela founded to further research treatments for ALD and similar disorders. The Odones and their invention obtained widespread publicity in 1992 because of the film Lorenzo's Oil.
Research on the effectiveness of Lorenzo's Oil has seen mixed results, with possible benefit for asymptomatic ALD patients but of unpredictable or no benefit to those with symptoms, suggesting its possible role as a preventative measure in families identified as ALD dominant. Lorenzo Odone died on May 30, 2008, at the age of 30; he was bedridden with paralysis and died from aspiration pneumonia, likely caused by having inhaled food.
Treatment costs
Lorenzo's oil costs approximately $400 USD for a month's treatment.
Proposed mechanism of action
The mixture of fatty acids purportedly reduces the levels of very long chain fatty acids (VLCFAs), which are elevated in ALD. It does so by competitively inhibiting the enzyme that forms VLCFAs.
Effectiveness
Lorenzo's oil, in combination with a diet low in VLCFA, has been investigated for its possible effects on the progression of ALD. Clinical results have been mi
|
https://en.wikipedia.org/wiki/Noether%27s%20theorem
|
Noether's theorem or Noether's first theorem states that every differentiable symmetry of the action of a physical system with conservative forces has a corresponding conservation law. The theorem was proven by mathematician Emmy Noether in 1915 and published in 1918. The action of a physical system is the integral over time of a Lagrangian function, from which the system's behavior can be determined by the principle of least action. This theorem only applies to continuous and smooth symmetries over physical space.
Noether's theorem is used in theoretical physics and the calculus of variations. It reveals the fundamental relation between the symmetries of a physical system and the conservation laws. It also made modern theoretical physicists much more focused on symmetries of physical systems. A generalization of the formulations on constants of motion in Lagrangian and Hamiltonian mechanics (developed in 1788 and 1833, respectively), it does not apply to systems that cannot be modeled with a Lagrangian alone (e.g., systems with a Rayleigh dissipation function). In particular, dissipative systems with continuous symmetries need not have a corresponding conservation law.
Briefly, the relationships between symmetries and conservation laws are as follows:
1) Uniformity of space distance-wise ⟹ conservation of linear momentum;
2) Isotropy of space direction-wise ⟹ conservation of angular momentum;
3) Uniformity of time ⟹ conservation of energy
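As a hedged one-line illustration in the Lagrangian setting (the notation below is chosen here, not taken from the article): if the Lagrangian L(q, q̇) is unchanged by an infinitesimal shift of the coordinates, the associated conserved quantity is built from the conjugate momenta,

\[
  q_i \;\to\; q_i + \varepsilon\, K_i(q)
  \quad\Longrightarrow\quad
  Q \;=\; \sum_i \frac{\partial L}{\partial \dot q_i}\, K_i(q),
  \qquad \frac{dQ}{dt} = 0 .
\]

For a spatial translation (all K_i constant) this Q is the total linear momentum, matching item 1) above.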
Basic illustrations and background
As an illustration, if a physical system behaves the same regardless of how it is oriented in space (that is, it's invariant), its Lagrangian is symmetric under continuous rotation: from this symmetry, Noether's theorem dictates that the angular momentum of the system be conserved, as a consequence of its laws of motion. The physical system itself need not be symmetric; a jagged asteroid tumbling in space conserves angular momentum despite its asymmetry. It is the laws of its motion that
|
https://en.wikipedia.org/wiki/NAT%20traversal%20with%20session%20border%20controllers
|
Network address translators (NAT) are used to overcome the lack of IPv4 address availability by hiding an enterprise or even an operator's network behind one or few IP addresses. The devices behind the NAT use private IP addresses that are not routable in the public Internet.
The Session Initiation Protocol (SIP) has established itself as the de facto standard for voice over IP (VoIP) communication. In order to establish a call, a caller sends a SIP message, which contains its own IP address. The callee is supposed to reply back with a SIP message destined to the IP addresses included in the received SIP message. This will obviously not work if the caller is behind a NAT and is using a private IP address.
Probably the single biggest mistake in SIP design was ignoring the existence of NATs. This error came from a belief in the IETF leadership that IP address space would be exhausted rapidly, which would necessitate a global upgrade to IPv6 and eliminate the need for NATs. The SIP standard thus assumed that NATs do not exist, an assumption that turned out to be wrong. SIP simply did not work for the majority of Internet users who are behind NATs. At the same time it became apparent that the standardization life-cycle is slower than the pace of the market: Session Border Controllers (SBC) were born, and began to fix what the standards failed to do: NAT traversal.
If a user agent is located behind a NAT, it will use a private IP address as its contact address in the Contact and Via headers as well as in the SDP part. This information would then be useless for anyone trying to contact this user agent from the public Internet.
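A toy Python sketch of the header rewriting an SBC performs in that situation (the SIP text, addresses, and helper function are invented for illustration; real SBCs also rewrite the Via headers and the SDP body, as described below):

def rewrite_contact(sip_message: str, public_ip: str, public_port: int) -> str:
    """Replace the private address in the Contact header with the SBC's public one."""
    out = []
    for line in sip_message.splitlines():
        if line.lower().startswith("contact:"):
            user = line.split("sip:")[1].split("@")[0]
            line = f"Contact: <sip:{user}@{public_ip}:{public_port}>"
        out.append(line)
    return "\r\n".join(out)

invite = (
    "INVITE sip:bob@example.com SIP/2.0\r\n"
    "Via: SIP/2.0/UDP 192.168.1.10:5060\r\n"
    "Contact: <sip:alice@192.168.1.10:5060>\r\n"    # private, unroutable address
)
print(rewrite_contact(invite, "203.0.113.7", 5060))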
There are different NAT traversal solutions such as STUN, TURN and ICE. Which solution to use depends on the behavior of the NAT and the call scenario. When using an SBC to solve the NAT traversal issues the most common approach for SBC is to act as the public interface of the user agents. This is achieved by replacing the user agent's cont
|
https://en.wikipedia.org/wiki/Undescribed%20taxon
|
In taxonomy, an undescribed taxon is a taxon (for example, a species) that has been discovered, but not yet formally described and named. The various Nomenclature Codes specify the requirements for a new taxon to be validly described and named. Until such a description has been published, the taxon has no formal or official name, although a temporary, informal name is often used. A published scientific name may not fulfil the requirements of the Codes for various reasons. For example, if the taxon was not adequately described, its name is called a nomen nudum. It is possible for a taxon to be "undescribed" for an extensive period of time, even if unofficial descriptions are published.
An undescribed species may be referred to with the genus name, followed by "sp"., but this abbreviation is also used to label specimens or images that are too incomplete to be identified at the species level. In some cases, there is more than one undescribed species in a genus. In this case, these are often referred to by a number or letter. In the shark genus Pristiophorus, for example, there were, for some time, four undescribed species, informally named Pristiophorus sp. A, B, C and D. (In 2008, sp. A was described as Pristiophorus peroniensis and sp. B as P. delicatus.) When a formal description for species C or D is published, its temporary name will be replaced with a proper binomial name.
Provisional names in bacteriology
In bacteriology, a valid publication of a name requires the deposition of the bacteria in a Bacteriology Culture Collection. Species for which this is impossible cannot receive a valid binomial name; these species are classified as Candidatus.
Provisional names in botany
A provisional name for a species may consist of the number or of some other designation of a specimen in a herbarium or other collection. It may also consist of the genus name followed by such a specimen identifier or by a provisional specific epithet which is enclosed by quotation marks. In
|
https://en.wikipedia.org/wiki/Prime%20constant
|
The prime constant is the real number ρ whose nth binary digit is 1 if n is prime and 0 if n is composite or 1.
In other words, ρ is the number whose binary expansion corresponds to the indicator function of the set of prime numbers. That is,
ρ = Σ_p 1/2^p = Σ_{n=1}^∞ χ_P(n)/2^n,
where p indicates a prime and χ_P is the characteristic function of the set of prime numbers.
The beginning of the decimal expansion of ρ is:
The beginning of the binary expansion is:
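A short sketch computing the constant directly from the definition (the primality test and the truncation point are illustrative):

from fractions import Fraction

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

# rho = sum over n of chi_P(n) / 2**n, truncated at n = 64 for illustration.
rho = sum(Fraction(1, 2**n) for n in range(2, 65) if is_prime(n))
print(float(rho))                                                    # ~0.414682...
print("".join("1" if is_prime(n) else "0" for n in range(1, 21)))    # first binary digits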
Irrationality
The number ρ can be shown to be irrational. To see why, suppose it were rational.
Denote the nth digit of the binary expansion of ρ by r_n. Then, since ρ is assumed rational, its binary expansion is eventually periodic, and so there exist positive integers N and k such that
r_n = r_(n+ik) for all n > N and all positive integers i.
Since there are an infinite number of primes, we may choose a prime p > N. By definition we see that r_p = 1. As noted, r_p = r_(p+ik) for all positive integers i, so in particular r_(p+pk) = 1. But p + pk = p(k+1) is composite, since k + 1 ≥ 2, so by definition r_(p+pk) = 0. This contradiction shows that ρ is irrational.
|
https://en.wikipedia.org/wiki/List%20of%20recreational%20number%20theory%20topics
|
This is a list of recreational number theory topics (see number theory, recreational mathematics). Listing here is not pejorative: many famous topics in number theory have origins in challenging problems posed purely for their own sake.
See list of number theory topics for pages dealing with aspects of number theory with more consolidated theories.
Number sequences
Integer sequence
Fibonacci sequence
Golden mean base
Fibonacci coding
Lucas sequence
Padovan sequence
Figurate numbers
Polygonal number
Triangular number
Square number
Pentagonal number
Hexagonal number
Heptagonal number
Octagonal number
Nonagonal number
Decagonal number
Centered polygonal number
Centered square number
Centered pentagonal number
Centered hexagonal number
Tetrahedral number
Pyramidal number
Triangular pyramidal number
Square pyramidal number
Pentagonal pyramidal number
Hexagonal pyramidal number
Heptagonal pyramidal number
Octahedral number
Star number
Perfect number
Quasiperfect number
Almost perfect number
Multiply perfect number
Hyperperfect number
Semiperfect number
Primitive semiperfect number
Unitary perfect number
Weird number
Untouchable number
Amicable number
Sociable number
Abundant number
Deficient number
Amenable number
Aliquot sequence
Super-Poulet number
Lucky number
Powerful number
Primeval number
Palindromic number
Telephone number
Triangular square number
Harmonic divisor number
Sphenic number
Smith number
Double Mersenne number
Zeisel number
Heteromecic number
Niven numbers
Superparticular number
Highly composite number
Highly totient number
Practical number
Juggler sequence
Look-and-say sequence
Digits
Polydivisible number
Automorphic number
Armstrong number
Self number
Harshad number
Keith number
Kaprekar number
Digit sum
Persistence of a number
Perfect digital invariant
Happy number
Perfect digit-to-digit invariant
Factorion
Emirp
Palindromic prime
Home prime
Normal number
Stoneham number
Champernowne constant
Absolutely normal number
Repunit
Repdigit
Prime and r
|
https://en.wikipedia.org/wiki/Telematic%20control%20unit
|
A telematic control unit (TCU) in the automobile industry is the embedded system on board a vehicle that wirelessly connects the vehicle to cloud services or other vehicles via V2X standards over a cellular network. The TCU collects telemetry data from the vehicle, such as position, speed, engine data, connectivity quality, etc., from various sub-systems over data and control buses. It may also provide in-vehicle connectivity via Wi-Fi and Bluetooth and implements the eCall function when applicable.
In the automotive domain, a TCU can also be a transmission control unit.
A TCU consists of:
a satellite navigation (GNSS) unit, which keeps track of the latitude and longitude values of the vehicle;
an external interface for mobile communication (GSM, GPRS, Wi-Fi, WiMax, LTE or 5G), which provides the tracked values to a centralized geographical information system (GIS) database server;
an electronic processing unit;
a microcontroller, microprocessor, or field programmable gate array (FPGA) which processes the information and acts as an interface to the GPS;
a mobile communication unit;
memory for saving GPS values in mobile-free zones or to intelligently store information about the vehicle's sensor data.
battery module
See also
Telematics
Auto parts
Embedded systems
External links
What is a Telematics Control Unit & How does it Work?
Automotive telematics control unit (TCU) architecture
What is a Telematics Control Unit (TCU)?
|
https://en.wikipedia.org/wiki/Food%20engineering
|
Food engineering is a scientific, academic, and professional field that interprets and applies principles of engineering, science, and mathematics to food manufacturing and operations, including the processing, production, handling, storage, conservation, control, packaging and distribution of food products. Given its reliance on food science and broader engineering disciplines such as electrical, mechanical, civil, chemical, industrial and agricultural engineering, food engineering is considered a multidisciplinary and narrow field.
Due to the complex nature of food materials, food engineering also combines the study of more specific chemical and physical concepts such as biochemistry, microbiology, food chemistry, thermodynamics, transport phenomena, rheology, and heat transfer. Food engineers apply this knowledge to the cost-effective design, production, and commercialization of sustainable, safe, nutritious, healthy, appealing, affordable and high-quality ingredients and foods, as well as to the development of food systems, machinery, and instrumentation.
History
Although food engineering is a relatively recent and evolving field of study, it is based on long-established concepts and activities. The traditional focus of food engineering was preservation, which involved stabilizing and sterilizing foods, preventing spoilage, and preserving nutrients in food for prolonged periods of time. More specific traditional activities include food dehydration and concentration, protective packaging, canning and freeze-drying. The development of food technologies was greatly influenced and urged by wars and long voyages, including space missions, where long-lasting and nutritious foods were essential for survival. Other ancient activities include milling, storage, and fermentation processes. Although several traditional activities remain of concern and form the basis of today’s technologies and innovations, the focus of food engineering has recently shifted to food qua
|
https://en.wikipedia.org/wiki/Washout%20filter
|
In signal processing, a washout filter is a stable high-pass filter with zero static gain. This leads to the filtering of lower-frequency input signals, leaving the steady-state output unaffected by unwanted low-frequency inputs.
General Background
The common transfer function for a washout filter has the first-order high-pass form y(s)/u(s) = s/(s + ω0),
where u is the input variable, y is the output of the filter, and the corner frequency ω0 of the filter is set in the denominator. This filter produces a non-zero output only during transient periods, when the input signal is of higher frequency, and not for a constant steady-state value. Conversely, the filter will “wash out” sensed input signals that are of lower frequency (constant steady-state signals). [C.K. Wang]
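A discretized sketch of this behaviour (a first-order high-pass realization; the time step, time constant, and test signals are illustrative):

import math

def washout(signal, dt, tau):
    """First-order washout: a constant input decays to zero at the output."""
    alpha = tau / (tau + dt)
    y, x_prev, y_prev = [], signal[0], 0.0
    for x in signal:
        y_prev = alpha * (y_prev + x - x_prev)    # discrete high-pass update
        x_prev = x
        y.append(y_prev)
    return y

dt, tau = 0.01, 1.0
steady = [1.0] * 300                                       # constant (steady-state) input
transient = [math.sin(20 * k * dt) for k in range(300)]    # higher-frequency input

print(abs(washout(steady, dt, tau)[-1]))                   # ~0: the constant input is washed out
print(max(abs(v) for v in washout(transient, dt, tau)))    # close to 1: the transient passes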
Flight Control Application
Yaw Control System
In modern swept wing aircraft, yaw damping control systems are used to dampen and stabilize the Dutch-roll motion of an aircraft in flight. However, when a pilot inputs a command to yaw the aircraft for maneuvering (such as steady turns), the rudder becomes a single control surface that functions to dampen the Dutch-roll motion and yaw the aircraft. The result is a suppressed yaw rate and more required input from the pilot to counter the suppression. [C.K. Wang]
To counter the yaw command suppression, the installation of washout filters before the yaw dampers and rudder actuators will allow the yaw damper feedback loop in the control system to filter out the low frequency signals or state inputs. In the case of a steady turn during flight, the low frequency signal is the pilot command and the washout filter will allow the turn command signal to not be dampened by the yaw damper in the feedback circuit. [C.K. Wang] An example of this use can be found in Yaw Damper Design for a 747® Jet Aircraft.
|
https://en.wikipedia.org/wiki/Laplace%20limit
|
In mathematics, the Laplace limit is the maximum value of the eccentricity for which a solution to Kepler's equation, in terms of a power series in the eccentricity, converges. It is approximately
0.66274 34193 49181 58097 47420 97109 25290.
Kepler's equation M = E − ε sin E relates the mean anomaly M with the eccentric anomaly E for a body moving in an ellipse with eccentricity ε. This equation cannot be solved for E in terms of elementary functions, but the Lagrange reversion theorem gives the solution as a power series in ε:
or in general
Laplace realized that this series converges for small values of the eccentricity, but diverges for any value of M other than a multiple of π if the eccentricity exceeds a certain value that does not depend on M. The Laplace limit is this value. It is the radius of convergence of the power series.
It is given by the solution to the transcendental equation λ·exp(√(1 + λ²)) / (1 + √(1 + λ²)) = 1.
No closed-form expression or infinite series is known for the Laplace limit.
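Assuming the usual form of that transcendental equation, λ·exp(√(1+λ²)) / (1 + √(1+λ²)) = 1, a few lines of bisection reproduce the quoted value:

from math import exp, sqrt

def f(x):
    # x * exp(sqrt(1 + x*x)) / (1 + sqrt(1 + x*x)) - 1, which vanishes at the Laplace limit.
    s = sqrt(1 + x * x)
    return x * exp(s) / (1 + s) - 1

lo, hi = 0.0, 1.0                    # f(0) < 0 < f(1), so a root is bracketed
for _ in range(60):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if f(mid) < 0 else (lo, mid)
print((lo + hi) / 2)                 # ~0.6627434193...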
History
Laplace calculated the value 0.66195 in 1827. The Italian astronomer Francesco Carlini found the limit 0.66 five years before Laplace. Cauchy in 1829 gave the precise value 0.66274.
See also
Orbital eccentricity
|
https://en.wikipedia.org/wiki/Linear%20time-invariant%20system
|
In system analysis, among other fields of study, a linear time-invariant (LTI) system is a system that produces an output signal from any input signal subject to the constraints of linearity and time-invariance; these terms are briefly defined below. These properties apply (exactly or approximately) to many important physical systems, in which case the response y(t) of the system to an arbitrary input x(t) can be found directly using convolution: y(t) = (x ∗ h)(t), where h(t) is called the system's impulse response and ∗ represents convolution (not to be confused with multiplication). What's more, there are systematic methods for solving any such system (determining h(t)), whereas systems not meeting both properties are generally more difficult (or impossible) to solve analytically. A good example of an LTI system is any electrical circuit consisting of resistors, capacitors, inductors and linear amplifiers.
Linear time-invariant system theory is also used in image processing, where the systems have spatial dimensions instead of, or in addition to, a temporal dimension. These systems may be referred to as linear translation-invariant to give the terminology the most general reach. In the case of generic discrete-time (i.e., sampled) systems, linear shift-invariant is the corresponding term. LTI system theory is an area of applied mathematics which has direct applications in electrical circuit analysis and design, signal processing and filter design, control theory, mechanical engineering, image processing, the design of measuring instruments of many sorts, NMR spectroscopy, and many other technical areas where systems of ordinary differential equations present themselves.
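A discrete-time sketch of that convolution relation (impulse response and input chosen arbitrarily), together with numerical spot-checks of the two defining properties:

import numpy as np

h = np.array([0.5, 0.3, 0.2])             # an arbitrary impulse response
x = np.array([1.0, 0.0, 0.0, 2.0, 0.0])   # an arbitrary input signal

y = np.convolve(x, h)                     # LTI output: y = x * h
print(y)

x_shift = np.roll(np.append(x, 0.0), 1)   # the same input delayed by one sample
print(np.allclose(np.convolve(2 * x, h), 2 * y))               # linearity (scaling)
print(np.allclose(np.convolve(x_shift, h)[1:len(y) + 1], y))   # time invariance (shifted output)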
Overview
The defining properties of any LTI system are linearity and time invariance.
Linearity means that the relationship between the input x(t) and the output y(t), both being regarded as functions, is a linear mapping: if a is a constant then the system output to a·x(t) is a·y(t); if x′(t) is a further input with system output y′(t) then the output of the
|
https://en.wikipedia.org/wiki/QualNet
|
QualNet is a testing and simulation tool owned and provided by Scalable Network Technologies, Inc. As network simulation software, it acts as a planning, testing, and training tool which mimics the behavior of a physical communications network.
See also
Network simulation
Wireless networking
Computer network analysis
Computer networking
Simulation software
|
https://en.wikipedia.org/wiki/Dirac%20comb
|
In mathematics, a Dirac comb (also known as sha function, impulse train or sampling function) is a periodic function with the formula
Ш_T(t) = Σ_{k=−∞}^{+∞} δ(t − kT)
for some given period T. Here t is a real variable and the sum extends over all integers k. The Dirac delta function δ and the Dirac comb are tempered distributions. The graph of the function resembles a comb (with the δs as the comb's teeth), hence its name and the use of the comb-like Cyrillic letter sha (Ш) to denote the function.
The symbol Ш(t), where the period is omitted, represents a Dirac comb of unit period. This implies Ш_T(t) = (1/T)·Ш(t/T).
Because the Dirac comb function is periodic, it can be represented as a Fourier series based on the Dirichlet kernel:
The Dirac comb function allows one to represent both continuous and discrete phenomena, such as sampling and aliasing, in a single framework of continuous Fourier analysis on tempered distributions, without any reference to Fourier series. The Fourier transform of a Dirac comb is another Dirac comb. Owing to the Convolution Theorem on tempered distributions which turns out to be the Poisson summation formula, in signal processing, the Dirac comb allows modelling sampling by multiplication with it, but it also allows modelling periodization by convolution with it.
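A discrete numerical illustration of that last property (the array length and tooth spacing are chosen for convenience): the DFT of a periodic impulse train is again a periodic impulse train, with the spacing of the spectral teeth inversely proportional to the spacing in time.

import numpy as np

N, T = 64, 4                        # 64 samples, one impulse every 4 samples
comb = np.zeros(N)
comb[::T] = 1.0                     # discrete analogue of a Dirac comb

spectrum = np.abs(np.fft.fft(comb))
print(np.nonzero(comb)[0])                  # teeth in time:      0, 4, 8, ...
print(np.nonzero(spectrum > 1e-9)[0])       # teeth in frequency: 0, 16, 32, 48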
Dirac-comb identity
The Dirac comb can be constructed in two ways, either by using the comb operator (performing sampling) applied to the function that is constantly 1, or, alternatively, by using the rep operator (performing periodization) applied to the Dirac delta δ. Formally, this yields
where
and
In signal processing, this property on one hand allows sampling a function by multiplication with , and on the other hand it also allows the periodization of by convolution with ().
The Dirac comb identity is a particular case of the Convolution Theorem for tempered distributions.
Scaling
The scaling property of the Dirac comb follows from the properties of the Dirac delta function.
Since for positive real numbers , it
|
https://en.wikipedia.org/wiki/Microsoft%20Support%20Diagnostic%20Tool
|
The Microsoft Support Diagnostic Tool (MSDT) is a legacy service in Microsoft Windows that allows Microsoft technical support agents to analyze diagnostic data remotely for troubleshooting purposes. In April 2022 it was observed to have a security vulnerability that allowed remote code execution which was being exploited to attack computers in Russia and Belarus, and later against the Tibetan government in exile. Microsoft advised a temporary workaround of disabling the MSDT by editing the Windows registry.
Use
When contacting support the user is told to run MSDT and given a unique "passkey" which they enter. They are also given an "incident number" to uniquely identify their case. The MSDT can also be run offline which will generate a .CAB file which can be uploaded from a computer with an internet connection.
Security Vulnerabilities
Follina
Follina is the name given to a remote code execution (RCE) vulnerability, a type of arbitrary code execution (ACE) exploit, in the Microsoft Support Diagnostic Tool (MSDT) which was first widely publicized on May 27, 2022, by a security research group called Nao Sec. This exploit allows a remote attacker to use a Microsoft Office document template to execute code via MSDT. This works by exploiting the ability of Microsoft Office document templates to download additional content from a remote server. If the size of the downloaded content is large enough it causes a buffer overflow allowing a payload of Powershell code to be executed without explicit notification to the user. On May 30 Microsoft issued CVE-2022-30190 with guidance that users should disable MSDT. Malicious actors have been observed exploiting the bug to attack computers in Russia and Belarus since April, and it is believed Chinese state actors had been exploiting it to attack the Tibetan government in exile based in India. Microsoft patched this vulnerability in its June 2022 patches.
DogWalk
The DogWalk vulnerability is a remote code execution (RCE) vulne
|
https://en.wikipedia.org/wiki/Bandwidth%20expansion
|
Bandwidth expansion is a technique for widening the bandwidth of the resonances in an LPC filter. This is done by moving all the poles towards the origin by a constant factor γ. The bandwidth-expanded filter A′(z) can be easily derived from the original filter A(z) by:
A′(z) = A(z/γ)
Let A(z) be expressed as:
A(z) = Σ_{k=0}^{N} a_k z^(−k)
The bandwidth-expanded filter can then be expressed as:
A′(z) = A(z/γ) = Σ_{k=0}^{N} a_k γ^k z^(−k)
In other words, each coefficient a_k in the original filter is simply multiplied by γ^k in the bandwidth-expanded filter. The simplicity of this transformation makes it attractive, especially in CELP coding of speech, where it is often used for the perceptual noise weighting and/or to stabilize the LPC analysis. However, when it comes to stabilizing the LPC analysis, lag windowing is often preferred to bandwidth expansion.
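A sketch of that coefficient scaling (the LPC coefficients below are arbitrary, and gamma slightly below 1 is a typical choice):

import numpy as np

def bandwidth_expand(lpc_coeffs, gamma):
    """Scale coefficient a_k by gamma**k, which moves the filter poles toward the origin."""
    a = np.asarray(lpc_coeffs, dtype=float)
    return a * gamma ** np.arange(len(a))

a = [1.0, -1.6, 0.95]            # arbitrary A(z) = 1 - 1.6 z^-1 + 0.95 z^-2
a_bw = bandwidth_expand(a, 0.95)

print(np.abs(np.roots(a)))       # pole radii of the original synthesis filter 1/A(z)
print(np.abs(np.roots(a_bw)))    # the radii shrink by the factor gamma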
|
https://en.wikipedia.org/wiki/Biometric%20device
|
A biometric device is a security identification and authentication device. Such devices use automated methods of verifying or recognising the identity of a living person based on a physiological or behavioral characteristic. These characteristics include fingerprints, facial images, iris and voice recognition.
History
Biometric devices have been in use for thousands of years. Non-automated biometric devices have been in use since 500 BC, when ancient Babylonians would sign their business transactions by pressing their fingertips into clay tablets.
Automation in biometric devices was first seen in the 1960s. The Federal Bureau of Investigation (FBI) in the 1960s, introduced the Indentimat, which started checking for fingerprints to maintain criminal records. The first systems measured the shape of the hand and the length of the fingers. Although discontinued in the 1980s, the system set a precedent for future Biometric Devices.
Types of biometric devices
There are two categories of biometric devices:
Contact Devices - These types of devices require contact with a body part of a living person. They are mainly fingerprint scanners, either single fingerprint, dual fingerprint or slap (4+4+2) fingerprint scanners, and hand geometry scanners.
Contactless Devices - These devices don't need any type of contact. The main examples of these are face, iris, retina and palm vein scanners and voice identification devices.
Subgroups
The characteristic of the human body is used to access information by the users. According to these characteristics, the sub-divided groups are
Chemical biometric devices: Analyses the segments of the DNA to grant access to the users.
Visual biometric devices: Analyses the visual features of humans to grant access, which include iris recognition, face recognition, finger recognition, and retina recognition.
Behavioral biometric devices: Analyses the walking ability and signatures (velocity of sign, width of sign, pressure of sign) distinct to
|
https://en.wikipedia.org/wiki/Anamorphosis
|
Anamorphosis is a distorted projection that requires the viewer to occupy a specific vantage point, use special devices, or both to view a recognizable image. It is used in painting, photography, sculpture and installation, toys, and film special effects. The word is derived from the Greek prefix ana-, meaning "back" or "again", and the word morphe, meaning "shape" or "form". Extreme anamorphosis has been used by artists to disguise caricatures, erotic and scatological scenes, and other furtive images from a casual spectator, while revealing an undistorted image to the knowledgeable viewer.
Types of projection
There are two main types of anamorphosis: perspective (oblique) and mirror (catoptric). More complex anamorphoses can be devised using distorted lenses, mirrors, or other optical transformations.
An oblique anamorphism forms an affine transformation of the subject. Early examples of perspectival anamorphosis date to the Renaissance of the fifteenth century and largely relate to religious themes.
With mirror anamorphosis, a conical or cylindrical mirror is placed on the distorted drawing or painting to reveal an undistorted image. The deformed picture relies on laws regarding angles of incidence of reflection. The length of the flat drawing's curves are reduced when viewed in a curved mirror, such that the distortions resolve into a recognizable picture. Unlike perspective anamorphosis, catoptric images can be viewed from many angles. The technique was originally developed in China during the Ming Dynasty, and the first European manual on mirror anamorphosis was published around 1630 by the mathematician Vaulezard.
Channel anamorphosis or tabula scalata has a different image on each side of a corrugated carrier. A straight frontal view shows an unclear mix of the images, while each image can be viewed correctly from a certain angle.
History
Prehistory
The Stone Age cave paintings at Lascaux may make use of anamorphic technique, because the oblique angle
|
https://en.wikipedia.org/wiki/Blind%20equalization
|
Blind equalization is a digital signal processing technique in which the transmitted signal is inferred (equalized) from the received signal, while making use only of the transmitted signal statistics. Hence, the use of the word blind in the name.
Blind equalization is essentially blind deconvolution applied to digital communications. Nonetheless, the emphasis in blind equalization is on online estimation of the equalization filter, which is the inverse of the channel impulse response, rather than the estimation of the channel impulse response itself. This is due to the common mode of usage of blind deconvolution in digital communications systems, as a means to extract the continuously transmitted signal from the received signal, with the channel impulse response being of secondary intrinsic importance.
The estimated equalizer is then convolved with the received signal to yield an estimation of the transmitted signal.
Problem statement
Noiseless model
Assuming a linear time invariant channel with impulse response h[n], the noiseless model relates the received signal r[n] to the transmitted signal s[n] via
r[n] = (h ∗ s)[n].
The blind equalization problem can now be formulated as follows: given the received signal r[n], find a filter w[n], called an equalization filter, such that
ŝ[n] = (w ∗ r)[n],
where ŝ[n] is an estimation of s[n].
The solution to the blind equalization problem is not unique. In fact, it may be determined only up to a signed scale factor and an arbitrary time delay. That is, if ŝ[n] and ĥ[n] are estimates of the transmitted signal and channel impulse response, respectively, then c·ŝ[n − d] and ĥ[n + d]/c give rise to the same received signal r[n] for any real scale factor c and integral time delay d. In fact, by symmetry, the roles of the signal and the channel impulse response are interchangeable.
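A toy numerical illustration of the noiseless setup (the channel, signal model, step size, and the constant-modulus style update below are all invented for the example; it is one of many possible adaptation rules, not the article's specific algorithm):

import numpy as np

rng = np.random.default_rng(0)
s = rng.choice([-1.0, 1.0], size=20000)       # unknown transmitted symbols (BPSK-like)
h = np.array([1.0, 0.4, -0.2])                # unknown channel impulse response
r = np.convolve(s, h)[: len(s)]               # received signal (noiseless model)

w = np.zeros(7)
w[3] = 1.0                                    # equalizer initialized to a centered spike
mu = 1e-3                                     # step size (illustrative)
outs = np.zeros(len(r))

for n in range(len(w), len(r)):
    x = r[n - len(w) + 1 : n + 1][::-1]       # most recent received samples
    y = w @ x                                 # equalizer output
    outs[n] = y
    w -= mu * y * (y * y - 1.0) * x           # constant-modulus stochastic-gradient update

early, late = outs[len(w):2000], outs[-2000:]
print(np.mean((np.abs(early) - 1) ** 2), np.mean((np.abs(late) - 1) ** 2))
# The dispersion of the equalizer output around unit modulus decreases as w adapts,
# using only the statistics of the received signal.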
Noisy model
In the noisy model, an additional term, , representing additive noise, is included. The model is therefore
Algorithms
Many algorithms for the solution of the blind equalization problem have been suggested over the years.
However, as one usually has access to only a finite number of s
|
https://en.wikipedia.org/wiki/Arithmetic%20logic%20unit
|
In computing, an arithmetic logic unit (ALU) is a combinational digital circuit that performs arithmetic and bitwise operations on integer binary numbers. This is in contrast to a floating-point unit (FPU), which operates on floating point numbers. It is a fundamental building block of many types of computing circuits, including the central processing unit (CPU) of computers, FPUs, and graphics processing units (GPUs).
The inputs to an ALU are the data to be operated on, called operands, and a code indicating the operation to be performed; the ALU's output is the result of the performed operation. In many designs, the ALU also has status inputs or outputs, or both, which convey information about a previous operation or the current operation, respectively, between the ALU and external status registers.
Signals
An ALU has a variety of input and output nets, which are the electrical conductors used to convey digital signals between the ALU and external circuitry. When an ALU is operating, external circuits apply signals to the ALU inputs and, in response, the ALU produces and conveys signals to external circuitry via its outputs.
Data
A basic ALU has three parallel data buses consisting of two input operands (A and B) and a result output (Y). Each data bus is a group of signals that conveys one binary integer number. Typically, the A, B and Y bus widths (the number of signals comprising each bus) are identical and match the native word size of the external circuitry (e.g., the encapsulating CPU or other processor).
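To make the bus arrangement concrete, the following toy model (purely illustrative, assuming an 8-bit word size) treats A and B as integer operands and Y as the masked result, and returns carry and zero status bits; the operation-selection code anticipates the opcode input described in the next section.

```python
WIDTH = 8
MASK = (1 << WIDTH) - 1   # assumed 8-bit word size, for illustration only

def alu(opcode: int, a: int, b: int) -> tuple[int, dict]:
    """Toy combinational ALU model: returns the result bus Y and status flags."""
    ops = {
        0b000: a + b,          # ADD
        0b001: a - b,          # SUB (two's-complement wrap-around)
        0b010: a & b,          # AND
        0b011: a | b,          # OR
        0b100: a ^ b,          # XOR
        0b101: a << 1,         # shift left by one
    }
    raw = ops[opcode]
    y = raw & MASK
    status = {"carry": bool(raw & ~MASK), "zero": y == 0}
    return y, status

print(alu(0b000, 0xF0, 0x20))   # ADD with carry out -> (16, {'carry': True, 'zero': False})
```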
Opcode
The opcode input is a parallel bus that conveys to the ALU an operation selection code, which is an enumerated value that specifies the desired arithmetic or logic operation to be performed by the ALU. The opcode size (its bus width) determines the maximum number of distinct operations the ALU can perform; for example, a four-bit opcode can specify up to sixteen different ALU operations. Generally, an ALU opcode is not the same as a machine langua
|
https://en.wikipedia.org/wiki/Abstract%20index%20notation
|
Abstract index notation (also referred to as slot-naming index notation) is a mathematical notation for tensors and spinors that uses indices to indicate their types, rather than their components in a particular basis. The indices are mere placeholders, not related to any basis and, in particular, are non-numerical. Thus it should not be confused with the Ricci calculus. The notation was introduced by Roger Penrose as a way to use the formal aspects of the Einstein summation convention to compensate for the difficulty in describing contractions and covariant differentiation in modern abstract tensor notation, while preserving the explicit covariance of the expressions involved.
Let be a vector space, and its dual space. Consider, for example, an order-2 covariant tensor . Then can be identified with a bilinear form on . In other words, it is a function of two arguments in which can be represented as a pair of slots:
Abstract index notation is merely a labelling of the slots with Latin letters, which have no significance apart from their designation as labels of the slots (i.e., they are non-numerical):
A tensor contraction (or trace) between two tensors is represented by the repetition of an index label, where one label is contravariant (an upper index corresponding to the factor ) and one label is covariant (a lower index corresponding to the factor ). Thus, for instance,
is the trace of a tensor over its last two slots. This manner of representing tensor contractions by repeated indices is formally similar to the Einstein summation convention. However, as the indices are non-numerical, it does not imply summation: rather it corresponds to the abstract basis-independent trace operation (or natural pairing) between tensor factors of type and those of type .
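As a concrete illustration (the tensor type and the labels below are chosen here for exposition, not taken from the article), for a tensor of type (1,2) repeating one upper and one lower label names the trace over those two slots:

```latex
% T is of type (1,2): one slot for V, two slots for V*.
T^{a}{}_{bc} \;\in\; V \otimes V^{*} \otimes V^{*}
% Repeating the label a (once up, once down) denotes the basis-independent
% contraction of the first and third slots; the result has one free lower label b.
S_{b} \;=\; T^{a}{}_{ba}
```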
Abstract indices and tensor spaces
A general homogeneous tensor is an element of a tensor product of copies of and , such as
Label each factor in this tensor product with a Latin letter
|
https://en.wikipedia.org/wiki/VLSI%20Project
|
The VLSI Project was a DARPA-program initiated by Robert Kahn in 1978 that provided research funding to a wide variety of university-based teams in an effort to improve the state of the art in microprocessor design, then known as Very Large Scale Integration (VLSI).
The VLSI Project is one of the most influential research projects in modern computer history. Its offspring include Berkeley Software Distribution (BSD) Unix, the reduced instruction set computer (RISC) processor concept, many computer-aided design (CAD) tools still in use today, 32-bit graphics workstations, fabless manufacturing and design houses, and its own semiconductor fabrication plant (fab), MOSIS, starting in 1981. A similar DARPA project partnering with industry, VHSIC, had little or no impact.
The VLSI Project was central in promoting the Mead and Conway revolution throughout industry.
Project
New design rules
In 1975, Carver Mead, Tom Everhart and Ivan Sutherland of Caltech wrote a report for ARPA on the topic of microelectronics. Over the previous few years, Mead had coined the term "Moore's law" to describe Gordon Moore's 1965 prediction for the growth rate of complexity, and in 1974, Robert Dennard of IBM noted that the scale shrinking that formed the basis of Moore's law also affected the performance of the systems. These combined effects implied a massive increase in computing power was about to be unleashed on the industry. The report, published in 1976, suggested that ARPA fund development across a number of fields in order to deal with the complexity that was about to appear due to these "very-large-scale integrated circuits".
Later that year, Sutherland wrote a letter to his brother Bert who was at that time working at Xerox PARC. He suggested a joint effort between PARC and Caltech to begin studying these issues. Bert agreed to form a team, inviting Lynn Conway and Doug Fairbairn to join. Conway had previously worked at IBM on a supercomputer project known as ACS-1. After consid
|
https://en.wikipedia.org/wiki/Toroidal%20solenoid
|
The toroidal solenoid was an early 1946 design for a fusion power device designed by George Paget Thomson and Moses Blackman of Imperial College London. It proposed to confine a deuterium fuel plasma to a toroidal (donut-shaped) chamber using magnets, and then heating it to fusion temperatures using radio frequency energy in the fashion of a microwave oven. It is notable for being the first such design to be patented, filing a secret patent on 8 May 1946 and receiving it in 1948.
A critique by Rudolf Peierls noted several problems with the concept. Over the next few years, Thomson continued to suggest starting an experimental effort to study these issues, but was repeatedly denied as the underlying theory of plasma diffusion was not well developed. When similar concepts were suggested by Peter Thonemann that included a more practical heating arrangement, John Cockcroft began to take the concept more seriously, establishing small study groups at Harwell. Thomson adopted Thonemann's concept, abandoning the radio frequency system.
When the patent had still not been granted in early 1948, the Ministry of Supply inquired about Thomson's intentions. Thomson explained the problems he had getting a program started and that he did not want to hand off the rights until that was clarified. As the directors of the UK nuclear program, the Ministry quickly forced Harwell's hand to provide funding for Thomson's program. Thomson then released his rights to the patent, which was granted late that year. Cockcroft also funded Thonemann's work, and with that, the UK fusion program began in earnest. After the news furor over the Huemul Project in February 1951, significant funding was released and led to rapid growth of the program in the early 1950s, and ultimately to the ZETA reactor of 1958.
Conceptual development
The basic understanding of nuclear fusion was developed during the 1920s as physicists explored the new science of quantum mechanics. George Gamow's 1928 work on quantum t
|
https://en.wikipedia.org/wiki/Autonomous%20things
|
Autonomous things, abbreviated AuT, or the Internet of autonomous things, abbreviated as IoAT, is an emerging term for the technological developments that are expected to bring computers into the physical environment as autonomous entities without human direction, freely moving and interacting with humans and other objects.
Self-navigating drones are the first AuT technology in (limited) deployment. It is expected that the first mass-deployment of AuT technologies will be the autonomous car, generally expected to be available around 2020. Other currently expected AuT technologies include home robotics (e.g., machines that provide care for the elderly, infirm or young), and military robots (air, land or sea autonomous machines with information-collection or target-attack capabilities).
AuT technologies share many common traits, which justify the common notation. They are all based on recent breakthroughs in the domains of (deep) machine learning and artificial intelligence. They all require extensive and prompt regulatory developments to specify the requirements from them and to license and manage their deployment (see the further reading below). And they all require unprecedented levels of safety (e.g., automobile safety) and security, to overcome concerns about the potential negative impact of the new technology.
As an example, the autonomous car both addresses the main existing safety issues and creates new issues. It is expected to be much safer than existing vehicles, by eliminating the single most dangerous element: the driver. The US's National Highway Traffic Safety Administration estimates 94 percent of US accidents were the result of human error and poor decision-making, including speeding and impaired driving, and the Center for Internet and Society at Stanford Law School claims that "Some ninety percent of motor vehicle crashes are caused at least in part by human error". So while safety standards like the ISO 26262 specify the required safety, there is
|
https://en.wikipedia.org/wiki/Shit%20flow%20diagram
|
A shit flow diagram (also called excreta flow diagram or SFD) is a high level technical drawing used to display how excreta moves through a location, and functions as a tool to identify where improvements are needed. The diagram has a particular focus on treatment of the waste, and its final disposal or use. SFDs are most often used in developing countries.
Development
In 2012–2013, the World Bank's Water and Sanitation Program sponsored a study on the fecal sludge management of twelve cities with the goal of developing tools for better understanding the flow of excreta through the cities. As a result, Isabel Blackett, Peter Hawkins, and Christiaan Heymans authored The missing link in sanitation service delivery: a review of fecal sludge management in 12 cities. Using this as a basis, a group of excreta management institutions began collaborating in June 2014 to continue development of SFDs.
In November 2014, the SFD Promotion Initiative was started with funding from the Bill & Melinda Gates Foundation. Initially funded as a one year project, it was extended in 2015. In September 2019, the focus of the program shifted to scaling up the current methods of producing SFDs to allow for citywide sanitation in South Asia and Africa. As of 2021 more than 140 shit flow diagram reports have been published. The initiative is managed as part of the Sustainable Sanitation Alliance and is supported by the Bill and Melinda Gates Foundation. It is partnered with many nonprofit organizations such as the Centre for Science and Environment, EAWAG, and the Global Water Security & Sanitation Partnership.
Use in developing countries
The great majority of those living in urban areas, especially the poor, use non-sewer sanitation systems. This poses environmental and health challenges for growing urban areas in developing countries, and many of these countries will need to change their sanitation strategies as their population grows. Using a shit flow diagram allows political leaders
|
https://en.wikipedia.org/wiki/Abstraction%20layer
|
In computing, an abstraction layer or abstraction level is a way of hiding the working details of a subsystem. Examples of software models that use layers of abstraction include the OSI model for network protocols, OpenGL, and other graphics libraries, which allow the separation of concerns to facilitate interoperability and platform independence. Another example is Media Transfer Protocol.
In computer science, an abstraction layer is a generalization of a conceptual model or algorithm, away from any specific implementation. These generalizations arise from broad similarities that are best encapsulated by models that express similarities present in various specific implementations. The simplification provided by a good abstraction layer allows for easy reuse by distilling a useful concept or design pattern so that situations, where it may be accurately applied, can be quickly recognized.
A layer is considered to be on top of another if it depends on it. Every layer can exist without the layers above it, and requires the layers below it to function. Frequently abstraction layers can be composed into a hierarchy of abstraction levels. The OSI model comprises seven abstraction layers. Each layer of the model encapsulates and addresses a different part of the needs of digital communications, thereby reducing the complexity of the associated engineering solutions.
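As a small illustration (the interface and class names are invented here, not drawn from any particular library), an abstraction layer can be expressed as an interface that upper layers depend on, with interchangeable lower-layer implementations behind it:

```python
from abc import ABC, abstractmethod

class Storage(ABC):
    """Abstraction layer: callers depend only on this interface."""
    @abstractmethod
    def save(self, key: str, value: bytes) -> None: ...
    @abstractmethod
    def load(self, key: str) -> bytes: ...

class InMemoryStorage(Storage):
    """One concrete lower layer; could be swapped for a file or network backend."""
    def __init__(self) -> None:
        self._data: dict[str, bytes] = {}
    def save(self, key: str, value: bytes) -> None:
        self._data[key] = value
    def load(self, key: str) -> bytes:
        return self._data[key]

def application_logic(store: Storage) -> bytes:
    # The upper layer works the same regardless of which implementation sits below it.
    store.save("greeting", b"hello")
    return store.load("greeting")

print(application_logic(InMemoryStorage()))   # b'hello'
```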
A famous aphorism of David Wheeler is, "All problems in computer science can be solved by another level of indirection." This is often deliberately misquoted with "abstraction" substituted for "indirection." It is also sometimes misattributed to Butler Lampson. Kevlin Henney's corollary to this is, "...except for the problem of too many layers of indirection."
Computer architecture
In a computer architecture, a computer system is usually represented as consisting of several abstraction levels such as:
software
programmable logic
hardware
Programmable logic is often considered part of the hardware, while
|
https://en.wikipedia.org/wiki/Non-cellular%20life
|
Non-cellular life, also known as acellular life, is life that exists without a cellular structure for at least part of its life cycle. Historically, most definitions of life postulated that an organism must be composed of one or more cells, but this is no longer considered necessary, and modern criteria allow for forms of life based on other structural arrangements.
The primary candidates for non-cellular life are viruses. Some biologists consider viruses to be organisms, but others do not. Their primary objection is that no known viruses are capable of autonomous reproduction; they must rely on cells to copy them.
Viruses as non-cellular life
The nature of viruses was unclear for many years following their discovery as pathogens. They were described as poisons or toxins at first, then as "infectious proteins", but with advances in microbiology it became clear that they also possessed genetic material, a defined structure, and the ability to spontaneously assemble from their constituent parts. This spurred extensive debate as to whether they should be regarded as fundamentally organic or inorganic — as very small biological organisms or very large biochemical molecules — and since the 1950s many scientists have thought of viruses as existing at the border between chemistry and life; a gray area between living and nonliving.
Viral replication and self-assembly has implications for the study of the origin of life, as it lends further credence to the hypotheses that cells and viruses could have started as a pool of replicators where selfish genetic information was parasitizing on producers in RNA world, as two strategies to survive, gained in response to environmental conditions, or as self-assembling organic molecules.
Viroids
Viroids are the smallest infectious pathogens known to biologists, consisting solely of short strands of circular, single-stranded RNA without protein coats. They are mostly plant pathogens and some are animal pathogens, from which some ar
|
https://en.wikipedia.org/wiki/VARAN
|
VARAN (Versatile Automation Random Access Network) is a Fieldbus Ethernet-based industrial communication system.
VARAN is a wired data network technology for local data networks (LAN) with the main application in the field of automation technology. It enables the exchange of data in the form of data frames between all LAN connected devices (controllers, input/output devices, drives, etc.).
VARAN includes the definitions for types of cables and connectors, describes the physical signalling and specifies packet formats and protocols. From the perspective of the OSI model, VARAN specifies both the physical layer (OSI Layer 1) and the data link layer (OSI Layer 2). VARAN is a protocol that operates on the master-slave principle. The VARAN BUS USER ORGANIZATION (VNO) is responsible for maintaining the protocol.
|
https://en.wikipedia.org/wiki/General-purpose%20input/output
|
A general-purpose input/output (GPIO) is an uncommitted digital signal pin on an integrated circuit or electronic circuit (e.g. MCUs/MPUs) board which may be used as an input or output, or both, and is controllable by software.
GPIOs have no predefined purpose and are unused by default. If used, the purpose and behavior of a GPIO is defined and implemented by the designer of higher assembly-level circuitry: the circuit board designer in the case of integrated circuit GPIOs, or system integrator in the case of board-level GPIOs.
Integrated circuit GPIOs
Integrated circuit (IC) GPIOs are implemented in a variety of ways. Some ICs provide GPIOs as a primary function whereas others include GPIOs as a convenient "accessory" to some other primary function. Examples of the former include the Intel 8255, which interfaces 24 GPIOs to a parallel communication bus, and various GPIO expander ICs, which interface GPIOs to serial communication buses such as I²C and SMBus. An example of the latter is the Realtek ALC260 IC, which provides eight GPIOs along with its main function of audio codec.
Microcontroller ICs usually include GPIOs. Depending on the application, a microcontroller's GPIOs may comprise its primary interface to external circuitry or they may be just one type of I/O used among several, such as analog signal I/O, counter/timer, and serial communication.
In some ICs, particularly microcontrollers, a GPIO pin may be capable of other functions than GPIO. Often in such cases it is necessary to configure the pin to operate as a GPIO (vis-à-vis its other functions) in addition to configuring the GPIO's behavior. Some microcontroller devices (e.g., Microchip dsPIC33 family) incorporate internal signal routing circuitry that allows GPIOs to be programmatically mapped to device pins. Field-programmable gate arrays (FPGA) extend this ability by allowing GPIO pin mapping, instantiation and architecture to be programmatically controlled.
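As a software-level illustration, the sketch below drives one GPIO as an output and reads another as an input using the RPi.GPIO Python library; the board, pin numbers and pull-up choice are assumptions made for the example rather than anything prescribed by the GPIO concept itself.

```python
import RPi.GPIO as GPIO  # assumes a Raspberry Pi style board with this library installed

GPIO.setmode(GPIO.BCM)                              # refer to pins by SoC GPIO number
GPIO.setup(17, GPIO.OUT)                            # configure GPIO17 as an output
GPIO.setup(27, GPIO.IN, pull_up_down=GPIO.PUD_UP)   # GPIO27 as an input with a pull-up

GPIO.output(17, GPIO.HIGH)      # drive the output pin high (e.g. light an LED)
level = GPIO.input(27)          # read the input pin (e.g. a push button)
print("input level:", level)

GPIO.cleanup()                  # release the pins when finished
```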
Board-level GPIOs
Many circuit boar
|
https://en.wikipedia.org/wiki/Tetrad%20formalism
|
The tetrad formalism is an approach to general relativity that generalizes the choice of basis for the tangent bundle from a coordinate basis to the less restrictive choice of a local basis, i.e. a locally defined set of four linearly independent vector fields called a tetrad or vierbein. It is a special case of the more general idea of a vielbein formalism, which is set in (pseudo-)Riemannian geometry. This article as currently written makes frequent mention of general relativity; however, almost everything it says is equally applicable to (pseudo-)Riemannian manifolds in general, and even to spin manifolds. Most statements hold simply by substituting arbitrary for . In German, "vier" translates to "four", and "viel" to "many".
The general idea is to write the metric tensor as the product of two vielbeins, one on the left, and one on the right. The effect of the vielbeins is to change the coordinate system used on the tangent manifold to one that is simpler or more suitable for calculations. It is frequently the case that the vielbein coordinate system is orthonormal, as that is generally the easiest to use. Most tensors become simple or even trivial in this coordinate system; thus the complexity of most expressions is revealed to be an artifact of the choice of coordinates, rather than an innate property or physical effect. That is, as a formalism, it does not alter predictions; it is rather a calculational technique.
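In the orthonormal (Lorentzian) case this reads as follows, with the usual conventions assumed here: Greek coordinate indices, Latin tetrad indices, and the flat Minkowski metric written eta:

```latex
% Metric as a product of two vierbeins contracted with the flat Minkowski metric:
g_{\mu\nu} \;=\; e_{\mu}{}^{a}\, e_{\nu}{}^{b}\, \eta_{ab},
\qquad\text{with inverse relation}\qquad
\eta_{ab} \;=\; e^{\mu}{}_{a}\, e^{\nu}{}_{b}\, g_{\mu\nu}.
```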
The advantage of the tetrad formalism over the standard coordinate-based approach to general relativity lies in the ability to choose the tetrad basis to reflect important physical aspects of the spacetime. The abstract index notation denotes tensors as if they were represented by their coefficients with respect to a fixed local tetrad. Compared to a completely coordinate free notation, which is often conceptually clearer, it allows an easy and computationally explicit way to denote contractions.
The significance of the tetradic formalism appear in the E
|
https://en.wikipedia.org/wiki/Programmable%20load
|
A programmable load is a type of test equipment or instrument which emulates DC or AC resistance loads normally required to perform functional tests of batteries, power supplies or solar cells. By virtue of being programmable, tests like load regulation, battery discharge curve measurement and transient tests can be fully automated and load changes for these tests can be made without introducing switching transients that might change the measurement or operation of the power source under test.
Implementation
Programmable loads most commonly use one transistor/FET, or an array of parallel connected transistors/FETs for more current handling, to act as a variable resistor. Internal circuitry in the equipment monitors the actual current through the transistor/FET, compares it to a user-programmed desired current, and through an error amplifier changes the drive voltage to the transistor/FET to dynamically change its resistance. This 'negative feedback' results in the actual current always matching the programmed desired current, regardless of other changes in the supplied voltage or other variables. Of course, if the power source is not able to supply the desired amount of current, the DC load equipment cannot furnish the difference; it can restrict current to a level, but it cannot boost current to a higher level. Most commercial DC loads are equipped with microprocessor front end circuits that allow the user to not only program a desired current through the load ('constant current' or CC), but the user can alternatively program the load to have a constant resistance (CR) or constant power dissipation (CP).
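The control law described above can be summarized in a few lines. The sketch below is illustrative only (function and variable names are assumptions made here): it computes the current setpoint for the three usual modes and shows the sense of the negative-feedback correction applied to the pass transistor's drive.

```python
def current_setpoint(mode: str, measured_voltage: float, setting: float) -> float:
    """Current the load should draw, given the programmed mode and setting."""
    if mode == "CC":                      # constant current: draw the programmed current
        return setting
    if mode == "CR":                      # constant resistance: I = V / R_set
        return measured_voltage / setting
    if mode == "CP":                      # constant power: I = P_set / V
        return setting / measured_voltage
    raise ValueError(f"unknown mode {mode!r}")

def control_step(gate_drive: float, measured_current: float, desired_current: float,
                 gain: float = 0.05) -> float:
    """One step of the error amplifier: raise the FET drive if the current is too low."""
    error = desired_current - measured_current
    return gate_drive + gain * error      # negative feedback toward the setpoint

# Example: a 12 V source tested in constant-resistance mode with R_set = 6 ohms.
print(current_setpoint("CR", measured_voltage=12.0, setting=6.0))   # 2.0 A
```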
Electronic test equipment
Hardware testing
Electronic engineering
|
https://en.wikipedia.org/wiki/Wavelet%20packet%20decomposition
|
Optimal subband tree structuring (SB-TS), also called wavelet packet decomposition (WPD; sometimes known as just wavelet packets or subband tree), is a wavelet transform where the discrete-time (sampled) signal is passed through more filters than the discrete wavelet transform (DWT).
Introduction
In the DWT, each level is calculated by passing only the previous wavelet approximation coefficients (cAj) through discrete-time low- and high-pass quadrature mirror filters. However, in the WPD, both the detail (cDj (in the 1-D case), cHj, cVj, cDj (in the 2-D case)) and approximation coefficients are decomposed to create the full binary tree.
For n levels of decomposition the WPD produces 2^n different sets of coefficients (or nodes) as opposed to sets for the DWT. However, due to the downsampling process the overall number of coefficients is still the same and there is no redundancy.
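For instance, using the PyWavelets package (a sketch under the assumption that it is installed; the test signal and wavelet choice are arbitrary), a three-level wavelet packet decomposition of a length-64 signal yields 2^3 = 8 terminal nodes of 8 coefficients each:

```python
import numpy as np
import pywt  # PyWavelets

x = np.sin(2 * np.pi * 5 * np.linspace(0, 1, 64))         # arbitrary test signal

wp = pywt.WaveletPacket(data=x, wavelet="haar", mode="symmetric", maxlevel=3)
leaves = wp.get_level(3, order="natural")                  # full binary tree, depth 3

print(len(leaves))                                         # 8 = 2**3 nodes
print([node.path for node in leaves])                      # ['aaa', 'aad', ..., 'ddd']
print(leaves[0].data.shape)                                # 8 coefficients per node (64 / 2**3)
```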
From the point of view of compression, the standard wavelet transform may not produce the best result, since it is limited to wavelet bases that increase by a power of two towards the low frequencies. It could be that another combination of bases produce a more desirable representation for a particular signal. There are several algorithms for subband tree structuring that find a set of optimal bases that provide the most desirable representation of the data relative to a particular cost function (entropy, energy compaction, etc.).
There were relevant studies in signal processing and communications fields to address the selection of subband trees (orthogonal basis) of various kinds, e.g. regular, dyadic, irregular, with respect to performance metrics of interest including energy compaction (entropy), subband correlations and others.
Discrete wavelet transform theory (continuous in the time variable) offers an approximation to transform discrete (sampled) signals. In contrast, the discrete-time subband transform theory enables a perfect representation of already sa
|
https://en.wikipedia.org/wiki/Taxonomic%20boundary%20paradox
|
The term boundary paradox refers to the conflict between traditional, rank-based classification of life and evolutionary thinking. In the hierarchy of ranked categories it is implicitly assumed that the morphological gap grows with increasing rank: two species from the same genus are more similar than two other species from different genera in the same family, these latter two species are more similar than any two species from different families of the same order, and so on. However, this requirement can only be satisfied for the classification of contemporary organisms; difficulties arise if we wish to classify descendants together with their ancestors. Theoretically, such a classification necessarily involves segmentation of the spatio-temporal continuum of populations into groups with crisp boundaries. However, the problem is not only that many parent populations would separate at species level from their offspring. The truly paradoxical situation is that some between-species boundaries would necessarily coincide with between-genus boundaries, and a few between-genus boundaries with borders between families, and so on. This apparent ambiguity cannot be resolved in Linnaean systems; resolution is only possible if classification is cladistic (see below).
Historical background
Jean-Baptiste Lamarck, in Philosophie zoologique (1809), was the first who questioned the objectivity of rank-based classification of life, by saying:
Half a century later, Charles Darwin explained that sharp separation of groups of organisms observed at present becomes less obvious if we go back into the past:
In his book on orchids, Darwin also warned that the system of ranks would not work if we knew more details about past life:
Finally, Richard Dawkins has argued recently that
and
with the following conclusion:
Illustrative models
The paradox may be best illustrated by model diagrams similar to Darwin’s single evolutionary tree in On the Origin of Species. In these tree grap
|
https://en.wikipedia.org/wiki/Computer%20module
|
A computer module is a selection of independent electronic circuits packaged onto a circuit board to provide a basic function within a computer. An example might be an inverter or flip-flop, which would require two or more transistors and a small number of additional supporting devices. Modules would be inserted into a chassis and then wired together to produce a larger logic unit, like an adder.
History
Modules were the basic building block of most early computer designs, until they started being replaced by integrated circuits in the 1960s, which were essentially an entire module packaged onto a single computer chip. Modules with discrete components continued to be used in specialist roles into the 1970s, notably high-speed modular designs like the CDC 8600, but advances in chip design led to the disappearance of the discrete-component module in the 1970s.
See also
Modularity
|
https://en.wikipedia.org/wiki/Proofs%20from%20THE%20BOOK
|
Proofs from THE BOOK is a book of mathematical proofs by Martin Aigner and Günter M. Ziegler. The book is dedicated to the mathematician Paul Erdős, who often referred to "The Book" in which God keeps the most elegant proof of each mathematical theorem. During a lecture in 1985, Erdős said, "You don't have to believe in God, but you should believe in The Book."
Content
Proofs from THE BOOK contains 32 sections (45 in the sixth edition), each devoted to one theorem but often containing multiple proofs and related results. It spans a broad range of mathematical fields: number theory, geometry, analysis, combinatorics and graph theory. Erdős himself made many suggestions for the book, but died before its publication. The book is illustrated by . It has gone through six editions in English, and has been translated into Persian, French, German, Hungarian, Italian, Japanese, Chinese, Polish, Portuguese, Korean, Turkish, Russian and Spanish.
In November 2017 the American Mathematical Society announced the 2018 Leroy P. Steele Prize for Mathematical Exposition to be awarded to Aigner and Ziegler for this book.
The proofs include:
Six proofs of the infinitude of the primes, including Euclid's and Furstenberg's
Proof of Bertrand's postulate
Fermat's theorem on sums of two squares
Two proofs of the Law of quadratic reciprocity
Proof of Wedderburn's little theorem asserting that every finite division ring is a field
Four proofs of the Basel problem
Proof that e is irrational (also showing the irrationality of certain related numbers)
Hilbert's third problem
Sylvester–Gallai theorem and De Bruijn–Erdős theorem
Cauchy's theorem
Borsuk's conjecture
Schröder–Bernstein theorem
Wetzel's problem on families of analytic functions with few distinct values
The fundamental theorem of algebra
Monsky's theorem (4th edition)
Van der Waerden's conjecture
Littlewood–Offord lemma
Buffon's needle problem
Sperner's theorem, Erdős–Ko–Rado theorem and Hall's theorem
Lindström
|
https://en.wikipedia.org/wiki/Readout%20integrated%20circuit
|
A Readout integrated circuit (ROIC) is an integrated circuit (IC) specifically used for reading detectors of a particular type. They are compatible with different types of detectors such as infrared and ultraviolet. The primary purpose for ROICs is to accumulate the photocurrent from each pixel and then transfer the resultant signal onto output taps for readout. Conventional ROIC technology stores the signal charge at each pixel and then routes the signal onto output taps for readout. This requires storing large signal charge at each pixel site and maintaining signal-to-noise ratio (or dynamic range) as the signal is read out and digitized.
A ROIC has high-speed analog outputs to transmit pixel data outside of the integrated circuit. If digital outputs are implemented, the IC is referred to as a Digital Readout Integrated Circuit (DROIC).
A Digital readout integrated circuit (DROIC) is a class of ROIC that uses on-chip analog-to-digital conversion (ADC) to digitize the accumulated photocurrent in each pixel of the imaging array. DROICs are easier to integrate into a system compared to ROICs as the package size and complexity are reduced, they are less sensitive to noise and have higher bandwidth compared to analog outputs.
A Digital pixel readout integrated circuit (DPROIC) is a ROIC that uses on-chip analog-to-digital conversion (ADC) within each pixel (or small group of pixels) to digitize the accumulated photocurrent within the imaging array. DPROICs have an even higher bandwidth than DROICs and can significantly increase the well capacity and dynamic range of the device.
|
https://en.wikipedia.org/wiki/Rod%20calculus
|
Rod calculus or rod calculation was the mechanical method of algorithmic computation with counting rods in China from the Warring States to Ming dynasty before the counting rods were increasingly replaced by the more convenient and faster abacus. Rod calculus played a key role in the development of Chinese mathematics to its height in Song Dynasty and Yuan Dynasty, culminating in the invention
of polynomial equations of up to four unknowns in the work of Zhu Shijie.
Hardware
The basic equipment for carrying out rod calculus is a bundle of counting rods and a counting board. The counting rods are usually made of bamboo sticks, about 12 cm to 15 cm in length and 2 mm to 4 mm in diameter, and sometimes of animal bone, ivory or jade (for well-heeled merchants). A counting board could be a table top, a wooden board with or without a grid, the floor, or sand.
In 1971 Chinese archaeologists unearthed a bundle of well-preserved animal bone counting rods stored in a silk pouch from a tomb in Qian Yang county in Shanxi province, dated back to the first half of Han dynasty (206 BC – 8AD). In 1975 a bundle of bamboo counting rods was unearthed.
The use of counting rods for rod calculus flourished in the Warring States, although no archaeological artefacts were found earlier than the Western Han Dynasty (the first half of Han dynasty; however, archaeologists did unearth software artefacts of rod calculus dating back to the Warring States); since the rod calculus software must have gone along with rod calculus hardware, there is no doubt that rod calculus was already flourishing during the Warring States more than 2,200 years ago.
Software
The key software required for rod calculus was a simple 45-phrase positional decimal multiplication table used in China since antiquity, called the nine-nine table, which was learned by heart by pupils, merchants, government officials and mathematicians alike.
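The table itself is small enough to generate directly; the sketch below simply enumerates the 45 products (each unordered pair counted once) that make up such a table.

```python
# Enumerate the 45 entries of a nine-nine style multiplication table:
# each unordered pair (i, j) with 1 <= j <= i <= 9 appears exactly once.
entries = [(i, j, i * j) for i in range(1, 10) for j in range(1, i + 1)]
print(len(entries))        # 45
for i, j, p in entries[:5]:
    print(f"{i} x {j} = {p}")
```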
Rod numerals
Displaying numbers
Rod numerals is the only numeric system that uses
|
https://en.wikipedia.org/wiki/Algebraic%20signal%20processing
|
Algebraic signal processing (ASP) is an emerging area of theoretical signal processing (SP). In the algebraic theory of signal processing, a set of filters is treated as an (abstract) algebra, a set of signals is treated as a module or vector space, and convolution is treated as an algebra representation. The advantage of algebraic signal processing is its generality and portability.
History
In the original formulation of algebraic signal processing by Püschel and Moura, the signals are collected in an -module for some algebra of filters, and filtering is given by the action of on the -module.
Definitions
Let be a field, for instance the complex numbers, and be a -algebra (i.e. a vector space over with a binary operation that is linear in both arguments) treated as a set of filters. Suppose is a vector space representing a set of signals. A representation of consists of an algebra homomorphism where is the algebra of linear transformations with composition (equivalent, in the finite-dimensional case, to matrix multiplication). For convenience, we write for the endomorphism . To be an algebra homomorphism, must not only be a linear transformation, but must also satisfy the homomorphism property. Given a signal , convolution of the signal by a filter yields a new signal . Some additional terminology is needed from the representation theory of algebras. A subset is said to generate the algebra if every element of can be represented as polynomials in the elements of . The image of a generator is called a shift operator. In practically all examples, convolutions are formed as polynomials in generated by shift operators. However, this is not necessarily the case for a representation of an arbitrary algebra.
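A minimal concrete instance (constructed here for illustration rather than drawn from the article): take the algebra of polynomials modulo x^n - 1 acting on signals in C^n, with the single generator represented by the cyclic shift matrix. A filter is then a polynomial in that shift operator, and its action on a signal reproduces circular convolution.

```python
import numpy as np

n = 8
S = np.roll(np.eye(n), 1, axis=0)          # cyclic shift operator: (S x)[k] = x[k-1 mod n]

def rho(h):
    """Representation of the filter h: a polynomial in the shift operator S."""
    return sum(c * np.linalg.matrix_power(S, k) for k, c in enumerate(h))

h = np.array([1.0, 0.5, 0.25])             # an arbitrary filter
x = np.arange(n, dtype=float)              # an arbitrary signal

y_algebra = rho(h) @ x                     # filtering as the algebra acting on the module
y_circular = np.real(np.fft.ifft(np.fft.fft(np.r_[h, np.zeros(n - len(h))]) * np.fft.fft(x)))

print(np.allclose(y_algebra, y_circular))  # True: the action is circular convolution
```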
Examples
Discrete Signal Processing
In discrete signal processing (DSP), the signal space is the set of complex-valued functions with bounded energy (i.e. square-integrable functions). This means the infinite series where is the modulus of a complex number. T
|
https://en.wikipedia.org/wiki/Super%20Bloch%20oscillations
|
In physics, a Super Bloch oscillation describes a certain type of motion of a particle in a lattice potential under external periodic driving. The term super refers to the fact that the amplitude in position space of such an oscillation is several orders of magnitude larger than for 'normal' Bloch oscillations.
Bloch oscillations vs. Super Bloch oscillations
Normal Bloch oscillations and Super Bloch oscillations are closely connected. In general, Bloch oscillations are a consequence of the periodic structure of the lattice potential and the existence of a maximum value of the Bloch wave vector . A constant force results in the acceleration of the particle until the edge of the first Brillouin zone is reached. The following sudden change in velocity from to can be interpreted as a Bragg scattering of the particle by the lattice potential. As a result, the velocity of the particle never exceeds but oscillates in a saw-tooth-like manner with a corresponding periodic oscillation in position space. Surprisingly, despite the constant acceleration, the particle does not translate, but just moves over very few lattice sites.
Super Bloch oscillations arise when an additional periodic driving force is added to , resulting in:
The details of the motion depend on the ratio between the driving frequency and the Bloch frequency . A small detuning results in a beat between the Bloch cycle and the drive, with a drastic change of the particle motion. On top of the Bloch oscillation, the motion shows a much larger oscillation in position space that extends over hundreds of lattice sites. Those Super Bloch oscillations directly correspond to the motion of normal Bloch oscillations, just rescaled in space and time.
A quantum mechanical description of the rescaling and experimental realizations can be found in the references. A theoretical analysis of the properties of super Bloch oscillations, including their dependence on the phase of the driving field, is also available in the literature.
|
https://en.wikipedia.org/wiki/Gravity-assisted%20microdissection
|
Gravity-assisted microdissection (GAM) is one of the laser microdissection methods. The dissected material is allowed to fall by gravity into a cap and may thereafter be used for isolating proteins or genetic material. Two manufacturers in the world have developed their own device based on GAM method.
Microdissection procedure
In the case of the ION LMD system, the sample is first prepared and stained, and the tissue is transferred onto a window slide. The slide is mounted inverted. A motorized stage moves to the pre-selected drawing line and the laser beam cuts out the cells of interest by laser ablation. The selected cells then fall by gravity into a tube cap positioned under the slide.
Application
Dissected material, such as single cells or cell populations of interest, is used in further research areas including:
Molecular pathology
Cell biology
Genomics
Cancer research
Pharmaceutical research
Veterinary medicine
Forensic analysis
Reproductive medicine
|
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20medal%20count%20at%20International%20Mathematical%20Olympiad
|
The following is the complete list of countries by medal count at the International Mathematical Olympiad:
Notes
A. This team is now defunct.
|
https://en.wikipedia.org/wiki/List%20of%20genetic%20algorithm%20applications
|
This is a list of genetic algorithm (GA) applications.
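Most of the applications listed below instantiate the same basic loop of selection, crossover and mutation; the following minimal sketch (toy fitness function, population size, and rates chosen arbitrarily for illustration) shows that loop in a self-contained form.

```python
import random

random.seed(1)
GENES, POP, GENERATIONS = 20, 30, 40

def fitness(bits):                      # toy objective: maximize the number of 1-bits
    return sum(bits)

def tournament(pop):                    # pick the better of two random individuals
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

population = [[random.randint(0, 1) for _ in range(GENES)] for _ in range(POP)]
for _ in range(GENERATIONS):
    next_gen = []
    while len(next_gen) < POP:
        p1, p2 = tournament(population), tournament(population)
        cut = random.randrange(1, GENES)                        # one-point crossover
        child = p1[:cut] + p2[cut:]
        child = [g ^ (random.random() < 0.02) for g in child]   # bit-flip mutation
        next_gen.append(child)
    population = next_gen

print(max(fitness(ind) for ind in population))   # approaches 20, the optimum
```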
Natural Sciences, Mathematics and Computer Science
Bayesian inference links to particle methods in Bayesian statistics and hidden Markov chain models
Artificial creativity
Chemical kinetics (gas and solid phases)
Calculation of bound states and local-density approximations
Code-breaking, using the GA to search large solution spaces of ciphers for the one correct decryption.
Computer architecture: using GA to find out weak links in approximate computing such as lookahead.
Configuration applications, particularly physics applications of optimal molecule configurations for particular systems like C60 (buckyballs)
Construction of facial composites of suspects by eyewitnesses in forensic science.
Data Center/Server Farm.
Distributed computer network topologies
Electronic circuit design, known as evolvable hardware
Feature selection for Machine Learning
Feynman-Kac models
File allocation for a distributed system
Filtering and signal processing
Finding hardware bugs.
Game theory equilibrium resolution
Genetic Algorithm for Rule Set Production
Scheduling applications, including job-shop scheduling and scheduling in printed circuit board assembly. The objective being to schedule jobs in a sequence-dependent or non-sequence-dependent setup environment in order to maximize the volume of production while minimizing penalties such as tardiness. Satellite communication scheduling for the NASA Deep Space Network was shown to benefit from genetic algorithms.
Learning robot behavior using genetic algorithms
Image processing: Dense pixel matching
Learning fuzzy rule base using genetic algorithms
Molecular structure optimization (chemistry)
Optimisation of data compression systems, for example using wavelets.
Power electronics design.
Traveling salesman problem and its applications
Earth Sciences
Climatology: Estimation of heat flux between the atmosphere and sea ice
Climatology: Modelling global te
|
https://en.wikipedia.org/wiki/Heterotrophic%20nutrition
|
Heterotrophic nutrition is a mode of nutrition in which organisms depend upon other organisms for food to survive. They cannot make their own food as green plants do. Heterotrophic organisms have to take in all the organic substances they need to survive.
All animals, certain types of fungi, and non-photosynthesizing plants are heterotrophic. In contrast, green plants, red algae, brown algae, and cyanobacteria are all autotrophs, which use photosynthesis to produce their own food from sunlight. Some fungi may be saprotrophic, meaning they will extracellularly secrete enzymes onto their food to be broken down into smaller, soluble molecules which can diffuse back into the fungus.
Description
All eukaryotes except for green plants and algae are unable to manufacture their own food: They obtain food from other organisms. This mode of nutrition is also known as heterotrophic nutrition.
All heterotrophs (except blood and gut parasites) have to convert solid food into soluble compounds which are capable of being absorbed (digestion). The soluble products of digestion are then broken down by the organism to release energy (respiration). All heterotrophs depend on autotrophs for their nutrition. Heterotrophic organisms exhibit four main types of nutrition.
Footnotes
|
https://en.wikipedia.org/wiki/Chemotroph
|
A chemotroph is an organism that obtains energy by the oxidation of electron donors in their environments. These molecules can be organic (chemoorganotrophs) or inorganic (chemolithotrophs). The chemotroph designation is in contrast to phototrophs, which use photons. Chemotrophs can be either autotrophic or heterotrophic. Chemotrophs can be found in areas where electron donors are present in high concentration, for instance around hydrothermal vents.
Chemoautotroph
Chemoautotrophs, in addition to deriving energy from chemical reactions, synthesize all necessary organic compounds from carbon dioxide. Chemoautotrophs can use inorganic energy sources such as hydrogen sulfide, elemental sulfur, ferrous iron, molecular hydrogen, and ammonia or organic sources to produce energy. Most chemoautotrophs are extremophiles, bacteria or archaea that live in hostile environments (such as deep sea vents) and are the primary producers in such ecosystems. Chemoautotrophs generally fall into several groups: methanogens, sulfur oxidizers and reducers, nitrifiers, anammox bacteria, and thermoacidophiles. An example of one of these prokaryotes would be Sulfolobus. Chemolithotrophic growth can be dramatically fast, such as Hydrogenovibrio crunogenus with a doubling time around one hour.
The term "chemosynthesis", coined in 1897 by Wilhelm Pfeffer, originally was defined as the energy production by oxidation of inorganic substances in association with autotrophy—what would be named today as chemolithoautotrophy. Later, the term would include also the chemoorganoautotrophy, that is, it can be seen as a synonym of chemoautotrophy.
Chemoheterotroph
Chemoheterotrophs (or chemotrophic heterotrophs) are unable to fix carbon to form their own organic compounds. Chemoheterotrophs can be chemolithoheterotrophs, utilizing inorganic electron sources such as sulfur, or, much more commonly, chemoorganoheterotrophs, utilizing organic electron sources such as carbohydrates, lipids, and proteins.
|
https://en.wikipedia.org/wiki/Network%20information%20system
|
A network information system (NIS) is an information system for managing networks, such as an electricity network, water supply network, gas supply network, telecommunications network, or street light network.
An NIS may manage all data relevant to the network, e.g. all components and their attributes, the connectivity between them, and other information relating to the operation, design and construction of such networks.
An NIS for electricity may manage any, some or all voltage levels: extra-high, high, medium and low voltage. It may support only the distribution network or also the transmission network.
Telecom NIS typically consists of the physical network inventory and logical network inventory. Physical network inventory is used to manage outside plant components, such as cables, splices, ducts, trenches, nodes and inside plant components such as active and passive devices. The most differentiating factor of telecom NIS from traditional GIS is the capability of recording thread level connectivity. The logical network inventory is used to manage the logical connections and circuits utilizing the logical connections. Traditionally, the logical network inventory has been a separate product but in most modern systems the functionality is built in the GIS serving both the functionality of the physical network and logical network.
A water network information system typically manages the water network components, such as ducts, branches, valves, hydrants, reservoirs and pumping stations. Some systems also include the water consumers as well as water meters and their readings in the NIS. Sewage and stormwater components are typically included in the NIS. By adding sensors as well as analysis and calculations based on the measured values, the concept of a smart water system is included in the NIS. By adding actuators into the network, the concept of SCADA can be included in the NIS.
NIS may be built on top of a GIS (Geographical information system).
Private Cloud based NIS
|
https://en.wikipedia.org/wiki/List%20of%20wavelet-related%20transforms
|
A list of wavelet related transforms:
Continuous wavelet transform (CWT)
Discrete wavelet transform (DWT)
Multiresolution analysis (MRA)
Lifting scheme
Binomial QMF (BQMF)
Fast wavelet transform (FWT)
Complex wavelet transform
Non or undecimated wavelet transform, the downsampling is omitted
Newland transform, an orthonormal basis of wavelets is formed from appropriately constructed top-hat filters in frequency space
Wavelet packet decomposition (WPD), detail coefficients are decomposed and a variable tree can be formed
Stationary wavelet transform (SWT), no downsampling and the filters at each level are different
e-decimated discrete wavelet transform, depends on if the even or odd coefficients are selected in the downsampling
Second generation wavelet transform (SGWT), filters and wavelets are not created in the frequency domain
Dual-tree complex wavelet transform (DTCWT), two trees are used for decomposition to produce the real and complex coefficients
WITS: Where Is The Starlet, a collection of about a hundred wavelet names in -let and associated multiscale, directional, geometric representations, from activelets to x-lets through bandelets, chirplets, contourlets, curvelets, noiselets, wedgelets ...
Transforms
Wavelet-related transforms
|
https://en.wikipedia.org/wiki/System%20on%20module
|
A system on a module (SoM) is a board-level circuit that integrates a system function in a single module. It may integrate digital and analog functions on a single board. A typical application is in the area of embedded systems. Unlike a single-board computer, a SoM serves a special function like a system on a chip (SoC). The devices integrated in the SoM typically require a high level of interconnection for reasons such as speed, timing, bus width, etc. There are benefits in building a SoM, as for a SoC; one notable result is to reduce the cost of the base board or the main PCB. Two other major advantages of SoMs are design-reuse and that they can be integrated into many embedded computer applications.
History
The acronym SoM has its roots in the blade-based modules. In the mid 1980s, when VMEbus blades used M-Modules, these were commonly referred to as system On a module (SoM). These SoMs performed specific functions such as compute functions and data acquisition functions. SoMs were used extensively by Sun Microsystems, Motorola, Xerox, DEC, and IBM in their blade computers.
Design
A typical SoM consists of:
at least one microcontroller, microprocessor or digital signal processor (DSP) core
multiprocessor systems-on-chip (MPSoCs) have more than one processor core
memory blocks including a selection of ROM, RAM, EEPROM and/or flash memory
timing sources
industry standard communication interfaces such as USB, FireWire, Ethernet, USART, SPI, I²C
peripherals including counter-timers, real-time timers and power-on reset generators
analog interfaces including analog-to-digital converters and digital-to-analog converters
voltage regulators and power management circuits
See also
|
https://en.wikipedia.org/wiki/Simple%20programmable%20logic%20device
|
A simple programmable logic device (SPLD) is a programmable logic device with complexity below that of a complex programmable logic device (CPLD).
The term commonly refers to devices such as ROMs, PALs, PLAs and GALs.
Basic description
Simple programmable logic devices (SPLD) are the simplest, smallest and least-expensive forms of programmable logic devices. SPLDs can be used in boards to replace standard logic components (AND, OR, and NOT gates), such as 7400-series TTL.
They typically comprise 4 to 22 fully connected macrocells. These macrocells typically consist of some combinatorial logic (such as AND and OR gates) and a flip-flop. In other words, a small Boolean logic equation can be built within each macrocell. This equation will combine the state of some number of binary inputs into a binary output and, if necessary, store that output in the flip-flop until the next clock edge. Of course, the particulars of the available logic gates and flip-flops are specific to each manufacturer and product family. But the general idea is always the same.
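In software terms, each macrocell behaves roughly like the sketch below (an illustration of the idea, not of any vendor's architecture): a small sum-of-products Boolean equation whose result is optionally stored in a flip-flop on the clock edge.

```python
class Macrocell:
    """Toy model of one SPLD macrocell: AND/OR logic plus an optional flip-flop."""
    def __init__(self, product_terms, registered=True):
        self.product_terms = product_terms   # tuples of input indices to AND together
        self.registered = registered
        self.q = 0                           # flip-flop state

    def combinational(self, inputs):
        # Sum of products: OR together the AND of each listed group of inputs.
        return int(any(all(inputs[i] for i in term) for term in self.product_terms))

    def clock_edge(self, inputs):
        out = self.combinational(inputs)
        if self.registered:
            self.q = out                     # store on the (rising) clock edge
            return self.q
        return out

# Example equation: OUT = (IN0 AND IN1) OR (IN2 AND IN3)
cell = Macrocell(product_terms=[(0, 1), (2, 3)])
print(cell.clock_edge([1, 1, 0, 0]))   # 1
print(cell.clock_edge([0, 0, 0, 1]))   # 0
```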
Most SPLDs use either fuses or non-volatile memory cells (EPROM, EEPROM, Flash, and others) to define the functionality.
These devices are also known as:
Programmable array logic (PAL)
Generic array logic (GAL)
Programmable logic arrays (PLA)
Field-programmable logic arrays (FPLA)
Programmable logic devices (PLD)
Advantages
PLDs are often used for address decoding, where they have several clear advantages over the 7400-series TTL parts that they replaced:
One chip requires less board area, power, and wiring than several do.
The design inside the chip is flexible, so a change in the logic does not require any rewiring of the board. Rather, simply replacing one PLD with another part that has been programmed with the new design can alter the decoding logic.
|
https://en.wikipedia.org/wiki/Oophagy
|
Oophagy ( ) sometimes ovophagy, literally "egg eating", is the practice of
embryos feeding on eggs produced by the ovary while still inside the mother's uterus. The word oophagy is formed from the classical Greek (, "egg") and classical Greek (, "to eat"). In contrast, adelphophagy is the cannibalism of a multi-celled embryo.
Oophagy is thought to occur in all sharks in the order Lamniformes and has been recorded in the bigeye thresher (Alopias superciliosus), the pelagic thresher (A. pelagicus), the shortfin mako (Isurus oxyrinchus) and the porbeagle (Lamna nasus) among others. It also occurs in the tawny nurse shark (Nebrius ferrugineus), and in the family Pseudotriakidae.
This practice may lead to larger embryos or prepare the embryo for a predatory lifestyle.
There are variations in the extent of oophagy among the different shark species. The grey nurse shark (Carcharias taurus) practices intrauterine cannibalism, the first developed embryo consuming both additional eggs and any other developing embryos. Slender smooth-hounds (Gollum attenuatus), form egg capsules which contain 30-80 ova, within which only one ovum develops; the remaining ova are ingested and their yolks stored in its external yolk sac. The embryo then proceeds to develop normally, without ingesting further eggs.
Oophagy is also used as a synonym of egg predation practised by some snakes and other animals. Similarly, the term can be used to describe the destruction of non-queen eggs in nests of certain social wasps, bees, and ants. This is seen in the wasp species Polistes biglumis and Polistes humilis. Oophagy has been observed in Leptothorax acervorum and Parachartergus fraternus, where oophagy is practiced to increase energy circulation and provide more dietary protein. Polistes fuscatus use oophagy as a method to establish a dominance hierarchy; dominant females will eat the eggs of subordinate females such that they no longer produce eggs, possibly due to the unnecessary expenditure
|
https://en.wikipedia.org/wiki/Chassis%20management%20controller
|
A chassis management controller (CMC) is an embedded system management hardware and software solution to manage multiple servers, networking, and storage.
A CMC can provide a secure browser-based interface that enables an IT system administrator to take inventory, perform configuration and monitoring tasks, remotely power on/off blade servers, and enable alerts for events on servers or components in the blade chassis. It has its own microprocessor and memory and is powered by the modular chassis it is plugged into. The inventory of hardware components is built-in and a CMC has a dedicated internal network. The blade enclosure, which can hold multiple blade servers, provides power, cooling, various interconnects, and additional systems management capabilities. Unlike a tower or rack server, a blade server cannot run by itself; it requires a compatible blade enclosure.
|
https://en.wikipedia.org/wiki/Proofs%20That%20Really%20Count
|
Proofs That Really Count: the Art of Combinatorial Proof is an undergraduate-level mathematics book on combinatorial proofs of mathematical identities. That is, it concerns equations between two integer-valued formulas, shown to be equal either by showing that both sides of the equation count the same type of mathematical objects, or by finding a one-to-one correspondence between the different types of object that they count. It was written by Arthur T. Benjamin and Jennifer Quinn, and published in 2003 by the Mathematical Association of America as volume 27 of their Dolciani Mathematical Expositions series. It won the Beckenbach Book Prize of the Mathematical Association of America.
Topics
The book provides combinatorial proofs of thirteen theorems in combinatorics and 246 numbered identities (collated in an appendix). Several additional "uncounted identities" are also included. Many proofs are based on a visual-reasoning method that the authors call "tiling", and in a foreword, the authors describe their work as providing a follow-up for counting problems of the Proofs Without Words books by Roger B. Nelsen.
The first three chapters of the book start with integer sequences defined by linear recurrence relations, the prototypical example of which is the sequence of Fibonacci numbers. These numbers can be given a combinatorial interpretation as the number of ways of tiling a strip of squares with tiles of two types, single squares and dominos; this interpretation can be used to prove many of the fundamental identities involving the Fibonacci numbers, and generalized to similar relations about other sequences defined similarly, such as the Lucas numbers, using "circular tilings and colored tilings". For instance, for the Fibonacci numbers, considering whether a tiling does or does not connect positions and of a strip of length immediately leads to the identity
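Stated in the tiling convention assumed here (with f_n denoting the number of square-and-domino tilings of a strip of length n, so that f_n = F_{n+1} in the usual Fibonacci indexing), splitting the tilings of a strip of length m+n according to whether or not a domino straddles the cell boundary after position m gives:

```latex
f_{m+n} \;=\; f_{m}\,f_{n} \;+\; f_{m-1}\,f_{n-1}
% Tilings breakable after cell m contribute f_m f_n;
% tilings with a domino covering cells m and m+1 contribute f_{m-1} f_{n-1}.
```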
Chapters four through seven of the book concern identities involving continued fractions, binomial coef
|
https://en.wikipedia.org/wiki/Frequency%20response
|
In signal processing and electronics, the frequency response of a system is the quantitative measure of the magnitude and phase of the output as a function of input frequency. The frequency response is widely used in the design and analysis of systems, such as audio and control systems, where they simplify mathematical analysis by converting governing differential equations into algebraic equations. In an audio system, it may be used to minimize audible distortion by designing components (such as microphones, amplifiers and loudspeakers) so that the overall response is as flat (uniform) as possible across the system's bandwidth. In control systems, such as a vehicle's cruise control, it may be used to assess system stability, often through the use of Bode plots. Systems with a specific frequency response can be designed using analog and digital filters.
The frequency response characterizes systems in the frequency domain, just as the impulse response characterizes systems in the time domain. In linear systems (or as an approximation to a real system neglecting second order non-linear properties), either response completely describes the system and thus have one-to-one correspondence: the frequency response is the Fourier transform of the impulse response. The frequency response allows simpler analysis of cascaded systems such as multistage amplifiers, as the response of the overall system can be found through multiplication of the individual stages' frequency responses (as opposed to convolution of the impulse response in the time domain). The frequency response is closely related to the transfer function in linear systems, which is the Laplace transform of the impulse response. They are equivalent when the real part of the transfer function's complex variable is zero.
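As an illustration of such a frequency-domain computation (a sketch assuming SciPy is available; the filter coefficients are arbitrary), the magnitude and phase response of a simple digital filter can be evaluated directly from its transfer-function coefficients:

```python
import numpy as np
from scipy import signal

# Arbitrary first-order low-pass digital filter H(z) = 0.2 / (1 - 0.8 z^-1).
b, a = [0.2], [1.0, -0.8]

w, h = signal.freqz(b, a, worN=512)          # frequency response on [0, pi) rad/sample
magnitude_db = 20 * np.log10(np.abs(h))
phase_deg = np.degrees(np.angle(h))

print(magnitude_db[0])                        # ~0 dB at DC, since 0.2 / (1 - 0.8) = 1
print(phase_deg[0])                           # ~0 degrees at DC
```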
Measurement and plotting
Measuring the frequency response typically involves exciting the system with an input signal and measuring the resulting output signal, calculating the frequency spectra of the two signals, and comparing the spectra to isolate the effect of the system.
|
https://en.wikipedia.org/wiki/Human%20nutrition
|
Human nutrition deals with the provision of essential nutrients in food that are necessary to support human life and good health. Poor nutrition is a chronic problem often linked to poverty, food insecurity, or a poor understanding of nutritional requirements. Malnutrition and its consequences are large contributors to deaths, physical deformities, and disabilities worldwide. Good nutrition is necessary for children to grow physically and mentally, and for normal human biological development.
Overview
The human body contains chemical compounds such as water, carbohydrates, amino acids (found in proteins), fatty acids (found in lipids), and nucleic acids (DNA and RNA). These compounds are composed of elements such as carbon, hydrogen, oxygen, nitrogen, and phosphorus. Any study done to determine nutritional status must take into account the state of the body before and after experiments, as well as the chemical composition of the whole diet and of all the materials excreted and eliminated from the body (including urine and feces).
Nutrients
The seven major classes of nutrients are carbohydrates, fats, fiber, minerals, proteins, vitamins, and water. Nutrients can be grouped as either macronutrients (needed in relatively large quantities) or micronutrients (needed in small quantities). Carbohydrates, fats, and proteins are macronutrients, and provide energy. Water and fiber are macronutrients but do not provide energy. The micronutrients are minerals and vitamins.
The macronutrients (excluding fiber and water) provide structural material (amino acids from which proteins are built, and lipids from which cell membranes and some signaling molecules are built) and energy. Some of the structural material can also be used to generate energy internally, and in either case it is measured in joules or kilocalories (often called "Calories" and written with a capital 'C' to distinguish them from little 'c' calories). Carbohydrates and proteins each provide approximately 17 kJ (4 kcal) of energy per gram, while fats provide approximately 37 kJ (9 kcal) per gram.
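A minimal sketch of this arithmetic in Python, assuming the approximate energy densities stated above; the figure of about 37 kJ (9 kcal) per gram for fat and the meal composition are illustrative assumptions rather than values taken from this text.

```python
# Approximate energy densities in kJ per gram (carbohydrate/protein from the text above;
# the value for fat is a commonly quoted approximation and an assumption here).
KJ_PER_GRAM = {"carbohydrate": 17.0, "protein": 17.0, "fat": 37.0}
KCAL_PER_KJ = 1 / 4.184  # 1 kcal is approximately 4.184 kJ

def energy_kj(grams: dict[str, float]) -> float:
    """Total energy in kilojoules for the given grams of each macronutrient."""
    return sum(KJ_PER_GRAM[name] * g for name, g in grams.items())

meal = {"carbohydrate": 60.0, "protein": 25.0, "fat": 10.0}  # hypothetical meal, in grams
kj = energy_kj(meal)
print(f"{kj:.0f} kJ  (~{kj * KCAL_PER_KJ:.0f} kcal)")
```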
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20quantum%20theory
|
This is a list of mathematical topics in quantum theory, by Wikipedia page. See also list of functional analysis topics, list of Lie group topics, list of quantum-mechanical systems with analytical solutions.
Mathematical formulation of quantum mechanics
bra–ket notation
canonical commutation relation
complete set of commuting observables
Heisenberg picture
Hilbert space
Interaction picture
Measurement in quantum mechanics
quantum field theory
quantum logic
quantum operation
Schrödinger picture
semiclassical
statistical ensemble
wavefunction
wave–particle duality
Wightman axioms
WKB approximation
Schrödinger equation
quantum mechanics, matrix mechanics, Hamiltonian (quantum mechanics)
particle in a box
particle in a ring
particle in a spherically symmetric potential
quantum harmonic oscillator
hydrogen atom
ring wave guide
particle in a one-dimensional lattice (periodic potential)
Fock symmetry in theory of hydrogen
Symmetry
identical particles
angular momentum
angular momentum operator
rotational invariance
rotational symmetry
rotation operator
translational symmetry
Lorentz symmetry
Parity transformation
Noether's theorem
Noether charge
Spin (physics)
isospin
Pauli matrices
scale invariance
spontaneous symmetry breaking
supersymmetry breaking
Quantum states
quantum number
Pauli exclusion principle
quantum indeterminacy
uncertainty principle
wavefunction collapse
zero-point energy
bound state
coherent state
squeezed coherent state
density state
Fock state, Fock space
vacuum state
quasinormal mode
no-cloning theorem
quantum entanglement
Dirac equation
spinor, spinor group, spinor bundle
Dirac sea
Spin foam
Poincaré group
gamma matrices
Dirac adjoint
Wigner's classification
anyon
Interpretations of quantum mechanics
Copenhagen interpretation
locality principle
Bell's theorem
Bell test loopholes
CHSH inequality
hidden variable theory
path integral formulation, quantum action
Bohm interpretation
|
https://en.wikipedia.org/wiki/Road%20coloring%20theorem
|
In graph theory, the road coloring theorem, known previously as the road coloring conjecture, deals with synchronized instructions. The issue involves whether, by using such instructions, one can reach or locate an object or destination from any other point within a network (which might be a representation of city streets or a maze). In the real world, this phenomenon would be as if you called a friend to ask for directions to his house, and he gave you a set of directions that worked no matter where you started from. This theorem also has implications in symbolic dynamics.
The theorem was first conjectured by Roy Adler and Benjamin Weiss. It was proved by Avraham Trahtman.
Example and intuition
The image to the right shows a directed graph on eight vertices in which each vertex has out-degree 2. (Each vertex in this case also has in-degree 2, but that is not necessary for a synchronizing coloring to exist.) The edges of this graph have been colored red and blue to create a synchronizing coloring.
For example, consider the vertex marked in yellow. No matter where in the graph you start, if you traverse all nine edges in the walk "blue-red-red—blue-red-red—blue-red-red", you will end up at the yellow vertex. Similarly, if you traverse all nine edges in the walk "blue-blue-red—blue-blue-red—blue-blue-red", you will always end up at the vertex marked in green, no matter where you started.
The road coloring theorem states that for a certain category of directed graphs, it is always possible to create such a coloring.
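As an illustrative sketch (the eight-vertex graph from the figure is not reproduced in this text), the following Python code checks whether a given word of colors is synchronizing for a colored digraph with constant out-degree 2; the particular four-vertex graph and the word used are assumptions chosen only to demonstrate the idea.

```python
# Sketch: a word over the edge colors is synchronizing if following it from every
# vertex of the colored digraph ends at one common vertex.

def walk(start: int, word: str, edges: dict[tuple[int, str], int]) -> int:
    """Follow the colored edges spelled out by `word`, starting from `start`."""
    v = start
    for color in word:
        v = edges[(v, color)]
    return v

def is_synchronizing(word: str, edges: dict[tuple[int, str], int], vertices: range) -> bool:
    """True if the word drives every starting vertex to the same end vertex."""
    ends = {walk(v, word, edges) for v in vertices}
    return len(ends) == 1

# Hypothetical coloring: 'b' (blue) cycles 0 -> 1 -> 2 -> 3 -> 0, while 'r' (red)
# sends vertex 0 to 1 and fixes every other vertex. Every vertex has out-degree 2.
edges = {}
for v in range(4):
    edges[(v, "b")] = (v + 1) % 4
    edges[(v, "r")] = 1 if v == 0 else v

print(is_synchronizing("rbbbrbbbr", edges, range(4)))  # True: all walks end at vertex 1
```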
Mathematical description
Let G be a finite, strongly connected, directed graph where all the vertices have the same out-degree k. Let A be the alphabet containing the letters 1, ..., k. A synchronizing coloring (also known as a collapsible coloring) in G is a labeling of the edges in G with letters from A such that (1) each vertex has exactly one outgoing edge with a given label and (2) for every vertex v in the graph, there exists a word w over A such that all paths in G that follow the labels of w terminate at v, regardless of their starting vertex.
|
https://en.wikipedia.org/wiki/Integration%20appliance
|
An integration appliance is a computer system specifically designed to lower the cost of integrating computer systems. Most integration appliances send or receive electronic messages from other computers that are exchanging electronic documents. Integration appliances that support XML messaging standards such as SOAP and Web services are frequently referred to as XML appliances, and they perform functions that can be grouped together as XML-enabled networking.
Vendors providing integration appliances
DataPower XI50 and IBM MQ Appliance — IBM
Intel SOA Products Division
Premier, Inc.
|