source | text
---|---
https://en.wikipedia.org/wiki/Computer%20network%20programming
|
Computer network programming involves writing computer programs that enable processes to communicate with each other across a computer network.
Connection-oriented and connectionless communications
Very generally, most communications can be divided into connection-oriented and connectionless. Whether a communication is connection-oriented or connectionless is defined by the communication protocol, and not by the application programming interface (API). Examples of connection-oriented protocols include TCP, and examples of connectionless protocols include UDP and "raw IP".
Clients and servers
For connection-oriented communications, communication parties usually have different roles. One party is usually waiting for incoming connections; this party is usually referred to as "server". Another party is the one which initiates connection; this party is usually referred to as "client".
For connectionless communications, one party ("server") is usually waiting for an incoming packet, and another party ("client") is usually understood as the one which sends an unsolicited packet to "server".
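As a concrete sketch of these roles (my own minimal example; the host, port and payload are arbitrary), the server below waits for a connection over TCP, a connection-oriented protocol, while the client initiates one:

```python
# Minimal client/server sketch: the server waits for incoming connections,
# the client initiates one. Host, port and payload are arbitrary choices.
import socket
import threading
import time

HOST, PORT = "127.0.0.1", 50007

def server():
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
        srv.bind((HOST, PORT))
        srv.listen()                      # "server": waits for incoming connections
        conn, _ = srv.accept()
        with conn:
            data = conn.recv(1024)
            conn.sendall(b"echo: " + data)

threading.Thread(target=server, daemon=True).start()
time.sleep(0.2)                           # give the server a moment to start listening

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect((HOST, PORT))             # "client": initiates the connection
    cli.sendall(b"hello")
    print(cli.recv(1024))                 # b'echo: hello'
```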
Popular protocols and APIs
Network programming traditionally covers different layers of OSI/ISO model (most of application-level programming belongs to L4 and up). The table below contains some examples of popular protocols belonging to different OSI/ISO layers, and popular APIs for them.
See also
Software-defined networking
Infrastructure as code
Site reliability engineering
DevOps
|
https://en.wikipedia.org/wiki/Thermolabile
|
Thermolabile refers to a substance which is subject to destruction, decomposition, or change in response to heat. This term is often used to
describe biochemical substances.
For example, many bacterial exotoxins are thermolabile and can be easily inactivated by the application of moderate heat.
Enzymes are also thermolabile and lose their activity when the temperature rises.
Loss of activity in such toxins and enzymes is likely due to change in the three-dimensional structure of the toxin protein during exposure to heat.
In pharmaceutical compounds, heat generated during grinding may lead to degradation of thermolabile compounds.
This is of particular use in testing gene function. This is done by intentionally creating mutants which are thermolabile. Growth below the permissive temperature allows normal protein function, while increasing the temperature above the permissive temperature ablates activity, likely by denaturing the protein.
Thermolabile enzymes are also studied for their applications in DNA replication techniques, such as PCR, where thermostable enzymes are necessary for proper DNA replication. Enzyme function at higher temperatures may be enhanced with trehalose, which opens up the possibility of using normally thermolabile enzymes in DNA replication.
See also
Thermostable
Thermolabile protecting groups
|
https://en.wikipedia.org/wiki/Chamfered%20dodecahedron
|
In geometry, the chamfered dodecahedron is a convex polyhedron with 80 vertices, 120 edges, and 42 faces: 30 hexagons and 12 pentagons. It is constructed as a chamfer (edge-truncation) of a regular dodecahedron. The pentagons are reduced in size and new hexagonal faces are added in place of all the original edges. Its dual is the pentakis icosidodecahedron.
It is also called a truncated rhombic triacontahedron, constructed as a truncation of the rhombic triacontahedron. It can more accurately be called an order-5 truncated rhombic triacontahedron because only the order-5 vertices are truncated.
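The counts above follow directly from chamfering the regular dodecahedron (V = 20, E = 30, F = 12): each original edge gains a hexagonal face and two new vertices, and the edge count quadruples. A quick consistency check (my own arithmetic, using the standard chamfer counts):

\[
V' = V + 2E = 20 + 60 = 80, \qquad
E' = 4E = 120, \qquad
F' = F + E = 12 + 30 = 42,
\]
\[
V' - E' + F' = 80 - 120 + 42 = 2,
\]

in agreement with Euler's formula.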
Structure
The 12 order-5 vertices of the rhombic triacontahedron can be truncated such that all edges are equal length. The original 30 rhombic faces become non-regular hexagons, and the truncated vertices become regular pentagons.
The hexagon faces can be equilateral but not regular, with D2 symmetry. The angles at the two vertices with vertex configuration 6.6.6 are about 116.565°, and at the remaining four vertices, with configuration 5.6.6, they are about 121.717° each.
It is the Goldberg polyhedron GV(2,0), containing 12 pentagonal and 30 hexagonal faces.
It also represents the exterior envelope of a cell-centered orthogonal projection of the 120-cell, one of six convex regular 4-polytopes.
Chemistry
This is the shape of the fullerene C80; sometimes this shape is denoted C80(Ih) to describe its icosahedral symmetry and distinguish it from other less-symmetric 80-vertex fullerenes. It is one of only four fullerenes found to have a skeleton that can be isometrically embedded into an L1 space.
Related polyhedra
This polyhedron looks very similar to the uniform truncated icosahedron which has 12 pentagons, but only 20 hexagons.
The chamfered dodecahedron creates more polyhedra by basic Conway polyhedron notation. The zip chamfered dodecahedron makes a chamfered truncated icosahedron, and Goldberg (2,2).
Chamfered truncated icosahedron
In geometry, the chamfered truncated icosahedron is a convex polyhedron with 240 vertices, 360 edges, and 122 faces, 110 hexagon
|
https://en.wikipedia.org/wiki/Eyes%20%28cheese%29
|
Eyes are the round holes that are a characteristic feature of Swiss-type cheese (e.g. Emmentaler cheese) and some Dutch-type cheeses. The eyes are bubbles of carbon dioxide gas. The gas is produced by various species of bacteria in the cheese.
Swiss cheese
In Swiss-type cheeses, the eyes form as a result of the activity of propionic acid bacteria (propionibacteria), notably Propionibacterium freudenreichii subsp. shermanii. These bacteria transform lactic acid into propionic acid and carbon dioxide, according to the formula:
3 Lactate → 2 Propionate + Acetate + CO2 + H2O
The CO2 so produced accumulates at weak points in the curd, where it forms the bubbles that become the cheese's eyes. Not all CO2 is so trapped: in an Emmental cheese, about 20 L of CO2 remain in the eyes, while 60 L remain dissolved in the cheese mass and 40 L are lost from the cheese.
Dutch cheese
In Dutch-type cheeses, the CO2 that forms the eyes results from the metabolisation of citrate by citrate-positive ("Cit+") strains of lactococci.
Bibliography
Polychroniadou, A. (2001). Eyes in cheese: a concise review. Milchwissenschaft 56, 74–77.
|
https://en.wikipedia.org/wiki/Double%20subscript%20notation
|
In engineering, double-subscript notation is notation used to indicate some variable between two points (each point being represented by one of the subscripts). In electronics, the notation is usually used to indicate the direction of current or voltage, while in mechanical engineering it is sometimes used to describe the force or stress between two points, and sometimes even a component that spans between two points (like a beam on a bridge or truss). Although there are many cases where multiple subscripts are used, they are not necessarily called double subscript notation specifically.
Electronic usage
IEEE standard 255-1963, "Letter Symbols for Semiconductor Devices", defined eleven original quantity symbols expressed as abbreviations.
This is the basis for a convention to standardize the directions of double-subscript labels. The following uses transistors as an example, but shows how the direction is read generally. The convention works like this:
V_CB represents the voltage from C to B. In this case, C would denote the collector end of a transistor, and B would denote the base end of the same transistor. This is the same as saying "the voltage drop from C to B", though this applies the standard definitions of the letters C and B. This convention is consistent with IEC 60050-121.
I_CE would in turn represent the current from C to E. In this case, C would again denote the collector end of a transistor, and E would denote the emitter end of the transistor. This is the same as saying "the current in the direction going from C to E".
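Read as a signed difference of node potentials, the notation composes in the obvious way; for example (standard usage, stated here for illustration rather than quoted from the standard):

\[
V_{CE} = V_C - V_E = V_{CB} + V_{BE}, \qquad V_{CB} = -V_{BC}.
\]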
Power supply pins on integrated circuits utilize the same letters for denoting what kind of voltage the pin would receive. For example, a power input labeled VCC would be a positive input that would presumably connect to the collector pin of a BJT transistor in the circuit, and likewise respectively with other subscripted letters. The format used is the same as for notations described above, though without the connotation of VCC meaning
|
https://en.wikipedia.org/wiki/Register%20renaming
|
In computer architecture, register renaming is a technique that abstracts logical registers from physical registers.
Every logical register has a set of physical registers associated with it.
When a machine language instruction refers to a particular logical register, the processor transposes this name to one specific physical register on the fly.
The physical registers are opaque and cannot be referenced directly but only via the canonical names.
This technique is used to eliminate false data dependencies arising from the reuse of registers by successive instructions that do not have any real data dependencies between them.
The elimination of these false data dependencies reveals more instruction-level parallelism in an instruction stream, which can be exploited by various and complementary techniques such as superscalar and out-of-order execution for better performance.
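A toy sketch of the idea (my own illustration, not any real microarchitecture): every write to a logical register is mapped to a fresh physical register, so reuse of the name r1 by the third instruction below creates no false dependency on the first two.

```python
# Toy register renamer: each write gets a fresh physical register, removing
# write-after-write / write-after-read hazards caused by register reuse.
from itertools import count

program = [
    ("r1", ["r2", "r3"]),   # r1 <- r2 + r3
    ("r4", ["r1"]),         # r4 <- r1 * 2
    ("r1", ["r5", "r6"]),   # r1 <- r5 + r6   (reuses the name r1)
    ("r7", ["r1"]),         # r7 <- r1 * 2
]

phys = count()          # supply of physical register names p0, p1, ...
rename = {}             # current logical -> physical mapping

def phys_for(logical):
    # Source operands read whichever physical register currently holds the value.
    if logical not in rename:
        rename[logical] = f"p{next(phys)}"
    return rename[logical]

renamed = []
for dest, sources in program:
    srcs = [phys_for(s) for s in sources]
    rename[dest] = f"p{next(phys)}"        # every write gets a fresh register
    renamed.append((rename[dest], srcs))

for dest, srcs in renamed:
    print(dest, "<-", srcs)
```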
Problem approach
In a register machine, programs are composed of instructions which operate on values.
The instructions must name these values in order to distinguish them from one another.
A typical instruction might say: “add X and Y and put the result in Z”.
In this instruction, X, Y, and Z are the names of storage locations.
In order to have a compact instruction encoding, most processor instruction sets have a small set of special locations which can be referred to by special names: registers.
For example, the x86 instruction set architecture has 8 integer registers, x86-64 has 16, many RISCs have 32, and IA-64 has 128.
In smaller processors, the names of these locations correspond directly to elements of a register file.
Different instructions may take different amounts of time; for example, a processor may be able to execute hundreds of instructions while a single load from the main memory is in progress.
Shorter instructions executed while the load is outstanding will finish first, thus the instructions are finishing out of the original program order.
Out-of-order execution has been used in
|
https://en.wikipedia.org/wiki/Scramble%20competition
|
In ecology, scramble competition (or complete symmetric competition or exploitation competition) refers to a situation in which a resource is accessible to all competitors (that is, it is not monopolizable by an individual or group). However, since the particular resource is usually finite, scramble competition may lead to decreased survival rates for all competitors if the resource is used to its carrying capacity. Scramble competition is also defined as "[a] finite resource [that] is shared equally amongst the competitors so that the quantity of food per individual declines with increasing population density". A further description of scramble competition is "competition for a resource that is inadequate for the needs of all, but which is partitioned equally among contestants, so that no competitor obtains the amount it needs and all would die in extreme cases."
Types of intraspecific competition
Researchers recognize two main forms of intraspecific competition, where members of a species are all using a shared resource in short supply. These are contest competition and scramble competition.
Contest competition
Contest competition is a form of competition where there is a winner and a loser and where resources can be attained completely or not at all. Contest competition sets up a situation where "each successful competitor obtains all resources it requires for survival or reproduction". Here "contest" refers to the fact that physical action plays an active role in securing the resource. Contest competition involves resources that are stable, such as food or mates. Contests can be for a ritual objective such as territory or status, and losers may return to the competition another day to try again.
Scramble competition
In scramble competition resources are limited, which may lead to group member starvation.
Contest competition is often the result of aggressive social domains, including hierarchies or social chains. Conversely, scramble competition is what occurs by
|
https://en.wikipedia.org/wiki/Frans%C3%A9n%E2%80%93Robinson%20constant
|
The Fransén–Robinson constant, sometimes denoted F, is the mathematical constant that represents the area between the graph of the reciprocal Gamma function, 1/Γ(x), and the positive x axis. That is,
\[
F = \int_0^\infty \frac{dx}{\Gamma(x)}.
\]
Other expressions
The Fransén–Robinson constant has numerical value F = 2.8077702420..., and continued fraction representation [2; 1, 4, 4, 1, 18, 5, 1, 3, 4, 1, 5, 3, 6, ...]. The constant is somewhat close to Euler's number e = 2.71828... This fact can be explained by approximating the integral by a sum:
\[
F = \int_0^\infty \frac{dx}{\Gamma(x)} \approx \sum_{n=1}^\infty \frac{1}{\Gamma(n)} = \sum_{n=0}^\infty \frac{1}{n!},
\]
and this sum is the standard series for e. The difference is
\[
F - e = \int_0^\infty \frac{e^{-x}}{\pi^2 + \ln^2 x}\, dx \approx 0.0895.
\]
The Fransén–Robinson constant can also be expressed using the Mittag-Leffler function as the limit
It is however unknown whether F can be expressed in closed form in terms of other known constants.
Calculation history
A fair amount of effort has been made to calculate the numerical value of the Fransén–Robinson constant with high accuracy.
The value was computed to 36 decimal places by Herman P. Robinson using 11 point Newton–Cotes quadrature, to 65 digits by A. Fransén using Euler–Maclaurin summation, and to 80 digits by Fransén and S. Wrigge using Taylor series and other methods. William A. Johnson computed 300 digits, and Pascal Sebah was able to compute 1025 digits using Clenshaw–Curtis integration.
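For orientation, the constant can be reproduced today with ordinary adaptive quadrature; the sketch below (my own check, not one of the historical computations listed above) recovers the leading digits.

```python
# Numerically approximate F = ∫_0^∞ dx / Γ(x) with adaptive quadrature.
from scipy.integrate import quad
from scipy.special import gamma

value, error_estimate = quad(lambda x: 1.0 / gamma(x), 0, float("inf"))
print(value)  # ≈ 2.80777..., matching the value quoted above
```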
|
https://en.wikipedia.org/wiki/Food%20packaging
|
Food packaging is a packaging system specifically designed for food and represents one of the most important aspects among the processes involved in the food industry, as it provides protection from chemical, biological and physical alterations. The main goal of food packaging is to provide a practical means of protecting and delivering food goods at a reasonable cost while meeting the needs and expectations of both consumers and industries. Additionally, current trends like sustainability, environmental impact reduction, and shelf-life extension have gradually become among the most important aspects in designing a packaging system.
History
Packaging of food products has seen a vast transformation in technology usage and application from the stone age to the industrial revolution:
7000 BC: The adoption of pottery and glass which saw industrialization around 1500 BC.
1700s: The first manufacturing production of tinplate was introduced in England (1699) and in France (1720). Afterwards, the Dutch navy started to use such packaging to prolong the preservation of food products.
1804: Nicolas Appert, in response to inquiries into extending the shelf life of food for the French Army, employed glass bottles along with thermal food treatment. Glass has been replaced by metal cans in this application. However, there is still an ongoing debate about who first introduced the use of tinplates as food packaging.
1870: The use of paper board was launched and corrugated materials patented.
1880s: First cereal packaged in a folding box by Quaker Oats.
1890s: The crown cap for glass bottles was patented by William Painter.
1960s: Development of the two-piece drawn and wall-ironed metal cans in the US, along with the ring-pull opener and the Tetra Brik Aseptic carton package.
1970s: The barcode system was introduced in the retail and manufacturing industry. PET plastic blow-mold bottle technology, which is widely used in the beverage industry, was introduced.
1990s: The app
|
https://en.wikipedia.org/wiki/Food%20sampling
|
Food sampling is a process used to check that a food is safe and that it does not contain harmful contaminants, or that it contains only permitted additives at acceptable levels, or that it contains the right levels of key ingredients and its label declarations are correct, or to know the levels of nutrients present.
Food sampling is carried out by subjecting the product to physical analysis. Analysis may be undertaken by or on behalf of a manufacturer regarding their own product, or for official food law enforcement or control purposes, or for research or public information.
To undertake any analysis, unless the whole amount of food to be considered is very small so that the food can be used for testing in its entirety, it is usually necessary for a portion of it to be taken (e.g. a small quantity from a full production batch, or a portion of what is on sale in a shop) – this process is known as food sampling.
In most cases with food to be analysed there are two levels of sampling – the first being selection of a portion from the whole, which is then submitted to a laboratory for testing, and the second being the laboratory's taking of the individual amounts necessary for individual tests that may be applied. It is the former that is ‘food sampling’: the latter is analytical laboratory ‘sub-sampling’, often relying upon initial homogenisation of the entire submitted sample.
Where it is intended that the results of any analysis relate to the food as a whole, it is crucially important that the sample is representative of that whole, and the results of any analysis can only be meaningful if the sampling is undertaken effectively. This is true whether the ‘whole’ is a manufacturer's entire production batch, or where it is a single item but too large to all be used for the test.
Factors relevant in considering the representativeness of a sample include the homogeneity of the food, the relative sizes of the sample to be taken and the whole, the potential
|
https://en.wikipedia.org/wiki/Ecosystem%20collapse
|
An ecosystem, short for ecological system, is defined as a collection of interacting organisms within a biophysical environment. Ecosystems are never static, and are continually subject to both stabilizing and destabilizing processes. Stabilizing processes allow ecosystems to adequately respond to destabilizing changes, or perturbations, in ecological conditions, or to recover from degradation induced by them; yet, if destabilizing processes become strong enough or fast enough to cross a critical threshold within that ecosystem, often described as an ecological 'tipping point', then an ecosystem collapse (sometimes also termed ecological collapse) occurs.
Ecosystem collapse does not mean total disappearance of life from the area, but it does result in the loss of the original ecosystem's defining characteristics, typically including the ecosystem services it may have provided. Collapse of an ecosystem is effectively irreversible more often than not, and even if the reversal is possible, it tends to be slow and difficult. Ecosystems with low resilience may collapse even during a comparatively stable time, which then typically leads to their replacement with a more resilient system in the biosphere. However, even resilient ecosystems may disappear during the times of rapid environmental change, and study of the fossil record was able to identify how certain ecosystems went through a collapse, such as with the Carboniferous rainforest collapse or the collapse of Lake Baikal and Lake Hovsgol ecosystems during the Last Glacial Maximum.
Today, the ongoing Holocene extinction is caused primarily by human impact on the environment, and the greatest biodiversity loss so far has been due to habitat degradation and fragmentation, which eventually destroys entire ecosystems if left unchecked. There have been multiple notable examples of such ecosystem collapse in the recent past, such as the collapse of the Atlantic northwest cod fishery. More are likely to occur without
|
https://en.wikipedia.org/wiki/Vectored%20interrupt
|
In computer science, a vectored interrupt is a processing technique in which the interrupting device directs the processor to the appropriate interrupt service routine. This is in contrast to a polled interrupt system, in which a single interrupt service routine must determine the source of the interrupt by checking all potential interrupt sources, a slow and relatively laborious process.
Implementation
Vectored interrupts are achieved by assigning each interrupting device a unique code, typically four to eight bits in length. When a device interrupts, it sends its unique code over the data bus to the processor, telling the processor which interrupt service routine to execute.
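A software analogy (a simulation only, with made-up device codes and handlers) of why this is fast: the device's code indexes directly into a vector table of service routines, so no polling loop over every possible source is needed.

```python
# Simulated vectored-interrupt dispatch: the code supplied by the device
# selects the interrupt service routine directly.
def uart_isr():
    print("servicing UART")

def timer_isr():
    print("servicing timer")

def disk_isr():
    print("servicing disk controller")

VECTOR_TABLE = {0x1: uart_isr, 0x2: timer_isr, 0x3: disk_isr}

def interrupt(device_code: int) -> None:
    VECTOR_TABLE[device_code]()        # direct lookup instead of polling all sources

interrupt(0x2)                         # -> "servicing timer"
```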
|
https://en.wikipedia.org/wiki/Repeater%20insertion
|
Repeater insertion is a technique used to reduce time delays associated with long wire lines in integrated circuits. This technique involves cutting the long wire into two or more shorter wires, and then inserting a repeater between each pair of newly created short wires.
The time it takes for a signal to travel from one end of a wire to the other end is known as wire-line delay or just delay. In an integrated circuit, this delay is characterized by RC, the resistance of the wire (R) multiplied by the wire's capacitance (C). Thus, if the wire's resistance is 100 ohms and its capacitance is 0.01 microfarad (μF), the wire's delay is one microsecond (µs).
To first order, the resistance of a wire on an integrated circuit is directly proportional to the wire's length. If a 1 mm length of the wire has 100 ohms resistance, then a 2 mm length will have 200 ohms resistance.
For the purposes of this highly simplified discussion, the capacitance of a wire also increases linearly with its length. If a 1 mm length of the wire has 0.01 µF capacitance, a 2 mm length of the wire will have 0.02 µF, a 3 mm wire will have 0.03 µF, and so on.
Thus, the time delay through a wire increases with the square of the wire's length. This is true, to first order, for any wire whose cross-section remains constant along the length of the wire.
wire length | resistance R | capacitance C | time delay t
1 mm | 100 ohm | 0.01 µF | 1 µs
2 mm | 200 ohm | 0.02 µF | 4 µs
3 mm | 300 ohm | 0.03 µF | 9 µs
The interesting consequence of this behavior is that, while a single 2 mm length of wire has a delay of 4 µs, two separate 1 mm wires have a delay of only 1 µs each and cover the same distance in half the time. By cutting the wire in half, one can double its speed.
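The arithmetic can be checked directly; this snippet uses the example values from the table above and ignores the delay added by the repeater itself.

```python
# RC wire delay grows with the square of length, so splitting a wire into
# repeated segments reduces the total wire delay.
R_PER_MM = 100        # ohms per mm (from the example above)
C_PER_MM = 0.01e-6    # farads per mm

def wire_delay(length_mm: float) -> float:
    """Delay of a single uniform wire segment, in seconds."""
    return (R_PER_MM * length_mm) * (C_PER_MM * length_mm)

print(wire_delay(2))        # 4e-06 s: one 2 mm wire
print(2 * wire_delay(1))    # 2e-06 s: two 1 mm segments (repeater delay ignored)
```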
To make this science trick work properly, an active circuit must be placed between the two separate wires so as to move the signal from one to
|
https://en.wikipedia.org/wiki/Current%20conveyor
|
A current conveyor is an abstraction for a three-terminal analogue electronic device. It is a form of electronic amplifier with unity gain. There are three versions or generations of the idealised device: CCI, CCII and CCIII. When configured with other circuit elements, real current conveyors can perform many analogue signal processing functions, in a similar manner to the way op-amps and the ideal concept of the op-amp are used.
History
When Sedra and Smith first introduced the current conveyor in 1968, it was not clear what the benefits of the concept would be. The idea of the op-amp had been well known since the 1940s, and integrated circuit manufacturers were better able to capitalise on this widespread knowledge within the electronics industry. Monolithic current conveyor implementations were not introduced, and the op-amp became widely implemented. Since the early 2000s, implementations of the current conveyor concept, especially within larger VLSI projects such as mobile phones, have proved worthwhile.
Advantages
Current conveyors can provide better gain-bandwidth products than comparable op-amps, under both small and large signal conditions. In instrumentation amplifiers, their gain does not depend on matching pairs of external components, only on the absolute value of a single circuit element.
First generation (CCI)
The CCI is a three-terminal device with the terminals designated X, Y, and Z. The potential at X equals whatever voltage is applied to Y. Whatever current flows into Y also flows into X, and is mirrored at Z with a high output impedance, as a variable constant current source. In sub-type CCI+, current into Y produces current into Z; in a CCI-, current into Y results in an equivalent current flowing out of Z.
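In the matrix form commonly used in the current-conveyor literature (not quoted from this article), the CCI's port relations can be summarised as

\[
\begin{pmatrix} i_y \\ v_x \\ i_z \end{pmatrix}
=
\begin{pmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & \pm 1 & 0 \end{pmatrix}
\begin{pmatrix} v_y \\ i_x \\ v_z \end{pmatrix},
\]

with the + and − signs corresponding to the CCI+ and CCI− sub-types described above.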
Second generation (CCII)
In a more versatile later design, no current flows through terminal Y. The ideal CCII can be seen as an ideal transistor with perfected characteristics. No current flows into the gate or base which is represen
|
https://en.wikipedia.org/wiki/Outline%20of%20arithmetic
|
Arithmetic is an elementary branch of mathematics that is widely used for tasks ranging from simple day-to-day counting to advanced science and business calculations.
Essence of arithmetic
Elementary arithmetic
Decimal arithmetic
Decimal point
Numeral
Place value
Face value
History of arithmetic
Arithmetic operations and related concepts
Order of Operations
Addition
Summation – Answer after adding a sequence of numbers
Additive Inverse
Subtraction – Taking away numbers
Multiplication – Repeated addition
Multiple – The product of a number and an integer
Least Common Multiple
Multiplicative Inverse
Division – Repeated Subtraction
Modulo – The remainder of division
Quotient – Result of Division
Quotition and Partition – How many parts are there, and what is the size of each part
Fraction – A number that isn't whole, written as a division of two integers
Decimal Fraction – A fraction whose denominator is a power of ten, written with a decimal point
Proper Fraction – Fraction with a numerator that is less than the denominator
Improper Fraction – Fraction with a numerator that is greater than or equal to the denominator
Ratio – A relationship expressing how many times one number contains another
Least Common Denominator – Least Common Multiple of 2 or more fractions' denominators
Factoring – Breaking a number down into its products
Fundamental theorem of arithmetic
Prime number – Number divisible only by 1 and itself
Prime number theorem
Distribution of primes
Composite number – Number that is the product of 2 smaller integers
Factor – A number that divides another number evenly, without a remainder
Greatest Common Factor – Greatest factor that is common to 2 or more numbers
Euclid's algorithm for finding greatest common divisors
Exponentiation (power) – Repeated Multiplication
Square root – Reversal of a power of 2 (exponent of 1/2)
Cube root – Reversal of a power of 3 (exponent of 1/3)
Properties of Operations
Associative property
Distributive property
Commutative property
Factorial – Product of the positive integers from 1 up to a given number
Types of numbers
Re
|
https://en.wikipedia.org/wiki/Critical%20area%20%28computing%29
|
In integrated circuit design, a critical area is a section of a circuit design wherein a particle of a particular size can cause a failure. It measures the sensitivity of the circuit to a reduction in yield.
The critical area on a single layer integrated circuit design is given by:
\[
A_c = \int_0^\infty A(r)\, D(r)\, dr,
\]
where \(A(r)\) is the area in which a defect of radius \(r\) will cause a failure, and \(D(r)\) is the density function of said defect.
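A hedged numerical sketch of this integral; both the failure-area model A(r) and the defect-size density D(r) below are assumed illustrative forms, not values from any real process.

```python
# Evaluate the critical-area integral A_c = ∫ A(r) D(r) dr numerically.
import numpy as np
from scipy.integrate import quad

r0 = 0.1          # minimum defect radius considered (assumed, µm)
chip_w = 1000.0   # layout width (assumed, µm)
chip_h = 1000.0   # layout height (assumed, µm)

def A(r):
    # Assumption: failure area grows linearly with defect radius, capped at the layout area.
    return min(200.0 * r, chip_w * chip_h)

def D(r):
    # Common modelling assumption: defect density falls off as 1/r^3 above r0
    # (normalised so that it integrates to 1 over [r0, ∞)).
    return 0.0 if r < r0 else 2.0 * r0**2 / r**3

critical_area, _ = quad(lambda r: A(r) * D(r), r0, np.inf)
print(critical_area)
```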
|
https://en.wikipedia.org/wiki/Extract
|
An extract (essence) is a substance made by extracting a part of a raw material, often by using a solvent such as ethanol, oil or water. Extracts may be sold as tinctures, absolutes or in powder form.
The aromatic principles of many spices, nuts, herbs, fruits, etc., and some flowers, are marketed as extracts, among the best known of true extracts being almond, cinnamon, cloves, ginger, lemon, nutmeg, orange, peppermint, pistachio, rose, spearmint, vanilla, violet, rum, and wintergreen.
Extraction techniques
Most natural essences are obtained by extracting the essential oil from the feedstock, such as blossoms, fruit, and roots, or from intact plants through multiple techniques and methods:
Expression (juicing, pressing) involves physical extraction of material from the feedstock, used when the oil is plentiful and easily obtained from materials such as citrus peels, olives, and grapes.
Absorption (steeping, decoction). Extraction is done by soaking material in a solvent, as used for vanilla beans or tea leaves.
Maceration, as used to soften and degrade material without heat, normally using oils, such as for peppermint extract and wine making.
Distillation or separation process, creating a higher concentration of the extract by heating material to a specific boiling point, then collecting this and condensing the extract, leaving the unwanted material behind, as used for lavender extract.
The distinctive flavors of nearly all fruits are desirable adjuncts to many food preparations, but only a few are practical sources of sufficiently concentrated flavor extract, such as from lemons, oranges, and vanilla beans.
Artificial extracts
The majority of concentrated fruit flavors, such as banana, cherry, peach, pineapple, raspberry, and strawberry, are produced by combining a variety of esters with special oils. Suitable coloring is generally obtained by the use of dyes. Among the esters most generally employed are ethyl acetate and ethyl butyrate. The chief factors
|
https://en.wikipedia.org/wiki/Butterfly%20count
|
Butterfly counts are often carried out in North America and Europe to estimate the populations of butterflies in a specific geographical area.
The counts are conducted by interested, mostly non-professional, residents of the area who maintain an interest in determining the numbers and species of butterflies in their locale. A butterfly count usually occurs at a specific time during the year and is sometimes coordinated to occur with other counts which may include a park, county, entire state or country. The results of the counts are usually shared with other interested parties including professional lepidopterists and researchers. The data gathered during a count can indicate population changes and health within a species.
Sponsors
Professionals, universities, clubs, elementary and secondary schools, other educational providers, nature preserves, parks, and amateur organizations can organize a count. The participants often receive training to help them identify the butterfly species. The North American Butterfly Association organized over 400 counts in 2014.
Types of butterfly counts
There are several methods for counting butterflies currently in use, with the notable division being between restricted and open searches. Most counts are designed to count all butterflies observed in a locality. The purpose of a count is to estimate butterfly populations in a larger area from a smaller sample.
Counts may be targeted at single species and, in some cases, butterflies are observed and counted as they move from one area to another. A heavily researched example of butterfly migration is the annual migration of monarch butterflies in North America. Some programs will tag butterflies to trace their migration routes, but these are migratory programs and not butterfly counts. Butterfly counts are sometimes done where there is a concentration (a roost) of a species of butterflies in an area. One example of this is the winter count of western monarch butterflies as the
|
https://en.wikipedia.org/wiki/List%20of%20calculus%20topics
|
This is a list of calculus topics.
Limits
Limit (mathematics)
Limit of a function
One-sided limit
Limit of a sequence
Indeterminate form
Orders of approximation
(ε, δ)-definition of limit
Continuous function
Differential calculus
Derivative
Notation
Newton's notation for differentiation
Leibniz's notation for differentiation
Simplest rules
Derivative of a constant
Sum rule in differentiation
Constant factor rule in differentiation
Linearity of differentiation
Power rule
Chain rule
Local linearization
Product rule
Quotient rule
Inverse functions and differentiation
Implicit differentiation
Stationary point
Maxima and minima
First derivative test
Second derivative test
Extreme value theorem
Differential equation
Differential operator
Newton's method
Taylor's theorem
L'Hôpital's rule
General Leibniz rule
Mean value theorem
Logarithmic derivative
Differential (calculus)
Related rates
Regiomontanus' angle maximization problem
Rolle's theorem
Integral calculus
Antiderivative/Indefinite integral
Simplest rules
Sum rule in integration
Constant factor rule in integration
Linearity of integration
Arbitrary constant of integration
Cavalieri's quadrature formula
Fundamental theorem of calculus
Integration by parts
Inverse chain rule method
Integration by substitution
Tangent half-angle substitution
Differentiation under the integral sign
Trigonometric substitution
Partial fractions in integration
Quadratic integral
Proof that 22/7 exceeds π
Trapezium rule
Integral of the secant function
Integral of secant cubed
Arclength
Solid of revolution
Shell integration
Special functions and numbers
Natural logarithm
e (mathematical constant)
Exponential function
Hyperbolic angle
Hyperbolic function
Stirling's approximation
Bernoulli numbers
Numerical integration
See also list of numerical analysis topics
Rectangle method
Trapezoidal rule
Simpson's rule
Newton–Cotes formulas
Gaussian quadrature
Lists and tables
|
https://en.wikipedia.org/wiki/Systemness
|
Systemness is the state, quality, or condition of a complex system, that is, of a set of interconnected elements that behave as, or appear to be, a whole, exhibiting behavior distinct from the behavior of the parts. The term is new and has been applied to large social phenomena and organizations (healthcare and higher education) by advocates of higher degrees of system-like, coherent behavior for delivering value to stakeholders.
In sociology, Montreal-based Polish academic Szymon Chodak (1973) used "societal systemness" in English to describe the empirical reality that inspired Emile Durkheim.
The healthcare-related usage of the term was as early as 1986 in a Dutch psychiatric research paper. It has recently been adapted to describe the sustainability efforts of healthcare institutions amidst budget cuts stemming from the 2008–2012 global recession.
The higher educational use appears to have featured in professional discussions between sociologist Neil Smelser and University of California Chancellor and President Clark Kerr in the 1950s or 60s; in the foreword to Kerr's 2001 memoir, Smelser uses the term in inverted commas in recalling such discussions.
The term's overt operationalization, however, was instituted by The State University of New York's (SUNY) Chancellor Nancy L. Zimpher in the State of the University Address on January 9, 2012. Zimpher noted systemness as "the coordination of multiple components that, when working together, create a network of activity that is more powerful than any action of individual parts on their own." The concept was later explored in the volume, Higher Education Systems 3.0, edited by Jason E. Lane and D. Bruce Johnston.
Use in higher education
The term "systemness" has received widespread adoption in discussions within and among the leaders of multi-campus university systems to discuss the evolution of multi-campus collaboration and coordination in a range of different programmatic areas. The term was first coined b
|
https://en.wikipedia.org/wiki/Hilbert%20spectroscopy
|
Hilbert spectroscopy uses Hilbert transforms to analyze broad-spectrum signals from gigahertz to terahertz radio frequencies. One suggested use is to quickly analyze liquids inside airport passenger luggage.
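As a generic illustration of the underlying transform (not of the specific measurement scheme used in Hilbert spectroscopy), the analytic signal of a sampled test waveform can be computed with SciPy; the signal below is made up.

```python
# Compute the analytic signal x + j*H{x}; its magnitude and phase give the
# envelope and instantaneous frequency of the sampled waveform.
import numpy as np
from scipy.signal import hilbert

fs = 1e4
t = np.arange(0, 1, 1 / fs)
x = np.cos(2 * np.pi * 50 * t) * (1 + 0.5 * np.cos(2 * np.pi * 3 * t))

analytic = hilbert(x)
envelope = np.abs(analytic)
inst_freq = np.diff(np.unwrap(np.angle(analytic))) * fs / (2 * np.pi)
print(envelope[:5], inst_freq[:5])
```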
|
https://en.wikipedia.org/wiki/Tarjan%27s%20algorithm
|
Tarjan's algorithm may refer to one of several algorithms attributed to Robert Tarjan, including:
Tarjan's strongly connected components algorithm
Tarjan's off-line lowest common ancestors algorithm
Tarjan's algorithm for finding bridges in an undirected graph
Tarjan's algorithm for finding simple circuits in a directed graph
See also
List of algorithms
|
https://en.wikipedia.org/wiki/WIP%20message
|
WIP message is a work-in-progress message sent from a computer client to a computer server. It is used to update a server with the progress of an item during a manufacturing process. The only known use is in the automotive wiring manufacturing process, but the message structure is generic enough to be used in any manufacturing process.
History
The WIP Message Protocol was originally developed to allow computers running disparate operating systems to communicate with one another. The first implementation was on the Acorn computer running RISC OS, swiftly followed by a PC implementation.
Communication methodology
Each computer may act as a server, a client, or both. In the server configuration a listening socket is opened on a specific port (the default port is 99) and the server waits for connection attempts from its clients. The client connects by opening a socket and sending data to the server in the format [Header][Data]. The header contains information about the message such as the message length, the message number, which can be anything from 1 to 4,294,967,295, and the part's unique identifier or serial number, which is limited to 10 digits (9,999,999,999 max). The serial number consists of the year (4 digits), the day of the year (0–366, 3 digits) and a 3-digit sequential number.
The server will action the message (each message number has a specific meaning to the particular process) and respond with a return code. The return code is commonly used to indicate whether the process is allowed to proceed or not. The server will usually be written in such a way that the manufacturing process flow is mapped, and the server will therefore not allow manufacturing to progress to the next stage if the previous stage is incomplete or has failed for some reason.
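A hedged sketch of building the serial number and a [Header][Data] frame as described; the exact byte layout of the header beyond the fields named above is an assumption for illustration.

```python
# Build a WIP-style serial number (year + day of year + sequence) and a simple
# length-prefixed frame. The >IIQ packing is an assumed layout, not the spec.
import datetime
import struct

def make_serial(seq, day=None):
    """Serial number: year (4 digits) + day of year (3 digits) + sequence (3 digits)."""
    day = day or datetime.date.today()
    return int(f"{day.year:04d}{day.timetuple().tm_yday:03d}{seq:03d}")

def make_frame(message_number, serial, data):
    # Assumed packing: 4-byte data length, 4-byte message number, 8-byte serial, then data.
    header = struct.pack(">IIQ", len(data), message_number, serial)
    return header + data

frame = make_frame(message_number=1, serial=make_serial(42), data=b"STATION-05 PASS")
print(frame)
```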
Message format
Two formats of message are used. Loosely termed a 'short' and a 'long' message format, a short message contains specific information along with 18 bytes that can be used for custom information,
|
https://en.wikipedia.org/wiki/NO%20CARRIER
|
NO CARRIER (capitalized) is a text message transmitted from a modem to its attached device (typically a computer), indicating the modem is not (or no longer) connected to a remote system.
NO CARRIER is a response message that is defined in the Hayes command set. Due to the popularity of Hayes modems during the heyday of dial-up connectivity, most other modem manufacturers supported the Hayes command set. For this reason, the NO CARRIER message was ubiquitously understood to mean that one was no longer connected to a remote system.
Carrier tone
A carrier tone is an audio carrier signal used by two modems to suppress echo cancellation and establish a baseline frequency for communication. When the answering modem detects a ringtone on the phone line, it picks up that line and starts transmitting a carrier tone. If it does not receive data from the calling modem within a set amount of time, it disconnects the line. The calling modem waits for the tone after it dials the phone line before it initiates data transmission. If it does not receive a carrier tone within a set amount of time, it will disconnect the phone line and issue the NO CARRIER message.
The actual data is transmitted from the answering modem to the calling modem via modulation of the carrier.
Practical meaning
The NO CARRIER message is issued by a modem for any of the following reasons:
A dial (ATD) or answer (ATA) command did not result in a successful connection to another modem, and the reason wasn't that the line was BUSY (a separately defined message).
A dial or answer command was aborted while in progress. The abort can be triggered by the computer receiving a keypress to abort or the computer dropping the Data Terminal Ready (DTR) signal to hang up.
A previously established data connection has ended (either at the attached computer's command, or as a result of being disconnected from the remote end), and the modem has now gone from the data mode to the command mode.
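A hedged sketch of how attached software might issue a dial command and watch for these result codes over a serial link (using the pyserial package; the device path, speed and number are placeholders, and the loop crudely skips command echo and blank lines).

```python
# Dial with a Hayes ATD command and react to the result code reported by the modem.
import serial  # pyserial

with serial.Serial("/dev/ttyUSB0", 115200, timeout=60) as modem:
    modem.write(b"ATD5551234\r")                     # dial command (number is a placeholder)
    response = ""
    for _ in range(10):                              # skip echoed command / blank lines
        response = modem.readline().strip().decode(errors="replace")
        if response in ("NO CARRIER", "BUSY") or response.startswith("CONNECT"):
            break
    if response == "NO CARRIER":
        print("no connection was established (or the carrier was lost)")
    elif response == "BUSY":
        print("remote line was busy")
    elif response.startswith("CONNECT"):
        print("connected:", response)
```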
Current use
As modems
|
https://en.wikipedia.org/wiki/Voigt%20notation
|
In mathematics, Voigt notation or Voigt form in multilinear algebra is a way to represent a symmetric tensor by reducing its order. There are a few variants and associated names for this idea: Mandel notation, Mandel–Voigt notation and Nye notation are others found. Kelvin notation is a revival by Helbig of old ideas of Lord Kelvin. The differences here lie in certain weights attached to the selected entries of the tensor. Nomenclature may vary according to what is traditional in the field of application.
For example, a 2×2 symmetric tensor X has only three distinct elements, the two on the diagonal and the other being off-diagonal. Thus it can be expressed as the vector
\[
\langle x_{11},\, x_{22},\, x_{12} \rangle^{\mathsf T}.
\]
As another example:
The stress tensor (in matrix notation) is given as
\[
\boldsymbol\sigma =
\begin{pmatrix}
\sigma_{xx} & \sigma_{xy} & \sigma_{xz} \\
\sigma_{xy} & \sigma_{yy} & \sigma_{yz} \\
\sigma_{xz} & \sigma_{yz} & \sigma_{zz}
\end{pmatrix}.
\]
In Voigt notation it is simplified to a 6-dimensional vector:
\[
\tilde\sigma = \left( \sigma_{xx},\, \sigma_{yy},\, \sigma_{zz},\, \sigma_{yz},\, \sigma_{xz},\, \sigma_{xy} \right)^{\mathsf T}.
\]
The strain tensor, similar in nature to the stress tensor (both are symmetric second-order tensors), is given in matrix form as
\[
\boldsymbol\epsilon =
\begin{pmatrix}
\epsilon_{xx} & \epsilon_{xy} & \epsilon_{xz} \\
\epsilon_{xy} & \epsilon_{yy} & \epsilon_{yz} \\
\epsilon_{xz} & \epsilon_{yz} & \epsilon_{zz}
\end{pmatrix}.
\]
Its representation in Voigt notation is
\[
\tilde\epsilon = \left( \epsilon_{xx},\, \epsilon_{yy},\, \epsilon_{zz},\, \gamma_{yz},\, \gamma_{xz},\, \gamma_{xy} \right)^{\mathsf T},
\]
where \(\gamma_{xy} = 2\epsilon_{xy}\), \(\gamma_{yz} = 2\epsilon_{yz}\), and \(\gamma_{zx} = 2\epsilon_{zx}\) are engineering shear strains.
The benefit of using different representations for stress and strain is that the scalar invariance
\[
\boldsymbol\sigma \cdot \boldsymbol\epsilon = \sigma_{ij}\epsilon_{ij} = \tilde\sigma \cdot \tilde\epsilon
\]
is preserved.
Likewise, a three-dimensional symmetric fourth-order tensor can be reduced to a 6×6 matrix.
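A small numerical check of these conventions (my own example values, using the 11, 22, 33, 23, 13, 12 ordering shown above):

```python
# Convert symmetric stress and strain tensors to Voigt vectors and verify that
# the scalar invariant sigma_ij * eps_ij is preserved by the pairing.
import numpy as np

IDX = [(0, 0), (1, 1), (2, 2), (1, 2), (0, 2), (0, 1)]

def stress_to_voigt(sig):
    return np.array([sig[i, j] for i, j in IDX])

def strain_to_voigt(eps):
    # Engineering shear strains: off-diagonal entries are doubled.
    return np.array([eps[i, j] * (1 if i == j else 2) for i, j in IDX])

sig = np.array([[1.0, 0.5, 0.2], [0.5, 2.0, 0.1], [0.2, 0.1, 3.0]])
eps = np.array([[0.01, 0.002, 0.0], [0.002, 0.02, 0.003], [0.0, 0.003, 0.015]])

print(np.tensordot(sig, eps, axes=2))               # full tensor contraction
print(stress_to_voigt(sig) @ strain_to_voigt(eps))  # equal dot product in Voigt form
```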
Mnemonic rule
A simple mnemonic rule for memorizing Voigt notation is as follows:
Write down the second order tensor in matrix form (in the example, the stress tensor)
Strike out the diagonal
Continue on the third column
Go back to the first element along the first row.
Voigt indexes are numbered consecutively from the starting point to the end (in the example, the numbers in blue).
Mandel notation
For a symmetric tensor of second rank
only six components are distinct, the three on the diagonal and the others being off-diagonal.
Thus it can be expressed, in Mandel notation, as the vector
\[
\tilde\sigma^{M} = \left( \sigma_{11},\, \sigma_{22},\, \sigma_{33},\, \sqrt{2}\,\sigma_{23},\, \sqrt{2}\,\sigma_{13},\, \sqrt{2}\,\sigma_{12} \right)^{\mathsf T}.
\]
The main advantage of Mandel notation is to allow the use of the same conventional operations used with vectors,
for example:
A symmetric tensor o
|
https://en.wikipedia.org/wiki/Dependent%20component%20analysis
|
Dependent component analysis (DCA) is a blind signal separation (BSS) method and an extension of Independent component analysis (ICA). ICA is the separating of mixed signals to individual signals without knowing anything about source signals. DCA is used to separate mixed signals into individual sets of signals that are dependent on signals within their own set, without knowing anything about the original signals. DCA can be ICA if all sets of signals only contain a single signal within their own set.
Mathematical representation
For simplicity, assume all individual sets of signals are the same size, k, and that there are N sets in total. Instead of the independent source signals of basic BSS, one has independent sets of signals, \( s(t) = (\{s_1(t),\dots,s_k(t)\},\dots,\{s_{kN-k+1}(t),\dots,s_{kN}(t)\})^{\mathsf T} \), which are mixed by coefficients \( A = [a_{ij}] \in \mathbb{R}^{m \times kN} \) to produce a set of mixed signals, \( x(t) = (x_1(t),\dots,x_m(t))^{\mathsf T} = A\,s(t) \). The signals can be multidimensional.
BSS then separates the set of mixed signals, x(t), by finding and using coefficients \( B = [b_{ij}] \in \mathbb{R}^{kN \times m} \) to obtain the set of approximations of the original signals, \( y(t) = (\{y_1(t),\dots,y_k(t)\},\dots,\{y_{kN-k+1}(t),\dots,y_{kN}(t)\})^{\mathsf T} = B\,x(t) \).
Methods
Sub-Band Decomposition ICA (SDICA) is based on the fact that wideband source signals may be dependent, while some of their subbands are independent. It uses an adaptive filter that selects subbands by minimising the mutual information (MI) between them, and then applies ICA to the selected subband signals to reconstruct the sources. Below is a formula for MI based on entropy, where H is entropy:
\[
I(y_1,\dots,y_n) = \sum_{i=1}^{n} H(y_i) - H(y_1,\dots,y_n).
\]
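A rough sketch of the sub-band idea (my own illustration using scikit-learn's FastICA and an assumed fixed 20–80 Hz sub-band, rather than the adaptive, MI-driven selection described above):

```python
# Band-pass filter the mixtures into a sub-band where the sources are assumed
# independent, learn an unmixing there, then apply it to the full-band data.
import numpy as np
from scipy.signal import butter, filtfilt
from sklearn.decomposition import FastICA

t = np.linspace(0, 1, 2000)
s = np.c_[np.sin(2 * np.pi * 40 * t), np.sign(np.sin(2 * np.pi * 55 * t))]
A = np.array([[1.0, 0.6], [0.4, 1.0]])        # assumed mixing matrix
x = s @ A.T                                    # observed mixtures

b, a = butter(4, [20, 80], btype="band", fs=2000)   # assumed sub-band
x_sub = filtfilt(b, a, x, axis=0)

ica = FastICA(n_components=2, random_state=0)
y_sub = ica.fit_transform(x_sub)               # unmixed sub-band signals
y = ica.transform(x)                           # same unmixing applied to the full band
```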
|
https://en.wikipedia.org/wiki/List%20of%20formulas%20in%20Riemannian%20geometry
|
This is a list of formulas encountered in Riemannian geometry. Einstein notation is used throughout this article. This article uses the "analyst's" sign convention for Laplacians, except when noted otherwise.
Christoffel symbols, covariant derivative
In a smooth coordinate chart, the Christoffel symbols of the first kind are given by
\[
\Gamma_{kij} = \frac12\left( \frac{\partial g_{ki}}{\partial x^j} + \frac{\partial g_{kj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^k} \right),
\]
and the Christoffel symbols of the second kind by
\[
\Gamma^k_{ij} = g^{kl}\,\Gamma_{lij} = \frac12\, g^{kl}\left( \frac{\partial g_{li}}{\partial x^j} + \frac{\partial g_{lj}}{\partial x^i} - \frac{\partial g_{ij}}{\partial x^l} \right).
\]
Here \(g^{ij}\) is the inverse matrix to the metric tensor \(g_{ij}\). In other words,
\[
\delta^i_j = g^{ik} g_{kj},
\]
and thus
\[
n = \delta^i_i = g^{ij} g_{ij}
\]
is the dimension of the manifold.
Christoffel symbols satisfy the symmetry relations
\[
\Gamma_{kij} = \Gamma_{kji} \quad\text{or, respectively,}\quad \Gamma^k_{ij} = \Gamma^k_{ji},
\]
the second of which is equivalent to the torsion-freeness of the Levi-Civita connection.
The contracting relations on the Christoffel symbols are given by
\[
\Gamma^i_{ki} = \frac{\partial}{\partial x^k}\ln\sqrt{|g|}
\]
and
\[
g^{kl}\,\Gamma^i_{kl} = -\frac{1}{\sqrt{|g|}}\,\frac{\partial\left(\sqrt{|g|}\, g^{ik}\right)}{\partial x^k},
\]
where |g| is the absolute value of the determinant of the metric tensor \(g_{ij}\). These are useful when dealing with divergences and Laplacians (see below).
The covariant derivative of a vector field with components \(v^i\) is given by:
\[
\nabla_j v^i = \frac{\partial v^i}{\partial x^j} + \Gamma^i_{jk}\, v^k,
\]
and similarly the covariant derivative of a (0,1)-tensor field with components \(v_i\) is given by:
\[
\nabla_j v_i = \frac{\partial v_i}{\partial x^j} - \Gamma^k_{ji}\, v_k.
\]
For a (2,0)-tensor field with components \(A^{ik}\) this becomes
\[
\nabla_j A^{ik} = \frac{\partial A^{ik}}{\partial x^j} + \Gamma^i_{jl}\, A^{lk} + \Gamma^k_{jl}\, A^{il},
\]
and likewise for tensors with more indices.
The covariant derivative of a function (scalar) \(\phi\) is just its usual differential:
\[
\nabla_i \phi = \phi_{;i} = \phi_{,i} = \frac{\partial \phi}{\partial x^i}.
\]
Because the Levi-Civita connection is metric-compatible, the covariant derivatives of metrics vanish,
\[
\nabla_k g_{ij} = \nabla_k g^{ij} = 0,
\]
as well as the covariant derivatives of the metric's determinant (and volume element),
\[
\nabla_k \sqrt{|g|} = 0.
\]
The geodesic \(X(t)\) starting at the origin with initial speed \(v^i\) has Taylor expansion in the chart:
\[
X(t)^i = t v^i - \frac{t^2}{2}\,\Gamma^i_{jk}\, v^j v^k + O(t^3).
\]
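As a sanity check of the formulas above, a short symbolic computation (my own example, using the round 2-sphere metric g = diag(1, sin²θ)) recovers the familiar Christoffel symbols:

```python
# Compute the Christoffel symbols of the second kind for the 2-sphere metric.
import sympy as sp

theta, phi = sp.symbols('theta phi')
coords = [theta, phi]
g = sp.Matrix([[1, 0], [0, sp.sin(theta)**2]])   # metric tensor g_ij
g_inv = g.inv()                                   # inverse metric g^ij
n = len(coords)

Gamma = [[[sp.simplify(
    sp.Rational(1, 2) * sum(
        g_inv[k, l] * (sp.diff(g[l, i], coords[j])
                       + sp.diff(g[l, j], coords[i])
                       - sp.diff(g[i, j], coords[l]))
        for l in range(n)))
    for j in range(n)] for i in range(n)] for k in range(n)]

print(Gamma[0][1][1])   # Gamma^theta_{phi phi} = -sin(theta)*cos(theta)
print(Gamma[1][0][1])   # Gamma^phi_{theta phi} = cos(theta)/sin(theta)  (= cot(theta))
```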
Curvature tensors
Definitions
(3,1) Riemann curvature tensor
(3,1) Riemann curvature tensor
Ricci curvature
Scalar curvature
Traceless Ricci tensor
(4,0) Riemann curvature tensor
(4,0) Weyl tensor
Einstein tensor
Identities
Basic symmetries
The Weyl tensor has the same basic symmetries as the Riemann tensor, but its 'analogue' of the Ricci tensor is zero:
The Ricci tensor, the Einstein tensor, and the traceless Ricci tensor are symmetric 2-tensors:
First Bianch
|
https://en.wikipedia.org/wiki/Daisy%20chain%20%28electrical%20engineering%29
|
In electrical and electronic engineering, a daisy chain is a wiring scheme in which multiple devices are wired together in sequence or in a ring, similar to a garland of daisy flowers. Daisy chains may be used for power, analog signals, digital data, or a combination thereof.
The term daisy chain may refer either to large scale devices connected in series, such as a series of power strips plugged into each other to form a single long line of strips, or to the wiring patterns embedded inside of devices. Other examples of devices which can be used to form daisy chains are those based on Universal Serial Bus (USB), FireWire, Thunderbolt and Ethernet cables.
Signal transmission
For analog signals, connections usually consist of a simple electrical bus and, especially in the case of a chain of many devices, may require the use of one or more repeaters or amplifiers within the chain to counteract attenuation (the natural loss of energy in such a system). Digital signals between devices may also travel on a simple electrical bus, in which case a bus terminator may be needed on the last device in the chain. However, unlike analog signals, because digital signals are discrete, they may also be electrically regenerated, but not modified, by any device in the chain.
Types
Computer hardware
Some hardware can be attached to a computing system in a daisy chain configuration by connecting each component to another similar component, rather than directly to the computing system that uses the component. Only the last component in the chain directly connects to the computing system. An example is chaining multiple components that each have a UART port to each other. The components must also behave cooperatively, e.g., only one seizes the communications bus at a time.
SCSI is an example of a digital system that is electrically a bus but, in the case of external devices, is physically wired as a daisy chain. Since the network is electrically a bus, it must be terminated and this m
|
https://en.wikipedia.org/wiki/Sides%20of%20an%20equation
|
In mathematics, LHS is informal shorthand for the left-hand side of an equation. Similarly, RHS is the right-hand side. The two sides have the same value, expressed differently, since equality is symmetric.
More generally, these terms may apply to an inequation or inequality; the right-hand side is everything on the right side of a test operator in an expression, with LHS defined similarly.
Example
The expression on the right side of the "=" sign is the right side of the equation and the expression on the left of the "=" is the left side of the equation.
For example, in
\[
x + 5 = y + 8,
\]
\(x + 5\) is the left-hand side (LHS) and \(y + 8\) is the right-hand side (RHS).
Homogeneous and inhomogeneous equations
In solving mathematical equations, particularly linear simultaneous equations, differential equations and integral equations, the terminology homogeneous is often used for equations with some linear operator L on the LHS and 0 on the RHS. In contrast, an equation with a non-zero RHS is called inhomogeneous or non-homogeneous, as exemplified by
Lf = g,
with g a fixed function, which equation is to be solved for f. Then any solution of the inhomogeneous equation may have a solution of the homogeneous equation added to it, and still remain a solution.
For example in mathematical physics, the homogeneous equation may correspond to a physical theory formulated in empty space, while the inhomogeneous equation asks for more 'realistic' solutions with some matter, or charged particles.
Syntax
More abstractly, when using infix notation
T * U
the term T stands as the left-hand side and U as the right-hand side of the operator *. This usage is less common, though.
See also
Equals sign
|
https://en.wikipedia.org/wiki/Up%20to
|
Two mathematical objects a and b are called "equal up to an equivalence relation R"
if a and b are related by R, that is,
if aRb holds, that is,
if the equivalence classes of a and b with respect to R are equal.
This figure of speech is mostly used in connection with expressions derived from equality, such as uniqueness or count.
For example, "x is unique up to R" means that all objects x under consideration are in the same equivalence class with respect to the relation R.
Moreover, the equivalence relation is often designated rather implicitly by a generating condition or transformation.
For example, the statement "an integer's prime factorization is unique up to ordering" is a concise way to say that any two lists of prime factors of a given integer are equivalent with respect to the relation R that relates two lists if one can be obtained by reordering (permuting) the other. As another example, the statement "the solution to an indefinite integral is sin(x), up to addition of a constant" tacitly employs the equivalence relation R between functions, defined by fRg if the difference f − g is a constant function, and means that the solution and the function sin(x) are equal up to this R.
In the picture, "there are 4 partitions up to rotation" means that the set has 4 equivalence classes with respect to R defined by aRb if a can be obtained from b by rotation; one representative from each class is shown in the bottom left picture part.
Equivalence relations are often used to disregard possible differences of objects, so "up to " can be understood informally as "ignoring the same subtleties as ignores".
In the factorization example, "up to ordering" means "ignoring the particular ordering".
Further examples include "up to isomorphism", "up to permutations", and "up to rotations", which are described in the Examples section.
In informal contexts, mathematicians often use the word modulo (or simply mod) for similar purposes, as in "modulo isomorphism".
Examples
Tetris
Consider the seven Tetris pieces
|
https://en.wikipedia.org/wiki/Median%20filter
|
The median filter is a non-linear digital filtering technique, often used to remove noise from an image or signal. Such noise reduction is a typical pre-processing step to improve the results of later processing (for example, edge detection on an image). Median filtering is very widely used in digital image processing because, under certain conditions, it preserves edges while removing noise (but see the discussion below), also having applications in signal processing.
Algorithm description
The main idea of the median filter is to run through the signal entry by entry, replacing each entry with the median of neighboring entries. The pattern of neighbors is called the "window", which slides, entry by entry, over the entire signal. For one-dimensional signals, the most obvious window is just the first few preceding and following entries, whereas for two-dimensional (or higher-dimensional) data the window must include all entries within a given radius or ellipsoidal region (i.e. the median filter is not a separable filter).
Worked one-dimensional example
To demonstrate, using a window size of three with one entry immediately preceding and following each entry, a median filter will be applied to the following simple one-dimensional signal:
x = (2, 3, 80, 6, 2, 3).
So, the median filtered output signal y will be:
y1 = med(2, 3, 80) = 3, (already 2, 3, and 80 are in the increasing order so no need to arrange them)
y2 = med(3, 80, 6) = med(3, 6, 80) = 6, (3, 80, and 6 are rearranged to find the median)
y3 = med(80, 6, 2) = med(2, 6, 80) = 6,
y4 = med(6, 2, 3) = med(2, 3, 6) = 3,
i.e. y = (3, 6, 6, 3).
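The worked example can be reproduced with a few lines; the sketch below simply ignores the boundary entries (so the output is two samples shorter than the input), which is one of the boundary schemes discussed next.

```python
# One-dimensional median filter with a sliding window of size 3.
from statistics import median

def median_filter_1d(x, window=3):
    half = window // 2
    return [median(x[i - half:i + half + 1]) for i in range(half, len(x) - half)]

x = [2, 3, 80, 6, 2, 3]
print(median_filter_1d(x))  # [3, 6, 6, 3], matching the worked example above
```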
Boundary issues
When implementing a median filter, the boundaries of the signal must be handled with special care, as there are not enough entries to fill an entire window. There are several schemes that have different properties that might be preferred in particular circumstances:
When calculating the median of a value near the boundary, missing values are f
|
https://en.wikipedia.org/wiki/Eocyte%20hypothesis
|
The eocyte hypothesis in evolutionary biology proposes that the eukaryotes originated from a group of prokaryotes called eocytes (later classified as Thermoproteota, a group of archaea). After his team at the University of California, Los Angeles discovered eocytes in 1984, James A. Lake formulated the hypothesis as an "eocyte tree" that proposed eukaryotes as part of the archaea. Lake hypothesised that the tree of life has only two primary branches: parkaryotes, which include Bacteria and Archaea, and karyotes, which comprise eukaryotes and eocytes. Parts of this early hypothesis were revived in a newer two-domain system of biological classification which named the primary domains as Archaea and Bacteria.
Lake's hypothesis was based on an analysis of the structural components of ribosomes. It was largely ignored, being overshadowed by the three-domain system which relied on more precise genetic analysis. In 1990, Carl Woese and his colleagues proposed that cellular life consists of three domains – Eucarya, Bacteria, and Archaea – based on the ribosomal RNA sequences. The three-domain concept was widely accepted in genetics, and became the presumptive classification system for high-level taxonomy, and was promulgated in many textbooks.
Resurgence of archaea research after the 2000s, using advanced genetic techniques, and later discoveries of new groups of archaea revived the eocyte hypothesis; consequently, the two-domain system has found wider acceptance.
Description
In 1984, James A. Lake, Michael W. Clark, Eric Henderson, and Melanie Oakes of the University of California, Los Angeles described a new group of prokaryotic organisms designated as "a group of sulfur-dependent bacteria." Based on the structure and composition of their ribosomal subunits, they found that these organisms were different from other prokaryotes, bacteria and archaea, known at the time. They named them eocytes (for "dawn cells") and proposed a new biological kingdom Eocyta. According to this disc
|
https://en.wikipedia.org/wiki/Adjacent%20channel%20power%20ratio
|
Adjacent Channel Power Ratio (ACPR) is the ratio of the total power in an adjacent channel (intermodulation signal) to the power in the main channel (useful signal).
Ratio
There are two ways of measuring ACPR. The first is to take 10·log10 of the ratio of the total output power to the power in the adjacent channel. The second (and much more popular) method is to take the ratio of the output power in a smaller bandwidth around the center of the carrier to the power in the adjacent channel, where the smaller bandwidth is equal to the bandwidth of the adjacent channel signal. The second method is more popular because it is easier to measure.
ACPR is desired to be as low as possible. A high ACPR indicates that significant spectral spreading has occurred.
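As an illustration of the second (bandwidth-limited) method, the sketch below estimates ACPR from a Welch power spectral density; the waveform, channel widths and band edges are made-up values, not from any standard.

```python
# Estimate ACPR as the ratio of integrated power in an adjacent channel to the
# power in an equal-width band around the carrier.
import numpy as np
from scipy.signal import welch

fs = 1e6                                       # sample rate (assumed)
t = np.arange(0, 0.05, 1 / fs)
rng = np.random.default_rng(0)
x = np.cos(2 * np.pi * 100e3 * t) + 0.01 * rng.standard_normal(t.size)

f, psd = welch(x, fs=fs, nperseg=4096)

def band_power(f_lo, f_hi):
    mask = (f >= f_lo) & (f < f_hi)
    return np.trapz(psd[mask], f[mask])

main = band_power(90e3, 110e3)                 # 20 kHz band around the carrier (assumed)
adjacent = band_power(110e3, 130e3)            # adjacent channel of equal width
acpr_db = 10 * np.log10(adjacent / main)       # lower (more negative) is better
print(acpr_db)
```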
See also
Spectral leakage
Spread spectrum
|
https://en.wikipedia.org/wiki/Open%20architecture
|
Open architecture is a type of computer architecture or software architecture intended to make adding, upgrading, and swapping components with other computers easy. For example, the IBM PC, Amiga 500 and Apple IIe have an open architecture supporting plug-in cards, whereas the Apple IIc computer has a closed architecture. Open architecture systems may use a standardized system bus such as S-100, PCI or ISA or they may incorporate a proprietary bus standard such as that used on the Apple II, with up to a dozen slots that allow multiple hardware manufacturers to produce add-ons, and for the user to freely install them. By contrast, closed architectures, if they are expandable at all, have one or two "expansion ports" using a proprietary connector design that may require a license fee from the manufacturer, or enhancements may only be installable by technicians with specialized tools or training.
Computer platforms may include systems with both open and closed architectures. The Mac mini and Compact Macintosh are closed; the Macintosh II and Power Mac G5 are open. Most desktop PCs are open architecture.
Similarly, an open software architecture is one in which additional software modules can be added to the basic framework provided by the architecture. Open APIs (Application Programming Interfaces) to major software products are the way in which the basic functionality of such products can be modified or extended. The Google APIs are examples. A second type of open software architecture consists of the messages that can flow between computer systems. These messages have a standard structure that can be modified or extended per agreements between the computer systems. An example is IBM's Distributed Data Management Architecture.
Open architecture allows potential users to see inside all or parts of the architecture without any proprietary constraints. Typically, an open architecture publishes all or parts of its architecture that the developer or integrator wants to
|
https://en.wikipedia.org/wiki/Five-bar%20linkage
|
In kinematics, a five-bar linkage is a mechanism with two degrees of freedom that is constructed from five links that are connected together in a closed chain. All links are connected to each other by five joints in series forming a loop. One of the links is the ground or base. This configuration is also called a pantograph; however, it is not to be confused with the parallelogram-copying pantograph linkage.
The linkage can be a one-degree-of-freedom mechanism if two gears are attached to two links and are meshed together, forming a geared five-bar mechanism.
Robotic configuration
When controlled motors actuate the linkage, the whole system (a mechanism and its actuators) becomes a robot. This is usually done by placing two servomotors (to control the two degrees of freedom) at the joints A and B, controlling the angle of the links L2 and L5. L1 is the grounded link. In this configuration, the controlled endpoint or end-effector is the point D, where the objective is to control its x and y coordinates in the plane in which the linkage resides. The angles theta 1 and theta 2 can be calculated as a function of the x,y coordinates of point D using trigonometric functions. This robotic configuration is a parallel manipulator. It is a parallel configuration robot as it is composed of two controlled serial manipulators connected to the endpoint.
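The sketch below illustrates that trigonometric inverse-kinematics calculation under an assumed labelling: joint A at the origin, joint B at (L1, 0) on the grounded link, the left chain A–C–D with link lengths L2 and L3, and the right chain B–E–D with lengths L5 and L4. It is a minimal example under these assumptions, not the formulation of any particular reference.

```python
from math import atan2, acos, hypot

def five_bar_ik(x, y, L1, L2, L3, L4, L5, elbow_left=+1, elbow_right=-1):
    """Inverse kinematics of a planar five-bar linkage (assumed layout above).

    (x, y) is the endpoint D; returns the driven angles (theta1 at A,
    theta2 at B) in radians.  Raises ValueError if D is outside the workspace.
    """
    rA = hypot(x, y)            # distance A-D
    rB = hypot(x - L1, y)       # distance B-D
    # Law of cosines in triangles A-C-D and B-E-D; the elbow signs select
    # one of the two solutions ("elbow up"/"elbow down") for each arm.
    alpha = acos((L2**2 + rA**2 - L3**2) / (2.0 * L2 * rA))
    beta  = acos((L5**2 + rB**2 - L4**2) / (2.0 * L5 * rB))
    theta1 = atan2(y, x) + elbow_left * alpha
    theta2 = atan2(y, x - L1) + elbow_right * beta
    return theta1, theta2

# Example: symmetric linkage with the endpoint above the middle of the base.
print(five_bar_ik(x=0.5, y=0.6, L1=1.0, L2=0.5, L3=0.5, L4=0.5, L5=0.5))
```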
Unlike a serial manipulator, this configuration has the advantage of having both motors grounded at the base link. As the motors can be quite massive, grounding them significantly decreases the total moment of inertia of the linkage and improves backdrivability for haptic feedback applications. On the other hand, the workspace reached by the endpoint is usually significantly smaller than that of a serial manipulator.
Kinematics and dynamics
Both the forward and inverse kinematics of this robotic configuration can be found in closed-form equations through geometric relationships. Different methods of finding both have been done by Campion and
|
https://en.wikipedia.org/wiki/MISRA%20C
|
MISRA C is a set of software development guidelines for the C programming language developed by The MISRA Consortium. Its aims are to facilitate code safety, security, portability and reliability in the context of embedded systems, specifically those systems programmed in ISO C / C90 / C99.
There is also a set of guidelines for MISRA C++ not covered by this article.
History
Draft: 1997
First edition: 1998 (rules, required/advisory)
Second edition: 2004 (rules, required/advisory)
Third edition: 2012 (directives; rules, Decidable/Undecidable)
MISRA compliance: 2016, updated 2020
For the first two editions of MISRA-C (1998 and 2004) all Guidelines were considered as Rules. With the publication of MISRA C:2012, a new category of Guideline was introduced: the Directive, whose compliance is more open to interpretation, or relates to process or procedural matters.
Adoption
Although originally specifically targeted at the automotive industry, MISRA C has evolved as a widely accepted model for best practices by leading developers in sectors including automotive, aerospace, telecom, medical devices, defense, railway, and others.
For example:
The Joint Strike Fighter project C++ Coding Standards are based on MISRA-C:1998.
The NASA Jet Propulsion Laboratory C Coding Standards are based on MISRA-C:2004.
ISO 26262 Functional Safety - Road Vehicles cites MISRA C as being an appropriate sub-set of the C language:
ISO 26262-6:2011 Part 6: Product development at the software level cites MISRA-C:2004 and MISRA AC AGC.
ISO 26262-6:2018 Part 6: Product development at the software level cites MISRA C:2012.
The AUTOSAR General Software Specification (SRS_BSW_00007) likewise cites MISRA C:
The AUTOSAR 4.2 General Software Specification requires that If the BSW Module implementation is written in C language, then it shall conform to the MISRA C:2004 Standard.
The AUTOSAR 4.3 General Software Specification requires that If the BSW Module implementation is written in C la
|
https://en.wikipedia.org/wiki/NCR%205380
|
The NCR 5380 is an early SCSI controller chip developed by NCR Microelectronics. It was popular due to its simplicity and low cost. The 5380 was used in the Macintosh Plus and in numerous SCSI cards for personal computers, including the Amiga and Atari TT. The 5380 was second sourced by several chip makers, including AMD and Zilog. The 5380 was designed by engineers at the NCR plant then located in Wichita, Kansas, and initially fabricated by NCR Microelectronics in Colorado Springs, Colorado. It was the first single-chip implementation of the SCSI-1 protocol.
The NCR 5380 also made a significant appearance in Digital Equipment Corporation's VAX computers, where it was featured on various Q-Bus modules and as an integrated SCSI controller in numerous MicroVAX, VAXstation and VAXserver computers. Many UMAX SCSI optical scanners also contain the 53C80 chip interfaced to an Intel 8031-series microcontroller.
The single-chip SCSI controller NCR 53C400 used the 5380 core.
See also
NCR 53C9x
|
https://en.wikipedia.org/wiki/Pipeline%20video%20inspection
|
Pipeline video inspection is a form of telepresence used to visually inspect the interiors of pipelines, plumbing systems, and storm drains. A common application is for a plumber to determine the condition of small diameter sewer lines and household connection drain pipes.
Older sewer lines of small diameter, typically , are made by the union of a number of short sections. The pipe segments may be made of cast iron, with to sections, but are more often made of vitrified clay pipe (VCP), a ceramic material, in , & sections. Each iron or clay segment will have an enlargement (a "bell") on one end to receive the end of the adjacent segment. Roots from trees and vegetation may work into the joins between segments and can be forceful enough to break open a larger opening in terra cotta or corroded cast iron. Eventually a root ball will form that will impede the flow and this may be cleaned out by a cutter mechanism or plumber's snake and subsequently inhibited by use of a chemical foam - a rooticide.
With modern video equipment, the interior of the pipe may be inspected - this is a form of non-destructive testing. A small diameter collector pipe will typically have a cleanout access at the far end and will be several hundred feet long, terminating at a manhole. Additional collector pipes may discharge at this manhole and a pipe (perhaps of larger diameter) will carry the effluent to the next manhole, and so forth to a pump station or treatment plant.
Without regular inspection of public sewers, a significant amount of waste may accumulate unnoticed until the system fails. In order to prevent resulting catastrophic events such as pipe bursts and raw sewage flooding onto city streets, municipalities usually conduct pipeline video inspections as a precautionary measure.
Inspection equipment
Service truck
The service truck contains a power supply in the form of a small generator, a small air-conditioned compartment containing video monitoring and recording equipment,
|
https://en.wikipedia.org/wiki/Potential%20gradient
|
In physics, chemistry and biology, a potential gradient is the local rate of change of the potential with respect to displacement, i.e. spatial derivative, or gradient. This quantity frequently occurs in equations of physical processes because it leads to some form of flux.
Definition
One dimension
The simplest definition for a potential gradient F in one dimension is the following:
F = (φ2 − φ1)/(x2 − x1) = Δφ/Δx
where φ is some type of scalar potential and x is displacement (not distance) in the x direction, the subscripts label two different positions x1 and x2, and the potentials at those points are φ1 = φ(x1) and φ2 = φ(x2). In the limit of infinitesimal displacements, the ratio of differences becomes a ratio of differentials:
F = dφ/dx.
The direction of the electric potential gradient is from x1 to x2.
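A small numerical sketch of the one-dimensional definition (the potential is a hypothetical 1/x profile; NumPy's finite-difference gradient stands in for the derivative):

```python
import numpy as np

# Hypothetical 1-D potential phi(x) = 1/x sampled on a grid; the potential
# gradient F = d(phi)/dx is estimated by finite differences.
x = np.linspace(0.1, 1.0, 200)
phi = 1.0 / x
F = np.gradient(phi, x)          # numerical potential gradient
E = -F                           # e.g. electric field = minus the gradient

# Interior points agree with the analytic derivative d(1/x)/dx = -1/x**2
assert np.allclose(F[1:-1], -1.0 / x[1:-1]**2, rtol=1e-2)
```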
Three dimensions
In three dimensions, Cartesian coordinates make it clear that the resultant potential gradient is the sum of the potential gradients in each direction:
where are unit vectors in the directions. This can be compactly written in terms of the gradient operator ,
although this final form holds in any curvilinear coordinate system, not just Cartesian.
This expression represents a significant feature of any conservative vector field , namely has a corresponding potential .
Using Stokes' theorem, this is equivalently stated as
meaning the curl, denoted ∇×, of the vector field vanishes.
Physics
Newtonian gravitation
In the case of the gravitational field , which can be shown to be conservative, it is equal to the gradient in gravitational potential :
There are opposite signs between gravitational field and potential, because the potential gradient and field are opposite in direction: as the potential increases, the gravitational field strength decreases and vice versa.
Electromagnetism
In electrostatics, the electric field is independent of time , so there is no induction of a time-dependent magnetic field by Faraday's law of induction:
which implies is the gradient of the electric potential , identical to the classic
|
https://en.wikipedia.org/wiki/Load%20profile
|
In electrical engineering, a load profile is a graph of the variation in the electrical load versus time. A load profile will vary according to customer type (typical examples include residential, commercial and industrial), temperature and holiday seasons. Power producers use this information to plan how much electricity they will need to make available at any given time. Teletraffic engineering uses a similar load curve.
Power generation
In a power system, a load curve or load profile is a chart illustrating the variation in demand/electrical load over a specific time. Generation companies use this information to plan how much power they will need to generate at any given time. A load duration curve is similar to a load curve. The information is the same but is presented in a different form. These curves are useful in the selection of generator units for supplying electricity.
Electricity distribution
In an electricity distribution grid, the load profile of electricity usage is important to the efficiency and reliability of power transmission. The power transformer or battery-to-grid are critical aspects of power distribution and sizing and modelling of batteries or transformers depends on the load profile. The factory specification of transformers for the optimization of load losses versus no-load losses is dependent directly on the characteristics of the load profile that the transformer is expected to be subjected to. This includes such characteristics as average load factor, diversity factor, utilization factor, and demand factor, which can all be calculated based on a given load profile.
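As an illustration, the sketch below computes a few of these characteristics from hypothetical hourly load profiles; the feeder data and the assumed connected load are made up for the example.

```python
import numpy as np

# Hypothetical hourly load profiles (kW) for two feeders over one day.
feeder_a = np.array([3, 3, 2, 2, 2, 3, 5, 8, 9, 9, 8, 7,
                     7, 7, 8, 9, 11, 12, 12, 10, 8, 6, 4, 3], dtype=float)
feeder_b = np.array([2, 2, 2, 2, 2, 2, 4, 6, 8, 9, 9, 9,
                     9, 8, 8, 8, 9, 10, 9, 7, 5, 4, 3, 2], dtype=float)
system = feeder_a + feeder_b                      # coincident (combined) load

load_factor = system.mean() / system.max()        # average load / peak load
diversity_factor = (feeder_a.max() + feeder_b.max()) / system.max()

connected_load = 40.0                             # assumed connected load, kW
demand_factor = system.max() / connected_load     # peak demand / connected load

print(f"load factor      {load_factor:.2f}")
print(f"diversity factor {diversity_factor:.2f}")
print(f"demand factor    {demand_factor:.2f}")
```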
On the power market so-called EFA blocks are used to specify the traded forward contract on the delivery of a certain amount of electrical energy at a certain time.
Retail energy markets
In retail energy markets, supplier obligations are settled on an hourly or subhourly basis. For most customers, consumption is measured on a monthly basis, based on meter reading s
|
https://en.wikipedia.org/wiki/List%20of%20gene%20families
|
This is a list of gene families or gene complexes, i.e. sets of genes which are related ancestrally and often serve similar biological functions. These gene families typically encode functionally related proteins, and sometimes the term gene families is a shorthand for the sets of proteins that the genes encode. They may or may not be physically adjacent on the same chromosome.
Regulatory protein gene families
14-3-3 protein family
Achaete-scute complex (neuroblast formation)
FOX proteins (forkhead box proteins)
Families containing homeobox domains
DLX gene family
Hox gene family
POU family
Krüppel-type zinc finger (ZNF)
MADS-box gene family
NOTCH2NL
P300-CBP coactivator family
SOX gene family
Immune system proteins
Immunoglobulin superfamily
Major histocompatibility complex (MHC)
Motor proteins
Dynein
Kinesin
Myosin
Signal transducing proteins
G-proteins
MAP Kinase
Olfactory receptor
Peroxiredoxin
Receptor tyrosine kinases
Transporters
ABC transporters
Antiporter
Aquaporins
Other families
See also
Protein family
Housekeeping gene
|
https://en.wikipedia.org/wiki/Pharmacometabolomics
|
Pharmacometabolomics, also known as pharmacometabonomics, is a field which stems from metabolomics, the quantification and analysis of metabolites produced by the body. It refers to the direct measurement of metabolites in an individual's bodily fluids, in order to predict or evaluate the metabolism of pharmaceutical compounds, and to better understand the pharmacokinetic profile of a drug. Alternatively, pharmacometabolomics can be applied to measure metabolite levels following the administration of a pharmaceutical compound, in order to monitor the effects of the compound on certain metabolic pathways (pharmacodynamics). This provides detailed mapping of drug effects on metabolism and the pathways that are implicated in the mechanism of variation in response to treatment. In addition, the metabolic profile of an individual at baseline (metabotype) provides information about how individuals respond to treatment and highlights heterogeneity within a disease state. All three approaches require the quantification of metabolites found in bodily fluids and tissue, such as blood or urine, and can be used in the assessment of pharmaceutical treatment options for numerous disease states.
Goals of Pharmacometabolomics
Pharmacometabolomics is thought to provide information that complements that gained from other omics, namely genomics, transcriptomics, and proteomics. Looking at the characteristics of an individual down through these different levels of detail, one obtains an increasingly accurate prediction of a person's ability to respond to a pharmaceutical compound. The genome, made up of 25,000 genes, can indicate possible errors in drug metabolism; the transcriptome, made up of 85,000 transcripts, can provide information about which genes important in metabolism are being actively transcribed; and the proteome, >10,000,000 members, depicts which proteins are active in the body to carry out these functions. Pharmacometabolomics complements the omics with direct measureme
|
https://en.wikipedia.org/wiki/Gieseking%20manifold
|
In mathematics, the Gieseking manifold is a cusped hyperbolic 3-manifold of finite volume. It is non-orientable and has the smallest volume among non-compact hyperbolic manifolds, approximately 1.0149416. It was discovered by Hugo Gieseking in 1912. The volume is called the Gieseking constant and has a closed form in terms of the Clausen function Cl2, namely (3/2)·Cl2(2π/3).
Compare the related Catalan's constant, Cl2(π/2), which also manifests as a volume.
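A quick numerical check of these constants, under the assumption that the closed form is (3/2)·Cl2(2π/3), using mpmath's Clausen function:

```python
from mpmath import mp, clsin, pi, catalan

mp.dps = 30  # working precision (decimal digits)

# Gieseking's constant: volume of the regular ideal hyperbolic tetrahedron,
# expressed via the Clausen function Cl_2(theta) = clsin(2, theta).
gieseking = mp.mpf(3) / 2 * clsin(2, 2 * pi / 3)
print(gieseking)           # ~1.0149416064...

# Catalan's constant is the analogous value Cl_2(pi/2).
print(clsin(2, pi / 2))    # ~0.9159655941...
print(catalan)             # agrees with the line above
```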
The Gieseking manifold can be constructed by removing the vertices from a tetrahedron, then gluing the faces together in pairs using affine-linear maps. Label the vertices 0, 1, 2, 3. Glue the face with vertices 0,1,2 to the face with vertices 3,1,0 in that order. Glue the face 0,2,3 to the face 3,2,1 in that order. In the hyperbolic structure of the Gieseking manifold, this ideal tetrahedron is the canonical polyhedral decomposition of David B. A. Epstein and Robert C. Penner. Moreover, the angle made by the faces is π/3. The triangulation has one tetrahedron, two faces, one edge and no vertices, so all the edges of the original tetrahedron are glued together.
The Gieseking manifold has a double cover homeomorphic to the figure-eight knot complement. The underlying compact manifold has a Klein bottle boundary, and the first homology group of the Gieseking manifold is the integers.
The Gieseking manifold is a fiber bundle over the circle with fiber the once-punctured torus and monodromy given by the linear map (x, y) ↦ (x + y, x).
The square of this map is Arnold's cat map and this gives another way to see that the Gieseking manifold is double covered by the complement of the figure-eight knot.
See also
List of mathematical constants
|
https://en.wikipedia.org/wiki/High%20availability
|
High availability (HA) is a characteristic of a system that aims to ensure an agreed level of operational performance, usually uptime, for a higher than normal period.
Modernization has resulted in an increased reliance on these systems. For example, hospitals and data centers require high availability of their systems to perform routine daily activities. Availability refers to the ability of the user community to obtain a service or good, access the system, whether to submit new work, update or alter existing work, or collect the results of previous work. If a user cannot access the system, it is – from the user's point of view – unavailable. Generally, the term downtime is used to refer to periods when a system is unavailable.
Resilience
High availability is a property of network resilience, the ability to "provide and maintain an acceptable level of service in the face of faults and challenges to normal operation." Threats and challenges for services can range from simple misconfiguration over large scale natural disasters to targeted attacks. As such, network resilience touches a very wide range of topics. In order to increase the resilience of a given communication network, the probable challenges and risks have to be identified and appropriate resilience metrics have to be defined for the service to be protected.
The importance of network resilience is continuously increasing, as communication networks are becoming a fundamental component in the operation of critical infrastructures. Consequently, recent efforts focus on interpreting and improving network and computing resilience with applications to critical infrastructures. As an example, one can consider as a resilience objective the provisioning of services over the network, instead of the services of the network itself. This may require coordinated response from both the network and from the services running on top of the network.
These services include:
supporting distributed processing
supportin
|
https://en.wikipedia.org/wiki/Ultramicrotomy
|
Ultramicrotomy is a method for cutting specimens into extremely thin slices, called ultra-thin sections, that can be studied and documented at different magnifications in a transmission electron microscope (TEM). It is used mostly for biological specimens, but sections of plastics and soft metals can also be prepared. Sections must be very thin because the 50 to 125 kV electrons of the standard electron microscope cannot pass through biological material much thicker than 150 nm. For best resolutions, sections should be from 30 to 60 nm. This is roughly equivalent to splitting a 0.1 mm-thick human hair into 2,000 slices along its diameter, or cutting a single red blood cell into 100 slices.
Ultramicrotomy process
Ultra-thin sections of specimens are cut using a specialized instrument called an "ultramicrotome". The ultramicrotome is fitted with either a diamond knife, for most biological ultra-thin sectioning, or a glass knife, often used for initial cuts. There are numerous other pieces of equipment involved in the ultramicrotomy process. Before selecting an area of the specimen block to be ultra-thin sectioned, the technician examines semithin or "thick" sections, which range from 0.5 to 2 μm. These thick sections are also known as survey sections and are viewed under a light microscope to determine whether the right area of the specimen is in a position for thin sectioning. "Ultra-thin" sections from 50 to 100 nm thick can be viewed in the TEM.
Tissue sections obtained by ultramicrotomy are compressed by the cutting force of the knife. In addition, interference microscopy of the cut surface of the blocks reveals that the sections are often not flat. With Epon or Vestopal as embedding medium the ridges and valleys usually do not exceed 0.5 μm in height, i.e., 5–10 times the thickness of ordinary sections (1).
A small sample is taken from the specimen to be investigated. Specimens may be from biological matter, like animal or plant tissue, or from inorgani
|
https://en.wikipedia.org/wiki/Nanoprobing
|
Nanoprobing is a method of extracting device electrical parameters through the use of nanoscale tungsten wires, used primarily in the semiconductor industry. The characterization of individual devices is instrumental to engineers and integrated circuit designers during initial product development and debug. It is commonly utilized in device failure analysis laboratories to aid with yield enhancement, quality and reliability issues and customer returns. Commercially available nanoprobing systems are integrated into either a vacuum-based scanning electron microscope (SEM) or atomic force microscope (AFM). Nanoprobing systems that are based on AFM technology are referred to as Atomic Force nanoProbers (AFP).
Principles and operation
AFM-based nanoprobers enable up to eight probe tips to be scanned to generate high-resolution AFM topography images, as well as Conductive AFM, Scanning Capacitance, and Electrostatic Force Microscopy images. Conductive AFM provides pico-amp resolution to identify and localize electrical failures such as shorts, opens, resistive contacts and leakage paths, enabling accurate probe positioning for current-voltage measurements. AFM-based nanoprobers enable nanometer-scale device defect localization and accurate transistor device characterization without the physical damage and electrical bias induced by high-energy electron beam exposure.
For SEM-based nanoprobers, the ultra-high resolution of the microscopes that house the nanoprobing system allows the operator to navigate the probe tips with precise movement, allowing the user to see exactly where the tips will be landed, in real time. Existing nanoprobe needles or “probe tips” have a typical end-point radius ranging from 5 to 35 nm. The fine tips enable access to individual contact nodes of modern IC transistors. Navigation of the probe tips in SEM-based nanoprobers is typically controlled by precision piezoelectric manipulators. Typical systems have anywhere from 2 to 8 probe manipulato
|
https://en.wikipedia.org/wiki/Workgroup%20%28computer%20networking%29
|
In computer networking, a workgroup is a collection of computers connected on a LAN that share common resources and responsibilities. Workgroup is Microsoft's term for a peer-to-peer local area network. Computers running Microsoft operating systems in the same workgroup may share files, printers, or Internet connections. A workgroup contrasts with a domain, in which computers rely on centralized authentication.
See also
Windows for Workgroups – the earliest version of Windows to allow a work group
Windows HomeGroup – a feature introduced in Windows 7 and later removed in Windows 10 (Version 1803) that allows work groups to share contents more easily
Browser service – the service enabled 'browsing' all the resources in work groups
Peer Name Resolution Protocol (PNRP) - IPv6-based dynamic name publication and resolution
|
https://en.wikipedia.org/wiki/Hybrid%20incompatibility
|
Hybrid incompatibility is a phenomenon in plants and animals, wherein offspring produced by the mating of two different species or populations have reduced viability and/or are less able to reproduce. Examples of hybrids include mules and ligers from the animal world, and subspecies of the Asian rice crop Oryza sativa from the plant world. Multiple models have been developed to explain this phenomenon. Recent research suggests that the source of this incompatibility is largely genetic, as combinations of genes and alleles prove lethal to the hybrid organism. Incompatibility is not solely influenced by genetics, however, and can be affected by environmental factors such as temperature. The genetic underpinnings of hybrid incompatibility may provide insight into factors responsible for evolutionary divergence between species.
Background
Hybrid incompatibility occurs when the offspring of two closely related species are not viable or suffer from infertility. Charles Darwin posited that hybrid incompatibility is not a product of natural selection, stating that the phenomenon is an outcome of the hybridizing species diverging, rather than something that is directly acted upon by selective pressures. The underlying causes of the incompatibility can be varied: earlier research focused on things like changes in ploidy in plants. More recent research has taken advantage of improved molecular techniques and has focused on the effects of genes and alleles in the hybrid and its parents.
Dobzhansky-Muller model
The first major breakthrough in the genetic basis of hybrid incompatibility is the Dobzhansky-Muller model, a combination of findings by Theodosius Dobzhansky and Hermann Joseph Muller between 1937 and 1942. The model provides an explanation as to why a negative fitness effect like hybrid incompatibility is not selected against. By hypothesizing that the incompatibility arose from alterations at two or more loci, rather than one, the incompatible alleles are in one hybrid in
|
https://en.wikipedia.org/wiki/Math%20rock
|
Math rock is a style of alternative and indie rock with roots in bands such as King Crimson and Rush. It is characterized by complex, atypical rhythmic structures (including irregular stopping and starting), counterpoint, odd time signatures, and extended chords. It bears similarities to post-rock.
Characteristics
Math rock is typified by its rhythmic complexity, seen as mathematical in character by listeners and critics. While most rock music uses a 4/4 meter (however accented or syncopated), math rock makes use of non-standard, frequently changing time signatures.
As in traditional rock, the sound is most often dominated by guitars and drums. However, drums play a greater role in math rock in providing driving, complex rhythms. Math rock guitarists make use of tapping techniques and loop pedals to build on these rhythms, as illustrated by songs like those of "math rock supergroup" Battles.
Lyrics are generally not the focus of math rock; the voice is treated as just another instrument in the mix. Often, vocals are not overdubbed, and are positioned less prominently, as in the recording style of Steve Albini. Many of math rock's best-known groups are entirely instrumental such as Don Caballero or Hella.
The term began as a joke but has developed into the accepted name for the musical style. One advocate of this is Matt Sweeney, singer with Chavez, a group often linked to the math rock scene. Despite this, not all critics see math rock as a serious sub-genre of rock.
A significant intersection exists between math rock and emo, exemplified by bands such as Tiny Moving Parts or American Football, whose sound has been described as "twinkly, mathy rock, a sound that became one of the defining traits of the emo scene throughout the 2000s".
Bands
Early
The albums Red and Discipline by King Crimson and Spiderland by Slint are generally considered seminal influences on the development of math rock. The Canadian punk rock group Nomeansno (founded
|
https://en.wikipedia.org/wiki/Vector%20calculus%20identities
|
The following are important identities involving derivatives and integrals in vector calculus.
Operator notation
Gradient
For a function in three-dimensional Cartesian coordinate variables, the gradient is the vector field:
where i, j, k are the standard unit vectors for the x, y, z-axes. More generally, for a function of n variables , also called a scalar field, the gradient is the vector field:
where are orthogonal unit vectors in arbitrary directions.
As the name implies, the gradient is proportional to and points in the direction of the function's most rapid (positive) change.
For a vector field , also called a tensor field of order 1, the gradient or total derivative is the n × n Jacobian matrix:
For a tensor field of any order k, the gradient is a tensor field of order k + 1.
For a tensor field of order k > 0, the tensor field of order k + 1 is defined by the recursive relation
where is an arbitrary constant vector.
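To make the definitions above concrete, here is a small symbolic sketch (the example fields are arbitrary) computing the gradient of a scalar field and the Jacobian of a vector field with SymPy:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')

# Gradient of a scalar field f(x, y, z): the vector of partial derivatives.
f = x**2 * y + sp.sin(z)
grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
print(grad_f.T)            # Matrix([[2*x*y, x**2, cos(z)]])

# For a vector field F, the gradient (total derivative) is the Jacobian matrix.
F = sp.Matrix([x*y, y*z, z*x])
print(F.jacobian([x, y, z]))
```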
Divergence
In Cartesian coordinates, the divergence of a continuously differentiable vector field is the scalar-valued function:
As the name implies, the divergence is a measure of how much vectors are diverging.
The divergence of a tensor field of non-zero order k is written as , a contraction to a tensor field of order k − 1. Specifically, the divergence of a vector is a scalar. The divergence of a higher order tensor field may be found by decomposing the tensor field into a sum of outer products and using the identity,
where is the directional derivative in the direction of multiplied by its magnitude. Specifically, for the outer product of two vectors,
For a tensor field of order k > 1, the tensor field of order k − 1 is defined by the recursive relation
where is an arbitrary constant vector.
Curl
In Cartesian coordinates, for the curl is the vector field:
where i, j, and k are the unit vectors for the x-, y-, and z-axes, respectively.
As the name implies the curl is a measure of how much nearby vectors te
|
https://en.wikipedia.org/wiki/Circuit%20topology%20%28electrical%29
|
The circuit topology of an electronic circuit is the form taken by the network of interconnections of the circuit components. Different specific values or ratings of the components are regarded as being the same topology. Topology is not concerned with the physical layout of components in a circuit, nor with their positions on a circuit diagram; similarly to the mathematical concept of topology, it is only concerned with what connections exist between the components. There may be numerous physical layouts and circuit diagrams that all amount to the same topology.
Strictly speaking, replacing a component with one of an entirely different type is still the same topology. In some contexts, however, these can loosely be described as different topologies. For instance, interchanging inductors and capacitors in a low-pass filter results in a high-pass filter. These might be described as high-pass and low-pass topologies even though the network topology is identical. A more correct term for these classes of object (that is, a network where the type of component is specified but not the absolute value) is prototype network.
Electronic network topology is related to mathematical topology. In particular, for networks which contain only two-terminal devices, circuit topology can be viewed as an application of graph theory. In a network analysis of such a circuit from a topological point of view, the network nodes are the vertices of graph theory, and the network branches are the edges of graph theory.
Standard graph theory can be extended to deal with active components and multi-terminal devices such as integrated circuits. Graphs can also be used in the analysis of infinite networks.
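As a small illustration of the graph-theoretic view, the sketch below models a hypothetical bridge network as a multigraph and computes its number of independent loops from the cycle rank E − V + C; the node and branch names are invented for the example.

```python
import networkx as nx

# A Wheatstone-bridge-like network treated purely as a graph: nodes are the
# circuit's junctions, edges are two-terminal branches (component values are
# irrelevant to the topology, so edges carry only an illustrative label).
g = nx.MultiGraph()
g.add_edges_from([
    ("a", "b", {"branch": "R1"}),
    ("a", "c", {"branch": "R2"}),
    ("b", "d", {"branch": "R3"}),
    ("c", "d", {"branch": "R4"}),
    ("b", "c", {"branch": "R5"}),   # the bridge branch
    ("a", "d", {"branch": "V1"}),   # the source closes the outer loop
])

nodes, branches = g.number_of_nodes(), g.number_of_edges()
# Cycle rank (number of independent loops): E - V + number of components.
independent_loops = branches - nodes + nx.number_connected_components(g)
print(nodes, branches, independent_loops)   # 4 6 3
```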
Circuit diagrams
The circuit diagrams in this article follow the usual conventions in electronics; lines represent conductors, filled small circles represent junctions of conductors, and open small circles represent terminals for connection to the outside world. In most cases, imped
|
https://en.wikipedia.org/wiki/LAVIS%20%28software%29
|
LAVIS is a software tool created by the TOOL Corporation, Japan. LAVIS is a "layout visualisation platform". It supports a variety of formats such as GDSII, OASIS and LEF/DEF and can be used as a platform for common IC processes.
|
https://en.wikipedia.org/wiki/Plasmid%20preparation
|
A plasmid preparation is a method of DNA extraction and purification for plasmid DNA. It is an important step in many molecular biology experiments and is essential for the successful use of plasmids in research and biotechnology. Many methods have been developed to purify plasmid DNA from bacteria. During the purification procedure, the plasmid DNA is often separated from contaminating proteins and genomic DNA.
These methods invariably involve three steps: growth of the bacterial culture, harvesting and lysis of the bacteria, and purification of the plasmid DNA. Purification of plasmids is central to molecular cloning. A purified plasmid can be used for many standard applications, such as sequencing and transfections into cells.
Growth of the bacterial culture
Plasmids are almost always purified from liquid bacteria cultures, usually E. coli, which have been transformed and isolated. Virtually all plasmid vectors in common use encode one or more antibiotic resistance genes as a selectable marker, for example a gene encoding ampicillin or kanamycin resistance, which allows bacteria that have been successfully transformed to multiply uninhibited. Bacteria that have not taken up the plasmid vector are assumed to lack the resistance gene, and thus only colonies representing successful transformations are expected to grow.
Bacteria are grown under favourable conditions.
Harvesting and lysis of the bacteria
There are several methods for cell lysis, including alkaline lysis, mechanical lysis, and enzymatic lysis.
Alkaline lysis
The most common method is alkaline lysis, which involves the use of a high concentration of a basic solution, such as sodium hydroxide, to lyse the bacterial cells. When bacteria are lysed under alkaline conditions (pH 12.0–12.5) both chromosomal DNA and protein are denatured; the plasmid DNA however, remains stable. Some scientists reduce the concentration of NaOH used to 0.1M in order to reduce the occurrence of ssDNA. After the addition o
|
https://en.wikipedia.org/wiki/Budan%27s%20theorem
|
In mathematics, Budan's theorem is a theorem for bounding the number of real roots of a polynomial in an interval, and computing the parity of this number. It was published in 1807 by François Budan de Boislaurent.
A similar theorem was published independently by Joseph Fourier in 1820. Each of these theorems is a corollary of the other. Fourier's statement appears more often in the 19th-century literature and has been referred to as Fourier's, Budan–Fourier, Fourier–Budan, and even Budan's theorem.
Budan's original formulation is used in fast modern algorithms for real-root isolation of polynomials.
Sign variation
Let c0, c1, ..., ck be a finite sequence of real numbers. A sign variation or sign change in the sequence is a pair of indices i < j such that ci·cj < 0 and either j = i + 1 or ch = 0 for all h such that i < h < j.
In other words, a sign variation occurs in the sequence at each place where the signs change, when ignoring zeros.
For studying the real roots of a polynomial, the number of sign variations of several sequences may be used. For Budan's theorem, it is the sequence of the coefficients. For the Fourier's theorem, it is the sequence of values of the successive derivatives at a point. For Sturm's theorem it is the sequence of values at a point of the Sturm sequence.
Descartes' rule of signs
All results described in this article are based on Descartes' rule of signs.
If p is a univariate polynomial with real coefficients, let us denote by #(p) the number of its positive real roots, counted with their multiplicity, and by v(p) the number of sign variations in the sequence of its coefficients. Descartes's rule of signs asserts that
v(p) − #(p) is a nonnegative even integer.
In particular, if v(p) ≤ 1, then one has #(p) = v(p).
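A minimal sketch of the sign-variation count and Descartes' rule, using an arbitrarily chosen example polynomial:

```python
def sign_variations(coeffs):
    """Number of sign changes in a sequence of real numbers, ignoring zeros."""
    signs = [c > 0 for c in coeffs if c != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

# Descartes' rule of signs: the number of positive real roots of p, counted
# with multiplicity, equals the sign-variation count of the coefficients
# minus a nonnegative even integer.
# p(x) = x^3 - 7x + 6 = (x - 1)(x - 2)(x + 3): coefficients 1, 0, -7, 6.
v = sign_variations([1, 0, -7, 6])
print(v)   # 2 -> p has 2 or 0 positive roots (here exactly 2: x = 1 and x = 2)
```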
Budan's statement
Given a univariate polynomial with real coefficients, let us denote by the number of real roots, counted with their multiplicities, of in a half-open interval (with real numbers). Let us denote also by the number of sign variations in the sequence of the coefficients of the polynomial
|
https://en.wikipedia.org/wiki/Derivation%20of%20the%20Routh%20array
|
The Routh array is a tabular method permitting one to establish the stability of a system using only the coefficients of the characteristic polynomial. Central to the field of control systems design, the Routh–Hurwitz theorem and Routh array emerge by using the Euclidean algorithm and Sturm's theorem in evaluating Cauchy indices.
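Before the derivation, a small computational sketch may help fix ideas: the function below builds the Routh array directly from the coefficients (a naive version that ignores the zero-pivot special cases) and reads off stability from the sign changes in the first column. The example polynomial is arbitrary.

```python
def routh_array(coeffs):
    """Routh array of a polynomial, coefficients given highest degree first.

    Naive sketch: it does not handle a zero in the first column or an
    entire row of zeros.
    """
    n = len(coeffs)
    rows = [list(coeffs[0::2]), list(coeffs[1::2])]
    # Pad the second row so both starting rows have equal length.
    rows[1] += [0.0] * (len(rows[0]) - len(rows[1]))
    for i in range(2, n):
        prev, prev2 = rows[i - 1], rows[i - 2]
        pivot = prev[0]
        row = [(pivot * prev2[j + 1] - prev2[0] * prev[j + 1]) / pivot
               for j in range(len(prev) - 1)] + [0.0]
        rows.append(row)
    return rows

# s^3 + 2s^2 + 3s + 10: first column 1, 2, -2, 10 has two sign changes,
# so two roots lie in the right half-plane and the system is unstable.
table = routh_array([1.0, 2.0, 3.0, 10.0])
print([row[0] for row in table])   # [1.0, 2.0, -2.0, 10.0]
```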
The Cauchy index
Given the system:
Assuming no roots of lie on the imaginary axis, and letting
= The number of roots of with negative real parts, and
= The number of roots of with positive real parts
then we have
Expressing in polar form, we have
where
and
from (2) note that
where
Now if the ith root of has a positive real part, then (using the notation y=(RE[y],IM[y]))
and
and
Similarly, if the ith root of has a negative real part,
and
and
From (9) to (11) we find that when the ith root of has a positive real part, and from (12) to (14) we find that when the ith root of has a negative real part. Thus,
So, if we define
then we have the relationship
and combining (3) and (17) gives us
and
Therefore, given an equation of degree we need only evaluate this function to determine , the number of roots with negative real parts and , the number of roots with positive real parts.
In accordance with (6) and Figure 1, the graph of vs , varying over an interval (a,b) where and are integer multiples of , this variation causing the function to have increased by , indicates that in the course of travelling from point a to point b, has "jumped" from to one more time than it has jumped from to . Similarly, if we vary over an interval (a,b) this variation causing to have decreased by , where again is a multiple of at both and , implies that has jumped from to one more time than it has jumped from to as was varied over the said interval.
Thus, is times the difference between the number of points at which jumps from to and the number of points at which jumps from to as
|
https://en.wikipedia.org/wiki/Codex%20Alimentarius
|
The Codex Alimentarius is a collection of internationally recognized standards, codes of practice, guidelines, and other recommendations published by the Food and Agriculture Organization of the United Nations relating to food, food production, food labeling, and food safety.
History and governance
Its name is derived from the Codex Alimentarius Austriacus. Its texts are developed and maintained by the Codex Alimentarius Commission (CAC), a body established in early November 1961 by the Food and Agriculture Organization of the United Nations (FAO), joined by the World Health Organization (WHO) in June 1962; the Commission held its first session in Rome in October 1963.
The Commission's main goals are to protect the health of consumers, to facilitate international trade, and ensure fair practices in the international food trade.
The CAC is an intergovernmental organization; the member states of the FAO and WHO send delegations to the CAC. As of 2021, there were 189 members of the CAC (188 member countries plus one member organization, the European Union (EU)) and 239 Codex observers (59 intergovernmental organizations, 164 non-governmental organizations, and 16 United Nations organizations).
The CAC develops food standards on scientific evidence furnished by the scientific committees of the FAO and WHO; the oldest of these, the Joint FAO/WHO Expert Committee on Food Additives (JECFA), was established in 1956 and predates the establishment of the CAC itself. According to a 2013 study, the CAC's primary functions are "establishing international food standards for approved food additives providing maximum levels in foods, maximum limits for contaminants and toxins, maximum residue limits for pesticides and for veterinary drugs used in veterinary animals, and establishing hygiene and technological function practice codes".
The CAC does not have regulatory authority, and the Codex Alimentarius is a reference guide, not an enforceable standard on its own. However, several nations adopt the Co
|
https://en.wikipedia.org/wiki/Lichenology
|
Lichenology is the branch of mycology that studies the lichens: composite organisms made up of an intimate symbiotic association of a microscopic alga (or a cyanobacterium) with a filamentous fungus.
Study of lichens draws knowledge from several disciplines: mycology, phycology, microbiology and botany. Scholars of lichenology are known as lichenologists.
History
The beginnings
Lichens as a group have received less attention in classical treatises on botany than other groups, although the relationship between humans and some species has been documented from early times. Several species have appeared in the works of Dioscorides, Pliny the Elder and Theophrastus, although the studies are not very deep. During the first centuries of the modern age they were usually put forward as examples of spontaneous generation and their reproductive mechanisms were totally ignored. For centuries naturalists had included lichens in diverse groups until, in the early 18th century, the French researcher Joseph Pitton de Tournefort in his Institutiones Rei Herbariae grouped them into their own genus. He adopted the Latin term lichen, which had already been used by Pliny, who had imported it from Theophrastus, but up until then this term had not been widely employed. The original meaning of the Greek word λειχήν (leichen) was moss, which in turn derives from the Greek verb λείχω (leicho), to lick, because of the great ability of these organisms to absorb water. In its original use the term signified mosses and liverworts as well as lichens. Some forty years later Dillenius in his Historia Muscorum made the first division of the group created by Tournefort, separating the sub-families Usnea, Coralloides and Lichens in response to the morphological characteristics of the lichen thallus.
After the revolution in taxonomy brought in by Linnaeus and his new system of classification lichens are retained in the Plant Kingdom forming a single group Lichen with eight divisions within the group according
|
https://en.wikipedia.org/wiki/Plasmaron
|
In physics, the plasmaron was proposed by Lundqvist in 1967 as a quasiparticle arising in a system that has strong plasmon-electron interactions. In the original work, the plasmaron was proposed to describe a secondary peak (or satellite) in the photoemission spectral function of the electron gas. More precisely it was defined as an additional zero of the quasi-particle equation . The same authors pointed out, in a subsequent work, that this extra solution might be an artifact of the used approximations:
A more mathematical discussion is provided.
The plasmaron was also studied in more recent works in the literature. It was shown, also with the support of numerical simulations, that the plasmaron energy is an artifact of the approximation used to numerically compute the spectral function, e.g. the solution of the Dyson equation for the many-body Green's function with a frequency-dependent GW self-energy. This approach gives rise to a spurious plasmaron peak instead of the plasmon satellite which can be measured experimentally.
Despite this fact, experimental observation of a plasmaron was reported in 2010 for graphene.
This observation was also supported by earlier theoretical work. However, subsequent works argued that the theoretical interpretation of the experimental measurement was not correct, in agreement with the fact that the plasmaron is only an artifact of the GW self-energy used with the Dyson equation. The artificial nature of the plasmaron peak was also shown via the comparison of experimental and numerical simulations for the photoemission spectrum of bulk silicon. Other works on the plasmaron have been published in the literature.
Observation of plasmaron peaks has also been reported in optical measurements of elemental bismuth and in other optical measurements.
|
https://en.wikipedia.org/wiki/Signal%20regeneration
|
In telecommunications, signal regeneration is signal processing that restores a signal, recovering its original characteristics.
The signal may be electrical, as in a repeater on a T-carrier line, or optical, as in an OEO optical cross-connect.
The process is used when it is necessary to change the signal type in order to transmit it via different media. Once the signal returns to the original medium, it usually has to be regenerated to bring it back to its original state.
See also
Fiber-optic communication#Regeneration
|
https://en.wikipedia.org/wiki/ILAND%20project
|
The iLAND project (middleware for deterministic dynamically reconfigurable networked embedded systems) is a cross-industry research and development project for advanced research in embedded systems. It has been developed through the collaboration of nine organisations, including industrial companies, SMEs and universities from Spain, France, Portugal and the Netherlands, and a university from the United States. The project is co-funded by the ARTEMIS Programme under the topic 'SP5 Computing Environments for Embedded Systems'.
Middleware functionalities
Merging real-time systems with service-oriented architectures enables more flexible and dynamic distributed systems with real-time features. A number of functionalities have therefore been identified to create an SOA-based middleware for deterministic reconfiguration of service-based applications (a toy sketch of some of these roles follows the list below):
Service registration/deregistration: Stores in the system the functionalities and the description of the different services.
Service discovery: Enables external actor to discover the services currently stored in the system.
Service composition: Creates the service-based application on run-time.
Service orchestration: Manages the invocation of the different services.
Service based admission test: This functionality checks if there are enough resources for the services execution in the distributed system.
Resource reservation: This functionality acquires the necessary resources in the host machine and the network.
System monitoring: This functionality measures if the resources required for the execution of services are not being exhausted.
System reconfiguration: This functionality changes the services currently running on the system by other services providing same functionality.
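The toy sketch below illustrates a few of these roles (registration, admission test, composition) under a deliberately simplified resource model of one CPU budget per node; the class, method, and service names are invented and do not reflect the actual iLAND interfaces.

```python
from dataclasses import dataclass, field

@dataclass
class Service:
    name: str
    cpu_budget: float      # fraction of a CPU required (assumed resource model)

@dataclass
class Middleware:
    """Toy sketch of the registration / admission / composition roles above."""
    cpu_capacity: float = 1.0
    registry: dict = field(default_factory=dict)
    running: list = field(default_factory=list)

    def register(self, svc: Service):
        self.registry[svc.name] = svc          # service registration

    def admission_test(self, names):
        """Check whether the named composition fits the node's resources."""
        demand = sum(self.registry[n].cpu_budget for n in names)
        return demand <= self.cpu_capacity

    def compose(self, names):
        """Admit and 'run' a service-based application if the test passes."""
        if not self.admission_test(names):
            raise RuntimeError("admission test failed: not enough resources")
        self.running = [self.registry[n] for n in names]

mw = Middleware()
mw.register(Service("camera", 0.4))
mw.register(Service("tracker", 0.3))
mw.register(Service("tracker_lowres", 0.1))
mw.compose(["camera", "tracker"])              # fits: 0.7 <= 1.0
print(mw.admission_test(["camera", "tracker", "tracker_lowres"]))  # True (0.8)
```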
Middleware architecture
The architecture of the iLAND middleware consists of two layers. The high-level layer is the Core Functionality Layer, which is oriented to the management of the real-time service model. The low-level layer creates bridges to the system resourc
|
https://en.wikipedia.org/wiki/Essentially%20unique
|
In mathematics, the term essentially unique is used to describe a weaker form of uniqueness, where an object satisfying a property is "unique" only in the sense that all objects satisfying the property are equivalent to each other. The notion of essential uniqueness presupposes some form of "sameness", which is often formalized using an equivalence relation.
A related notion is a universal property, where an object is not only essentially unique, but unique up to a unique isomorphism (meaning that it has trivial automorphism group). In general there can be more than one isomorphism between examples of an essentially unique object.
Examples
Set theory
At the most basic level, there is an essentially unique set of any given cardinality, whether one labels the elements or .
In this case, the non-uniqueness of the isomorphism (e.g., match 1 to or 1 to ) is reflected in the symmetric group.
On the other hand, there is an essentially unique ordered set of any given finite cardinality: if one writes and , then the only order-preserving isomorphism is the one which maps 1 to , 2 to , and 3 to .
Number theory
The fundamental theorem of arithmetic establishes that the factorization of any positive integer into prime numbers is essentially unique, i.e., unique up to the ordering of the prime factors.
Group theory
In the context of classification of groups, there is an essentially unique group containing exactly 2 elements. Similarly, there is also an essentially unique group containing exactly 3 elements: the cyclic group of order three. In fact, regardless of how one chooses to write the three elements and denote the group operation, all such groups can be shown to be isomorphic to each other, and hence are "the same".
On the other hand, there does not exist an essentially unique group with exactly 4 elements, as there are in this case two non-isomorphic groups in total: the cyclic group of order 4 and the Klein four group.
Measure theory
There is an essentially
|
https://en.wikipedia.org/wiki/Hume%20%28programming%20language%29
|
Hume is a functionally based programming language developed at the University of St Andrews and Heriot-Watt University in Scotland since the year 2000. The language name is both an acronym meaning 'Higher-order Unified Meta-Environment' and an honorific to the 18th-century philosopher David Hume. It targets real-time embedded computing systems, aiming to produce a design that is both highly abstract and yet allows precise extraction of time and space execution costs. This allows guaranteeing the bounded time and space demands of executing programs.
Hume combines functional programming ideas with ideas from finite state automata. Automata are used to structure communicating programs into a series of "boxes", where each box maps inputs to outputs in a purely functional way using high-level pattern-matching. It is structured as a series of levels, each of which exposes different machine properties.
Design model
The Hume language design attempts to maintain the essential properties and features required by the embedded systems domain (especially for transparent time and space costing) whilst incorporating as high a level of program abstraction as possible. It aims to target applications ranging from simple microcontrollers to complex real-time systems such as smartphones. This ambitious goal requires incorporating both low-level notions such as interrupt handling, and high-level ones of data structure abstraction etc. Such systems are programmed in widely differing ways, but the language design should accommodate such varying requirements.
Hume is a three-layer language: an outer (static) declaration/metaprogramming layer, an intermediate coordination layer describing a static layout of dynamic processes and the associated devices, and an inner layer describing each process as a (dynamic) mapping from patterns to expressions. The inner layer is stateless and purely functional.
Rather than attempting to apply cost modeling and correctness proving technology to an
|
https://en.wikipedia.org/wiki/Computational%20complexity%20of%20mathematical%20operations
|
The following tables list the computational complexity of various algorithms for common mathematical operations.
Here, complexity refers to the time complexity of performing computations on a multitape Turing machine. See big O notation for an explanation of the notation used.
Note: Due to the variety of multiplication algorithms, M(n) below stands in for the complexity of the chosen multiplication algorithm.
Arithmetic functions
This table lists the complexity of mathematical operations on integers.
On stronger computational models, specifically a pointer machine and consequently also a unit-cost random-access machine, it is possible to multiply two n-bit numbers in time O(n).
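For illustration, one concrete choice for the multiplication step is Karatsuba's algorithm, which runs in O(n^log2 3) ≈ O(n^1.585) digit operations; the best bounds in the tables assume faster FFT-based methods, so this is only a sketch.

```python
def karatsuba(x: int, y: int) -> int:
    """Multiply two non-negative integers in O(n^log2(3)) digit operations."""
    if x < 10 or y < 10:                     # base case: single-digit operand
        return x * y
    m = max(len(str(x)), len(str(y))) // 2   # split point (number of digits)
    high_x, low_x = divmod(x, 10**m)
    high_y, low_y = divmod(y, 10**m)
    z0 = karatsuba(low_x, low_y)
    z2 = karatsuba(high_x, high_y)
    # Three recursive multiplications instead of four:
    z1 = karatsuba(low_x + high_x, low_y + high_y) - z0 - z2
    return z2 * 10**(2 * m) + z1 * 10**m + z0

assert karatsuba(12345678, 87654321) == 12345678 * 87654321
```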
Algebraic functions
Here we consider operations over polynomials and denotes their degree; for the coefficients we use a unit-cost model, ignoring the number of bits in a number. In practice this means that we assume them to be machine integers.
Special functions
Many of the methods in this section are given in Borwein & Borwein.
Elementary functions
The elementary functions are constructed by composing arithmetic operations, the exponential function, the natural logarithm, trigonometric functions, and their inverses. The complexity of an elementary function is equivalent to that of its inverse, since all elementary functions are analytic and hence invertible by means of Newton's method. In particular, if either the exponential function or the logarithm in the complex domain can be computed with some complexity, then that complexity is attainable for all other elementary functions.
Below, the size refers to the number of digits of precision at which the function is to be evaluated.
It is not known whether the above complexity is optimal for elementary functions; the best known lower bound is the trivial one.
Non-elementary functions
Mathematical constants
This table gives the complexity of computing approximations to the given constants to n correct digits.
Number theory
Algorithms for number theoretical cal
|
https://en.wikipedia.org/wiki/IEC%2061131
|
IEC 61131 is an IEC standard for programmable controllers. It was first published in 1993; the current (third) edition dates from 2013. It was known as IEC 1131 before the change in numbering system by IEC. The parts of the IEC 61131 standard are prepared and maintained by working group 7, programmable control systems, of subcommittee SC 65B of Technical Committee TC65 of the IEC.
Sections of IEC 61131
Standard IEC 61131 is divided into several parts:
Part 1: General information. It is the introductory chapter; it contains definitions of terms that are used in the subsequent parts of the standard and outlines the main functional properties and characteristics of PLCs.
Part 2: Equipment requirements and tests - establishes the requirements and associated tests for programmable controllers and their peripherals. This standard prescribes: the normal service conditions and requirements (for example, requirements related with climatic conditions, transport and storage, electrical service, etc.); functional requirements (power supply & memory, digital and analog I/Os); functional type tests and verification (requirements and tests on environmental, vibration, drop, free fall, I/O, power ports, etc.) and electromagnetic compatibility (EMC) requirements and tests that programmable controllers must implement. This standard can serve as a basis in the evaluation of safety programmable controllers to IEC 61508.
Part 3: Programming languages
Part 4: User guidelines
Part 5: Communications
Part 6: Functional safety
Part 7: Fuzzy control programming
Part 8: Guidelines for the application and implementation of programming languages
Part 9: Single-drop digital communication interface for small sensors and actuators (SDCI, marketed as IO-Link)
Part 10: PLC open XML exchange format for the export and import of IEC 61131-3 projects
Related standards
IEC 61499 Function Block
PLCopen has developed several standards and working groups.
TC1 - Standards
TC2 - Functions
TC3
|
https://en.wikipedia.org/wiki/List%20of%20Laplace%20transforms
|
The following is a list of Laplace transforms for many common functions of a single variable. The Laplace transform is an integral transform that takes a function of a positive real variable (often time) to a function of a complex variable (frequency).
Properties
The Laplace transform of a function can be obtained using the formal definition of the Laplace transform. However, some properties of the Laplace transform can be used to obtain the Laplace transform of some functions more easily.
Linearity
For functions f(t) and g(t) and scalars a and b, the Laplace transform satisfies L{a·f(t) + b·g(t)} = a·F(s) + b·G(s)
and is, therefore, regarded as a linear operator.
Time shifting
The Laplace transform of f(t − a)·u(t − a), with a > 0, is e^(−as)·F(s).
Frequency shifting
F(s − a) is the Laplace transform of e^(at)·f(t).
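These properties can be checked symbolically; the sketch below uses SymPy with an arbitrary example function f(t) = e^(−2t), and verifies the time-shift property directly from the defining integral.

```python
import sympy as sp

t, s, a = sp.symbols('t s a', positive=True)
f = sp.exp(-2 * t)                                # example time-domain function

F = sp.laplace_transform(f, t, s, noconds=True)
print(F)                                          # 1/(s + 2)

# Frequency shifting: e^(a*t) f(t)  <->  F(s - a)
G = sp.laplace_transform(sp.exp(a * t) * f, t, s, noconds=True)
print(sp.simplify(G - F.subs(s, s - a)))          # 0

# Time shifting, checked from the definition: L{f(t-a) u(t-a)} = e^(-a s) F(s)
delayed = sp.integrate(f.subs(t, t - a) * sp.exp(-s * t), (t, a, sp.oo))
print(sp.simplify(delayed - sp.exp(-a * s) * F))  # 0
```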
Explanatory notes
The unilateral Laplace transform takes as input a function whose time domain is the non-negative reals, which is why all of the time domain functions in the table below are multiples of the Heaviside step function, u(t).
The entries of the table that involve a time delay τ are required to be causal (meaning that τ > 0). A causal system is a system where the impulse response h(t) is zero for all time t prior to t = 0. In general, the region of convergence for causal systems is not the same as that of anticausal systems.
The following functions and variables are used in the table below:
$\delta$ represents the Dirac delta function.
$u(t)$ represents the Heaviside step function. Literature may refer to this by other notation, including $1(t)$ or $H(t)$.
$\Gamma(z)$ represents the Gamma function.
$\gamma$ is the Euler–Mascheroni constant.
$t$ is a real number. It typically represents time, although it can represent any independent dimension.
$s$ is the complex frequency domain parameter, and $\operatorname{Re}(s)$ is its real part.
$n$ is an integer.
$\alpha$, $\beta$, $\tau$, and $\omega$ are real numbers.
is a complex number.
Table
See also
List of Fourier transforms
|
https://en.wikipedia.org/wiki/PowerEdge%20VRTX
|
Dell PowerEdge VRTX is a computer hardware product line from Dell.
It is a mini-blade chassis with a built-in storage system. The VRTX comes in two models: a 19" rack version that is 5 rack units high, or a stand-alone tower system.
Specifications
The VRTX system is partially based on the Dell M1000e blade-enclosure and shares some technologies and components. There are also some differences with that system. The M1000e can support an EqualLogic storage area network that connects the servers to the storage via iSCSI, while the VRTX uses a shared PowerEdge RAID Controller (6Gbit PERC8). A second difference is the option to add certain PCIe cards (Gen2 support) and assign them to any of the four servers.
Servers: The VRTX chassis has 4 half-height slots available for Ivy-Bridge based PowerEdge blade servers. At launch the PE-M520 (Xeon E5-2400v2) and the PE-M620 (Xeon E5-2600v2) were the only two supported server blades; however, the M520 has since been discontinued. The same blades are used in the M1000e, but for use in the VRTX they need a specific configuration, using two PCIe 2.0 mezzanine cards per server. A conversion kit is available from Dell to allow moving a blade from an M1000e to a VRTX chassis.
Storage: The VRTX chassis includes shared storage slots that connect to a single or dual PERC 8 controller(s) via switched 6Gbit SAS. This controller, which is managed through the CMC, allows RAID groups to be configured and then subdivided into individual virtual disks that can be presented to either a single blade or multiple blades. The shared storage slots are either 12 x 3.5" HDD slots or 24 x 2.5" HDD slots depending on the VRTX chassis purchased. Dell offers 12Gbit SAS disks for the VRTX, but these will operate at the slower 6Gbit rate for compatibility with the older PERC8 and SAS switches.
Networking: The VRTX chassis has a built in IOM for supporting ethernet traffic to the server blades. At present the options for this IOM ar
|
https://en.wikipedia.org/wiki/AY-3-8500
|
The AY-3-8500 "Ball & Paddle" integrated circuit was the first in a series of ICs from General Instrument designed for the consumer video game market. These chips were designed to output video to an RF modulator, which would then display the game on a domestic television set. The AY-3-8500 contained six selectable games — tennis (a.k.a. Pong), hockey (or soccer), squash, practice, and two shooting games. The AY-3-8500 was the 625-line PAL version and the AY-3-8500-1 was the 525-line NTSC version. It was introduced in 1976, Coleco becoming the first customer having been introduced to the IC development by Ralph H. Baer. A minimum number of external components were needed to build a complete system.
The AY-3-8500 was the first version. It played seven Pong variations. The video was in black-and-white, although it was possible to colorize the game by using an additional chip, such as the AY-3-8515.
Games
Six selectable games for one or two players were included:
In addition, a seventh undocumented game could be played when none of the previous six was selected: Handicap, a hockey variant where the player on the right has a third paddle. This game was implemented on very few systems.
Usage
The AY-3-8500 was designed to be powered by six 1.5 V cells (9 V). Its specified operation is at 6-7 V and a maximum of 12 V instead of the 5 V standard for logic. The nominal clock was 2.0 MHz, yielding a 500 ns pixel width. One way to generate such a clock is to divide a 14.31818 MHz 4 × colorburst clock by 7, producing 2.04545 MHz. It featured independent video outputs for left player, right player, ball, and playground+counter, that were summed using resistors, allowing designers to use a different luminance for each one. It was housed in a standard 28-pin DIP.
Applications
Some of the dedicated consoles employing the AY-3-8500 (there are at least two hundred different consoles using this chip):
Sears Hockey Pong
Coleco Telstar series (Coleco Telstar, Coleco Telstar Clas
|
https://en.wikipedia.org/wiki/Network%20delay
|
Network delay is a design and performance characteristic of a telecommunications network. It specifies the latency for a bit of data to travel across the network from one communication endpoint to another. It is typically measured in multiples or fractions of a second. Delay may differ slightly, depending on the location of the specific pair of communicating endpoints. Engineers usually report both the maximum and average delay, and they divide the delay into several parts:
Processing delay: time it takes a router to process the packet header
Queuing delay: time the packet spends in routing queues
Transmission delay: time it takes to push the packet's bits onto the link
Propagation delay: time for a signal to propagate through the media
A certain minimum level of delay is experienced by signals due to the time it takes to transmit a packet serially through a link. This delay is extended by more variable levels of delay due to network congestion. IP network delays can range from a few milliseconds to several hundred milliseconds.
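As a rough illustration of how the components above combine, here is a minimal Python sketch; the packet size, link rate, distance, and per-hop times are invented for the example and are not taken from the article.

```python
# Hypothetical illustration: summing the four delay components for one packet
# crossing a single link. All numbers are made up for the example.

def one_way_delay(packet_bits, link_bps, distance_m, propagation_mps,
                  processing_s=0.0, queuing_s=0.0):
    """Return the total one-way delay in seconds for one packet over one link."""
    transmission_s = packet_bits / link_bps        # time to push the bits onto the link
    propagation_s = distance_m / propagation_mps   # time for the signal to travel
    return processing_s + queuing_s + transmission_s + propagation_s

# Example: a 1500-byte packet, a 100 Mbit/s link, 2000 km of fibre (~2/3 c),
# 20 us of processing and 1 ms spent in queues.
delay = one_way_delay(packet_bits=1500 * 8,
                      link_bps=100e6,
                      distance_m=2_000_000,
                      propagation_mps=2e8,
                      processing_s=20e-6,
                      queuing_s=1e-3)
print(f"total one-way delay: {delay * 1e3:.2f} ms")   # about 11.32 ms
```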
See also
Age of Information
End-to-end delay
Lag (video games)
Latency (engineering)
Minimum-Pairs Protocol
Round-trip delay
|
https://en.wikipedia.org/wiki/Hybrid%20Scheduling
|
Hybrid Scheduling is a class of scheduling mechanisms that mix different scheduling criteria or disciplines in one algorithm. For example, scheduling uplink and downlink traffic in a WLAN (Wireless Local Area Network, such as IEEE 802.11e) using a single discipline or framework is an instance of hybrid scheduling. Other examples include a scheduling scheme that can provide differentiated and integrated (guaranteed) services in one discipline. Another example could be scheduling of node communications where centralized communications and distributed communications coexist. Further examples of such schedulers are found in the following articles:
|
https://en.wikipedia.org/wiki/Foias%20constant
|
In mathematical analysis, the Foias constant is a real number named after Ciprian Foias.
It is defined in the following way: for every real number x1 > 0, there is a sequence (xn) defined by the recurrence relation $x_{n+1} = \left(1 + \frac{1}{x_n}\right)^{n}$
for n = 1, 2, 3, .... The Foias constant is the unique choice α such that if x1 = α then the sequence diverges to infinity. For all other values of x1, the sequence is divergent as well, but it has two accumulation points: 1 and infinity. Numerically, it is
$\alpha \approx 1.187452351\ldots$
No closed form for the constant is known.
When x1 = α, the growth rate of the sequence (xn) is given by the limit $\lim_{n \to \infty} x_n \frac{\log n}{n} = 1,$ where "log" denotes the natural logarithm.
The same methods used in the proof of the uniqueness of the Foias constant may also be applied to other similar recursive sequences.
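As a numerical illustration (not from the article), the constant can be approximated by bisection on the starting value. The sketch below relies on two consequences of the recurrence as written above: each step is strictly decreasing in its argument, so x_n is a strictly monotone function of x1 (increasing for odd n, decreasing for even n), and the critical trajectory grows only about as fast as n / log n, so a term far above that reveals on which side of α the starting value lies. The thresholds and iteration limits are assumptions chosen for double precision.

```python
# A minimal numerical sketch: approximate the Foias constant by bisection on x1.

def side_of_alpha(x1, blow_up=1e6, max_steps=400):
    """Return +1 if x1 > alpha, -1 if x1 < alpha, 0 if undecided."""
    x, n = x1, 1
    while n < max_steps:
        x = (1.0 + 1.0 / x) ** n              # x now holds x_{n+1}
        n += 1
        if x > blow_up:                       # far above the critical ~ n/log(n) path
            return +1 if n % 2 == 1 else -1   # x_n increases with x1 iff n is odd
    return 0

lo, hi = 1.0, 2.0
for _ in range(60):                           # ~60 bisections reach double precision
    mid = 0.5 * (lo + hi)
    s = side_of_alpha(mid)
    if s == 0:
        break
    lo, hi = (lo, mid) if s > 0 else (mid, hi)

print(f"Foias constant ~ {0.5 * (lo + hi):.10f}")   # approximately 1.1874523511
```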
See also
Mathematical constant
Notes and references
Mathematical analysis
Mathematical constants
|
https://en.wikipedia.org/wiki/Gerontology
|
Gerontology ( ) is the study of the social, cultural, psychological, cognitive, and biological aspects of aging. The word was coined by Ilya Ilyich Mechnikov in 1903, from the Greek (), meaning "old man", and (), meaning "study of". The field is distinguished from geriatrics, which is the branch of medicine that specializes in the treatment of existing disease in older adults. Gerontologists include researchers and practitioners in the fields of biology, nursing, medicine, criminology, dentistry, social work, physical and occupational therapy, psychology, psychiatry, sociology, economics, political science, architecture, geography, pharmacy, public health, housing, and anthropology.
The multidisciplinary nature of gerontology means that there are a number of sub-fields which overlap with gerontology. There are policy issues, for example, involved in government planning and the operation of nursing homes, investigating the effects of an aging population on society, and the design of residential spaces for older people that facilitate the development of a sense of place or home. Dr. Lawton, a behavioral psychologist at the Philadelphia Geriatric Center, was among the first to recognize the need for living spaces designed to accommodate the elderly, especially those with Alzheimer's disease. As an academic discipline the field is relatively new. The USC Leonard Davis School of Gerontology created the first PhD, master's and bachelor's degree programs in gerontology in 1975.
History
In the medieval Islamic world, several physicians wrote on issues related to Gerontology. Avicenna's The Canon of Medicine (1025) offered instruction for the care of the aged, including diet and remedies for problems including constipation. Arabic physician Ibn Al-Jazzar Al-Qayrawani (Algizar, c. 898–980) wrote on the aches and conditions of the elderly. His scholarly work covers sleep disorders, forgetfulness, how to strengthen memory, and causes of mortality. Ishaq ibn Hunayn (died 910
|
https://en.wikipedia.org/wiki/Contact%20pad
|
Contact pads or bond pads are small, conductive surface areas of a printed circuit board (PCB) or die of an integrated circuit. They are often made of gold, copper, or aluminum and measure mere micrometres wide. Pads are positioned on the edges of die, to facilitate connections without shorting. Contact pads exist to provide a larger surface area for connections to a microchip or PCB, allowing for the input and output of data and power.
Possible methods of connecting contact pads to a system include soldering, wirebonding, or flip chip mounting.
Contact pads are created alongside a chip's functional structure during the photolithography steps of the fabrication process, and afterwards they are tested. During the test process, contact pads are probed with the needles of a probe card on Automatic Test Equipment in order to check for faults via electrical resistance.
Further reading
Kraig Mitzner, Complete PCB Design Using OrCAD Capture and PCB Editor, Newnes, 2009 .
Jing Li, Evaluation and Improvement of the Robustness of a PCB Pad in a Lead-free Environment, ProQuest, 2007 .
Deborah Lea, Fredirikus Jonck, Christopher Hunt, Solderability Measurements of PCB Pad Finishes and Geometries, National Physical Laboratory, 2001 .
Electronic engineering
Printed circuit board manufacturing
|
https://en.wikipedia.org/wiki/Outline%20of%20software%20engineering
|
The following outline is provided as an overview of and topical guide to software engineering:
Software engineering – application of a systematic, disciplined, quantifiable approach to the development, operation, and maintenance of software; that is the application of engineering to software.
The ACM Computing Classification system is a poly-hierarchical ontology that organizes the topics of the field and can be used in semantic web applications and as a de facto standard classification system for the field. The major section "Software and its Engineering" provides an outline and ontology for software engineering.
Software applications
Software engineers build software (applications, operating systems, system software) that people use.
Applications influence software engineering by pressuring developers to solve problems in new ways. For example, consumer software emphasizes low cost, medical software emphasizes high quality, and Internet commerce software emphasizes rapid development.
Business software
Accounting software
Analytics
Data mining: closely related to databases
Decision support systems
Airline reservations
Banking
Automated teller machines
Cheque processing
Credit cards
Commerce
Trade
Auctions (e.g. eBay)
Reverse auctions (procurement)
Bar code scanners
Compilers
Parsers
Compiler optimization
Interpreters
Linkers
Loaders
Communication
E-mail
Instant messengers
VOIP
Calendars — scheduling and coordinating
Contact managers
Computer graphics
Animation
Special effects for video and film
Editing
Post-processing
Cryptography
Databases: support almost every field
Embedded systems: Both software engineers and traditional engineers write software control systems for embedded products.
Automotive software
Avionics software
Heating ventilating and air conditioning (HVAC) software
Medical device software
Telephony
Telemetry
Engineering: All traditional engineering branches use software extensively. Engineers use spreadsheets more than they ever used calculators
|
https://en.wikipedia.org/wiki/Biocybernetics
|
Biocybernetics is the application of cybernetics to biological science disciplines such as neurology and multicellular systems. Biocybernetics plays a major role in systems biology, seeking to integrate different levels of information to understand how biological systems function. The field of cybernetics itself has origins in biological disciplines such as neurophysiology. Biocybernetics is an abstract science and is a fundamental part of theoretical biology, based upon the principles of systemics. Biocybernetics has also been described as the psychological study of how the human body functions as a biological system and performs complex mental functions such as thought processing, motion, and maintaining homeostasis (PsychologyDictionary.org). Within this field, many distinct qualities allow for distinctions between different cybernetic groups, such as humans and social insects like bees and ants. Humans work together, but they also have individual thoughts that allow them to act on their own, while worker bees follow the commands of the queen bee (Seeley, 1989). Although humans often work together, they can also separate from the group and think for themselves (Gackenbach, J. 2007). A unique example of this on the human side of biocybernetics is society during the colonization period, when Great Britain established its colonies in North America and Australia. Many of the traits and qualities of the mother country were inherited by the colonies, along with niche qualities that were unique to them, such as their own language and personality, much like vines and grasses in which the parent plant produces offshoots that spread from the core. Once the shoots grow their own roots and are separated from the mother plant, they survive independently and are considered plants in their own right. Society is more closely related to plants than to animals in this respect since, like plants, there is no distinct separation between parent and offspring. The branching of society is more similar t
|
https://en.wikipedia.org/wiki/Virus%20classification
|
Virus classification is the process of naming viruses and placing them into a taxonomic system similar to the classification systems used for cellular organisms.
Viruses are classified by phenotypic characteristics, such as morphology, nucleic acid type, mode of replication, host organisms, and the type of disease they cause. The formal taxonomic classification of viruses is the responsibility of the International Committee on Taxonomy of Viruses (ICTV) system, although the Baltimore classification system can be used to place viruses into one of seven groups based on their manner of mRNA synthesis. Specific naming conventions and further classification guidelines are set out by the ICTV.
A catalogue of all the world's known viruses has been proposed and, in 2013, some preliminary efforts were underway.
Definitions
Species definition
Species form the basis for any biological classification system. Before 1982, it was thought that viruses could not be made to fit Ernst Mayr's reproductive concept of species, and so were not amenable to such treatment. In 1982, the ICTV started to define a species as "a cluster of strains" with unique identifying qualities. In 1991, the more specific principle that a virus species is a polythetic class of viruses that constitutes a replicating lineage and occupies a particular ecological niche was adopted.
In July 2013, the ICTV definition of species changed to state: "A species is a monophyletic group of viruses whose properties can be distinguished from those of other species by multiple criteria." These criteria include the structure of the capsid, the existence of an envelope, the gene expression program for its proteins, host range, pathogenicity, and most importantly genetic sequence similarity and phylogenetic relationship.
The actual criteria used vary by the taxon, and can be inconsistent (arbitrary similarity thresholds) or unrelated to lineage (geography) at times. The matter is, for many, not yet settled.
Virus defi
|
https://en.wikipedia.org/wiki/Mathematical%20diagram
|
Mathematical diagrams, such as charts and graphs, are mainly designed to convey mathematical relationships—for example, comparisons over time.
Specific types of mathematical diagrams
Argand diagram
A complex number can be visually represented as a pair of numbers forming a vector on a diagram called an Argand diagram
The complex plane is sometimes called the Argand plane because it is used in Argand diagrams. These are named after Jean-Robert Argand (1768–1822), although they were first described by Norwegian-Danish land surveyor and mathematician Caspar Wessel (1745–1818). Argand diagrams are frequently used to plot the positions of the poles and zeroes of a function in the complex plane.
The concept of the complex plane allows a geometric interpretation of complex numbers. Under addition, they add like vectors. The multiplication of two complex numbers can be expressed most easily in polar coordinates — the magnitude or modulus of the product is the product of the two absolute values, or moduli, and the angle or argument of the product is the sum of the two angles, or arguments. In particular, multiplication by a complex number of modulus 1 acts as a rotation.
Butterfly diagram
In the context of fast Fourier transform algorithms, a butterfly is a portion of the computation that combines the results of smaller discrete Fourier transforms (DFTs) into a larger DFT, or vice versa (breaking a larger DFT up into subtransforms). The name "butterfly" comes from the shape of the data-flow diagram in the radix-2 case, as described below. The same structure can also be found in the Viterbi algorithm, used for finding the most likely sequence of hidden states.
The butterfly diagram shows a data-flow diagram connecting the inputs x (left) to the outputs y that depend on them (right) for a "butterfly" step of a radix-2 Cooley–Tukey FFT algorithm. This diagram resembles a butterfly as in the Morpho butterfly shown for comparison, hence the name.
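A single radix-2 decimation-in-time butterfly can be written in a few lines. The sketch below (not from the article) combines two half-size DFT outputs with one twiddle factor; the tiny 2-point example at the end is purely illustrative.

```python
import cmath

def butterfly(a, b, k, N):
    """One radix-2 DIT butterfly: combine half-size DFT outputs a and b."""
    w = cmath.exp(-2j * cmath.pi * k / N)   # twiddle factor W_N^k
    t = w * b
    return a + t, a - t                     # the two "wings" of the butterfly

# Combining two 1-point DFTs into the 2-point DFT of [1, 2]:
print(butterfly(1, 2, k=0, N=2))            # (3+0j) and (-1+0j), the DFT of [1, 2]
```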
Commutative diagram
In
|
https://en.wikipedia.org/wiki/Rent%27s%20rule
|
Rent's rule pertains to the organization of computing logic, specifically the relationship between the number of external signal connections to a logic block (i.e., the number of "pins") with the number of logic gates in the logic block, and has been applied to circuits ranging from small digital circuits to mainframe computers. Put simply, it states that there is a simple power law relationship between these two values (pins and gates).
E. F. Rent's discovery and first publications
In the 1960s, E. F. Rent, an IBM employee, found a remarkable trend between the number of pins (terminals, T) at the boundaries of integrated circuit designs at IBM and the number of internal components (g), such as logic gates or standard cells. On a log–log plot, these datapoints were on a straight line, implying a power-law relation $T = t\,g^{p}$, where t and p are constants (p < 1.0, and generally 0.5 < p < 0.8).
Rent's findings in IBM-internal memoranda were published in the IBM Journal of Research and Development in 2005, but the relation was described in 1971 by Landman and Russo. They performed a hierarchical circuit partitioning in such a way that at each hierarchical level (top-down) the fewest interconnections had to be cut to partition the circuit (in more or less equal parts). At each partitioning step, they noted the number of terminals and the number of components in each partition and then partitioned the sub-partitions further. They found the power-law rule applied to the resulting T versus g plot and named it "Rent's rule".
Rent's rule is an empirical result based on observations of existing designs, and therefore it is less applicable to the analysis of non-traditional circuit architectures. However, it provides a useful framework with which to compare similar architectures.
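In practice the Rent exponent is estimated much as Landman and Russo did: by fitting a straight line to terminal and component counts on a log-log plot, since log T = log t + p log g. The sketch below (not from the article, with made-up data points) shows such a fit.

```python
import numpy as np

# Hypothetical (gate count, terminal count) pairs for partitions of a design.
g = np.array([16, 64, 256, 1024, 4096])   # internal components per partition
T = np.array([12, 30, 75, 190, 480])      # observed external terminals

# Fit log T = p * log g + log t.
p, log_t = np.polyfit(np.log(g), np.log(T), 1)
print(f"Rent exponent p ~ {p:.2f}, coefficient t ~ {np.exp(log_t):.2f}")
```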
Theoretical basis
Christie and Stroobandt later derived Rent's rule theoretically for homogeneous systems and pointed out that the amount of optimization achieved in placement is reflected by the paramete
|
https://en.wikipedia.org/wiki/Substrate%20%28biology%29
|
In biology, a substrate is the surface on which an organism (such as a plant, fungus, or animal) lives. A substrate can include biotic or abiotic materials and animals. For example, encrusting algae that lives on a rock (its substrate) can be itself a substrate for an animal that lives on top of the algae. Inert substrates are used as growing support materials in the hydroponic cultivation of plants. In biology substrates are often activated by the nanoscopic process of substrate presentation.
In agriculture and horticulture
Cellulose substrate
Expanded clay aggregate (LECA)
Rock wool
Potting soil
Soil
In animal biotechnology
Requirements for animal cell and tissue culture
Requirements for animal cell and tissue culture are the same as described for plant cell, tissue and organ culture (In Vitro Culture Techniques: The Biotechnological Principles). Desirable requirements are (i) air conditioning of a room, (ii) hot room with temperature recorder, (iii) microscope room for carrying out microscopic work where different types of microscopes should be installed, (iv) dark room, (v) service room, (vi) sterilization room for sterilization of glassware and culture media, and (vii) preparation room for media preparation, etc. In addition, the storage areas should be such that the following can be kept properly: (i) liquids-ambient (4-20°C), (ii) glassware-shelving, (iii) plastics-shelving, (iv) small items-drawers, (v) specialized equipments-cupboard, slow turnover, (vi) chemicals-sidled containers.
For cell growth
There are many types of vertebrate cells that require support for their growth in vitro otherwise they will not grow properly. Such cells are called anchorage-dependent cells. Therefore, many substrates which may be adhesive (e.g. plastic, glass, palladium, metallic surfaces, etc.) or non-adhesive (e.g. agar, agarose, etc.) types may be used as discussed below:
Plastic as a substrate. Disposable plastics are cheaper substrate as they are commonly made
|
https://en.wikipedia.org/wiki/Fuzzy%20electronics
|
Fuzzy electronics is an electronic technology that uses fuzzy logic, instead of the two-state Boolean logic more commonly used in digital electronics. Fuzzy electronics is fuzzy logic implemented on dedicated hardware. This is to be compared with fuzzy logic implemented in software running on a conventional processor. Fuzzy electronics has a wide range of applications, including control systems and artificial intelligence.
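For contrast with two-state Boolean logic, the following minimal sketch (not from the article) evaluates the common min/max/complement fuzzy operators on graded truth values in [0, 1]; the variable names and values are purely illustrative.

```python
def fuzzy_and(a, b): return min(a, b)      # standard (Zadeh) conjunction
def fuzzy_or(a, b):  return max(a, b)      # standard disjunction
def fuzzy_not(a):    return 1.0 - a        # complement

hot, humid = 0.7, 0.4                      # graded memberships, not True/False
print(fuzzy_and(hot, humid))               # 0.4 -> "hot AND humid" is partly true
print(fuzzy_or(hot, humid))                # 0.7
print(fuzzy_not(hot))                      # approximately 0.3
```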
History
The first fuzzy electronic circuit was built by Takeshi Yamakawa et al. in 1980 using discrete bipolar transistors. The first industrial fuzzy application was in a cement kiln in Denmark in 1982. The first VLSI fuzzy electronics was by Masaki Togai and Hiroyuki Watanabe in 1984. In 1987, Yamakawa built the first analog fuzzy controller. The first digital fuzzy processors came in 1988 by Togai (Russo, pp. 2-6).
In the early 1990s, the first fuzzy logic chips were presented to the public. Two companies, Omron and NEC, announced the development of dedicated fuzzy electronic hardware in 1991. Two years later, the Japanese Omron Corporation demonstrated a working fuzzy chip during a technical fair.
See also
Defuzzification
Fuzzy set
Fuzzy set operations
|
https://en.wikipedia.org/wiki/Photoheterotroph
|
Photoheterotrophs (Gk: photo = light, hetero = (an)other, troph = nourishment) are heterotrophic phototrophs—that is, they are organisms that use light for energy, but cannot use carbon dioxide as their sole carbon source. Consequently, they use organic compounds from the environment to satisfy their carbon requirements; these compounds include carbohydrates, fatty acids, and alcohols. Examples of photoheterotrophic organisms include purple non-sulfur bacteria, green non-sulfur bacteria, and heliobacteria. These microorganisms are ubiquitous in aquatic habitats, occupy unique niche-spaces, and contribute to global biogeochemical cycling. Recent research has also indicated that the oriental hornet and some aphids may be able to use light to supplement their energy supply.
Research
Studies have shown that mammalian mitochondria can also capture light and synthesize ATP when mixed with pheophorbide, a light-capturing metabolite of chlorophyll. Research demonstrated that the same metabolite when fed to the worm Caenorhabditis elegans leads to increase in ATP synthesis upon light exposure, along with an increase in life span.
Furthermore, inoculation experiments suggest that mixotrophic Ochromonas danica (i.e., Golden algae)—and comparable eukaryotes—favor photoheterotrophy in oligotrophic (i.e., nutrient-limited) aquatic habitats. This preference may increase energy-use efficiency and growth by reducing investment in inorganic carbon fixation (e.g., production of autotrophic machineries such as RuBisCo and PSII).
Metabolism
Photoheterotrophs generate ATP using light, in one of two ways: they use a bacteriochlorophyll-based reaction center, or they use a bacteriorhodopsin. The chlorophyll-based mechanism is similar to that used in photosynthesis, where light excites the molecules in a reaction center and causes a flow of electrons through an electron transport chain (ETS). This flow of electrons through the proteins causes hydrogen ions to be pumped across a membrane
|
https://en.wikipedia.org/wiki/List%20of%20order%20structures%20in%20mathematics
|
In mathematics, and more specifically in order theory, several different types of ordered set have been studied.
They include:
Cyclic orders, orderings in which triples of elements are either clockwise or counterclockwise
Lattices, partial orders in which each pair of elements has a greatest lower bound and a least upper bound. Many different types of lattice have been studied; see map of lattices for a list.
Partially ordered sets (or posets), orderings in which some pairs are comparable and others might not be (see the sketch after this list)
Preorders, a generalization of partial orders allowing ties (represented as equivalences and distinct from incomparabilities)
Semiorders, partial orders determined by comparison of numerical values, in which values that are too close to each other are incomparable; a subfamily of partial orders with certain restrictions
Total orders, orderings that specify, for every two distinct elements, which one is less than the other
Weak orders, generalizations of total orders allowing ties (represented either as equivalences or, in strict weak orders, as transitive incomparabilities)
Well-orders, total orders in which every non-empty subset has a least element
Well-quasi-orderings, a class of preorders generalizing the well-orders
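As a small illustration of the partial-order entry above (not from the article), the sketch below checks that divisibility on a finite set of integers is reflexive, antisymmetric, and transitive, while leaving some pairs incomparable; the set itself is an arbitrary example.

```python
from itertools import product

S = [1, 2, 3, 4, 6, 12]
leq = lambda a, b: b % a == 0          # "a divides b"

reflexive     = all(leq(a, a) for a in S)
antisymmetric = all(not (leq(a, b) and leq(b, a)) or a == b
                    for a, b in product(S, S))
transitive    = all(not (leq(a, b) and leq(b, c)) or leq(a, c)
                    for a, b, c in product(S, S, S))

print(reflexive, antisymmetric, transitive)   # True True True
print(leq(2, 3) or leq(3, 2))                 # False: 2 and 3 are incomparable
```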
See also
Glossary of order theory
List of order theory topics
Mathematics-related lists
Order theory
|
https://en.wikipedia.org/wiki/Off-flavour
|
Off-flavours or off-flavors (see spelling differences) are taints in food products caused by the presence of undesirable compounds. They can originate in raw materials, from chemical changes during food processing and storage, and from micro-organisms. Off-flavours are a recurring issue in drinking water supply and many food products.
Water bodies are often affected by geosmin and 2-methylisoborneol, affecting the flavour of water for drinking and of fish growing in that water. Haloanisoles similarly affect water bodies, and are a recognised cause of off-flavour in wine. Cows grazing on weeds such as wild garlic can produce a ‘weedy’ off-flavour in milk.
Many more examples can be seen throughout food production sectors including in oats, coffee, glucose syrup and brewing.
|
https://en.wikipedia.org/wiki/Point%20process%20notation
|
In probability and statistics, point process notation comprises the range of mathematical notation used to symbolically represent random objects known as point processes, which are used in related fields such as stochastic geometry, spatial statistics and continuum percolation theory and frequently serve as mathematical models of random phenomena, representable as points, in time, space or both.
The notation varies due to the histories of certain mathematical fields and the different interpretations of point processes, and borrows notation from mathematical areas of study such as measure theory and set theory.
Interpretation of point processes
The notation, as well as the terminology, of point processes depends on their setting and interpretation as mathematical objects which under certain assumptions can be interpreted as random sequences of points, random sets of points or random counting measures.
Random sequences of points
In some mathematical frameworks, a given point process may be considered as a sequence of points with each point randomly positioned in d-dimensional Euclidean space Rd as well as some other more abstract mathematical spaces. In general, whether or not a random sequence is equivalent to the other interpretations of a point process depends on the underlying mathematical space, but this holds true for the setting of finite-dimensional Euclidean space Rd.
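A concrete way to picture these interpretations (not from the article) is to simulate one realisation of a homogeneous Poisson point process on the unit square: the result can be read as a finite random sequence of points in R^2, as a random set, or, by counting the points that fall in a region, as a random counting measure. The intensity and the region are arbitrary choices for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
intensity = 50                                  # expected number of points per unit area
n = rng.poisson(intensity * 1.0)                # random number of points in the square
points = rng.uniform(0.0, 1.0, size=(n, 2))     # their random locations

# Counting measure of the lower-left quarter [0, 0.5] x [0, 0.5]:
count = int(np.sum(np.all(points < 0.5, axis=1)))
print(n, count)
```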
Random set of points
A point process is called simple if no two (or more) points coincide in location with probability one. Given that point processes are often simple and the order of the points does not matter, a collection of random points can be considered as a random set of points. The theory of random sets was independently developed by David Kendall and Georges Matheron. In terms of being considered as a random set, a sequence of random points is a random closed set if the sequence has no accumulation points with probability one.
A point process is often denoted by a single l
|
https://en.wikipedia.org/wiki/List%20of%20theorems%20called%20fundamental
|
In mathematics, a fundamental theorem is a theorem which is considered to be central and conceptually important for some topic. For example, the fundamental theorem of calculus gives the relationship between differential calculus and integral calculus. The names are mostly traditional, so that for example the fundamental theorem of arithmetic is basic to what would now be called number theory. Some of these are classification theorems of objects which are mainly dealt with in the field. For instance, the fundamental theorem of curves describes the classification of regular curves in space up to translation and rotation.
Likewise, the mathematical literature sometimes refers to the fundamental lemma of a field. The term lemma is conventionally used to denote a proven proposition which is used as a stepping stone to a larger result, rather than as a useful statement in-and-of itself.
Fundamental theorems of mathematical topics
Fundamental theorem of algebra
Fundamental theorem of algebraic K-theory
Fundamental theorem of arithmetic
Fundamental theorem of Boolean algebra
Fundamental theorem of calculus
Fundamental theorem of calculus for line integrals
Fundamental theorem of curves
Fundamental theorem of cyclic groups
Fundamental theorem of dynamical systems
Fundamental theorem of equivalence relations
Fundamental theorem of exterior calculus
Fundamental theorem of finitely generated abelian groups
Fundamental theorem of finitely generated modules over a principal ideal domain
Fundamental theorem of finite distributive lattices
Fundamental theorem of Galois theory
Fundamental theorem of geometric calculus
Fundamental theorem on homomorphisms
Fundamental theorem of ideal theory in number fields
Fundamental theorem of Lebesgue integral calculus
Fundamental theorem of linear algebra
Fundamental theorem of linear programming
Fundamental theorem of noncommutative algebra
Fundamental theorem of projective geometry
Fundamental theorem of random fields
Fu
|
https://en.wikipedia.org/wiki/Glossary%20of%20mathematical%20jargon
|
The language of mathematics has a vast vocabulary of specialist and technical terms. It also has a certain amount of jargon: commonly used phrases which are part of the culture of mathematics, rather than of the subject. Jargon often appears in lectures, and sometimes in print, as informal shorthand for rigorous arguments or precise ideas. Much of this is common English, but with a specific non-obvious meaning when used in a mathematical sense.
Some phrases, like "in general", appear below in more than one section.
Philosophy of mathematics
abstract nonsense: A tongue-in-cheek reference to category theory, using which one can employ arguments that establish a (possibly concrete) result without reference to any specifics of the present problem. For that reason, it is also known as general abstract nonsense or generalized abstract nonsense.
canonical: A reference to a standard or choice-free presentation of some mathematical object (e.g., canonical map, canonical form, or canonical ordering). The same term can also be used more informally to refer to something "standard" or "classic". For example, one might say that Euclid's proof is the "canonical proof" of the infinitude of primes.
deep: A result is called "deep" if its proof requires concepts and methods that are advanced beyond the concepts needed to formulate the result. For example, the prime number theorem — originally proved using techniques of complex analysis — was once thought to be a deep result until elementary proofs were found. On the other hand, the fact that π is irrational is usually considered a deep result, because it requires a considerable development of real analysis before the proof can be established — even though the claim itself can be stated in terms of simple number theory and geometry.
elegant: An aesthetic term referring to the ability of an idea to provide insight into mathematics, whether by unifying disparate fields, introducing a new perspective on a single field, or by providing a
|
https://en.wikipedia.org/wiki/Nessum
|
Nessum is a communication technology that can be used in a variety of media, including wired, wireless, and underwater, using high frequencies (kHz to MHz bands). It is standardized as IEEE P1901c.
Overview
Nessum has two types of communication: wired (Nessum WIRE) and wireless (Nessum AIR).
Wired communication
Nessum WIRE can be used for various types of lines such as power lines, twisted pair lines, coaxial cable lines, and telephone lines. The communication distance is about 100m to 200m for power lines and 2,000m for coaxial cables. In addition, when an automatic relay function called multi-hop (ITU-T G.9905) is utilized, a maximum of 10 stages of relay is possible. With a maximum physical speed of 1 Gbps and effective speeds ranging from several Mbps to several tens of Mbps, this technology is used to reduce network construction costs by utilizing existing lines, to increase the speed of low-speed wired communication lines, to supplement wireless communication where it cannot reach, and to reduce the number of lines in equipment.
Wireless communication
Nessum AIR is a short-range wireless communication mode. It uses magnetic-field communication at short range, and the communication distance can be controlled in the range of a few centimeters to 100 centimeters. Maximum physical speed is 1 Gbps, with an effective speed of 100 Mbps.
Technical overview
Physical layer (PHY)
The physical layer uses Wavelet OFDM (Wavelet Orthogonal Frequency Division Multiplexing). While a guard interval is required in ordinary OFDM systems, the Wavelet OFDM system eliminates the guard interval and increases the occupancy rate of the data portion, thereby achieving high efficiency. In addition, due to the bandwidth limitation of each subcarrier, the level of sidelobes is set low, which facilitates the formation of spectral notches. This minimizes interference with existing systems and allows for flexible compliance with frequency utilization regulations. Furthermore, Pulse-
|
https://en.wikipedia.org/wiki/Instant%20rice
|
Instant rice is a white rice that is partly precooked and then is dehydrated and packed in a dried form similar in appearance to that of regular white rice. That process allows the product to be later cooked as if it were normal rice but with a typical cooking time of 5 minutes, not the 20–30 minutes needed by white rice (or the still greater time required by brown rice). This process was invented by Ataullah K. Ozai‐Durrani in 1939 and mass-marketed by General Foods starting in 1946 as Minute Rice, which is still made.
Instant rice is not the "microwave-ready" rice that is pre-cooked but not dehydrated; such rice is fully cooked and ready to eat, normally after cooking in its sealed package in a microwave oven for as little as 1 minute for a portion. Another distinct product is parboiled rice (also called "converted" rice, a trademark for what was long sold as Uncle Ben's converted rice); brown rice is parboiled to preserve nutrients that are lost in the preparation of white rice, not to reduce cooking time.
Preparation process
Instant rice is made using several methods. The most common method is similar to the home cooking process. The rice is blanched in hot water, steamed, and rinsed. It is then placed in large ovens for dehydration until the moisture content reaches approximately twelve percent or less. The basic principle involves using hot water or steam to form cracks or holes in the kernels before dehydrating. In the subsequent cooking, water can more easily penetrate into the cracked grain, allowing for a short cooking time.
Advantages and disadvantages
The notable advantage of instant rice is the rapid cooking time: some brands can be ready in as little as three minutes. Currently, several companies, Asian as well as American, have developed brands which only require 90 seconds to cook, much like a cup of instant noodles.
However, instant rice is more expensive than regular white rice due to the cost of the processing. The "cracking" process can
|
https://en.wikipedia.org/wiki/List%20of%20first-order%20theories
|
In first-order logic, a first-order theory is given by a set of axioms in some language. This entry lists some of the more common examples used in model theory and some of their properties.
Preliminaries
For every natural mathematical structure there is a signature σ listing the constants, functions, and relations of the theory together with their arities, so that the object is naturally a σ-structure. Given a signature σ there is a unique first-order language Lσ that can be used to capture the first-order expressible facts about the σ-structure.
There are two common ways to specify theories:
List or describe a set of sentences in the language Lσ, called the axioms of the theory.
Give a set of σ-structures, and define a theory to be the set of sentences in Lσ holding in all these models. For example, the "theory of finite fields" consists of all sentences in the language of fields that are true in all finite fields.
An Lσ theory may:
be consistent: no proof of contradiction exists;
be satisfiable: there exists a σ-structure for which the sentences of the theory are all true (by the completeness theorem, satisfiability is equivalent to consistency);
be complete: for any statement, either it or its negation is provable;
have quantifier elimination;
eliminate imaginaries;
be finitely axiomatizable;
be decidable: There is an algorithm to decide which statements are provable;
be recursively axiomatizable;
be model complete or sub-model complete;
be κ-categorical: All models of cardinality κ are isomorphic;
be stable or unstable;
be ω-stable (same as totally transcendental for countable theories);
be superstable
have an atomic model;
have a prime model;
have a saturated model.
Pure identity theories
The signature of the pure identity theory is empty, with no functions, constants, or relations.
Pure identity theory has no (non-logical) axioms. It is decidable.
One of the few interesting properties that can be stated in the language of pure identity theory
|
https://en.wikipedia.org/wiki/Continuous%20or%20discrete%20variable
|
In mathematics and statistics, a quantitative variable may be continuous or discrete; values of the former are typically obtained by measuring, and values of the latter by counting. If it can take on two particular real values such that it can also take on all real values between them (even values that are arbitrarily close together), the variable is continuous in that interval. If it can take on a value such that there is a non-infinitesimal gap on each side of it containing no values that the variable can take on, then it is discrete around that value. In some contexts a variable can be discrete in some ranges of the number line and continuous in others.
Continuous variable
A continuous variable is a variable whose value is obtained by measuring, i.e., one which can take on an uncountable set of values.
For example, a variable over a non-empty range of the real numbers is continuous if it can take on any value in that range. The reason is that any range of real numbers between $a$ and $b$ with $a < b$ is uncountable.
Methods of calculus are often used in problems in which the variables are continuous, for example in continuous optimization problems.
In statistical theory, the probability distributions of continuous variables can be expressed in terms of probability density functions.
In continuous-time dynamics, the variable time is treated as continuous, and the equation describing the evolution of some variable over time is a differential equation. The instantaneous rate of change is a well-defined concept.
Discrete variable
In contrast, a variable is a discrete variable if and only if there exists a one-to-one correspondence between this variable and $\mathbb{N}$, the set of natural numbers. In other words, a discrete variable over a particular interval of real values is one for which, for any value in the range that the variable is permitted to take on, there is a positive minimum distance to the nearest other permissible value. The number of permitted values is either finite or countably infinite.
|
https://en.wikipedia.org/wiki/Spectrum%20analyzer
|
A spectrum analyzer measures the magnitude of an input signal versus frequency within the full frequency range of the instrument. The primary use is to measure the power of the spectrum of known and unknown signals. The input signal that most common spectrum analyzers measure is electrical; however, spectral compositions of other signals, such as acoustic pressure waves and optical light waves, can be considered through the use of an appropriate transducer. Spectrum analyzers for other types of signals also exist, such as optical spectrum analyzers which use direct optical techniques such as a monochromator to make measurements.
By analyzing the spectra of electrical signals, dominant frequency, power, distortion, harmonics, bandwidth, and other spectral components of a signal can be observed that are not easily detectable in time domain waveforms. These parameters are useful in the characterization of electronic devices, such as wireless transmitters.
The display of a spectrum analyzer has frequency displayed on the horizontal axis and the amplitude on the vertical axis. To the casual observer, a spectrum analyzer looks like an oscilloscope, which plots amplitude on the vertical axis but time on the horizontal axis. In fact, some lab instruments can function either as an oscilloscope or a spectrum analyzer.
History
The first spectrum analyzers, in the 1960s, were swept-tuned instruments.
Following the discovery of the fast Fourier transform (FFT) in 1965, the first FFT-based analyzers were introduced in 1967.
Today, there are three basic types of analyzer: the swept-tuned spectrum analyzer, the vector signal analyzer, and the real-time spectrum analyzer.
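The FFT-based approach can be illustrated in a few lines; in the sketch below (signal parameters invented for the example) the one-sided magnitude spectrum of a sampled two-tone signal is estimated and the dominant frequency reported.

```python
import numpy as np

fs = 1000.0                                   # sample rate in Hz
t = np.arange(0, 1.0, 1.0 / fs)
x = 1.0 * np.sin(2 * np.pi * 50 * t) + 0.3 * np.sin(2 * np.pi * 120 * t)

spectrum = np.abs(np.fft.rfft(x)) / len(x)    # one-sided magnitude spectrum
freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)

print(f"dominant component: {freqs[np.argmax(spectrum)]:.1f} Hz")   # ~50.0 Hz
```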
Types
Spectrum analyzer types are distinguished by the methods used to obtain the spectrum of a signal. There are swept-tuned and fast Fourier transform (FFT) based spectrum analyzers:
A swept-tuned analyzer uses a superheterodyne receiver to down-convert a portion of the input signal spectrum to the ce
|
https://en.wikipedia.org/wiki/Blue%20whale
|
The blue whale (Balaenoptera musculus) is a marine mammal and a baleen whale. Reaching a maximum confirmed length of 29.9 m (98 ft) and weighing up to 199 tonnes, it is the largest animal known ever to have existed. The blue whale's long and slender body can be of various shades of greyish-blue dorsally and somewhat lighter underneath. Four subspecies are recognized: B. m. musculus in the North Atlantic and North Pacific, B. m. intermedia in the Southern Ocean, B. m. brevicauda (the pygmy blue whale) in the Indian Ocean and South Pacific Ocean, B. m. indica in the Northern Indian Ocean. There is also a population in the waters off Chile that may constitute a fifth subspecies.
In general, blue whale populations migrate between their summer feeding areas near the poles and their winter breeding grounds near the tropics. There is also evidence of year-round residencies, and partial or age/sex-based migration. Blue whales are filter feeders; their diet consists almost exclusively of krill. They are generally solitary or gather in small groups, and have no well-defined social structure other than mother-calf bonds. The fundamental frequency for blue whale vocalizations ranges from 8 to 25 Hz and the production of vocalizations may vary by region, season, behavior, and time of day. Orcas are their only natural predators.
The blue whale was once abundant in nearly all the Earth's oceans until the end of the 19th century. It was hunted almost to the point of extinction by whalers until the International Whaling Commission banned all blue whale hunting in 1966. The International Union for Conservation of Nature has listed blue whales as Endangered as of 2018. It continues to face numerous man-made threats such as ship strikes, pollution, ocean noise and climate change.
Taxonomy
Nomenclature
The genus name, Balaenoptera, means winged whale while the species name, musculus, could mean "muscle" or a diminutive form of "mouse", possibly a pun by Carl Linnaeus when he named the species in Systema N
|
https://en.wikipedia.org/wiki/Interconnect%20%28integrated%20circuits%29
|
In integrated circuits (ICs), interconnects are structures that connect two or more circuit elements (such as transistors) together electrically. The design and layout of interconnects on an IC is vital to its proper function, performance, power efficiency, reliability, and fabrication yield. The material interconnects are made from depends on many factors. Chemical and mechanical compatibility with the semiconductor substrate and the dielectric between the levels of interconnect is necessary, otherwise barrier layers are needed. Suitability for fabrication is also required; some chemistries and processes prevent the integration of materials and unit processes into a larger technology (recipe) for IC fabrication. In fabrication, interconnects are formed during the back-end-of-line after the fabrication of the transistors on the substrate.
Interconnects are classified as local or global interconnects depending on the signal propagation distance they are able to support. The width and thickness of the interconnect, as well as the material from which it is made, are some of the significant factors that determine the distance a signal may propagate. Local interconnects connect circuit elements that are very close together, such as transistors separated by ten or so other contiguously laid out transistors. Global interconnects can transmit further, such as over large-area sub-circuits. Consequently, local interconnects may be formed from materials with relatively high electrical resistivity such as polycrystalline silicon (sometimes silicided to extend its range) or tungsten. To extend the distance an interconnect may reach, various circuits such as buffers or restorers may be inserted at various points along a long interconnect.
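A rough sketch (not from the article) of why material choice is tied to interconnect length: using the elementary formula R = ρL / (W·T) with textbook resistivities and invented geometry, a short tungsten local interconnect still has far lower resistance than a long copper global interconnect of the same cross-section.

```python
def wire_resistance(rho_ohm_m, length_m, width_m, thickness_m):
    """Resistance of a rectangular wire: R = rho * L / (W * T)."""
    return rho_ohm_m * length_m / (width_m * thickness_m)

# 10 um of tungsten local interconnect vs 1 mm of copper global interconnect,
# both 100 nm wide and 100 nm thick (illustrative numbers only).
r_local  = wire_resistance(5.6e-8, 10e-6, 100e-9, 100e-9)   # tungsten, ~5.6 uOhm*cm
r_global = wire_resistance(1.7e-8, 1e-3,  100e-9, 100e-9)   # copper,   ~1.7 uOhm*cm
print(f"local (W): {r_local:.0f} ohm, global (Cu): {r_global:.0f} ohm")   # 56 vs 1700
```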
Interconnect properties
The geometric properties of an interconnect are width, thickness, spacing (the distance between an interconnect and another on the same level), pitch (the sum of the width and spacing), and aspect ratio, or AR, (the thickn
|
https://en.wikipedia.org/wiki/Load-balanced%20switch
|
A load-balanced switch is a switch architecture which guarantees 100% throughput with no central arbitration at all, at the cost of sending each packet across the crossbar twice. Load-balanced switches are a subject of research for large routers scaled past the point of practical central arbitration.
Introduction
Internet routers are typically built using line cards connected with a switch. Routers supporting moderate total bandwidth may use a bus as their switch, but high bandwidth routers typically use some sort of crossbar interconnection. In a crossbar, each output connects to one input, so that information can flow through every output simultaneously. Crossbars used for packet switching are typically reconfigured tens of millions of times per second. The schedule of these configurations is determined by a central arbiter, for example a Wavefront arbiter, in response to requests by the line cards to send information to one another.
Perfect arbitration would result in throughput limited only by the maximum throughput of each crossbar input or output. For example, if all traffic coming into line cards A and B is destined for line card C, then the maximum traffic that cards A and B can process together is limited by C. Perfect arbitration has been shown to require massive amounts of computation that scale up much faster than the number of ports on the crossbar. Practical systems use imperfect arbitration heuristics (such as iSLIP) that can be computed in reasonable amounts of time.
A load-balanced switch is not related to a load balancing switch, which refers to a kind of router used as a front end to a farm of web servers to spread requests to a single website across many servers.
Basic architecture
As shown in the figure to the right, a load-balanced switch has N input line cards, each of rate R, each connected to N buffers by a link of rate R/N. Those buffers are in turn each connected to N output line cards, each of rate R, by links of rate R/N.
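The following much-simplified Python sketch models only the spreading idea: each input sends successive packets to the intermediate buffers in round-robin order, independent of destination, so the buffers are loaded evenly even when all traffic heads for one output. The per-slot service of the second stage, the R/N link rates, and the packet-reordering issues of a real design are ignored; everything here is illustrative rather than taken from the article.

```python
from collections import deque

N = 4
buffers = [deque() for _ in range(N)]       # one FIFO at each intermediate port
spread = [0] * N                            # per-input round-robin pointer

def stage1(input_port, packet):
    """First crossing: send the packet to the next intermediate buffer in turn."""
    b = spread[input_port]
    buffers[b].append(packet)
    spread[input_port] = (b + 1) % N

def stage2():
    """Second crossing (one time slot): each non-empty buffer forwards one packet."""
    return [buf.popleft() for buf in buffers if buf]

# All traffic from inputs 0 and 1 heads to output 2, yet it is spread evenly.
for i in range(8):
    stage1(input_port=i % 2, packet={"src": i % 2, "dst": 2, "seq": i})
print([len(buf) for buf in buffers])        # [2, 2, 2, 2]
print(stage2())
```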
|
https://en.wikipedia.org/wiki/DECbit
|
DECbit is a TCP congestion control technique implemented in routers to avoid congestion. Its utility is to predict possible congestion and prevent it.
When a router wants to signal congestion to the sender, it adds a bit in the header of packets sent. When a packet arrives at the router, the router calculates the average queue length for the last (busy + idle) period plus the current busy period. (The router is busy when it is transmitting packets, and idle otherwise). When the average queue length exceeds 1, then the router sets the congestion indication bit in the packet header of arriving packets.
When the destination replies, the corresponding ACK includes a set congestion bit. The sender receives the ACK and calculates how many packets it received with the congestion indication bit set to one. If less than half of the packets in the last window had the congestion indication bit set, then the window is increased linearly. Otherwise, the window is decreased exponentially.
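The sender-side rule just described can be written compactly. In the sketch below only the one-half threshold comes from the text; the additive step and the multiplicative factor are illustrative placeholders.

```python
def update_window(cwnd, acks_with_bit, acks_total,
                  additive_step=1.0, multiplicative_factor=0.875):
    """Adjust the congestion window from the congestion bits seen in the last window."""
    if acks_total and acks_with_bit / acks_total >= 0.5:
        return max(1.0, cwnd * multiplicative_factor)   # congestion signalled: back off
    return cwnd + additive_step                         # otherwise grow linearly

cwnd = 10.0
cwnd = update_window(cwnd, acks_with_bit=2, acks_total=10)   # -> 11.0
cwnd = update_window(cwnd, acks_with_bit=7, acks_total=10)   # -> 9.625
print(cwnd)
```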
This technique dynamically manages the window so as to avoid congestion, reducing the load when congestion is detected, and tries to balance bandwidth against delay.
Note that this technique does not always make effective use of the line, because it fails to take full advantage of the available bandwidth. Besides, the fact that the queue has grown in size from one cycle to another does not always mean there is congestion.
|
https://en.wikipedia.org/wiki/Number%20theoretic%20Hilbert%20transform
|
The number theoretic Hilbert transform is an extension of the discrete Hilbert transform to integers modulo a prime $p$. The transformation operator is a circulant matrix.
The number theoretic transform is meaningful in the ring $\mathbb{Z}_m$, when the modulus $m$ is not prime, provided a principal root of order n exists.
The NHT matrix, where , has the form
The rows are the cyclic permutations of the first row, or the columns may be seen as the cyclic permutations of the first column. The NHT is its own inverse: where I is the identity matrix.
The number theoretic Hilbert transform can be used to generate sets of orthogonal discrete sequences that have applications in signal processing, wireless systems, and cryptography. Other ways to generate constrained orthogonal sequences also exist.
|
https://en.wikipedia.org/wiki/Tyranny%20of%20numbers
|
The tyranny of numbers was a problem faced in the 1960s by computer engineers. Engineers were unable to increase the performance of their designs due to the huge number of components involved. In theory, every component needed to be wired to every other component (or at least many other components) and were typically strung and soldered by hand. In order to improve performance, more components would be needed, and it seemed that future designs would consist almost entirely of wiring.
History
The first known recorded use of the term in this context was made by the Vice President of Bell Labs in an article celebrating the 10th anniversary of the invention of the transistor, for the "Proceedings of the IRE" (Institute of Radio Engineers), June 1958. Referring to the problems many designers were having, he wrote:
At the time, computers were typically built up from a series of "modules", each module containing the electronics needed to perform a single function. A complex circuit like an adder would generally require several modules working in concert. The modules were typically built on printed circuit boards of a standardized size, with a connector on one edge that allowed them to be plugged into the power and signaling lines of the machine, and were then wired to other modules using twisted pair or coaxial cable.
Since each module was relatively custom, modules were assembled and soldered by hand or with limited automation. As a result, they suffered major reliability problems. Even a single bad component or solder joint could render the entire module inoperative. Even with properly working modules, the mass of wiring connecting them together was another source of construction and reliability problems. As computers grew in complexity, and the number of modules increased, the complexity of making a machine actually work grew more and more difficult. This was the "tyranny of numbers".
It was precisely this problem that Jack Kilby was thinking about while working
|
https://en.wikipedia.org/wiki/Null%20%28mathematics%29
|
In mathematics, the word null (from German null meaning "zero", which is from Latin nullus meaning "none") is often associated with the concept of zero or the concept of nothing. It is used in varying contexts, from "having zero members in a set" (e.g., null set) to "having a value of zero" (e.g., null vector).
In a vector space, the null vector is the neutral element of vector addition; depending on the context, a null vector may also be a vector mapped to some null by a function under consideration (such as a quadratic form coming with the vector space, see null vector, a linear mapping given as matrix product or dot product, a seminorm in a Minkowski space, etc.). In set theory, the empty set, that is, the set with zero elements, denoted "{}" or "∅", may also be called null set. In measure theory, a null set is a (possibly nonempty) set with zero measure.
A null space of a mapping is the part of the domain that is mapped into the null element of the image (the inverse image of the null element). For example, in linear algebra, the null space of a linear mapping, also known as kernel, is the set of vectors which map to the null vector under that mapping.
In statistics, a null hypothesis is a proposition that no effect or relationship exists between populations and phenomena. It is the hypothesis which is presumed true—unless statistical evidence indicates otherwise.
See also
0
Null sign
|
https://en.wikipedia.org/wiki/Virtual%20particle
|
A virtual particle is a theoretical transient particle that exhibits some of the characteristics of an ordinary particle, while having its existence limited by the uncertainty principle. The concept of virtual particles arises in the perturbation theory of quantum field theory where interactions between ordinary particles are described in terms of exchanges of virtual particles. A process involving virtual particles can be described by a schematic representation known as a Feynman diagram, in which virtual particles are represented by internal lines.
Virtual particles do not necessarily carry the same mass as the corresponding real particle, although they always conserve energy and momentum. The closer a virtual particle's characteristics come to those of an ordinary particle, the longer it exists. Virtual particles are important in the physics of many processes, including particle scattering and Casimir forces. In quantum field theory, forces, such as the electromagnetic repulsion or attraction between two charges, can be thought of as due to the exchange of virtual photons between the charges. Virtual photons are the exchange particle for the electromagnetic interaction.
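The qualitative trade-off above is usually made quantitative with the energy–time uncertainty relation; this is a standard textbook estimate rather than anything specific to this article:

```latex
\Delta E \, \Delta t \gtrsim \frac{\hbar}{2}
\qquad\Longrightarrow\qquad
\Delta t \sim \frac{\hbar}{\Delta E}
```

A virtual particle whose energy is off shell by an amount ΔE can therefore persist only for a time of order ħ/ΔE, and a force mediated by a virtual particle of mass m has the familiar Yukawa range estimate R ~ cΔt ~ ħ/(mc).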
The term is somewhat loose and vaguely defined, in that it refers to the view that the world is made up of "real particles". "Real particles" are better understood to be excitations of the underlying quantum fields. Virtual particles are also excitations of the underlying fields, but are "temporary" in the sense that they appear in calculations of interactions but never as asymptotic states or indices to the scattering matrix. The accuracy and use of virtual particles in calculations is firmly established, but as they cannot be detected in experiments, deciding how to precisely describe them is a topic of debate. Although widely used, they are by no means a necessary feature of QFT, but rather are mathematical conveniences, as demonstrated by lattice field theory, which avoids using the concept altogether.
|
https://en.wikipedia.org/wiki/Radio%20science%20subsystem
|
A radio science subsystem (RSS) is a subsystem placed on board a spacecraft for radio science purposes.
Function of the RSS
The RSS uses radio signals to probe a medium such as a planetary atmosphere. The spacecraft transmits a highly stable signal to ground stations, receives such a signal from ground stations, or both. Since the transmitted signal parameters are accurately known to the receiver, any changes to these parameters are attributable to the propagation medium or to the relative motion of the spacecraft and ground station.
The RSS is usually not a separate instrument; its functions are usually "piggybacked" on the existing telecommunications subsystem. More advanced systems use multiple antennas with orthogonal polarizations.
Radio science
Radio science is commonly used to determine the gravity field of a moon or planet by observing Doppler shift. This requires a highly stable oscillator on the spacecraft, or more commonly a "2-way coherent" transponder that phase locks the transmitted signal frequency to a rational multiple of a received uplink signal that usually also carries spacecraft commands.
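As a rough illustration of the measurement, the sketch below computes the two-way Doppler shift produced by a given line-of-sight velocity. The X-band uplink frequency and the 880/749 turnaround ratio are typical deep-space values used here as assumptions, and relativistic corrections are ignored.

```python
# Minimal sketch of a two-way coherent Doppler estimate (non-relativistic).
C = 299_792_458.0  # speed of light, m/s

def two_way_doppler_shift(f_uplink_hz: float, turnaround: float, v_los_mps: float) -> float:
    """Received-minus-nominal downlink frequency in Hz.

    v_los_mps > 0 means the spacecraft is receding, which lowers the received
    frequency; each leg of the two-way link contributes roughly a factor (1 - v/c).
    """
    f_downlink_nominal = f_uplink_hz * turnaround
    f_received = f_uplink_hz * (1 - v_los_mps / C) * turnaround * (1 - v_los_mps / C)
    return f_received - f_downlink_nominal

# Example: a 1 m/s line-of-sight velocity change shifts an X-band downlink by
# roughly 56 Hz (negative here, since the spacecraft is receding); gravity-field
# investigations work by tracking such shifts with very high precision.
print(two_way_doppler_shift(7.15e9, 880 / 749, 1.0))
```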
Another common radio science observation is performed as a spacecraft is occulted by a planetary body. As the spacecraft moves behind the planet, its radio signal cuts through successively deeper layers of the planetary atmosphere. Measurements of signal strength and polarization versus time can yield data on the composition and temperature of the atmosphere at different altitudes.
It is also common to use multiple radio frequencies coherently derived from a common source to measure the dispersion of the propagation medium. This is especially useful in determining the free electron content of a planetary ionosphere.
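A sketch of that dual-frequency idea follows. The 40.3 m³/s² coefficient is the standard first-order plasma-dispersion constant; the S-/X-band frequencies and the delay value are invented for illustration.

```python
# Minimal sketch: columnar electron content from the differential group delay
# of two coherent downlink frequencies. Plasma delay scales as 1/f^2, so the
# difference between the two bands isolates the dispersive contribution.
C = 299_792_458.0  # m/s
K = 40.3           # first-order plasma-dispersion constant, m^3 s^-2

def electron_content_from_delay(delta_t_s: float, f_high_hz: float, f_low_hz: float) -> float:
    """Columnar electron content (electrons/m^2) along the signal path.

    delta_t_s is the extra group delay of the lower frequency relative to the
    higher one.
    """
    extra_path_m = C * delta_t_s
    return extra_path_m / (K * (1.0 / f_low_hz**2 - 1.0 / f_high_hz**2))

# Example: ~10 ns of extra delay at S band (2.3 GHz) relative to X band (8.4 GHz)
tec = electron_content_from_delay(10e-9, 8.4e9, 2.3e9)
print(f"{tec:.3e} electrons/m^2")
```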
Spacecraft using RSS
Cassini–Huygens
Mariner 2, 4, 5, 6, 7, 9, and 10
Voyager 1 and 2
MESSENGER
Venus Express
Functions
Determine the composition of gas clouds such as atmospheres and solar coronas.
Characterize gravitational fields
Estimate m
|
https://en.wikipedia.org/wiki/MUSHRA
|
MUSHRA stands for Multiple Stimuli with Hidden Reference and Anchor and is a methodology for conducting a codec listening test to evaluate the perceived quality of the output from lossy audio compression algorithms. It is defined by ITU-R recommendation BS.1534-3. The MUSHRA methodology is recommended for assessing "intermediate audio quality". For very small audio impairments, Recommendation ITU-R BS.1116-3 (ABC/HR) is recommended instead.
The main advantage over the mean opinion score (MOS) methodology (which serves a similar purpose) is that MUSHRA requires fewer participants to obtain statistically significant results. This is because all codecs are presented at the same time, on the same samples, so that a paired t-test or a repeated measures analysis of variance can be used for statistical analysis. Also, the 0–100 scale used by MUSHRA makes it possible to rate very small differences.
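A minimal sketch of that per-listener paired analysis, with invented scores, might look as follows; `scipy.stats.ttest_rel` implements the paired t-test.

```python
import numpy as np
from scipy import stats

# Invented MUSHRA scores (0-100) from the same eight listeners rating the same
# item under two codecs; pairing by listener is what lets a paired test detect
# small but consistent differences with relatively few participants.
codec_a = np.array([78, 82, 75, 88, 80, 77, 85, 79], dtype=float)
codec_b = np.array([74, 79, 73, 84, 78, 72, 83, 76], dtype=float)

t_stat, p_value = stats.ttest_rel(codec_a, codec_b)
print(f"mean difference = {np.mean(codec_a - codec_b):.1f} points, p = {p_value:.4f}")
```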
In MUSHRA, the listener is presented with the reference (labeled as such), a certain number of test samples, a hidden version of the reference and one or more anchors. The recommendation specifies that a low-range and a mid-range anchor should be included in the test signals. These are typically a 7 kHz and a 3.5 kHz low-pass version of the reference. The purpose of the anchors is to calibrate the scale so that minor artifacts are not unduly penalized. This is particularly important when comparing or pooling results from different labs.
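The anchors themselves can be produced by low-pass filtering the reference. The cut-off frequencies below follow the recommendation, but the filter order and Butterworth design are arbitrary choices for this sketch.

```python
import numpy as np
from scipy import signal

def make_anchor(reference: np.ndarray, sample_rate: int, cutoff_hz: float) -> np.ndarray:
    """Return a low-pass-filtered copy of the reference to serve as an anchor."""
    # 8th-order Butterworth low-pass, applied forward and backward so the anchor
    # stays time-aligned with the reference (zero phase distortion).
    sos = signal.butter(8, cutoff_hz, btype="low", fs=sample_rate, output="sos")
    return signal.sosfiltfilt(sos, reference)

# Example with a synthetic "reference": one second of noise at 48 kHz.
fs = 48_000
reference = np.random.default_rng(0).standard_normal(fs)
mid_anchor = make_anchor(reference, fs, 7_000.0)   # mid-range anchor
low_anchor = make_anchor(reference, fs, 3_500.0)   # low-range anchor
```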
Listener behavior
Both MUSHRA and ITU BS.1116 tests call for trained expert listeners who know what typical artifacts sound like and where they are likely to occur. Expert listeners also have a better internalization of the rating scale, which leads to more repeatable results than with untrained listeners. Thus, with trained listeners, fewer listeners are needed to achieve statistically significant results.
It is assumed that preferences are similar for expert listeners and naive listeners and thus results of expert listeners are also predic
|