https://en.wikipedia.org/wiki/Missing%20heritability%20problem
|
The missing heritability problem is the fact that single genetic variations cannot account for much of the heritability of diseases, behaviors, and other phenotypes. This is a problem that has significant implications for medicine, since a person's susceptibility to disease may depend more on the combined effect of all the genes in the background than on the disease genes in the foreground, or the role of genes may have been severely overestimated.
Discovery
The missing heritability problem was named as such in 2008 (after the "missing baryon problem" in physics). The Human Genome Project led to optimistic forecasts that the large genetic contributions to many traits and diseases (which were identified by quantitative genetics and behavioral genetics in particular) would soon be mapped and pinned down to specific genes and their genetic variants by methods such as candidate-gene studies which used small samples with limited genetic sequencing to focus on specific genes believed to be involved, examining single-nucleotide polymorphisms (SNPs). While many hits were found, they often failed to replicate in other studies.
The exponential fall in genome sequencing costs led to the use of genome-wide association studies (GWASes), which could simultaneously examine all candidate genes in much larger samples than the original studies. The candidate-gene hits were found to be almost always false positives, with only 2–6% replicating; in the specific case of intelligence, only one candidate-gene hit replicated, the top 25 schizophrenia candidate genes were no more associated with schizophrenia than chance, and of 15 neuroimaging hits, none replicated. The editorial board of Behavior Genetics noted, in setting more stringent requirements for candidate-gene publications, that "the literature on candidate gene associations is full of reports that have not stood up to rigorous replication...it now seems likely that many of the published findings of the last decade are
|
https://en.wikipedia.org/wiki/Whisker%20%28metallurgy%29
|
Metal whiskering is a phenomenon that occurs in electrical devices when metals form long whisker-like projections over time. Tin whiskers were noticed and documented in the vacuum tube era of electronics early in the 20th century in equipment that used pure, or almost pure, tin solder in their production. It was noticed that small metal hairs or tendrils grew between metal solder pads, causing short circuits. Metal whiskers form in the presence of compressive stress. Germanium, zinc, cadmium, and even lead whiskers have been documented. Many techniques are used to mitigate the problem, including changes to the annealing process (heating and cooling), the addition of elements like copper and nickel, and the inclusion of conformal coatings. Traditionally, lead has been added to slow down whisker growth in tin-based solders.
Following the Restriction of Hazardous Substances Directive (RoHS), the European Union banned the use of lead in most consumer electronic products from 2006 due to health problems associated with lead and the "high-tech trash" problem, leading to a re-focusing on the issue of whisker formation in lead-free solders.
Mechanism
Metal whiskering is a crystalline metallurgical phenomenon involving the spontaneous growth of tiny, filiform hairs from a metallic surface. The effect is primarily seen on elemental metals but also occurs with alloys.
The mechanism behind metal whisker growth is not well understood, but seems to be encouraged by compressive mechanical stresses including:
energy gained due to electrostatic polarization of metal filaments in the electric field,
residual stresses caused by electroplating,
mechanically induced stresses,
stresses induced by diffusion of different metals,
thermally induced stresses, and
strain gradients in materials.
Metal whiskers differ from metallic dendrites in several respects: dendrites are fern-shaped and grow across the surface of the metal, while metal whiskers are hair-like and project normal to
|
https://en.wikipedia.org/wiki/JANOG
|
JANOG is the Internet network operators' group for the Japanese Internet service provider (ISP) community. JANOG was originally established in 1997.
JANOG holds regular meetings for the ISP community, with hundreds of attendees. Although JANOG has no formal budget of its own, it draws on the resources of its member companies to hold these meetings.
|
https://en.wikipedia.org/wiki/Hafner%E2%80%93Sarnak%E2%80%93McCurley%20constant
|
The Hafner–Sarnak–McCurley constant is a mathematical constant representing the probability that the determinants of two randomly chosen square integer matrices will be relatively prime. The probability depends on the matrix size, n, in accordance with the formula
$D(n) = \prod_{k=1}^{\infty}\left\{1 - \left[1 - \prod_{j=1}^{n}\left(1 - p_k^{-j}\right)\right]^{2}\right\},$
where $p_k$ is the kth prime number. The constant is the limit of this expression as n approaches infinity. Its value is roughly 0.3532363719... .
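As a rough numerical check (the prime cutoff and matrix sizes below are illustrative choices, not part of the definition), the truncated product can be evaluated directly in Python:

```python
# Rough numerical check of the formula above: truncate the product over primes
# and let the matrix size n grow. The prime cutoff is an illustrative choice.

def primes_up_to(limit):
    """Sieve of Eratosthenes."""
    sieve = [True] * (limit + 1)
    sieve[0] = sieve[1] = False
    for i in range(2, int(limit ** 0.5) + 1):
        if sieve[i]:
            sieve[i * i :: i] = [False] * len(sieve[i * i :: i])
    return [p for p, is_prime in enumerate(sieve) if is_prime]

def coprime_det_probability(n, prime_limit=10_000):
    """Truncated product over primes p of 1 - [1 - prod_{j=1..n}(1 - p^-j)]^2."""
    prob = 1.0
    for p in primes_up_to(prime_limit):
        nonsingular = 1.0                 # prod_{j=1..n} (1 - p^-j)
        for j in range(1, n + 1):
            nonsingular *= 1.0 - p ** (-j)
        prob *= 1.0 - (1.0 - nonsingular) ** 2
    return prob

for n in (1, 2, 4, 8):
    print(n, round(coprime_det_probability(n), 6))
# n = 1 gives ~0.6079 (the classic 6/pi^2 for two random integers);
# the values fall toward ~0.3532 as n grows.
```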
|
https://en.wikipedia.org/wiki/Transport%20triggered%20architecture
|
In computer architecture, a transport triggered architecture (TTA) is a kind of processor design in which programs directly control the internal transport buses of a processor. Computation happens as a side effect of data transports: writing data into a triggering port of a functional unit triggers the functional unit to start a computation. This is similar to what happens in a systolic array. Due to its modular structure, TTA is an ideal processor template for application-specific instruction set processors (ASIP) with customized datapath but without the inflexibility and design cost of fixed function hardware accelerators.
Typically a transport triggered processor has multiple transport buses and multiple functional units connected to the buses, which provides opportunities for instruction level parallelism. The parallelism is statically defined by the programmer. In this respect (and obviously due to the large instruction word width), the TTA architecture resembles the very long instruction word (VLIW) architecture. A TTA instruction word is composed of multiple slots, one slot per bus, and each slot determines the data transport that takes place on the corresponding bus. The fine-grained control allows some optimizations that are not possible in a conventional processor. For example, software can transfer data directly between functional units without using registers.
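As a loose conceptual sketch (not generated by any real TTA toolchain; the port names and the move "syntax" are invented for clarity), the Python snippet below mimics the programming model described above: the program is nothing but a list of bus transports, and writing to a function unit's trigger port is what starts the operation.

```python
# Conceptual sketch of transport-triggered execution (illustrative only).

class Adder:
    """A function unit with one operand port and one trigger port."""
    def __init__(self):
        self.operand = 0   # plain operand port: a write only stores the value
        self.result = 0
    def write_trigger(self, value):
        # Writing to the trigger port starts the computation as a side effect.
        self.result = self.operand + value

regs = {"r1": 2, "r2": 3, "r3": 0}   # register file modeled as a dict
add = Adder()

# A TTA-style "program" is a sequence of data transports (moves):
#   r1 -> add.operand ; r2 -> add.trigger ; add.result -> r3
program = [
    ("r1", "add.operand"),
    ("r2", "add.trigger"),
    ("add.result", "r3"),
]

for src, dst in program:
    value = regs[src] if src in regs else add.result
    if dst == "add.operand":
        add.operand = value
    elif dst == "add.trigger":
        add.write_trigger(value)      # computation triggered by the move itself
    else:
        regs[dst] = value

print(regs["r3"])  # 5: the sum appears as a side effect of the transports
```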
Transport triggering exposes some microarchitectural details that are normally hidden from programmers. This greatly simplifies the control logic of a processor, because many decisions normally done at run time are fixed at compile time. However, it also means that a binary compiled for one TTA processor will not run on another one without recompilation if there is even a small difference in the architecture between the two. The binary incompatibility problem, in addition to the complexity of implementing a full context switch, makes TTAs more suitable for embedded systems than for general purpos
|
https://en.wikipedia.org/wiki/IC%20programming
|
IC programming is the process of transferring a computer program into an integrated circuit. Older types of IC, including PROMs and EPROMs and some early programmable logic, were typically programmed through parallel buses that used many of the device's pins and generally required inserting the device into a separate programmer.
Modern ICs are typically programmed in-circuit through a serial protocol (sometimes JTAG, sometimes something manufacturer-specific). Some (particularly FPGAs) even load their configuration data serially from a separate flash or PROM chip on every startup.
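As a loose illustration of in-circuit serial programming, the sketch below bit-bangs a byte stream out over a clock line and a data line. The gpio_write function and the pin numbers are hypothetical placeholders, not any particular vendor's programming protocol.

```python
# Hypothetical sketch of serially shifting a configuration image into a device.
# gpio_write() and the pin assignments stand in for whatever GPIO API the host
# or programmer actually provides; no specific device protocol is implied.

import time

CLK_PIN, DATA_PIN = 3, 4          # hypothetical pin numbers

def gpio_write(pin, level):
    """Placeholder for a real GPIO call (e.g. on a microcontroller or SBC)."""
    pass

def shift_out(data: bytes, bit_delay_s: float = 1e-5) -> None:
    """Clock the bytes out MSB-first, one bit per rising clock edge."""
    for byte in data:
        for bit in range(7, -1, -1):
            gpio_write(DATA_PIN, (byte >> bit) & 1)  # present the data bit
            gpio_write(CLK_PIN, 1)                   # device samples on rising edge
            time.sleep(bit_delay_s)
            gpio_write(CLK_PIN, 0)
            time.sleep(bit_delay_s)

# Usage: stream a (dummy) configuration image into the target.
shift_out(b"\xde\xad\xbe\xef")
```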
Notes
Embedded systems
|
https://en.wikipedia.org/wiki/Mathematical%20sculpture
|
A mathematical sculpture is a sculpture which uses mathematics as an essential conception. Helaman Ferguson, George W. Hart, Bathsheba Grossman, Peter Forakis and Jacobus Verhoeff are well-known mathematical sculptors.
|
https://en.wikipedia.org/wiki/Recurrence%20quantification%20analysis
|
Recurrence quantification analysis (RQA) is a method of nonlinear data analysis (cf. chaos theory) for the investigation of dynamical systems. It quantifies the number and duration of recurrences of a dynamical system presented by its phase space trajectory.
Background
The recurrence quantification analysis (RQA) was developed in order to quantify differently appearing recurrence plots (RPs), based on the small-scale structures therein. Recurrence plots are tools which visualise the recurrence behaviour of the phase space trajectory of dynamical systems:
$R_{i,j}(\varepsilon) = \Theta(\varepsilon - \|\vec{x}_i - \vec{x}_j\|), \quad i, j = 1, \dots, N,$
where $\Theta$ is the Heaviside function and $\varepsilon$ a predefined tolerance.
Recurrence plots mostly contain single dots and lines which are parallel to the mean diagonal (line of identity, LOI) or which are vertical/horizontal. Lines parallel to the LOI are referred to as diagonal lines and the vertical structures as vertical lines. Because an RP is usually symmetric, horizontal and vertical lines correspond to each other, and, hence, only vertical lines are considered. The lines correspond to a typical behaviour of the phase space trajectory: whereas the diagonal lines represent such segments of the phase space trajectory which run parallel for some time, the vertical lines represent segments which remain in the same phase space region for some time.
If only a time series is available, the phase space can be reconstructed by using a time delay embedding (see Takens' theorem):
$\vec{x}_i = (u_i, u_{i+\tau}, \dots, u_{i+(m-1)\tau}),$
where $u_i$ is the time series, $m$ the embedding dimension and $\tau$ the time delay.
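A minimal sketch of the two formulas above, assuming NumPy and illustrative values of the embedding dimension m, the delay tau and the tolerance eps:

```python
# Minimal sketch: build a recurrence matrix from a scalar time series using
# time-delay embedding. m, tau and eps are illustrative choices, not prescriptions.

import numpy as np

def embed(u, m, tau):
    """Time-delay embedding: x_i = (u_i, u_{i+tau}, ..., u_{i+(m-1)tau})."""
    n_vectors = len(u) - (m - 1) * tau
    return np.column_stack([u[i * tau : i * tau + n_vectors] for i in range(m)])

def recurrence_matrix(u, m=3, tau=5, eps=0.2):
    """R_ij = Theta(eps - ||x_i - x_j||), using the Euclidean norm."""
    x = embed(np.asarray(u, dtype=float), m, tau)
    dists = np.linalg.norm(x[:, None, :] - x[None, :, :], axis=-1)
    return (dists <= eps).astype(int)

# Example: a noisy sine produces diagonal line structures parallel to the LOI.
t = np.linspace(0, 8 * np.pi, 400)
u = np.sin(t) + 0.05 * np.random.default_rng(0).normal(size=t.size)
R = recurrence_matrix(u)
print(R.shape, bool(R.diagonal().all()))  # the LOI: every state recurs with itself
```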
The RQA quantifies the small-scale structures of recurrence plots, which present the number and duration of the recurrences of a dynamical system. The measures introduced for the RQA were developed heuristically between 1992 and 2002 (Zbilut & Webber 1992; Webber & Zbilut 1994; Marwan et al. 2002). They are actually measures of complexity. The main advantage of the recurrence quantification analysis is that it can provide useful information even for short and non-stationary d
|
https://en.wikipedia.org/wiki/Geophysics
|
Geophysics is a subject of natural science concerned with the physical processes and physical properties of the Earth and its surrounding space environment, and the use of quantitative methods for their analysis. Geophysicists, who usually study geophysics, physics, or one of the earth sciences at the graduate level, complete investigations across a wide range of scientific disciplines. The term geophysics classically refers to solid earth applications: Earth's shape; its gravitational, magnetic, and electromagnetic fields; its internal structure and composition; its dynamics and their surface expression in plate tectonics, the generation of magmas, volcanism and rock formation. However, modern geophysics organizations and pure scientists use a broader definition that includes the water cycle including snow and ice; fluid dynamics of the oceans and the atmosphere; electricity and magnetism in the ionosphere and magnetosphere and solar-terrestrial physics; and analogous problems associated with the Moon and other planets.
Although geophysics was only recognized as a separate discipline in the 19th century, its origins date back to ancient times. The first magnetic compasses were made from lodestones, while more modern magnetic compasses played an important role in the history of navigation. The first seismic instrument was built in 132 AD. Isaac Newton applied his theory of mechanics to the tides and the precession of the equinox; and instruments were developed to measure the Earth's shape, density and gravity field, as well as the components of the water cycle. In the 20th century, geophysical methods were developed for remote exploration of the solid Earth and the ocean, and geophysics played an essential role in the development of the theory of plate tectonics.
Geophysics is applied to societal needs, such as mineral resources, mitigation of natural hazards and environmental protection. In exploration geophysics, geophysical survey data are used to
|
https://en.wikipedia.org/wiki/Ridley%E2%80%93Watkins%E2%80%93Hilsum%20theory
|
In solid state physics the Ridley–Watkins–Hilsum theory (RWH) explains the mechanism by which differential negative resistance is developed in a bulk solid state semiconductor material when a voltage is applied to the terminals of the sample. It is the theory behind the operation of the Gunn diode as well as several other microwave semiconductor devices, which are used practically in electronic oscillators to produce microwave power. It is named for British physicists Brian Ridley, Tom Watkins and Cyril Hilsum who wrote theoretical papers on the effect in 1961.
Negative resistance oscillations in bulk semiconductors had been observed in the laboratory by J. B. Gunn in 1962, and were thus named the "Gunn effect", but physicist Herbert Kroemer pointed out in 1964 that Gunn's observations could be explained by the RWH theory.
In essence, the RWH mechanism is the transfer of conduction electrons in a semiconductor from a high-mobility valley to lower-mobility, higher-energy satellite valleys. This phenomenon can only be observed in materials that have such energy band structures.
Normally, in a conductor, increasing electric field causes higher charge carrier (usually electron) speeds and results in higher current consistent with Ohm's law. In a multi-valley semiconductor, though, higher energy may push the carriers into a higher energy state where they actually have higher effective mass and thus slow down. In effect, carrier velocities and current drop as the voltage is increased. While this transfer occurs, the material exhibits a decrease in current – that is, a negative differential resistance. At higher voltages, the normal increase of current with voltage relation resumes once the bulk of the carriers are kicked into the higher energy-mass valley. Therefore the negative resistance only occurs over a limited range of voltages.
Of the type of semiconducting materials satisfying these conditions, gallium arsenide (GaAs) is the most widely understood and used. Ho
|
https://en.wikipedia.org/wiki/Copper%20interconnects
|
In semiconductor technology, copper interconnects are interconnects made of copper. They are used in silicon integrated circuits (ICs) to reduce propagation delays and power consumption. Since copper is a better conductor than aluminium, ICs using copper for their interconnects can have interconnects with narrower dimensions, and use less energy to pass electricity through them. Together, these effects lead to ICs with better performance. They were first introduced by IBM, with assistance from Motorola, in 1997.
The transition from aluminium to copper required significant developments in fabrication techniques, including radically different methods for patterning the metal as well as the introduction of barrier metal layers to isolate the silicon from potentially damaging copper atoms.
Although the methods of superconformal copper electrodeposition have been known since the late 1960s, their application at the (sub)micron via scale (e.g. in microchips) started only in 1988–1995. By 2002 it had become a mature technology, and research and development efforts in this field started to decline.
Patterning
Although some form of volatile copper compound has been known to exist since 1947, with more discovered as the century progressed, none were in industrial use, so copper could not be patterned by the previous techniques of photoresist masking and plasma etching that had been used with great success with aluminium. The inability to plasma etch copper called for a drastic rethinking of the metal patterning process and the result of this rethinking was a process referred to as an additive patterning, also known as a "Damascene" or "dual-Damascene" process by analogy to a traditional technique of metal inlaying.
In this process, the underlying silicon oxide insulating layer is patterned with open trenches where the conductor should be. A thick coating of copper that significantly overfills the trenches is deposited on the insulator, and chemical-mechanical planar
|
https://en.wikipedia.org/wiki/Artificially%20Expanded%20Genetic%20Information%20System
|
Artificially Expanded Genetic Information System (AEGIS) is a synthetic DNA analog experiment that uses some unnatural base pairs from the laboratories of the Foundation for Applied Molecular Evolution in Gainesville, Florida. AEGIS is a NASA-funded project to try to understand how extraterrestrial life may have developed.
The system uses twelve different nucleobases in its genetic code. These include the four canonical nucleobases found in DNA (adenine, cytosine, guanine and thymine) plus eight synthetic nucleobases. AEGIS includes S:B, Z:P, V:J and K:X base pairs.
See also
Abiogenesis
Astrobiology
Hachimoji DNA
xDNA
Hypothetical types of biochemistry
Xeno nucleic acid
|
https://en.wikipedia.org/wiki/OMNeT%2B%2B
|
OMNeT++ (Objective Modular Network Testbed in C++) is a modular, component-based C++ simulation library and framework, primarily for building network simulators. OMNeT++ can be used free of charge for non-commercial simulations, for example at academic institutions and for teaching. OMNEST is an extended version of OMNeT++ for commercial use.
OMNeT++ itself is a simulation framework without models for network protocols like IP or HTTP. The main computer network simulation models are available in several external frameworks. The most commonly used one is INET, which offers a variety of models for all kinds of network protocols and technologies, such as IPv6 and BGP. INET also offers a set of mobility models to simulate node movement. The INET models are licensed under the LGPL or GPL. NED (NEtwork Description) is the topology description language of OMNeT++.
To manage and reduce the time to carry out large-scale simulations, additional tools have been developed, for example, based on Python.
See also
MLDesigner
QualNet
NEST (software)
|
https://en.wikipedia.org/wiki/List%20of%20permutation%20topics
|
This is a list of topics on mathematical permutations.
Particular kinds of permutations
Alternating permutation
Circular shift
Cyclic permutation
Derangement
Even and odd permutations—see Parity of a permutation
Josephus permutation
Parity of a permutation
Separable permutation
Stirling permutation
Superpattern
Transposition (mathematics)
Unpredictable permutation
Combinatorics of permutations
Bijection
Combination
Costas array
Cycle index
Cycle notation
Cycles and fixed points
Cyclic order
Direct sum of permutations
Enumerations of specific permutation classes
Factorial
Falling factorial
Permutation matrix
Generalized permutation matrix
Inversion (discrete mathematics)
Major index
Ménage problem
Permutation graph
Permutation pattern
Permutation polynomial
Permutohedron
Rencontres numbers
Robinson–Schensted correspondence
Sum of permutations:
Direct sum of permutations
Skew sum of permutations
Stanley–Wilf conjecture
Symmetric function
Szymanski's conjecture
Twelvefold way
Permutation groups and other algebraic structures
Groups
Alternating group
Automorphisms of the symmetric and alternating groups
Block (permutation group theory)
Cayley's theorem
Cycle index
Frobenius group
Galois group of a polynomial
Jucys–Murphy element
Landau's function
Oligomorphic group
O'Nan–Scott theorem
Parker vector
Permutation group
Place-permutation action
Primitive permutation group
Rank 3 permutation group
Representation theory of the symmetric group
Schreier vector
Strong generating set
Symmetric group
Symmetric inverse semigroup
Weak order of permutations
Wreath product
Young symmetrizer
Zassenhaus group
Zolotarev's lemma
Other algebraic structures
Burnside ring
Mathematical analysis
Conditionally convergent series
Riemann series theorem
Lévy–Steinitz theorem
Mathematics applicable to physical sciences
Antisymmetrizer
Identical particles
Levi-Civita symbol
Number theory
Permutable prime
Algorithms and information processing
Bit-reversal permutation
Claw-
|
https://en.wikipedia.org/wiki/ULN2003A
|
The ULN2003A is an integrated circuit produced by Texas Instruments. It consists of an array of seven NPN Darlington transistors capable of 500 mA, 50 V output. It features common-cathode flyback diodes for switching inductive loads (such as servomotors). It can come in PDIP, SOIC, SOP or TSSOP packaging. In the same family are ULN2002A, ULN2004A, as well as ULQ2003A and ULQ2004A, designed for different logic input levels.
The ULN2003A is also similar to the ULN2001A (4 inputs) and the ULN2801A, ULN2802A, ULN2803A, ULN2804A and ULN2805A, only differing in logic input levels (TTL, CMOS, PMOS) and number of in/outputs (4/7/8).
Darlington Transistor
A Darlington transistor (also known as a Darlington pair) achieves very high current amplification by connecting two bipolar transistors in direct DC coupling, so that the current amplified by the first transistor is amplified further by the second one. The resultant current gain is the product of those of the two component transistors: β ≈ β1 · β2.
The seven Darlington pairs in the ULN2003 can operate independently, apart from the common-cathode diodes that connect to their respective collectors.
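As a rough worked example with assumed per-transistor gains of 100 (illustrative values only), the exact pair gain β1·β2 + β1 + β2 is close to the simple product:

```python
# Illustrative numbers only: approximate current gain of a Darlington pair.
beta1, beta2 = 100, 100                   # assumed per-transistor current gains
beta_pair = beta1 * beta2 + beta1 + beta2 # exact expression for the pair
print(beta_pair)                          # 10200, i.e. roughly beta1 * beta2
```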
Features
The ULN2003 is known for its high-current, high-voltage capability. The drivers can be paralleled for even higher output current, and stacking one chip on top of another, both electrically and physically, has also been done. It is also commonly used to interface with stepper motors, whose current and voltage requirements exceed what other interfacing devices can provide.
Main specifications:
500 mA rated collector current (single output)
50 V output (there is a version that supports 100 V output)
Includes output flyback diodes
Inputs compatible with TTL and 5-V CMOS logic
Applications
Typical usage of the ULN2003A is in driver circuits for relays, solenoids, lamp and LED displays, stepper motors, logic buffers and line drivers.
See also
Solid state relay
|
https://en.wikipedia.org/wiki/Front-end%20processor
|
A front-end processor (FEP), or a communications processor, is a small computer which interfaces the host computer to a number of networks, such as SNA, or to a number of peripheral devices, such as terminals, disk units, printers and tape units. Data is transferred between the host computer and the front-end processor using a high-speed parallel interface. The front-end processor communicates with peripheral devices using slower serial interfaces, usually also through communication networks. The purpose is to off-load from the host computer the work of managing the peripheral devices, transmitting and receiving messages, packet assembly and disassembly, error detection, and error correction. Two examples are the IBM 3705 Communications Controller and the Burroughs Data Communications Processor.
Sometimes FEP is synonymous with a communications controller, although the latter is not necessarily as flexible. Early communications controllers such as the IBM 270x series were hard wired, but later units were programmable devices.
Front-end processor is also used in a more general sense in asymmetric multi-processor systems. The FEP is a processing device (usually a computer) which is closer to the input source than is the main processor. It performs some task such as telemetry control, data collection, reduction of raw sensor data, analysis of keyboard input, etc.
The term front-end process also refers to the software interface between the user (client) and the application processes (server) in the client/server architecture. The user enters input (data) into the front-end process, where it is collected and processed in such a way that it conforms to what the receiving application (back end) on the server can accept and process. As an example, the user enters a URL into a GUI (front-end process) such as Microsoft Internet Explorer. The GUI then processes the URL in such a way that the user is able to reach or access the intended web pages on the web server (application serve
|
https://en.wikipedia.org/wiki/Message%20switching
|
In telecommunications, message switching involves messages routed in their entirety, one hop at a time. It evolved from circuit switching and was the precursor of packet switching.
An example of message switching is email in which the message is sent through different intermediate servers to reach the mail server for storing. Unlike packet switching, the message is not divided into smaller units and sent independently over the network.
History
Western Union operated a message switching system, Plan 55-A, for processing telegrams in the 1950s. Leonard Kleinrock wrote a doctoral thesis at the Massachusetts Institute of Technology in 1962 that analyzed queueing delays in this system.
Message switching was built by Collins Radio Company, Newport Beach, California, during the period 1959–1963 for sale to large airlines, banks and railroads.
The original design for the ARPANET was Wesley Clark's April 1967 proposal for using Interface Message Processors to create a message switching network. After the seminal meeting at the first ACM Symposium on Operating Systems Principles in October 1967, where Roger Scantlebury presented Donald Davies' work and mentioned the work of Paul Baran, Larry Roberts incorporated packet switching into the design.
The SITA High-Level Network (HLN) became operational in 1969, handling data traffic for airlines in real time via a message-switched network over common carrier leased lines. It was organised to act like a packet-switching network.
Message switching systems are nowadays mostly implemented over packet-switched or circuit-switched data networks. Each message is treated as a separate entity. Each message contains addressing information, and at each switch this information is read and the transfer path to the next switch is decided. Depending on network conditions, a conversation of several messages may not be transferred over the same path. Each message is stored (usually on hard drive due to RAM limitations) before being transmi
|
https://en.wikipedia.org/wiki/Seth%20Lloyd
|
Seth Lloyd (born August 2, 1960) is a professor of mechanical engineering and physics at the Massachusetts Institute of Technology.
His research area is the interplay of information with complex systems, especially quantum systems. He has performed seminal work in the fields of quantum computation, quantum communication and quantum biology, including proposing the first technologically feasible design for a quantum computer, demonstrating the viability of quantum analog computation, proving quantum analogs of Shannon's noisy channel theorem, and designing novel methods for quantum error correction and noise reduction.
Biography
Lloyd was born on August 2, 1960. He graduated from Phillips Academy in 1978 and received a bachelor of arts degree from Harvard College in 1982. He earned a certificate of advanced study in mathematics and a master of philosophy degree from Cambridge University in 1983 and 1984, while on a Marshall Scholarship. Lloyd was awarded a doctorate by Rockefeller University in 1988 (advisor Heinz Pagels) after submitting a thesis on Black Holes, Demons, and the Loss of Coherence: How Complex Systems Get Information, and What They Do With It.
From 1988 to 1991, Lloyd was a postdoctoral fellow in the High Energy Physics Department at the California Institute of Technology, where he worked with Murray Gell-Mann on applications of information to quantum-mechanical systems. From 1991 to 1994, he was a postdoctoral fellow at Los Alamos National Laboratory, where he worked at the Center for Nonlinear Systems on quantum computation. In 1994, he joined the faculty of the Department of Mechanical Engineering at MIT. Starting in 1988, Lloyd was an external faculty member at the Santa Fe Institute for more than 30 years.
In his 2006 book, Programming the Universe, Lloyd contends that the universe itself is one big quantum computer producing what we see around us, and ourselves, as it runs a cosmic program. According to Lloyd, once we understand the laws of
|
https://en.wikipedia.org/wiki/Marquois%20scales
|
Marquois scales (also known as Marquois parallel scales or Marquois scale and triangle or military scales) are a mathematical instrument that found widespread use in Britain, particularly in military surveying, from the late 18th century to World War II.
Description
Invented around 1778 by Thomas Marquois, the Marquois scales consist of a right-angle triangle (with sides at a 3:1 ratio) and two rulers (each with multiple scales). The system could be used to aid precision when marking distances off scales, and to rapidly draw parallel lines a precise distance apart. Quick construction of precise parallel lines was useful in cartography and engineering (especially before the availability of graph paper) and Marquois scales were convenient in some challenging environments where larger equipment like a drawing board and T-square was impractical, such as field survey work and classrooms. Marquois scales fell out of favour among draftsmen in the early 20th century, although familiarity with their use was an entry requirement for the Royal Military Academy at Woolwich around the same time.
Material
Marquois scales were normally made of boxwood, though sets were sometimes made in ivory or metal.
Use
The triangle would be used for many regular set square operations, and the rulers likewise would function as ordinary rulers, but the unique function was the 3:1 reduction ratio between measured distance and drawn line.
A line is drawn along the beveled edge (the side of middle length) of the triangle. By placing a ruler against the hypotenuse of the triangle and sliding the triangle along the ruler for 3 units of the ruler's scale, drawing another line along the beveled edge results in a parallel line at a distance of only 1 unit from the original line. Using larger distances on the ruler to draw lines smaller distances apart means that the margin of error in reading off the scale is reduced. Additionally, the end state leaves the instruments already in place to slide the triangle again to quickly
|
https://en.wikipedia.org/wiki/Order%20%28mathematics%29
|
Order in mathematics may refer to:
Set theory
Total order and partial order, a binary relation generalizing the usual ordering of numbers and of words in a dictionary
Ordered set
Order in Ramsey theory, uniform structures in consequence to critical set cardinality
Algebra
Order (group theory), the cardinality of a group or period of an element
Order of a polynomial (disambiguation)
Order of a square matrix, its dimension
Order (ring theory), an algebraic structure
Ordered group
Ordered field
Analysis
Order (differential equation) or order of highest derivative, of a differential equation
Leading-order terms
NURBS order, a number one greater than the degree of the polynomial representation of a non-uniform rational B-spline
Order of convergence, a measurement of convergence
Order of derivation
Order of an entire function
Order of a power series, the lowest degree of its terms
Ordered list, a sequence or tuple
Orders of approximation in Big O notation
Z-order (curve), a space-filling curve
Arithmetic
Multiplicative order in modular arithmetic
Order of operations
Orders of magnitude, a class of scale or magnitude of any amount
Combinatorics
Order in the Josephus permutation
Ordered selections and partitions of the twelvefold way in combinatorics
Ordered set, a bijection, cyclic order, or permutation
Unordered subset or combination
Weak order of permutations
Fractals
Complexor, or complex order in fractals
Order of extension in Lakes of Wada
Order of fractal dimension (Rényi dimensions)
Orders of construction in the Pythagoras tree
Geometry
Long-range aperiodic order, in pinwheel tiling, for instance
Graphs
Graph order, the number of nodes in a graph
First order and second order logic of graphs
Topological ordering of directed acyclic graphs
Degeneracy ordering of undirected graphs
Elimination ordering of chordal graphs
Order, the complexity of a structure within a graph: see haven (graph theory) and bramble (graph theory)
Logic
In logic, model theory and
|
https://en.wikipedia.org/wiki/Sinc%20filter
|
In signal processing, a sinc filter can refer to either a sinc-in-time filter whose impulse response is a sinc function and whose frequency response is rectangular, or to a sinc-in-frequency filter whose impulse response is rectangular and whose frequency response is a sinc function. Calling them according to which domain the filter resembles a sinc avoids confusion. If the domain is unspecified, sinc-in-time is often assumed, or the correct domain can hopefully be inferred from context.
Sinc-in-time
Sinc-in-time is an ideal filter that removes all frequency components above a given cutoff frequency, without attenuating lower frequencies, and has linear phase response. It may thus be considered a brick-wall filter or rectangular filter.
Its impulse response is a sinc function in the time domain:
$h(t) = 2B\,\mathrm{sinc}(2Bt) = \frac{\sin(2\pi Bt)}{\pi t},$
while its frequency response is a rectangular function:
$H(f) = \mathrm{rect}\!\left(\frac{f}{2B}\right),$
where $B$ (representing its bandwidth) is an arbitrary cutoff frequency.
Its impulse response is given by the inverse Fourier transform of its frequency response:
$h(t) = \int_{-\infty}^{\infty} H(f)\, e^{i 2\pi f t}\, df = 2B\,\mathrm{sinc}(2Bt),$
where sinc is the normalized sinc function.
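A small numerical sketch of this response, assuming NumPy; the bandwidth, sample rate and tap count are illustrative, and a practical filter must be truncated and windowed, so it only approximates the ideal brick wall:

```python
# Sketch: a truncated (hence non-ideal) sinc-in-time low-pass impulse response.
# B, the sample rate and the length are illustrative values.

import numpy as np

fs = 1000.0          # sample rate, Hz
B = 100.0            # cutoff / bandwidth, Hz
N = 511              # odd tap count so the filter is symmetric about t = 0

t = (np.arange(N) - (N - 1) / 2) / fs      # time axis centred on zero
h = 2 * B / fs * np.sinc(2 * B * t)        # h(t) = 2B sinc(2Bt), scaled per sample
h *= np.hamming(N)                         # window to tame truncation ripple

# Check the frequency response: near 1 in the passband, near 0 in the stopband.
H = np.abs(np.fft.rfft(h, 8192))
f = np.fft.rfftfreq(8192, 1 / fs)
print(H[f < 80].min(), H[f > 120].max())   # roughly 1.0 and roughly 0.0
```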
Brick-wall filters
An idealized electronic filter with full transmission in the pass band, complete attenuation in the stop band, and abrupt transitions is known colloquially as a "brick-wall filter" (in reference to the shape of the transfer function). The sinc-in-time filter is a brick-wall low-pass filter, from which brick-wall band-pass filters and high-pass filters are easily constructed.
The lowpass filter with brick-wall cutoff at frequency $B_L$ has impulse response and transfer function given by:
$h_{lp}(t) = 2B_L\,\mathrm{sinc}(2B_L t), \qquad H_{lp}(f) = \mathrm{rect}\!\left(\frac{f}{2B_L}\right).$
The band-pass filter with lower band edge $B_L$ and upper band edge $B_H$ is just the difference of two such sinc-in-time filters (since the filters are zero phase, their magnitude responses subtract directly):
$h_{bp}(t) = 2B_H\,\mathrm{sinc}(2B_H t) - 2B_L\,\mathrm{sinc}(2B_L t).$
The high-pass filter with lower band edge $B_H$ is just a transparent filter minus a sinc-in-time filter, which makes it clear that the Dirac delta function is the limit of a narrow-in-time sinc-in-time filter:
$h_{hp}(t) = \delta(t) - 2B_H\,\mathrm{sinc}(2B_H t).$
U
|
https://en.wikipedia.org/wiki/Transmission%20curve
|
The transmission curve or transmission characteristic is the mathematical function or graph that describes the transmission fraction of an optical or electronic filter as a function of frequency or wavelength. It is an instance of a transfer function but, unlike the case of, for example, an amplifier, output never exceeds input (maximum transmission is 100%). The term is often used in commerce, science, and technology to characterise filters.
The term has also long been used in fields such as geophysics and astronomy to characterise the properties of regions through which radiation passes, such as the ionosphere.
See also
Electronic filter — examples of transmission characteristics of electronic filters
|
https://en.wikipedia.org/wiki/Special%20input/output
|
Special input/output (Special I/O or SIO) are inputs and/or outputs of a microcontroller designated to perform specialized functions or have specialized features.
Specialized functions can include:
Hardware interrupts
Analog input or output
PWM output
Serial communication, such as UART, USART, SPI bus, or SerDes
External reset
Switch debounce
Input pull-up (or -down) resistors
Open-collector output
Pulse counting
Timing pulses
Some kinds of special I/O functions can sometimes be emulated with general-purpose input/output and bit banging software.
See also
Atari SIO
|
https://en.wikipedia.org/wiki/Staining
|
Staining is a technique used to enhance contrast in samples, generally at the microscopic level. Stains and dyes are frequently used in histology (microscopic study of biological tissues), in cytology (microscopic study of cells), and in the medical fields of histopathology, hematology, and cytopathology that focus on the study and diagnoses of diseases at the microscopic level. Stains may be used to define biological tissues (highlighting, for example, muscle fibers or connective tissue), cell populations (classifying different blood cells), or organelles within individual cells.
In biochemistry, it involves adding a class-specific (DNA, proteins, lipids, carbohydrates) dye to a substrate to qualify or quantify the presence of a specific compound. Staining and fluorescent tagging can serve similar purposes. Biological staining is also used to mark cells in flow cytometry, and to flag proteins or nucleic acids in gel electrophoresis. Light microscopes are used for viewing stained samples at high magnification, typically using bright-field or epi-fluorescence illumination.
Staining is not limited to only biological materials, since it can also be used to study the structure of other materials; for example, the lamellar structures of semi-crystalline polymers or the domain structures of block copolymers.
In vivo vs In vitro
In vivo staining (also called vital staining or intravital staining) is the process of dyeing living tissues. By causing certain cells or structures to take on contrasting colours, their form (morphology) or position within a cell or tissue can be readily seen and studied. The usual purpose is to reveal cytological details that might otherwise not be apparent; however, staining can also reveal where certain chemicals or specific chemical reactions are taking place within cells or tissues.
In vitro staining involves colouring cells or structures that have been removed from their biological context. Certain stains are often combined to reveal mo
|
https://en.wikipedia.org/wiki/Imaginary%20unit
|
The imaginary unit or unit imaginary number (i) is a solution to the quadratic equation x^2 + 1 = 0. Although there is no real number with this property, i can be used to extend the real numbers to what are called complex numbers, using addition and multiplication. A simple example of the use of i in a complex number is 2 + 3i.
Imaginary numbers are an important mathematical concept; they extend the real number system to the complex number system ℂ, in which at least one root for every nonconstant polynomial exists (see Algebraic closure and Fundamental theorem of algebra). Here, the term "imaginary" is used because there is no real number having a negative square.
There are two complex square roots of −1: i and −i, just as there are two complex square roots of every real number other than zero (which has one double square root).
In contexts in which use of the letter i is ambiguous or problematic, the letter j is sometimes used instead. For example, in electrical engineering and control systems engineering, the imaginary unit is normally denoted by j instead of i, because i is commonly used to denote electric current.
Definition
The imaginary number i is defined solely by the property that its square is −1:
i^2 = −1.
With i defined this way, it follows directly from algebra that i and −i are both square roots of −1.
Although the construction is called "imaginary", and although the concept of an imaginary number may be intuitively more difficult to grasp than that of a real number, the construction is valid from a mathematical standpoint. Real number operations can be extended to imaginary and complex numbers, by treating i as an unknown quantity while manipulating an expression (and using the definition to replace any occurrence of i^2 with −1). Higher integral powers of i can also be replaced with −i, 1, i, or −1:
i^3 = i^2 · i = −i,  i^4 = i^3 · i = 1,  i^5 = i^4 · i = i,
or, equivalently,
i^(n+4) = i^n for any integer n.
Similarly, as with any non-zero real number: i^0 = 1.
As a complex number, i can be represented in rectangular form as 0 + 1i, with a zero real component and a unit imaginary component. In
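The cyclic behaviour of the powers can be checked directly with Python's built-in complex type, which writes the imaginary unit as 1j (the engineering convention mentioned above):

```python
# Python's complex type uses j for the imaginary unit.
i = 1j
print(i ** 2)                        # (-1+0j)
print([i ** n for n in range(5)])    # 1, i, -1, -i, 1: the powers cycle with period 4
print(i ** 2 == -1)                  # True
```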
|
https://en.wikipedia.org/wiki/Porism
|
A porism is a mathematical proposition or corollary. It has been used to refer to a direct consequence of a proof, analogous to how a corollary refers to a direct consequence of a theorem. In modern usage, it is a relationship that holds for an infinite range of values but only if a certain condition is assumed, such as Steiner's porism. The term originates from three books of Euclid that have been lost. A proposition may not have been proven, so a porism may not be a theorem or true.
Origins
The first book to discuss porisms is Euclid's Porisms. What is known of it comes from Pappus of Alexandria's Collection, which mentions it along with other geometrical treatises and gives several lemmas necessary for understanding it. Pappus states:
The porisms of all classes are neither theorems nor problems, but occupy a position intermediate between the two, so that their enunciations can be stated either as theorems or problems, and consequently some geometers think that they are theorems, and others that they are problems, being guided solely by the form of the enunciation. But it is clear from the definitions that the old geometers understood better the difference between the three classes. The older geometers regarded a theorem as directed to proving what is proposed, a problem as directed to constructing what is proposed, and finally a porism as directed to finding what is proposed.
Pappus said that the last definition was changed by certain later geometers, who defined a porism as an accidental characteristic, as (to leîpon hypothései topikoû theōrḗmatos), that which falls short of a locus-theorem by a (or in its) hypothesis. Proclus pointed out that the word porism was used in two senses: one sense is that of "corollary", as a result unsought but seen to follow from a theorem. In the other sense, he added nothing to the definition of "the older geometers", except to say that the finding of the center of a circle and the finding of the greatest common measure are
|
https://en.wikipedia.org/wiki/List%20of%20tallest%20people
|
This is a list of the tallest people, verified by Guinness World Records or other reliable sources.
According to the Guinness World Records, the tallest human in recorded history was Robert Wadlow of the United States (1918–1940), who was 2.72 m (8 ft 11.1 in) tall. He received media attention in 1939 when he was measured to be the tallest man in the world, beating John Rogan's record, after reaching a height of .
There are reports of even taller people, but most such claims are unverified or erroneous. Since antiquity, there have been reports of finds of gigantic human skeletons. Originally thought to belong to mythical giants, these bones were later identified as the exaggerated remains of prehistoric animals, usually whales or elephants. Regular reports in American newspapers in the 18th and 19th centuries of giant human skeletons may have inspired the case of the "petrified" Cardiff Giant, a famous archaeological hoax.
Men
Women
Disputed and unverified claims
Tallest in various sports
Tallest living people from various nations
See also
Giant
Gigantism
Giant human skeletons
Goliath
Human height
Sotos syndrome
List of tallest players in National Basketball Association history
List of heaviest people
List of the verified shortest people
List of people with dwarfism
|
https://en.wikipedia.org/wiki/Sampling%20%28signal%20processing%29
|
In signal processing, sampling is the reduction of a continuous-time signal to a discrete-time signal. A common example is the conversion of a sound wave to a sequence of "samples".
A sample is a value of the signal at a point in time and/or space; this definition differs from the term's usage in statistics, which refers to a set of such values.
A sampler is a subsystem or operation that extracts samples from a continuous signal. A theoretical ideal sampler produces samples equivalent to the instantaneous value of the continuous signal at the desired points.
The original signal can be reconstructed from a sequence of samples, up to the Nyquist limit, by passing the sequence of samples through a type of low-pass filter called a reconstruction filter.
Theory
Functions of space, time, or any other dimension can be sampled, and similarly in two or more dimensions.
For functions that vary with time, let S(t) be a continuous function (or "signal") to be sampled, and let sampling be performed by measuring the value of the continuous function every T seconds, which is called the sampling interval or sampling period. Then the sampled function is given by the sequence:
S(nT), for integer values of n.
The sampling frequency or sampling rate, fs, is the number of samples divided by the interval length over which they occur, thus fs = 1/T, with the unit samples per second, sometimes referred to as hertz; for example, 48 kHz is 48,000 samples per second.
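A small sketch of these definitions, with an illustrative 1 kHz sine sampled at 48 kHz (so T = 1/48000 s):

```python
# Sketch: sample a continuous-time signal S(t) every T seconds.
# The 1 kHz sine and the 48 kHz rate are illustrative choices.

import numpy as np

fs = 48_000.0               # sampling rate, samples per second
T = 1.0 / fs                # sampling interval, seconds

def S(t):
    """The continuous signal to be sampled: a 1 kHz sine."""
    return np.sin(2 * np.pi * 1000.0 * t)

n = np.arange(48)           # one millisecond worth of sample indices
samples = S(n * T)          # the sampled sequence S(nT)
print(samples[:4])          # first few samples of the discrete-time signal
```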
Reconstructing a continuous function from samples is done by interpolation algorithms. The Whittaker–Shannon interpolation formula is mathematically equivalent to an ideal low-pass filter whose input is a sequence of Dirac delta functions that are modulated (multiplied) by the sample values. When the time interval between adjacent samples is a constant (T), the sequence of delta functions is called a Dirac comb. Mathematically, the modulated Dirac comb is equivalent to the product of the comb function with s(t). That math
|
https://en.wikipedia.org/wiki/List%20of%20computer%20graphics%20and%20descriptive%20geometry%20topics
|
This is a list of computer graphics and descriptive geometry topics, by article name.
2D computer graphics
2D geometric model
3D computer graphics
3D projection
Alpha compositing
Anisotropic filtering
Anti-aliasing
Axis-aligned bounding box
Axonometric projection
Bézier curve
Bézier surface
Bicubic interpolation
Bilinear interpolation
Binary space partitioning
Bitmap graphics editor
Bounding volume
Bresenham's line algorithm
Bump mapping
Collision detection
Color space
Colour banding
Computational geometry
Computer animation
Computer-generated art
Computer painting
Convex hull
Curvilinear perspective
Cylindrical perspective
Data compression
Digital raster graphic
Dimetric projection
Distance fog
Dithering
Elevation
Engineering drawing
Flat shading
Flood fill
Geometric model
Geometric primitive
Global illumination
Gouraud shading
Graphical projection
Graphics suite
Heightfield
Hidden face removal
Hidden line removal
High-dynamic-range rendering
Isometric projection
Lathe (graphics)
Line drawing algorithm
Linear perspective
Mesh generation
Motion blur
Orthographic projection
Orthographic projection (geometry)
Orthogonal projection
Perspective (graphical)
Phong reflection model
Phong shading
Pixel shaders
Polygon (computer graphics)
Procedural surface
Projection
Projective geometry
Quadtree
Radiosity
Raster graphics
Raytracing
Rendering (computer graphics)
Reverse perspective
Scan line rendering
Scrolling
Technical drawing
Texture mapping
Trimetric projection
Vanishing point
Vector graphics
Vector graphics editor
Vertex shaders
Volume rendering
Voxel
See also
List of geometry topics
List of graphical methods
Computing-related lists
Mathematics-related lists
|
https://en.wikipedia.org/wiki/SAT%20Subject%20Test%20in%20Biology%20E/M
|
The SAT Subject Test in Biology was the name of a one-hour multiple-choice test on biology given by the College Board. A student chose whether to take the test depending upon the college entrance requirements of the schools to which the student planned to apply. Until 1994, the SAT Subject Tests were known as Achievement Tests; and from 1995 until January 2005, they were known as SAT IIs. Of all the SAT Subject Tests, the Biology E/M test was the only one that allowed the test taker a choice between an ecological or a molecular emphasis. A set of 60 questions was taken by all test takers, with a further 20 questions drawn from either the E or the M section. The test was graded on a scale from 200 to 800. The average score for Molecular was 630, while for Ecological it was 591.
On January 19, 2021, the College Board discontinued all SAT Subject Tests, including the SAT Subject Test in Biology E/M. This was effective immediately in the United States, and the tests were to be phased out by the following summer for international students. This was done in response to changes in college admissions due to the impact of the COVID-19 pandemic on education.
Format
This test had 80 multiple-choice questions that were to be answered in one hour. All questions had five answer choices. Students received one point for each correct answer, lost ¼ of a point for each incorrect answer, and received 0 points for questions left blank. The student's score was based entirely on his or her performance in answering the multiple-choice questions.
The questions covered a broad range of topics in general biology. More specific questions on the E test addressed ecological concepts (such as population studies and general ecology), while those on the M test addressed molecular concepts such as DNA structure, translation, and biochemistry.
Preparation
The College Board suggested a year-long course in biology at the college preparatory level, as well as a one-year course in algebra, a
|
https://en.wikipedia.org/wiki/Micro-bursting%20%28networking%29
|
In computer networking, micro-bursting is a behavior seen on fast packet-switched networks, where rapid bursts of data packets are sent in quick succession, leading to periods of full line-rate transmission that can overflow packet buffers of the network stack, both in network endpoints and routers and switches inside the network. It can be mitigated by the network scheduler. In particular, micro-bursting is often caused by the use of TCP on such a network.
See also
Head-of-line blocking
TCP pacing
|
https://en.wikipedia.org/wiki/Sysload%20Software
|
Sysload Software was a computer software company specializing in systems measurement, performance and capacity management solutions for servers and data centers, based in Créteil, France. It was acquired in September 2009 by ORSYP, a computer software company specializing in workload scheduling and IT operations management, based in La Défense, France.
History
Sysload was created in 1999 as a result of the split of Groupe Loan System into two distinct entities: Loan Solutions, a developer of financial software and Sysload Software, a developer of performance management and monitoring software.
As of March 31, 2022, all Sysload products are in end of life.
Products
The following products were developed by Sysload:
SP Analyst
A performance and diagnostic solution for physical and virtual servers, intended as a productivity tool for IT teams to diagnose performance problems and manage server resource capacity.
SP Monitor
A monitoring solution for incident management and IT service availability. It aims at providing real-time management of IT infrastructure events while correlating them to business processes. SP Monitor receives and stores event data, makes correlations and groups them within customizable views which can be accessed via an ordinary web browser.
SP Portal
A capacity and performance reporting solution for servers and data centers that allows IT managers to analyze server resource allocation within information systems.
Sysload products are based on a three-tiered architecture (user interfaces, management modules, and collection and analysis modules) for metric collection that provides detailed information on large and complex environments. Sysload software products are available for various virtualized and physical platforms including: VMware, Windows, AIX, HP-UX, Solaris, Linux, IBM i, PowerVM, etc.
|
https://en.wikipedia.org/wiki/Computer%20compatibility
|
A family of computer models is said to be compatible if certain software that runs on one of the models can also be run on all other models of the family. The computer models may differ in performance, reliability or some other characteristic. These differences may affect the outcome of the running of the software.
Software compatibility
Software compatibility can refer to the compatibility that a particular piece of software has with a particular CPU architecture such as Intel or PowerPC. Software compatibility can also refer to the ability of the software to run on a particular operating system. Very rarely is compiled software compatible with multiple different CPU architectures. Normally, an application is compiled separately for different CPU architectures and operating systems to make it compatible with each system. Interpreted software, on the other hand, can normally run on many different CPU architectures and operating systems if the interpreter is available for the architecture or operating system. Software incompatibility often arises with new software released for a newer version of an operating system, which is incompatible with the older version because the older version may lack features and functionality that the software depends on.
Hardware compatibility
Hardware compatibility can refer to the compatibility of computer hardware components with a particular CPU architecture, bus, motherboard or operating system. Hardware that is compatible may not always run at its highest stated performance, but it can nevertheless work with legacy components. An example is RAM chips, some of which can run at a lower (or sometimes higher) clock rate than rated. Hardware that was designed for one operating system may not work for another, if device or kernel drivers are unavailable. As an example, much of the hardware for macOS is proprietary hardware with drivers unavailable for use in operating systems such as Linux.
Free and open-sou
|
https://en.wikipedia.org/wiki/Downsampling%20%28signal%20processing%29
|
In digital signal processing, downsampling, compression, and decimation are terms associated with the process of resampling in a multi-rate digital signal processing system. Both downsampling and decimation can be synonymous with compression, or they can describe an entire process of bandwidth reduction (filtering) and sample-rate reduction. When the process is performed on a sequence of samples of a signal or a continuous function, it produces an approximation of the sequence that would have been obtained by sampling the signal at a lower rate (or density, as in the case of a photograph).
Decimation is a term that historically means the removal of every tenth one. But in signal processing, decimation by a factor of 10 actually means keeping only every tenth sample. This factor multiplies the sampling interval or, equivalently, divides the sampling rate. For example, if compact disc audio at 44,100 samples/second is decimated by a factor of 5/4, the resulting sample rate is 35,280. A system component that performs decimation is called a decimator. Decimation by an integer factor is also called compression.
Downsampling by an integer factor
Rate reduction by an integer factor M can be explained as a two-step process, with an equivalent implementation that is more efficient:
Reduce high-frequency signal components with a digital lowpass filter.
Decimate the filtered signal by M; that is, keep only every Mth sample.
Step 2 alone creates undesirable aliasing (i.e. high-frequency signal components will copy into the lower frequency band and be mistaken for lower frequencies). Step 1, when necessary, suppresses aliasing to an acceptable level. In this application, the filter is called an anti-aliasing filter, and its design is discussed below. Also see undersampling for information about decimating bandpass functions and signals.
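A minimal sketch of this two-step process, assuming NumPy, a simple windowed-sinc FIR anti-aliasing filter and an illustrative factor M = 4 (the filter length and window are arbitrary choices):

```python
# Sketch of integer-factor downsampling: FIR anti-aliasing filter, then decimation.

import numpy as np

def downsample(x, M, num_taps=101):
    # Step 1: windowed-sinc low-pass with cutoff at the new Nyquist frequency
    # (1/(2M) cycles/sample), suppressing components that would otherwise alias.
    n = np.arange(num_taps) - (num_taps - 1) / 2
    h = np.sinc(n / M) / M * np.hamming(num_taps)
    filtered = np.convolve(x, h, mode="same")
    # Step 2: keep only every Mth sample.
    return filtered[::M]

# Example: a signal sampled at 44.1 kHz reduced by M = 4 to 11.025 kHz.
rng = np.random.default_rng(0)
x = rng.normal(size=44_100)
y = downsample(x, M=4)
print(len(x), len(y))   # 44100 -> 11025
```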
When the anti-aliasing filter is an IIR design, it relies on feedback from output to input, prior to the second step. With FIR filtering,
|
https://en.wikipedia.org/wiki/List%20of%20taxa%20with%20candidatus%20status
|
This is a list of taxa with Candidatus status. For taxa not covered by this list, see also:
the GenBank taxonomy for "effective" names as published;
the Candidatus Lists and LPSN for latinate names, some sanitized to match the Code.
Phyla
"Ca. Absconditabacteria" (previously Candidate phylum SR1)
ABY1 aka OD1-ABY1, subgroup of OD1 ("Ca. Parcubacteria")
Candidate phylum AC1
"Ca. Acetothermia" (previously Candidate phylum OP1)
"Ca. Aerophobetes" (previously Candidate phylum CD12 or BHI80-139)
"Ca. Aminicenantes" (previously Candidate phylum OP8)
aquifer1
aquifer2
"Ca. Berkelbacteria" (previously Candidate phylum ACD58)
BRC1
CAB-I
"Ca. Calescamantes" (previously Candidate phylum EM19)
Candidate phylum CPR2
Candidate phylum CPR3
Candidate phylum NC10
Candidate phylum OP2
Candidate phylum RF3
Candidate phylum SAM
Candidate phylum SPAM
Candidate phylum TG2
Candidate phylum VC2
Candidate phylum WS1
Candidate phylum WS2
Candidate phylum WS4
Candidate phylum WYO
CKC4
"Ca. Cloacimonetes" (previously Candidate phylum WWE1)
CPR1
"Ca. Dependentiae" (previously Candidate phylum TM6)
EM 3
"Ca. Endomicrobia" Stingl et al. 2005
"Ca. Fermentibacteria" (Hyd24-12)
"Ca. Fervidibacteria" (previously Candidate phylum OctSpa1-106)
GAL08
GAL15
GN01
GN03
GN04
GN05
GN06
GN07
GN08
GN09
GN10
GN11
GN12
GN13
GN14
GN15
GOUTA4
"Ca. Gracilibacteria" (previously Candidate phylum GN02, BD1-5, or BD1-5 group)
Guaymas1
"Ca. Hydrogenedentes" (previously Candidate phylum NKB19)
JL-ETNP-Z39
"Ca. Katanobacteria" (previously Candidate phylum WWE3)
Kazan-3B-09
KD3-62
kpj58rc
KSA1
KSA2
KSB1
KSB2
KSB4
"Ca. Latescibacteria" (previously Candidate phylum WS3)
LCP-89
LD1-PA38
"Ca. Marinamargulisbacteria" (previously Candidate division ZB3)
"Ca. Marinimicrobia" (previously Marine Group A or Candidate phylum SAR406)
"Ca. Melainabacteria"
"Ca. Microgenomates" (previously Candidate phylum OP11)
"Ca. Modulibacteria" (previously Candidate phylum K
|
https://en.wikipedia.org/wiki/Newman%E2%80%93Penrose%20formalism
|
The Newman–Penrose (NP) formalism is a set of notation developed by Ezra T. Newman and Roger Penrose for general relativity (GR). Their notation is an effort to treat general relativity in terms of spinor notation, which introduces complex forms of the usual variables used in GR. The NP formalism is itself a special case of the tetrad formalism, where the tensors of the theory are projected onto a complete vector basis at each point in spacetime. Usually this vector basis is chosen to reflect some symmetry of the spacetime, leading to simplified expressions for physical observables. In the case of the NP formalism, the vector basis chosen is a null tetrad: a set of four null vectors—two real, and a complex-conjugate pair. The two real members often asymptotically point radially inward and radially outward, and the formalism is well adapted to treatment of the propagation of radiation in curved spacetime. The Weyl scalars, derived from the Weyl tensor, are often used. In particular, it can be shown that one of these scalars—Ψ4 in the appropriate frame—encodes the outgoing gravitational radiation of an asymptotically flat system.
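The null tetrad obeys a small set of normalization and orthogonality relations; in the common (+,−,−,−) signature convention (signs flip in the (−,+,+,+) convention) they can be summarized as

l^a n_a = 1, \qquad m^a \bar{m}_a = -1, \qquad
l^a m_a = l^a \bar{m}_a = n^a m_a = n^a \bar{m}_a = 0, \qquad
l^a l_a = n^a n_a = m^a m_a = \bar{m}^a \bar{m}_a = 0,

g_{ab} = l_a n_b + n_a l_b - m_a \bar{m}_b - \bar{m}_a m_b .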
Newman and Penrose introduced the following functions as primary quantities using this tetrad:
Twelve complex spin coefficients (in three groups) which describe the change in the tetrad from point to point: κ, τ, σ, ρ; λ, μ, ν, π; ε, γ, β, α.
Five complex functions encoding the Weyl tensor in the tetrad basis: Ψ0, Ψ1, Ψ2, Ψ3, Ψ4.
Ten functions encoding the Ricci tensor in the tetrad basis: Φ00, Φ11, Φ22, Λ (real); Φ01, Φ02, Φ12 (complex).
In many situations—especially algebraically special spacetimes or vacuum spacetimes—the Newman–Penrose formalism simplifies dramatically, as many of the functions go to zero. This simplification allows for various theorems to be proven more easily than using the standard form of Einstein's equations.
In this article, we will only employ the tensorial rather than spinorial version of NP formalism, because the former is easier to understand and more popular in relevant papers. One can refe
|
https://en.wikipedia.org/wiki/Well-defined%20expression
|
In mathematics, a well-defined expression or unambiguous expression is an expression whose definition assigns it a unique interpretation or value. Otherwise, the expression is said to be not well defined, ill defined or ambiguous. A function is well defined if it gives the same result when the representation of the input is changed without changing the value of the input. For instance, if f takes real numbers as input, and if f(0.5) does not equal f(1/2), then f is not well defined (and thus not a function). The term well defined can also be used to indicate that a logical expression is unambiguous or uncontradictory.
A function that is not well defined is not the same as a function that is undefined. For example, if f(x) = 1/x, then the fact that f(0) is undefined does not mean that the function is not well defined – it simply means that 0 is not in the domain of f.
Example
Let A0 and A1 be sets, let A = A0 ∪ A1, and "define" f : A → {0, 1} as f(a) = 0 if a ∈ A0 and f(a) = 1 if a ∈ A1.
Then f is well defined if A0 ∩ A1 = ∅: every element of A then falls under exactly one of the two cases, so f(a) receives a single value.
However, if A0 ∩ A1 ≠ ∅, then f would not be well defined, because f(a) is "ambiguous" for any a ∈ A0 ∩ A1: such an a would have to be sent to both 0 and 1. As a result, the latter f is not well defined and thus not a function.
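The same phenomenon can be checked computationally; the sketch below (with moduli chosen purely for illustration) tests whether a rule stated in terms of a representative gives one value per residue class modulo 6.

# Residue classes modulo 6, each listed with several equivalent representatives.
classes = {r: [r, r + 6, r + 12] for r in range(6)}

# Two attempted "definitions" of a map on Z/6Z, each given by a formula on a representative.
candidates = {
    "x mod 3": lambda x: x % 3,  # well defined: 6 | (x - y) implies 3 | (x - y)
    "x mod 4": lambda x: x % 4,  # not well defined: equivalent representatives disagree
}

for name, f in candidates.items():
    ok = all(len({f(x) for x in reps}) == 1 for reps in classes.values())
    print(f"f([x]) = {name}: {'well defined' if ok else 'NOT well defined'} on Z/6Z")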
"Definition" as anticipation of definition
In order to avoid the quotation marks around "define" in the previous simple example, the "definition" of f could be broken down into two simple logical steps: first, define the binary relation f ⊆ A × {0, 1} by f = (A0 × {0}) ∪ (A1 × {1}); second, assert that f is functional, i.e. a function from A to {0, 1}.
While the definition in step 1 is formulated with the freedom of any definition and is certainly effective (without the need to classify it as "well defined"), the assertion in step 2 has to be proved. That is, f is a function if and only if A0 ∩ A1 = ∅, in which case f – as a function – is well defined.
On the other hand, if A0 ∩ A1 ≠ ∅, then for an a ∈ A0 ∩ A1, we would have that (a, 0) ∈ f and (a, 1) ∈ f, which makes the binary relation f not functional (as defined in Binary relation#Special types of binary relations) and thus not well defined as a function. Colloquially, the "function" f is also called ambiguo
|
https://en.wikipedia.org/wiki/Killough%20platform
|
A Killough platform is a three-wheel drive system that uses traditional wheels to achieve omni-directional movement without the use of omni-directional wheels (such as omni wheels/Mecanum wheels). The platform is named after its designer, Stephen Killough, who, with help from Francois Pin, wanted to achieve omni-directional movement without using the complicated six-motor arrangement required for a controllable three-caster-wheel system (one motor to control each wheel's rotation and one motor to control the pivoting of the wheel). He first looked into solutions by other inventors that used rollers on the rims of larger wheels but considered them flawed in some critical way. This led to the Killough system.
With Francois Pin, who helped with the computer control and choreography aspects of the design, Killough and Pin readied a public demonstration in 1994. This led to a partnership with Cybertrax Innovative Technologies in 1996 which was developing a motorized wheelchair.
By combining the motion of two wheels, the vehicle can move in the direction of the third, perpendicular wheel; by rotating all the wheels in the same direction, the vehicle can rotate in place. By using the resultant motion of the vector addition of the wheel velocities, a Killough platform is able to achieve omni-directional motion.
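A minimal sketch of that vector addition for a generic three-wheel omnidirectional base (the wheel angles and radius are illustrative assumptions, not Killough's actual geometry): each wheel's drive speed is the projection of the desired body velocity onto that wheel's drive direction, plus a term for rotation.

import numpy as np

def wheel_speeds(vx, vy, omega, wheel_angles_deg=(0, 120, 240), R=0.2):
    """Inverse kinematics for a three-wheel omnidirectional base.
    vx, vy : desired body velocity (m/s); omega : desired spin rate (rad/s)
    R      : distance from the body centre to each wheel (m)
    Returns the rim speed each wheel must produce (m/s)."""
    speeds = []
    for a in np.deg2rad(wheel_angles_deg):
        drive_dir = np.array([-np.sin(a), np.cos(a)])   # tangential drive direction of this wheel
        speeds.append(drive_dir @ np.array([vx, vy]) + R * omega)
    return speeds

# Pure translation along +x: the wheel whose drive axis is perpendicular to the motion stays idle.
print(wheel_speeds(vx=1.0, vy=0.0, omega=0.0))
# Pure rotation in place: all three wheels turn at the same speed.
print(wheel_speeds(vx=0.0, vy=0.0, omega=1.0))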
|
https://en.wikipedia.org/wiki/List%20of%20formulae%20involving%20%CF%80
|
The following is a list of significant formulae involving the mathematical constant . Many of these formulae can be found in the article Pi, or the article Approximations of .
Euclidean geometry
π = C/d = C/(2r), where C is the circumference of a circle, d is the diameter, and r is the radius. More generally,
π = P/w, where P and w are, respectively, the perimeter and the width of any curve of constant width.
A = πr², where A is the area of a circle. More generally,
A = πab, where A is the area enclosed by an ellipse with semi-major axis a and semi-minor axis b.
A = 4πa², where A is the area between the witch of Agnesi and its asymptotic line; a is the radius of the defining circle.
where is the area of a squircle with minor radius , is the gamma function and is the arithmetic–geometric mean.
where is the area of an epicycloid with the smaller circle of radius and the larger circle of radius (), assuming the initial point lies on the larger circle.
where is the area of a rose with angular frequency () and amplitude .
where is the perimeter of the lemniscate of Bernoulli with focal distance .
V = (4/3)πr³, where V is the volume of a sphere and r is the radius.
SA = 4πr², where SA is the surface area of a sphere and r is the radius.
H = (1/2)π²r⁴, where H is the hypervolume of a 3-sphere and r is the radius.
SV = 2π²r³, where SV is the surface volume of a 3-sphere and r is the radius.
Regular convex polygons
Sum of internal angles of a regular convex polygon with n sides: (n − 2)π
Area of a regular convex polygon with n sides and side length s: A = (n s²/4) cot(π/n)
Inradius of a regular convex polygon with n sides and side length s: r = (s/2) cot(π/n)
Circumradius of a regular convex polygon with n sides and side length s: R = (s/2) csc(π/n)
Physics
The cosmological constant:
Heisenberg's uncertainty principle: Δx Δp ≥ h/(4π)
Einstein's field equation of general relativity: R_μν − (1/2) R g_μν + Λ g_μν = (8πG/c⁴) T_μν
Coulomb's law for the electric force in vacuum: F = |q₁q₂|/(4πε₀ r²)
Magnetic permeability of free space: μ₀ ≈ 4π × 10⁻⁷ N/A²
Approximate period of a simple pendulum with small amplitude: T ≈ 2π √(L/g)
Exact period of a simple pendulum with amplitude θ₀ (agm is the arithmetic–geometric mean; a numerical check of this formula appears after this list): T = 2π √(L/g) / agm(1, cos(θ₀/2))
Kepler's third law of planetary motion
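A quick numerical check of the two pendulum formulas above (a sketch; L = 1 m, g = 9.81 m/s² and the 60° amplitude are arbitrary illustrative values):

import math

def agm(a, b, tol=1e-15):
    """Arithmetic–geometric mean of a and b."""
    while abs(a - b) > tol:
        a, b = (a + b) / 2, math.sqrt(a * b)
    return a

L, g = 1.0, 9.81                  # pendulum length (m), gravitational acceleration (m/s^2)
theta0 = math.radians(60)         # amplitude

T_small = 2 * math.pi * math.sqrt(L / g)                          # small-angle approximation
T_exact = 2 * math.pi * math.sqrt(L / g) / agm(1, math.cos(theta0 / 2))

print(T_small)   # ~2.006 s
print(T_exact)   # ~2.153 s: a 60° swing has a period ~7% longer than the small-angle estimate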
|
https://en.wikipedia.org/wiki/List%20of%20mathematics%20reference%20tables
|
See also: List of reference tables
Mathematics
List of mathematical topics
List of statistical topics
List of mathematical functions
List of mathematical theorems
List of mathematical proofs
List of matrices
List of numbers
List of relativistic equations
List of small groups
Mathematical constants
Sporadic group
Table of bases
Table of Clebsch-Gordan coefficients
Table of derivatives
Table of divisors
Table of integrals
Table of mathematical symbols
Table of prime factors
Taylor series
Timeline of mathematics
Trigonometric identities
Truth table
Reference tables
List
|
https://en.wikipedia.org/wiki/Zenzizenzizenzic
|
Zenzizenzizenzic is an obsolete form of mathematical notation representing the eighth power of a number (that is, the zenzizenzizenzic of x is x8), dating from a time when powers were written out in words rather than as superscript numbers. This term was suggested by Robert Recorde, a 16th-century Welsh physician, mathematician and writer of popular mathematics textbooks, in his 1557 work The Whetstone of Witte (although his spelling was zenzizenzizenzike); he wrote that it "doeth represent the square of squares squaredly".
History
At the time Recorde proposed this notation, there was no easy way of denoting the powers of numbers other than squares and cubes. The root word for Recorde's notation is zenzic, which is a German spelling of the medieval Italian word censo, meaning 'squared'. Since the square of a square of a number is its fourth power, Recorde used the word zenzizenzic (spelled by him as zenzizenzike) to express it. Some of the terms had prior use in Latin. Similarly, as the sixth power of a number is equal to the square of its cube, Recorde used the word zenzicubike to express it; a more modern spelling, zenzicube, is found in Samuel Jeake's Arithmetick Surveighed and Reviewed. Finally, the word zenzizenzizenzic denotes the square of the square of a number's square, which is its eighth power: in modern notation, ((x²)²)² = x⁸.
Samuel Jeake gives zenzizenzizenzizenzike (the square of the square of the square of the square, or 16th power) in a table in A Compleat Body of Arithmetick (1701):
The word, as well as the system, is obsolete except as a curiosity; the Oxford English Dictionary (OED) has only one citation for it.
As well as being a mathematical oddity, it survives as a linguistic oddity: zenzizenzizenzic has more Zs than any other word in the OED.
Notation for other powers
Recorde proposed three mathematical terms by which any power (that is, index or exponent) greater than 1 could be expressed: zenzic, i.e. squared; cubic; and sursolid, i.e. ra
|
https://en.wikipedia.org/wiki/Census%20of%20Coral%20Reefs
|
The Census of Coral Reefs (CReefs) is a field project of the Census of Marine Life that surveys the biodiversity of coral reef ecosystems internationally. The project works to study what species live in coral reef ecosystems, to develop standardized protocols for studying coral reef ecosystems, and to increase access to and exchange of information about coral reefs scattered throughout the globe.
The CReefs project uses the implementation of autonomous reef-monitoring structures (ARMS) to study the species that inhabit coral reefs. These structures are placed on the sea floor in areas where coral reefs exist, where they are left for one year. At the end of the year, the ARMS is pulled to the surface, along with the species which have inhabited it, for analysis.
Coral reefs are thought to be the most biologically diverse of all marine ecosystems. Major declines in key reef ecosystems suggest a decline in reef populations throughout the world due to environmental stresses. The vulnerability of coral reef ecosystems is expected to increase significantly in response to climate change. The reefs are also threatened by coral bleaching, ocean acidification, sea level rise, and changing storm tracks. Reef biodiversity could be in danger of being lost before it is even documented, leaving researchers with a limited and poor understanding of these complex ecosystems.
In an attempt to enhance global understanding of reef biodiversity, the goals of the CReefs Census of Coral Reef Ecosystems were to conduct a diverse global census of coral reef ecosystems and to increase access to and exchange of coral reef data throughout the world. Because coral reefs are the most diverse and among the most threatened of all marine ecosystems, there is great justification to learn more about them.
|
https://en.wikipedia.org/wiki/Turbo%20equalizer
|
In digital communications, a turbo equalizer is a type of receiver used to receive a message corrupted by a communication channel with intersymbol interference (ISI). It approaches the performance of a maximum a posteriori (MAP) receiver via iterative message passing between a soft-in soft-out (SISO) equalizer and a SISO decoder. It is related to turbo codes in that a turbo equalizer may be considered a type of iterative decoder if the channel is viewed as a non-redundant convolutional code. The turbo equalizer is different from a classic turbo-like code, however, in that the 'channel code' adds no redundancy and therefore can only be used to remove non-Gaussian noise.
History
Turbo codes were invented by Claude Berrou in 1990–1991. In 1993, turbo codes were introduced publicly via a paper listing authors Berrou, Glavieux, and Thitimajshima. In 1995 a novel extension of the turbo principle was applied to an equalizer by Douillard, Jézéquel, and Berrou. In particular, they formulated the ISI receiver problem as a turbo code decoding problem, where the channel is thought of as a rate 1 convolutional code and the error correction coding is the second code. In 1997, Glavieux, Laot, and Labat demonstrated that a linear equalizer could be used in a turbo equalizer framework. This discovery made turbo equalization computationally efficient enough to be applied to a wide range of applications.
Overview
Standard communication system overview
Before discussing turbo equalizers, it is necessary to understand the basic receiver in the context of a communication system. This is the topic of this section.
At the transmitter, information bits are encoded. Encoding adds redundancy by mapping the information bits to a longer bit vector – the code bit vector . The encoded bits are then interleaved. Interleaving permutes the order of the code bits resulting in bits . The main reason for doing this is to insulate the information bits from bursty noise. Next, the symbol
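A minimal sketch of this transmit chain (the rate-1/2 repetition code and the pseudo-random interleaver below are illustrative stand-ins for the convolutional encoder and interleaver an actual system would use):

import numpy as np

rng = np.random.default_rng(0)

# Information bits
b = rng.integers(0, 2, size=8)

# 1) Encode: a toy rate-1/2 repetition code adds redundancy (c is twice as long as b).
c = np.repeat(b, 2)

# 2) Interleave: permute the code bits with a fixed pseudo-random permutation,
#    spreading bursty channel errors across the codeword.
perm = rng.permutation(len(c))
c_interleaved = c[perm]

# 3) Map to symbols: BPSK maps bit 0 -> +1 and bit 1 -> -1.
x = 1 - 2 * c_interleaved

# The receiver inverts the steps; de-interleaving uses the inverse permutation.
inv_perm = np.argsort(perm)
assert np.array_equal(c_interleaved[inv_perm], c)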
|
https://en.wikipedia.org/wiki/List%20of%20triangle%20topics
|
This list of triangle topics includes things related to the geometric shape, either abstractly, as in idealizations studied by geometers, or in triangular arrays such as Pascal's triangle or triangular matrices, or concretely in physical space. It does not include metaphors like love triangle in which the word has no reference to the geometric shape.
Geometry
Triangle
Acute and obtuse triangles
Altern base
Altitude (triangle)
Area bisector of a triangle
Angle bisector of a triangle
Angle bisector theorem
Apollonius point
Apollonius' theorem
Automedian triangle
Barrow's inequality
Barycentric coordinates (mathematics)
Bernoulli's quadrisection problem
Brocard circle
Brocard points
Brocard triangle
Carnot's theorem (conics)
Carnot's theorem (inradius, circumradius)
Carnot's theorem (perpendiculars)
Catalogue of Triangle Cubics
Centroid
Ceva's theorem
Cevian
Circumconic and inconic
Circumscribed circle
Clawson point
Cleaver (geometry)
Congruence (geometry)
Congruent isoscelizers point
Contact triangle
Conway triangle notation
CPCTC
Delaunay triangulation
de Longchamps point
Desargues' theorem
Droz-Farny line theorem
Encyclopedia of Triangle Centers
Equal incircles theorem
Equal parallelians point
Equidissection
Equilateral triangle
Euler's line
Euler's theorem in geometry
Erdős–Mordell inequality
Exeter point
Exterior angle theorem
Fagnano's problem
Fermat point
Fermat's right triangle theorem
Fuhrmann circle
Fuhrmann triangle
Geometric mean theorem
GEOS circle
Gergonne point
Golden triangle (mathematics)
Gossard perspector
Hadley's theorem
Hadwiger–Finsler inequality
Heilbronn triangle problem
Heptagonal triangle
Heronian triangle
Heron's formula
Hofstadter points
Hyperbolic triangle (non-Euclidean geometry)
Hypotenuse
Incircle and excircles of a triangle
Inellipse
Integer triangle
Isodynamic point
Isogonal conjugate
Isoperimetric point
Isoscel
|
https://en.wikipedia.org/wiki/Operations%20security
|
Operations security (OPSEC) or operational security is a process that identifies critical information to determine whether friendly actions can be observed by enemy intelligence, determines if information obtained by adversaries could be interpreted to be useful to them, and then executes selected measures that eliminate or reduce adversary exploitation of friendly critical information.
The term "operations security" was coined by the United States military during the Vietnam War.
History
Vietnam
In 1966, United States Admiral Ulysses Sharp established a multidisciplinary security team to investigate the failure of certain combat operations during the Vietnam War. This operation was dubbed Operation Purple Dragon, and included personnel from the National Security Agency and the Department of Defense.
When the operation concluded, the Purple Dragon team codified their recommendations. They called the process "Operations Security" in order to distinguish the process from existing processes and ensure continued inter-agency support.
NSDD 298
In 1988, President Ronald Reagan signed National Security Decision Directive (NSDD) 298. This document established the National Operations Security Program and named the Director of the National Security Agency as the executive agent for inter-agency OPSEC support. This document also established the Interagency OPSEC Support Staff (IOSS).
Private-sector application
The private sector has also adopted OPSEC as a defensive measure against competitive intelligence collection efforts.
See also
For Official Use Only – FOUO
Information security
Intelligence cycle security
Security
Security Culture
Sensitive but unclassified – SBU
Controlled Unclassified Information - CUI
Social engineering
|
https://en.wikipedia.org/wiki/Contact%20region
|
A contact region is a concept in robotics which describes the region of contact between an object and a robot's end effector. This is used in object manipulation planning, and with the addition of sensors built into the manipulation system, can be used to produce a surface map or contact model of the object being grasped.
In Robotics
For a robot to autonomously grasp an object, it is necessary for the robot to have an understanding of its own construction and movement capabilities (described through the math of inverse kinematics), and an understanding of the object to be grasped. The relationship between these two is described through a contact model, which is a set of the potential points of contact between the robot and the object being grasped. This, in turn, is used to create a more concrete mathematical representation of the grasp to be attempted, which can then be computed through path planning techniques and executed.
In Mathematics
Depending on the complexity of the end effector, or through usage of external sensors such as a lidar or depth camera, a more complex model of the planes involved in the object being grasped can be produced. In particular, sensors embedded in the fingertips of an end effector have been demonstrated to be an effective approach for producing a surface map from a given contact region. Through knowledge of the position of each individual finger, the location of the sensors in each finger, and the amount of force being exerted by the object onto each sensor, points of contact can be calculated. These points of contact can then be turned into a three-dimensional ellipsoid, producing a surface map of the object.
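A minimal sketch of that contact-point calculation (the finger poses, sensor offsets and force threshold are illustrative assumptions): each sensor's location is pushed through its finger's pose, and sensors reporting force above a threshold contribute a contact point.

import numpy as np

def contact_points(finger_poses, sensor_offsets, forces, threshold=0.1):
    """Estimate contact points in the robot base frame.
    finger_poses   : list of 4x4 homogeneous transforms, base frame -> fingertip frame
    sensor_offsets : (k, 3) sensor positions expressed in the fingertip frame
    forces         : (n_fingers, k) normal force measured by each sensor (N)
    Returns an (m, 3) array of contact-point coordinates in the base frame."""
    points = []
    for T, f in zip(finger_poses, forces):
        for offset, force in zip(sensor_offsets, f):
            if force > threshold:                  # sensor is actually touching the object
                p = T @ np.append(offset, 1.0)     # homogeneous transform to the base frame
                points.append(p[:3])
    return np.array(points)

# Two fingertips 6 cm apart along y, each with two sensors; only three sensors report contact.
T1 = np.eye(4); T1[:3, 3] = [0.0,  0.03, 0.10]
T2 = np.eye(4); T2[:3, 3] = [0.0, -0.03, 0.10]
offsets = np.array([[0.005, 0.0, 0.0], [-0.005, 0.0, 0.0]])
forces = np.array([[0.8, 0.0], [0.5, 0.6]])
pts = contact_points([T1, T2], offsets, forces)
centroid = pts.mean(axis=0)   # a crude stand-in for fitting a surface/ellipsoid to the points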
Applications
In-hand manipulation is a typical use case. A robot hand interacts with static and deformable objects, described with soft-body dynamics. Sometimes additional tools, for example a screwdriver, have to be controlled by the robot hand. Such interaction produces a complex situation in which the robot hand has similar c
|
https://en.wikipedia.org/wiki/Transmission%20delay
|
In a network based on packet switching, transmission delay (or store-and-forward delay, also known as packetization delay or serialization delay) is the amount of time required to push all the packet's bits into the wire. In other words, this is the delay caused by the data-rate of the link.
Transmission delay is a function of the packet's length and has nothing to do with the distance between the two nodes. This delay is proportional to the packet's length in bits,
It is given by the following formula:
D_T = N / R seconds
where
D_T is the transmission delay in seconds,
N is the number of bits, and
R is the rate of transmission (say in bits per second).
Most packet switched networks use store-and-forward transmission at the input of the link. A switch using store-and-forward transmission will receive (save) the entire packet to the buffer and check it for CRC errors or other problems before sending the first bit of the packet into the outbound link. Thus, store-and-forward packet switches introduce a store-and-forward delay at the input to each link along the packet's route.
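For example (a sketch; the packet size and link rates are arbitrary illustrative values):

# Transmission delay D_T = N / R for a 1500-byte packet on two different links.
N = 1500 * 8                      # packet length in bits
for R in (10e6, 1e9):             # link rates: 10 Mbit/s and 1 Gbit/s
    D_T = N / R                   # seconds needed to push all bits onto the wire
    print(f"R = {R:.0e} bit/s -> D_T = {D_T * 1e3:.3f} ms")
# 10 Mbit/s -> 1.200 ms; 1 Gbit/s -> 0.012 ms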
See also
End-to-end delay
Processing delay
Queuing delay
Propagation delay
Network delay
|
https://en.wikipedia.org/wiki/Radio-frequency%20sweep
|
Radio frequency sweep, frequency sweep, or RF sweep refers to scanning a radio frequency band for detecting signals being transmitted there. A radio receiver with an adjustable receiving frequency is used to do this. A display shows the strength of the signals received at each frequency as the receiver's frequency is modified to sweep (scan) the desired frequency band.
Methods and tools
A spectrum analyzer is a standard instrument used for RF sweep. It includes an electronically tunable receiver and a display. The display presents measured power (y axis) vs frequency (x axis).
The power spectrum display is a two-dimensional display of measured power vs. frequency. The power may be either in linear units, or logarithmic units (dBm). Usually the logarithmic display is more useful, because it presents a larger dynamic range with better detail at each value. An RF sweep relates to a receiver which changes its frequency of operation continuously from a minimum frequency to a maximum (or from maximum to minimum). Usually the sweep is performed at a fixed, controllable rate, for example 5 MHz/sec.
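A minimal simulated sweep (a sketch; the band, step size and test tone are arbitrary illustrative values): the "receiver" is modelled as a bandpass measurement stepped across the band, and power is reported in dB relative to the peak rather than in absolute dBm, which would require a calibrated reference.

import numpy as np

fs = 1_000_000                       # sample rate of the captured block (Hz)
t = np.arange(0, 0.02, 1 / fs)
# Test signal: a carrier at 210 kHz plus a small amount of noise.
x = np.sin(2 * np.pi * 210_000 * t) + 0.01 * np.random.default_rng(1).standard_normal(t.size)

spectrum = np.fft.rfft(x * np.hanning(x.size))
freqs = np.fft.rfftfreq(x.size, 1 / fs)
power_db = 20 * np.log10(np.abs(spectrum) / np.max(np.abs(spectrum)))   # dB relative to peak

# "Sweep" from 100 kHz to 400 kHz in 10 kHz steps, reporting the strongest bin in each step.
for f_lo in range(100_000, 400_000, 10_000):
    band = (freqs >= f_lo) & (freqs < f_lo + 10_000)
    print(f"{f_lo/1e3:6.0f}-{(f_lo + 10_000)/1e3:.0f} kHz: {power_db[band].max():6.1f} dB")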
Some systems use frequency hopping, switching from one frequency of operation to another. One method of CDMA uses frequency hopping. Usually frequency hopping is performed in a random or pseudo-random pattern.
Applications
Frequency sweeps may be used by regulatory agencies to monitor the radio spectrum, to ensure that users only transmit according to their licenses. The FCC for example controls and monitors the use of the spectrum in the U.S. In testing of new electronic devices, a frequency sweep may be done to measure the performance of electronic components or systems. For example, RF oscillators are measured for phase noise, harmonics and spurious signals; computers for consumer sale are tested to avoid radio frequency interference with radio systems. Portable sweep equipment may be used to detect some types of covert listening device (bugs).
In professional audio, the
|
https://en.wikipedia.org/wiki/Computer%20engineering
|
Computer engineering (CoE or CpE) is a branch of electronic engineering and computer science that integrates several fields of computer science and electronic engineering required to develop computer hardware and software.
Computer engineering is referred to as computer science and engineering at some universities.
Computer engineers require training in electronic engineering, computer science, hardware-software integration, software design, and software engineering. The field uses the techniques and principles of electrical engineering and computer science, and can encompass areas such as artificial intelligence (AI), robotics, computer networks, computer architecture and operating systems. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also on how to integrate them into the larger picture. Robotics is one of the applications of computer engineering.
Computer engineering usually deals with areas including writing software and firmware for embedded microcontrollers, designing VLSI chips, designing analog sensors, designing mixed signal circuit boards, and designing operating systems. Computer engineers are also suited for robotics research, which relies heavily on using digital systems to control and monitor electrical systems like motors, communications, and sensors.
In many institutions of higher learning, computer engineering students are allowed to choose areas of in-depth study in their junior and senior year because the full breadth of knowledge used in the design and application of computers is beyond the scope of an undergraduate degree. Other institutions may require engineering students to complete one or two years of general engineering before declaring computer engineering as their primary focus.
History
Comp
|
https://en.wikipedia.org/wiki/Dowker%E2%80%93Thistlethwaite%20notation
|
In the mathematical field of knot theory, the Dowker–Thistlethwaite (DT) notation or code, for a knot is a sequence of even integers. The notation is named after Clifford Hugh Dowker and Morwen Thistlethwaite, who refined a notation originally due to Peter Guthrie Tait.
Definition
To generate the Dowker–Thistlethwaite notation, traverse the knot using an arbitrary starting point and direction. Label each of the n crossings with the numbers 1, ..., 2n in order of traversal (each crossing is visited and labelled twice), with the following modification: if the label is an even number and the strand followed crosses over at the crossing, then change the sign on the label to be a negative. When finished, each crossing will be labelled a pair of integers, one even and one odd. The Dowker–Thistlethwaite notation is the sequence of even integer labels associated with the labels 1, 3, ..., 2n − 1 in turn.
Example
For example, a knot diagram may have crossings labelled with the pairs (1, 6) (3, −12) (5, 2) (7, 8) (9, −4) and (11, −10). The Dowker–Thistlethwaite notation for this labelling is the sequence: 6 −12 2 8 −4 −10.
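A small sketch that extracts the Dowker–Thistlethwaite sequence from such a labelling (the crossing pairs below are the example above):

# Each crossing is labelled with one odd and one even integer; a negative even label
# means the strand followed passed over at that crossing.
pairs = [(1, 6), (3, -12), (5, 2), (7, 8), (9, -4), (11, -10)]

# Map each odd label to its even partner, then read the even labels off in the
# order of the odd labels 1, 3, 5, ...
odd_to_even = {}
for a, b in pairs:
    odd, even = (a, b) if a % 2 else (b, a)
    odd_to_even[odd] = even

dt_code = [odd_to_even[k] for k in sorted(odd_to_even)]
print(dt_code)   # [6, -12, 2, 8, -4, -10]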
Uniqueness and counting
Dowker and Thistlethwaite have proved that the notation specifies prime knots uniquely, up to reflection.
In the more general case, a knot can be recovered from a Dowker–Thistlethwaite sequence, but the recovered knot may differ from the original by either being a reflection or by having any connected sum component reflected in the line between its entry/exit points – the Dowker–Thistlethwaite notation is unchanged by these reflections. Knots tabulations typically consider only prime knots and disregard chirality, so this ambiguity does not affect the tabulation.
The ménage problem, posed by Tait, concerns counting the number of different number sequences possible in this notation.
See also
Alexander–Briggs notation
Conway notation
Gauss notation
|
https://en.wikipedia.org/wiki/CyTOF
|
Cytometry by time of flight, or CyTOF, is an application of mass cytometry used to quantify labeled targets on the surface and interior of single cells. CyTOF allows the quantification of multiple cellular components simultaneously using an ICP-MS detector.
CyTOF takes advantage of immunolabeling to quantify proteins, carbohydrates or lipids in a cell. Targets are selected to answer a specific research question and are labeled with lanthanide metal tagged antibodies. Labeled cells are nebulized and mixed with heated argon gas to dry the cell containing particles. The sample-gas mixture is focused and ignited with an argon plasma torch. This breaks the cells into their individual atoms and creates an ion cloud. Abundant low weight ions generated from environmental air and biological molecules are removed using a quadrupole mass analyzer. The remaining heavy ions from the antibody tags are quantified by Time-of-flight mass spectrometry. Ion abundances correlate with the amount of target per cell and can be used to infer cellular qualities.
Mass spectrometry's sensitivity to detect different ions allows measurements of upwards of 50 targets per cell while avoiding issues with spectral overlap seen when using fluorescent probes. However, this sensitivity also means trace heavy metal contamination is a concern. Using large numbers of probes creates new problems in analyzing the high dimensional data generated.
History
In 1994 Tsutomu Nomizu and colleagues at Nagoya University performed the first mass spectrometry experiments of single cells. Nomizu realized that single cells could be nebulized, dried, and ignited in plasma to generate clouds of ions which could be detected by emission spectrometry. In this type of experiment elements such as calcium within the cell could be quantified. Inspired by Flow cytometry, in 2007 Scott D. Tanner built upon this ICP-MS with the first multiplexed assay using lanthanide metals to label DNA and cell surface markers. In 2008 Tann
|
https://en.wikipedia.org/wiki/Organism
|
An organism () is any biological living system that functions as an individual life form. All organisms are composed of cells. The idea of organism is based on the concept of minimal functional unit of life. Three traits have been proposed to play the main role in qualification as an organism:
noncompartmentability – structure that cannot be divided without its functionality loss,
individuality – the entity has simultaneous holding of genetic uniqueness, genetic homogeneity and autonomy,
distinctness – genetic information has to maintain open-system (a cell).
Organisms include multicellular animals, plants, and fungi; or unicellular microorganisms such as protists, bacteria, and archaea. All types of organisms are capable of reproduction, growth and development, maintenance, and some degree of response to stimuli. Most multicellular organisms differentiate into specialized tissues and organs during their development.
In 2016, a set of 355 genes from the last universal common ancestor (LUCA) of all organisms from Earth was identified.
Etymology
The term "organism" (from Greek ὀργανισμός, organismos, from ὄργανον, organon, i.e. "instrument, implement, tool, organ of sense or apprehension") first appeared in the English language in 1703 and took on its current definition by 1834 (Oxford English Dictionary). It is directly related to the term "organization". There is a long tradition of defining organisms as self-organizing beings, going back at least to Immanuel Kant's 1790 Critique of Judgment.
Definitions
An organism may be defined as an assembly of molecules functioning as a more or less stable whole that exhibits the properties of life. Dictionary definitions can be broad, using phrases such as "any living structure, such as a plant, animal, fungus or bacterium, capable of growth and reproduction". Many definitions exclude viruses and possible synthetic non-organic life forms, as viruses are dependent on the biochemical machinery of a host cell for repr
|
https://en.wikipedia.org/wiki/Transparent%20heating%20film
|
Transparent heating film, also called transparent heating plastic or heating transparent polymer film, is a thin and flexible polymer film with a conductive optical coating. Transparent heating films may be rated at 2.5 kW/m² at voltages below 48 volts direct current (VDC). This allows heating with safety transformers delivering voltages which will not hurt the human body. Transparent conductive polymer films may be used for heating transparent glasses. A combination with transparent SMD electronics for multipurpose applications is also possible. It is also a variant of carbon heating film.
See also
Optical coating
Heating film
|
https://en.wikipedia.org/wiki/List%20of%20works%20by%20Nicolas%20Minorsky
|
List of works by Nicolas Minorsky.
Books
Papers
Conferences
Patents
|
https://en.wikipedia.org/wiki/Mathematical%20Models%20%28Cundy%20and%20Rollett%29
|
Mathematical Models is a book on the construction of physical models of mathematical objects for educational purposes. It was written by Martyn Cundy and A. P. Rollett, and published by the Clarendon Press in 1951, with a second edition in 1961. Tarquin Publications published a third edition in 1981.
The vertex configuration of a uniform polyhedron, a generalization of the Schläfli symbol that describes the pattern of polygons surrounding each vertex, was devised in this book as a way to name the Archimedean solids, and has sometimes been called the Cundy–Rollett symbol as a nod to this origin.
Topics
The first edition of the book had five chapters, including its introduction which discusses model-making in general and the different media and tools with which one can construct models. The media used for the constructions described in the book include "paper, cardboard, plywood, plastics, wire, string, and sheet metal".
The second chapter concerns plane geometry, and includes material on the golden ratio, the Pythagorean theorem, dissection problems, the mathematics of paper folding, tessellations, and plane curves, which are constructed by stitching, by graphical methods, and by mechanical devices.
The third chapter, and the largest part of the book, concerns polyhedron models, made from cardboard or plexiglass. It includes information about the Platonic solids, Archimedean solids, their stellations and duals, uniform polyhedron compounds, and deltahedra.
The fourth chapter is on additional topics in solid geometry and curved surfaces, particularly quadrics but also including topological manifolds such as the torus, Möbius strip and Klein bottle, and physical models helping to visualize the map coloring problem on these surfaces. Also included are sphere packings. The models in this chapter are constructed as the boundaries of solid objects, via two-dimensional paper cross-sections, and by string figures.
The fifth chapter, and the final one of the first editi
|
https://en.wikipedia.org/wiki/PCMOS
|
Probabilistic complementary metal-oxide semiconductor (PCMOS) is a semiconductor manufacturing technology invented by Prof. Krishna Palem of Rice University, Director of NTU's Institute for Sustainable Nanoelectronics (ISNE). The technology is intended to compete against current CMOS technology. Proponents claim it uses one thirtieth as much electricity while running seven times faster than the current fastest technology.
PCMOS-based system-on-a-chip architectures were shown to achieve gains as high as a substantial multiplicative factor of 560 when compared to a competing energy-efficient CMOS-based realization, on applications based on probabilistic algorithms such as hyper-encryption, Bayesian networks, random neural networks and probabilistic cellular automata.
|
https://en.wikipedia.org/wiki/Computer%20data%20storage
|
Computer data storage is a technology consisting of computer components and recording media that are used to retain digital data. It is a core function and fundamental component of computers.
The central processing unit (CPU) of a computer is what manipulates data by performing computations. In practice, almost all computers use a storage hierarchy, which puts fast but expensive and small storage options close to the CPU and slower but less expensive and larger options further away. Generally, the fast technologies are referred to as "memory", while slower persistent technologies are referred to as "storage".
Even the first computer designs, Charles Babbage's Analytical Engine and Percy Ludgate's Analytical Machine, clearly distinguished between processing and memory (Babbage stored numbers as rotations of gears, while Ludgate stored numbers as displacements of rods in shuttles). This distinction was extended in the Von Neumann architecture, where the CPU consists of two main parts: The control unit and the arithmetic logic unit (ALU). The former controls the flow of data between the CPU and memory, while the latter performs arithmetic and logical operations on data.
Functionality
Without a significant amount of memory, a computer would merely be able to perform fixed operations and immediately output the result. It would have to be reconfigured to change its behavior. This is acceptable for devices such as desk calculators, digital signal processors, and other specialized devices. Von Neumann machines differ in having a memory in which they store their operating instructions and data. Such computers are more versatile in that they do not need to have their hardware reconfigured for each new program, but can simply be reprogrammed with new in-memory instructions; they also tend to be simpler to design, in that a relatively simple processor may keep state between successive computations to build up complex procedural results. Most modern computers are von Neumann
|
https://en.wikipedia.org/wiki/Biospeleology
|
Biospeleology, also known as cave biology, is a branch of biology dedicated to the study of organisms that live in caves and are collectively referred to as troglofauna.
Biospeleology as a science
History
The first documented mention of a cave organism dates back to 1689, with the documentation of the olm, a cave salamander. Discovered in a cave in Slovenia, in the region of Carniola, it was mistaken for a baby dragon and was recorded by Johann Weikhard von Valvasor in his work The Glory of the Duchy of Carniola.
The first formal study of cave organisms was conducted on the blind cave beetle, found in 1831 by Luka Čeč, an assistant to the lamplighter, while exploring the newly discovered inner portions of the Postojna cave system in southwestern Slovenia. The specimen was turned over to Ferdinand J. Schmidt, who described it in the paper Illyrisches Blatt (1832). He named it Leptodirus hochenwartii after the donor, and also gave it the Slovene name drobnovratnik and the German name Enghalskäfer, both meaning "slender-necked (beetle)". The article represents the first formal description of a cave animal (the olm, described in 1768, wasn't recognized as a cave animal at the time).
Subsequent research by Schmidt revealed further previously unknown cave inhabitants, which aroused considerable interest among natural historians. For this reason, the discovery of L. hochenwartii (along with the olm) is considered as the starting point of biospeleology as a scientific discipline. Biospeleology was formalized as a science in 1907 by Emil Racoviță with his seminal work Essai sur les problèmes biospéologiques ("Essay on biospeleological problems”).
Subdivisions
Organisms Categories
Cave organisms fall into three basic classes:
Troglobite
Troglobites are obligatory cavernicoles, specialized for cave life. Some can leave caves for short periods, and may complete parts of their life cycles above ground, but cannot live their entire lives outside of a cave environment. Examp
|
https://en.wikipedia.org/wiki/Linear%20canonical%20transformation
|
In Hamiltonian mechanics, the linear canonical transformation (LCT) is a family of integral transforms that generalizes many classical transforms. It has 4 parameters and 1 constraint, so it is a 3-dimensional family, and can be visualized as the action of the special linear group SL2(R) on the time–frequency plane (domain). As this defines the original function up to a sign, this translates into an action of its double cover on the original function space.
The LCT generalizes the Fourier, fractional Fourier, Laplace, Gauss–Weierstrass, Bargmann and the Fresnel transforms as particular cases. The name "linear canonical transformation" is from canonical transformation, a map that preserves the symplectic structure, as SL2(R) can also be interpreted as the symplectic group Sp2, and thus LCTs are the linear maps of the time–frequency domain which preserve the symplectic form, and their action on the Hilbert space is given by the Metaplectic group.
The basic properties of the transformations mentioned above, such as scaling, shift, coordinate multiplication are considered. Any linear canonical transformation is related to affine transformations in phase space, defined by time-frequency or position-momentum coordinates.
Definition
The LCT can be represented in several ways; most easily, it can be parameterized by a 2×2 matrix with determinant 1, i.e., an element of the special linear group SL2(C). Then for any such matrix with ad − bc = 1, the corresponding integral transform from a function to is defined as
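One widely used parameterization of this integral, for the case b ≠ 0, is shown below as a sketch; normalization constants and sign conventions differ between references, and the degenerate case b = 0 reduces to a scaled chirp multiplication.

X_{(a,b,c,d)}(u) = \sqrt{\tfrac{1}{i b}}\; e^{\,i \pi \frac{d}{b} u^{2}} \int_{-\infty}^{\infty} e^{-i 2\pi \frac{u t}{b}}\, e^{\,i \pi \frac{a}{b} t^{2}}\, x(t)\, dt \qquad (b \neq 0),

X_{(a,0,c,d)}(u) = \sqrt{d}\; e^{\,i \pi c d\, u^{2}}\, x(d u) \qquad (b = 0).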
Special cases
Many classical transforms are special cases of the linear canonical transform:
Scaling
Scaling, , corresponds to scaling the time and frequency dimensions inversely (as time goes faster, frequencies are higher and the time dimension shrinks):
Fourier transform
The Fourier transform corresponds to a clockwise rotation by 90° in the time–frequency plane, represented by the matrix
Fractional Fourier transform
The fractional Fourier transform
|
https://en.wikipedia.org/wiki/Phi%20Tau%20Sigma
|
Phi Tau Sigma () is the Honor Society for food science and technology. The organization was founded in at the University of Massachusetts Amherst by Dr. Gideon E. (Guy) Livingston, a food technology professor. It was incorporated under the General Laws of the Commonwealth of Massachusetts , as "Phi Tau Sigma Honorary Society, Inc."
Greek letters designation
Why the choice of ΦΤΣ to designate the Honor Society? Some have speculated or assumed that the Greek letters correspond to the initials of "Food Technology Society". However, very recent research by Mary K. Schmidl, making use of documents retrieved from the Oregon State University archives by Robert McGorrin, including the 1958 Constitution, has elucidated the real basis of the choice. The 1958 Constitution is headed with three Greek words
"ΦΙΛΕΙΝ ΤΡΟΦΗΣ ΣΠΟΥΔΗΝ" under which are the English words "Devotion to the Study of Foods". With the assistance of Petros Taoukis, the Greek words are translated as follows:
ΦΙΛΕΙΝ: Love or devotion (pronounced Philleen, accent on the last syllable)
ΤΡΟΦΗΣ: of Food (pronounced Trophees, accent on the last syllable)
ΣΠΟΥΔΗΝ: Study (pronounced Spootheen, accent on the last syllable - th as in the word “the” or “this” not like in the word “thesis”).
The letters ΦΤΣ represent the initials of those three Greek words.
Charter Members
Besides Livingston, the charter members of the Honor Society were M.P. Baldorf, Robert V. Decareau, E. Felicotti, W.D. Powrie, M.A. Steinberg, and D.E. Westcott.
Purposes
To recognize and honor professional achievements of Food Scientists and Technologists,
To encourage the application of fundamental scientific principles to Food Science and Technology in each of its branches,
To stimulate the exchange of scientific knowledge through meetings, lectures, and publications,
To establish and maintain a network of like-minded professionals, and
To promote exclusively charitable, scientific, literary and educational programs.
Members
Phi Tau Sigma has (currentl
|
https://en.wikipedia.org/wiki/Apostolos%20Doxiadis
|
Apostolos K. Doxiadis (; born 1953) is a Greek writer. He is best known for his international bestsellers Uncle Petros and Goldbach's Conjecture (2000) and Logicomix (2009).
Early life
Doxiadis was born in Australia, where his father, the architect Constantinos Apostolou Doxiadis was working. Soon after his birth, the family returned to Athens, where Doxiadis grew up. Though his earliest interests were in poetry, fiction and the theatre, an intense interest in mathematics led Doxiadis to leave school at age fifteen, to attend Columbia University, in New York, from which he obtained a bachelor's degree in mathematics. He then attended the École Pratique des Hautes Études in Paris from which he got a master's degree, with a thesis on the mathematical modelling of the nervous system. His father's death and family reasons made him return to Greece in 1975, interrupting his graduate studies. In Greece, although involved for some years with the computer software industry, Doxiadis returned to his childhood and adolescence loves of theatre and the cinema, before becoming a full-time writer.
Work
Fiction in Greek
Doxiadis began to write in Greek. His first published work was A Parallel Life (Βίος Παράλληλος, 1985), a novella set in the monastic communities of 4th-century CE Egypt. His first novel, Makavettas (Μακαβέττας, 1988), recounted the adventures of a fictional power-hungry colonel at the time of the Greek military junta of 1967–1974. Written in a tongue-in-cheek imitation of Greek folk military memoirs, such as that of Yannis Makriyannis, it follows the plot of Shakespeare's Macbeth, of which the eponymous hero's name is a Hellenized form. Doxiadis's next novel, Uncle Petros and Goldbach's Conjecture (Ο Θείος Πέτρος και η Εικασία του Γκόλντμπαχ, 1992), was the first long work of fiction whose plot takes place in the world of pure mathematics research. The first Greek critics did not find the mathematical themes appealing, and it received mediocre reviews, unlike Dox
|
https://en.wikipedia.org/wiki/Brownout%20%28software%20engineering%29
|
Brownout in software engineering is a technique that involves disabling certain features of an application.
Description
Brownout is used to increase the robustness of an application to computing capacity shortage. If too many users are simultaneously accessing an application hosted online, the underlying computing infrastructure may become overloaded, rendering the application unresponsive. Users are likely to abandon the application and switch to competing alternatives, hence incurring long-term revenue loss. To better deal with such a situation, the application can be given brownout capabilities: The application will disable certain features – e.g., an online shop will no longer display recommendations of related products – to avoid overload. Although reducing features generally has a negative impact on the short-term revenue of the application owner, long-term revenue loss can be avoided.
The technique is inspired by brownouts in power grids, which consist of reducing the power grid's voltage in case electricity demand exceeds production. Some consumers, such as incandescent light bulbs, will dim – hence originating the term – and draw less power, thus helping match demand with production. Similarly, a brownout application helps match its computing capacity requirements to what is available on the target infrastructure.
Brownout complements elasticity. The former can help the application withstand short-term capacity shortage, but does so without changing the capacity available to the application. In contrast, elasticity consists of adding (or removing) capacity to the application, preferably in advance, so as to avoid capacity shortage altogether. The two techniques can be combined; e.g., brownout is triggered when the number of users increases unexpectedly until elasticity can be triggered, the latter usually requiring minutes to show an effect.
Brownout is relatively non-intrusive for the developer, for example, it can be implemented as an advice in asp
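A minimal sketch of the idea (the response-time threshold, adjustment step and feature choice are illustrative assumptions, not a prescribed design): a controller lowers a "dimmer" value when measured response times exceed the target, and optional features are served only with that probability.

import random

class BrownoutController:
    """Adjust the probability of serving optional features from measured latency."""

    def __init__(self, target_ms=200.0, step=0.1):
        self.target_ms = target_ms
        self.step = step
        self.dimmer = 1.0            # 1.0 = all optional features on, 0.0 = all off

    def update(self, measured_ms):
        # Simple additive controller; real deployments use better-tuned feedback laws.
        if measured_ms > self.target_ms:
            self.dimmer = max(0.0, self.dimmer - self.step)
        else:
            self.dimmer = min(1.0, self.dimmer + self.step)

    def serve_optional_content(self):
        # e.g. whether to compute product recommendations for this request
        return random.random() < self.dimmer

ctrl = BrownoutController()
for latency in (120, 180, 450, 600, 520, 240, 150):   # simulated response times (ms)
    ctrl.update(latency)
    print(f"latency={latency:4d} ms  dimmer={ctrl.dimmer:.1f}  "
          f"recommendations={'on' if ctrl.serve_optional_content() else 'off'}")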
|
https://en.wikipedia.org/wiki/Network%20processor
|
A network processor is an integrated circuit which has a feature set specifically targeted at the networking application domain.
Network processors are typically software programmable devices and would have generic characteristics similar to general purpose central processing units that are commonly used in many different types of equipment and products.
History of development
In modern telecommunications networks, information (voice, video, data) is transferred as packet data (termed packet switching) which is in contrast to older telecommunications networks that carried information as analog signals such as in the public switched telephone network (PSTN) or analog TV/Radio networks. The processing of these packets has resulted in the creation of integrated circuits (IC) that are optimised to deal with this form of packet data. Network processors have specific features or architectures that are provided to enhance and optimise packet processing within these networks.
Network processors have evolved into ICs with specific functions. This evolution has resulted in more complex and more flexible ICs being created. The newer circuits are programmable and thus allow a single hardware IC design to undertake a number of different functions, where the appropriate software is installed.
Network processors are used in the manufacture of many different types of network equipment such as:
Routers, software routers and switches (Inter-network processors)
Firewalls
Session border controllers
Intrusion detection devices
Intrusion prevention devices
Network monitoring systems
Network security (secure cryptoprocessors)
Reconfigurable Match-Tables
Reconfigurable Match-Tables were introduced in 2013 to allow switches to operate at high speeds while maintaining flexibility when it comes to the network protocols running on them, or the processing applied to them. P4 is used to program the chips. The company Barefoot Networks was based around these processors and was later
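A conceptual illustration of a match-action table in plain Python (a sketch only; real reconfigurable match-table hardware is programmed in P4 and matches on parsed packet header fields at line rate):

# A tiny exact-match table: match on the destination IP, apply an action with parameters.
table = {
    "10.0.0.1": ("forward", {"port": 1}),
    "10.0.0.2": ("forward", {"port": 2}),
}
default_action = ("drop", {})

def process_packet(dst_ip):
    action, params = table.get(dst_ip, default_action)
    if action == "forward":
        return f"send out port {params['port']}"
    return "packet dropped"

print(process_packet("10.0.0.2"))   # send out port 2
print(process_packet("10.9.9.9"))   # packet dropped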
|
https://en.wikipedia.org/wiki/Die%20shrink
|
The term die shrink (sometimes optical shrink or process shrink) refers to the scaling of metal–oxide–semiconductor (MOS) devices. The act of shrinking a die creates a somewhat identical circuit using a more advanced fabrication process, usually involving an advance of lithographic nodes. This reduces overall costs for a chip company, as the absence of major architectural changes to the processor lowers research and development costs while at the same time allowing more processor dies to be manufactured on the same piece of silicon wafer, resulting in less cost per product sold.
Die shrinks are the key to lower prices and higher performance at semiconductor companies such as Samsung, Intel, TSMC, and SK Hynix, and fabless manufacturers such as AMD (including the former ATI), NVIDIA and MediaTek.
Details
Examples in the 2000s include the downscaling of the PlayStation 2's Emotion Engine processor from Sony and Toshiba (from 180 nm CMOS in 2000 to 90 nm CMOS in 2003), the codenamed Cedar Mill Pentium 4 processors (from 90 nm CMOS to 65 nm CMOS) and Penryn Core 2 processors (from 65 nm CMOS to 45 nm CMOS), the codenamed Brisbane Athlon 64 X2 processors (from 90 nm SOI to 65 nm SOI), various generations of GPUs from both ATI and NVIDIA, and various generations of RAM and flash memory chips from Samsung, Toshiba and SK Hynix. In January 2010, Intel released Clarkdale Core i5 and Core i7 processors fabricated with a 32 nm process, down from a previous 45 nm process used in older iterations of the Nehalem processor microarchitecture. Intel, in particular, formerly focused on leveraging die shrinks to improve product performance at a regular cadence through its Tick-Tock model. In this business model, every new microarchitecture (tock) is followed by a die shrink (tick) to improve performance with the same microarchitecture.
Die shrinks are beneficial to end-users as shrinking a die reduces the current used by each transistor switching on or off in semiconductor device
|
https://en.wikipedia.org/wiki/Continuum%20%28measurement%29
|
Continuum (: continua or continuums) theories or models explain variation as involving gradual quantitative transitions without abrupt changes or discontinuities. In contrast, categorical theories or models explain variation using qualitatively different states.
In physics
In physics, for example, the space-time continuum model describes space and time as part of the same continuum rather than as separate entities. A spectrum in physics, such as the electromagnetic spectrum, is often termed as either continuous (with energy at all wavelengths) or discrete (energy at only certain wavelengths).
In contrast, quantum mechanics uses quanta, certain defined amounts (i.e. categorical amounts) which are distinguished from continuous amounts.
In mathematics and philosophy
A good introduction to the philosophical issues involved is John Lane Bell's essay in the Stanford Encyclopedia of Philosophy. A significant divide is provided by the law of excluded middle. It determines the divide between intuitionistic continua such as Brouwer's and Lawvere's, and classical ones such as Stevin's and Robinson's.
Bell isolates two distinct historical conceptions of infinitesimal, one by Leibniz and one by Nieuwentijdt, and argues that Leibniz's conception was implemented in Robinson's hyperreal continuum, whereas Nieuwentijdt's, in Lawvere's smooth infinitesimal analysis, characterized by the presence of nilsquare infinitesimals: "It may be said that Leibniz recognized the need for the first, but not the second type of infinitesimal and Nieuwentijdt, vice versa. It is of interest to note that Leibnizian infinitesimals (differentials) are realized in nonstandard analysis, and nilsquare infinitesimals in smooth infinitesimal analysis".
In social sciences, psychology and psychiatry
In social sciences in general, psychology and psychiatry included, data about differences between individuals, like any data, can be collected and measured using different levels of measurement. Those lev
|
https://en.wikipedia.org/wiki/Corollary
|
In mathematics and logic, a corollary ( , ) is a theorem of less importance which can be readily deduced from a previous, more notable statement. A corollary could, for instance, be a proposition which is incidentally proved while proving another proposition; it might also be used more casually to refer to something which naturally or incidentally accompanies something else (e.g., violence as a corollary of revolutionary social changes).
Overview
In mathematics, a corollary is a theorem connected by a short proof to an existing theorem. The use of the term corollary, rather than proposition or theorem, is intrinsically subjective. More formally, proposition B is a corollary of proposition A, if B can be readily deduced from A or is self-evident from its proof.
In many cases, a corollary corresponds to a special case of a larger theorem, which makes the theorem easier to use and apply, even though its importance is generally considered to be secondary to that of the theorem. In particular, B is unlikely to be termed a corollary if its mathematical consequences are as significant as those of A. A corollary might have a proof that explains its derivation, even though such a derivation might be considered rather self-evident in some occasions (e.g., the Pythagorean theorem as a corollary of law of cosines).
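For instance, in that last example the deduction is a single substitution (shown here as a sketch): setting the angle γ opposite side c to a right angle in the law of cosines immediately yields the Pythagorean theorem.

c^2 = a^2 + b^2 - 2ab\cos\gamma \quad\xrightarrow{\ \gamma = 90^{\circ},\ \cos\gamma = 0\ }\quad c^2 = a^2 + b^2 .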
Peirce's theory of deductive reasoning
Charles Sanders Peirce held that the most important division of kinds of deductive reasoning is that between corollarial and theorematic. He argued that while all deduction ultimately depends in one way or another on mental experimentation on schemata or diagrams, in corollarial deduction:
"it is only necessary to imagine any case in which the premises are true in order to perceive immediately that the conclusion holds in that case"
while in theorematic deduction:
"It is necessary to experiment in the imagination upon the image of the premise in order from the result of such experiment to make corollarial deductions to t
|
https://en.wikipedia.org/wiki/Komornik%E2%80%93Loreti%20constant
|
In the mathematical theory of non-standard positional numeral systems, the Komornik–Loreti constant is a mathematical constant that represents the smallest base q for which the number 1 has a unique representation, called its q-development. The constant is named after Vilmos Komornik and Paola Loreti, who defined it in 1998.
Definition
Given a real number q > 1, the series
x = Σ_{n=0}^∞ a_n q^(−n)
is called the q-expansion, or β-expansion, of the positive real number x if, for all n ≥ 0, 0 ≤ a_n ≤ ⌊q⌋, where ⌊q⌋ is the floor function and a_n need not be an integer. Any real number x such that 0 ≤ x ≤ q⌊q⌋/(q − 1) has such an expansion, as can be found using the greedy algorithm.
The special case of $x = 1$, $1 < q \le 2$, and $a_n \in \{0, 1\}$ is sometimes called a $q$-development. $q = 2$ gives the only 2-development. However, for almost all $1 < q < 2$, there are an infinite number of different $q$-developments. Even more surprisingly though, there exist exceptional $q \in (1, 2)$ for which there exists only a single $q$-development. Furthermore, there is a smallest number $1 < q < 2$, known as the Komornik–Loreti constant, for which there exists a unique $q$-development.
Value
The Komornik–Loreti constant is the value $q$ such that
$1 = \sum_{k=1}^{\infty} \frac{t_k}{q^k},$
where $(t_k)$ is the Thue–Morse sequence, i.e., $t_k$ is the parity of the number of 1's in the binary representation of $k$. It has approximate value
$q \approx 1.787231650\ldots$
The constant is also the unique positive real root of
$\prod_{k=0}^{\infty} \left(1 - \frac{1}{q^{2^k}}\right) = \left(1 - \frac{1}{q}\right)^{-1} - 2.$
This constant is transcendental.
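A minimal numerical sketch, assuming only the defining equation above: since $\sum_k t_k q^{-k}$ is strictly decreasing in q on (1, 2), bisection recovers the constant. The function names and tolerances below are my own choices, not part of the original definition.

```python
# Approximate the Komornik-Loreti constant q from 1 = sum_{k>=1} t_k * q^(-k),
# where t_k is the Thue-Morse sequence (parity of the number of 1 bits in k).

def thue_morse(k: int) -> int:
    """Parity of the number of 1 bits in the binary representation of k."""
    return bin(k).count("1") % 2

def f(q: float, terms: int = 200) -> float:
    """sum_{k=1}^{terms} t_k * q^(-k), minus 1."""
    return sum(thue_morse(k) * q ** (-k) for k in range(1, terms + 1)) - 1.0

lo, hi = 1.5, 2.0            # the constant lies strictly between 1 and 2
for _ in range(60):          # bisection: f is strictly decreasing in q
    mid = (lo + hi) / 2
    if f(mid) > 0:           # sum still exceeds 1 -> q must be larger
        lo = mid
    else:
        hi = mid

print((lo + hi) / 2)         # ~1.787231650...
```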
See also
Euler–Mascheroni constant
Fibonacci word
Golay–Rudin–Shapiro sequence
Prouhet–Thue–Morse constant
|
https://en.wikipedia.org/wiki/Eigenmoments
|
EigenMoments are a set of moments that are orthogonal, robust to noise, invariant to rotation, scaling and translation, and sensitive to the underlying distribution. They find application in signal processing and computer vision as descriptors of a signal or image, and the descriptors can later be used for classification purposes.
They are obtained by performing orthogonalization, via eigen analysis, on geometric moments.
Framework summary
EigenMoments are computed by performing eigen analysis on the moment space of an image, maximizing the signal-to-noise ratio in the feature space in the form of a Rayleigh quotient.
This approach has several benefits in Image processing applications:
The dependency of moments in the moment space on the distribution of the images being transformed ensures decorrelation of the final feature space after eigen analysis on the moment space.
The ability of EigenMoments to take into account the distribution of the image makes them more versatile and adaptable for different genres.
Generated moment kernels are orthogonal and therefore analysis on the moment space becomes easier. Transformation with orthogonal moment kernels into moment space is analogous to projection of the image onto a number of orthogonal axes.
Noisy components can be removed. This makes EigenMoments robust for classification applications.
Optimal information compaction can be obtained, and therefore only a small number of moments is needed to characterize the images.
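The Rayleigh-quotient formulation above can be made concrete with a short, generic sketch (not the exact EigenMoments derivation): maximizing $w^T S w / w^T N w$, with $S$ a signal covariance in moment space and $N$ a noise covariance, reduces to a generalized symmetric eigenproblem. All matrices below are hypothetical placeholders for real moment data.

```python
# Generic Rayleigh-quotient maximization: the maximizing directions are the
# leading generalized eigenvectors of the pencil (S, N).
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Hypothetical moment-space covariances (placeholders, not real image moments).
M = rng.standard_normal((10, 10))
S = M @ M.T                      # "signal" covariance (symmetric PSD)
N = 0.1 * np.eye(10)             # "noise" covariance (here white noise)

# Solve the generalized symmetric eigenproblem  S w = lambda N w.
eigvals, eigvecs = eigh(S, N)

# Keep the k directions with the largest signal-to-noise ratio.
k = 3
W = eigvecs[:, np.argsort(eigvals)[::-1][:k]]

x = rng.standard_normal(10)      # a vector of geometric moments
features = W.T @ x               # decorrelated, SNR-maximizing features
print(features)
```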
Problem formulation
Assume that a signal vector $x$ is taken from a certain distribution having correlation $C$, i.e. $C = E[xx^T]$, where $E[\cdot]$ denotes expected value.
The dimension of the signal space, $n$, is often too large to be useful for practical applications such as pattern classification, so we need to transform the signal space into a space of lower dimensionality.
This is performed by a two-step linear transformation:
where is the transformed signal, a fixed transformation matrix which transforms the signal into the moment space, and the transformation matrix
|
https://en.wikipedia.org/wiki/Eukaryogenesis
|
Eukaryogenesis, the process which created the eukaryotic cell and lineage, is a milestone in the evolution of life, since eukaryotes include all complex cells and almost all multicellular organisms. The process is widely agreed to have involved symbiogenesis, in which archaea and bacteria came together to create the first eukaryotic common ancestor (FECA). This cell had a new level of complexity and capability, with a nucleus, at least one centriole and cilium, facultatively aerobic mitochondria, sex (meiosis and syngamy), a dormant cyst with a cell wall of chitin and/or cellulose and peroxisomes. It evolved into a population of single-celled organisms that included the last eukaryotic common ancestor (LECA), gaining capabilities along the way, though the sequence of the steps involved has been disputed, and may not have started with symbiogenesis. In turn, the LECA gave rise to the eukaryotes' crown group, containing the ancestors of animals, fungi, plants, and a diverse range of single-celled organisms.
Context
Life arose on Earth once it had cooled enough for oceans to form. The last universal common ancestor (LUCA) was an organism which had ribosomes and the genetic code; it lived some 4 billion years ago. It gave rise to two main branches of prokaryotic life, the bacteria and the archaea. From among these small-celled, rapidly-dividing ancestors arose the Eukaryotes, with much larger cells, nuclei, and distinctive biochemistry. The eukaryotes form a domain that contains all complex cells and most types of multicellular organism, including the animals, plants, and fungi.
Symbiogenesis
According to the theory of symbiogenesis (also known as the endosymbiotic theory) championed by Lynn Margulis, a member of the archaea gained a bacterial cell as a component. The archaeal cell was a member of the Asgard group. The bacterium was one of the Alphaproteobacteria, which had the ability to use oxygen in its respiration. This enabled it – and the archaeal cells that
|
https://en.wikipedia.org/wiki/Kleptotype
|
In taxonomy, kleptotype is an unofficial term for a stolen or unrightfully displaced type specimen, or part of a type specimen.
Etymology
The term is composed of klepto-, from the Ancient Greek (kléptō) meaning "to steal", and -type referring to type specimens. It translates to "stolen type".
History
During the Second World War, biological collections, such as the herbarium in Berlin, were destroyed. This led to the loss of type specimens. In some cases only kleptotypes survived the destruction, as the type material had been removed from its original collection. For instance, the type of Taxus celebica was thought to have been destroyed during the Second World War, but a kleptotype survived the war in Stockholm.
Kleptotypes have been taken by researchers, who subsequently added their unauthorised type duplicates to their own collections.
Consequences
Taking kleptotypes has been criticised as destructive, wasteful, and unethical. The displacement of type material complicates the work of taxonomists, as species identities may become ambiguous when type material is missing. It can cause problems, as researchers have to search multiple collections to get a complete perspective on the displaced material. To combat this issue it has been proposed to weigh specimens before loaning types, and to identify loss of material by comparing the type's weight upon return. Also, in some herbaria, such as the herbarium at Kew, specimens are glued to the herbarium sheets to hinder the removal of plant material. However, this also makes it difficult to handle the specimens.
Rules concerning type specimens
The International Code of Nomenclature for algae, fungi, and plants (ICN) does not explicitly prohibit the removal of material from type specimens; however, it strongly recommends that type specimens be conserved properly. It is paramount that types remain intact, as they are an irreplaceable resource and point of reference.
|
https://en.wikipedia.org/wiki/Power%20management%20integrated%20circuit
|
Power management integrated circuits (power management ICs or PMICs or PMU as unit) are integrated circuits for power management. Although PMIC refers to a wide range of chips (or modules in system-on-a-chip devices), most include several DC/DC converters or their control part. A PMIC is often included in battery-operated devices (such as mobile phones and portable media players) and embedded devices (such as routers) to decrease the amount of space required.
Overview
The term PMIC refers to a class of integrated circuits that perform various functions related to power requirements.
A PMIC may have one or more of the following functions:
DC to DC conversion
Battery charging
Power-source selection
Voltage scaling
Power sequencing
Miscellaneous functions
Power management ICs are solid state devices that control the flow and direction of electrical power. Many electrical devices use multiple internal voltages (e.g., 5 V, 3.3 V, 1.8 V, etc.) and sources of external power (e.g., wall outlet, battery, etc.), meaning that the power design of the device has multiple requirements for operation. A PMIC can refer to any chip that performs an individual power-related function, but the term generally refers to ICs that incorporate more than one function, such as different power conversions and power controls (voltage supervision and undervoltage protection, for example). By incorporating these functions into one IC, a number of improvements to the overall design can be made, such as better conversion efficiency, smaller solution size, and better heat dissipation.
Features
A PMIC may include battery management, voltage regulation, and charging functions. It may include a DC to DC converter to allow dynamic voltage scaling. Some models are known to feature up to 95% power conversion efficiency. Some models integrate with dynamic frequency scaling in a combination known as DVFS (dynamic voltage and frequency scaling).
It may be manufactured using a BiCMOS process and may come in a QFN package. Some mod
|
https://en.wikipedia.org/wiki/Thermal%20runaway
|
Thermal runaway describes a process that is accelerated by increased temperature, in turn releasing energy that further increases temperature. Thermal runaway occurs in situations where an increase in temperature changes the conditions in a way that causes a further increase in temperature, often leading to a destructive result. It is a kind of uncontrolled positive feedback.
In chemistry (and chemical engineering), thermal runaway is associated with strongly exothermic reactions that are accelerated by temperature rise. In electrical engineering, thermal runaway is typically associated with increased current flow and power dissipation. Thermal runaway can occur in civil engineering, notably when the heat released by large amounts of curing concrete is not controlled. In astrophysics, runaway nuclear fusion reactions in stars can lead to nova and several types of supernova explosions, and also occur as a less dramatic event in the normal evolution of solar-mass stars, the "helium flash".
Chemical engineering
Chemical reactions involving thermal runaway are also called thermal explosions in chemical engineering, or runaway reactions in organic chemistry. It is a process by which an exothermic reaction goes out of control: the reaction rate increases due to an increase in temperature, causing a further increase in temperature and hence a further rapid increase in the reaction rate. This has contributed to industrial chemical accidents, most notably the 1947 Texas City disaster from overheated ammonium nitrate in a ship's hold, and the 1976 explosion of zoalene, in a drier, at King's Lynn. Frank-Kamenetskii theory provides a simplified analytical model for thermal explosion. Chain branching is an additional positive feedback mechanism which may also cause temperature to skyrocket because of rapidly increasing reaction rate.
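A toy simulation (my own illustration; all parameter values are arbitrary, not taken from the source) of the feedback loop just described, using Arrhenius kinetics for heat release against fixed Newtonian cooling:

```python
# Above a critical initial temperature, exponential heat release outpaces
# linear cooling and the temperature runs away.
import numpy as np

def simulate(T0, steps=20000, dt=0.01):
    """Integrate dT/dt = A*exp(-Ea/(R*T))*dH - k_cool*(T - T_amb) from T(0) = T0."""
    A, Ea, R = 5e8, 8e4, 8.314        # pre-exponential factor, activation energy, gas constant
    dH, k_cool, T_amb = 50.0, 1.0, 300.0
    T = T0
    for step in range(steps):
        heating = A * np.exp(-Ea / (R * T)) * dH   # Arrhenius heat release
        cooling = k_cool * (T - T_amb)             # heat removed by the surroundings
        T += (heating - cooling) * dt
        if T > 1000.0:                             # treat this threshold as "runaway"
            return f"runaway after {step * dt:.1f} s"
    return f"steady at {T:.1f} K"

print(simulate(300.0))   # cooling wins: temperature stays near ambient
print(simulate(550.0))   # heat release outpaces cooling: thermal runaway
```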
Chemical reactions are either endothermic or exothermic, as expressed by their change in enthalpy. Many reactions are highly exothermic, so ma
|
https://en.wikipedia.org/wiki/List%20of%20periodic%20functions
|
This is a list of some well-known periodic functions. The constant function $f(x) = c$, where $c$ is independent of $x$, is periodic with any period, but lacks a fundamental period. A definition is given for some of the following functions, though each function may have many equivalent definitions.
Smooth functions
All trigonometric functions listed have period $2\pi$, unless otherwise stated. For the following trigonometric functions:
is the th up/down number,
is the th Bernoulli number
in Jacobi elliptic functions,
Non-smooth functions
The following functions have period $p$ and take $x$ as their argument. The symbol $\lfloor x \rfloor$ is the floor function of $x$ and $\operatorname{sgn}$ is the sign function.
K means Elliptic integral K(m)
Vector-valued functions
Epitrochoid
Epicycloid (special case of the epitrochoid)
Limaçon (special case of the epitrochoid)
Hypotrochoid
Hypocycloid (special case of the hypotrochoid)
Spirograph (special case of the hypotrochoid)
Doubly periodic functions
Jacobi's elliptic functions
Weierstrass's elliptic function
Notes
Mathematics-related lists
Types of functions
|
https://en.wikipedia.org/wiki/Dry%20basis
|
Dry basis is an expression of the calculation in chemistry, chemical engineering and related subjects, in which the presence of water (H2O) (and/or other solvents) is neglected for the purposes of the calculation. Water (and/or other solvents) is neglected because addition and removal of water (and/or other solvents) are common processing steps, and also happen naturally through evaporation and condensation; it is frequently useful to express compositions on a dry basis to remove these effects.
Example
An aqueous solution containing 2 g of glucose and 2 g of fructose per 100 g of solution contains 2/100=2% glucose on a wet basis, but 2/4=50% glucose on a dry basis. If the solution had contained 2 g of glucose and 3 g of fructose, it would still have contained 2% glucose on a wet basis, but only 2/5=40% glucose on a dry basis.
Frequently concentrations are calculated to a dry basis using the moisture (water) content $MC$ of the material:
$C_{\text{dry}} = \frac{C_{\text{wet}}}{1 - MC}$
In the example above the glucose concentration is 2% as-is and the moisture content is 96%, so the dry-basis concentration is 2% / (1 − 0.96) = 50%.
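As a minimal sketch of the conversion above (the function name is my own), the same arithmetic in code:

```python
# Dry-basis conversion: c_dry = c_wet / (1 - moisture), fractions on a 0-1 scale.

def dry_basis(c_wet: float, moisture: float) -> float:
    """Convert a wet-basis mass fraction to a dry-basis mass fraction."""
    return c_wet / (1.0 - moisture)

# Example from the text: 2% glucose wet basis with 96% moisture -> 50% dry basis.
print(dry_basis(0.02, 0.96))  # 0.5
```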
|
https://en.wikipedia.org/wiki/Commensalism
|
Commensalism is a long-term biological interaction (symbiosis) in which members of one species gain benefits while those of the other species neither benefit nor are harmed. This is in contrast with mutualism, in which both organisms benefit from each other; amensalism, where one is harmed while the other is unaffected; and parasitism, where one is harmed and the other benefits.
The commensal (the species that benefits from the association) may obtain nutrients, shelter, support, or locomotion from the host species, which is substantially unaffected. The commensal relation is often between a larger host and a smaller commensal; the host organism is unmodified, whereas the commensal species may show great structural adaptation consistent with its habits, as in the remoras that ride attached to sharks and other fishes. Remoras feed on their hosts' fecal matter, while pilot fish feed on the leftovers of their hosts' meals. Numerous birds perch on bodies of large mammal herbivores or feed on the insects turned up by grazing mammals.
Etymology
The word "commensalism" is derived from the word "commensal", meaning "eating at the same table" in human social interaction, which in turn comes through French from the Medieval Latin commensalis, meaning "sharing a table", from the prefix com-, meaning "together", and mensa, meaning "table" or "meal". Commensality, at the Universities of Oxford and Cambridge, refers to professors eating at the same table as students (as they live in the same "college").
Pierre-Joseph van Beneden introduced the term "commensalism" in 1876.
Examples of commensal relationships
The commensal pathway was traveled by animals that fed on refuse around human habitats or by animals that preyed on other animals drawn to human camps. Those animals established a commensal relationship with humans in which the animals benefited but the humans received little benefit or harm. Those animals that were most capable of taking advantage of the resources associ
|
https://en.wikipedia.org/wiki/List%20of%20stochastic%20processes%20topics
|
In the mathematics of probability, a stochastic process is a random function. In practical applications, the domain over which the function is defined is a time interval (time series) or a region of space (random field).
Familiar examples of time series include stock market and exchange rate fluctuations, signals such as speech, audio and video; medical data such as a patient's EKG, EEG, blood pressure or temperature; and random movement such as Brownian motion or random walks.
Examples of random fields include static images, random topographies (landscapes), or composition variations of an inhomogeneous material.
Stochastic processes topics
This list is currently incomplete. See also :Category:Stochastic processes
Basic affine jump diffusion
Bernoulli process: discrete-time processes with two possible states.
Bernoulli schemes: discrete-time processes with N possible states; every stationary process in N outcomes is a Bernoulli scheme, and vice versa.
Bessel process
Birth–death process
Branching process
Branching random walk
Brownian bridge
Brownian motion
Chinese restaurant process
CIR process
Continuous stochastic process
Cox process
Dirichlet processes
Finite-dimensional distribution
First passage time
Galton–Watson process
Gamma process
Gaussian process – a process where all linear combinations of coordinates are normally distributed random variables.
Gauss–Markov process (cf. below)
GenI process
Girsanov's theorem
Hawkes process
Homogeneous processes: processes where the domain has some symmetry and the finite-dimensional probability distributions also have that symmetry. Special cases include stationary processes, also called time-homogeneous.
Karhunen–Loève theorem
Lévy process
Local time (mathematics)
Loop-erased random walk
Markov processes are those in which the future is conditionally independent of the past given the present.
Markov chain
Markov chain central limit theorem
Conti
|
https://en.wikipedia.org/wiki/String%20art
|
__notoc__
String art, or pin and thread art, is characterized by an arrangement of colored thread strung between points to form geometric patterns or representational designs such as a ship's sails, sometimes with other artist material comprising the remainder of the work. Thread, wire, or string is wound around a grid of nails hammered into a velvet-covered wooden board. Though straight lines are formed by the string, the slightly different angles and metric positions at which the strings intersect give the appearance of Bézier curves (as in the mathematical concept of the envelope of a family of straight lines). Quadratic Bézier curves are obtained from strings based on two intersecting segments. Other forms of string art include Spirelli, which is used for cardmaking and scrapbooking, and curve stitching, in which string is stitched through holes.
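As a hedged illustration of the construction just described (the coordinates and the number of strings are arbitrary choices of mine), the following sketch draws the "strings" between two segments meeting at a common corner and overlays the quadratic Bézier curve they envelop:

```python
# Strings between equally spaced points on segments AB and BC envelop the
# quadratic Bezier curve with control points A, B, C.
import numpy as np
import matplotlib.pyplot as plt

A, B, C = np.array([0.0, 0.0]), np.array([0.0, 1.0]), np.array([1.0, 1.0])
n = 20  # number of "strings"

for i in range(n + 1):
    t = i / n
    p = (1 - t) * A + t * B   # point moving from A toward B
    q = (1 - t) * B + t * C   # matching point moving from B toward C
    plt.plot([p[0], q[0]], [p[1], q[1]], color="0.6", linewidth=0.8)

# The envelope of those chords: (1-t)^2 A + 2t(1-t) B + t^2 C.
t = np.linspace(0, 1, 200)[:, None]
bezier = (1 - t) ** 2 * A + 2 * t * (1 - t) * B + t ** 2 * C
plt.plot(bezier[:, 0], bezier[:, 1], "r")
plt.gca().set_aspect("equal")
plt.show()
```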
String art has its origins in the 'curve stitch' activities invented by Mary Everest Boole at the end of the 19th century to make mathematical ideas more accessible to children. It was popularised as a decorative craft in the late 1960s through kits and books.
A computational form of string art that can produce photo-realistic artwork was introduced by Petros Vrellis, in 2016.
Gallery
See also
Bézier curve
Envelope (mathematics)
N-connectedness
|
https://en.wikipedia.org/wiki/Without%20loss%20of%20generality
|
Without loss of generality (often abbreviated to WOLOG, WLOG or w.l.o.g.; less commonly stated as without any loss of generality or with no loss of generality) is a frequently used expression in mathematics. The term is used to indicate the assumption that follows is chosen arbitrarily, narrowing the premise to a particular case, but does not affect the validity of the proof in general. The other cases are sufficiently similar to the one presented that proving them follows by essentially the same logic. As a result, once a proof is given for the particular case, it is trivial to adapt it to prove the conclusion in all other cases.
In many scenarios, the use of "without loss of generality" is made possible by the presence of symmetry. For example, if some property P(x,y) of real numbers is known to be symmetric in x and y, namely that P(x,y) is equivalent to P(y,x), then in proving that P(x,y) holds for every x and y, one may assume "without loss of generality" that x ≤ y. There is no loss of generality in this assumption, since once the case x ≤ y ⇒ P(x,y) has been proved, the other case follows by interchanging x and y : y ≤ x ⇒ P(y,x), and by symmetry of P, this implies P(x,y), thereby showing that P(x,y) holds for all cases.
On the other hand, if neither such a symmetry nor another form of equivalence can be established, then the use of "without loss of generality" is incorrect and can amount to an instance of proof by example – a logical fallacy of proving a claim by proving a non-representative example.
Example
Consider the following theorem (which is a case of the pigeonhole principle):
A proof:
The above argument works because the exact same reasoning could be applied if the alternative assumption, namely, that the first object is blue, were made, or, similarly, that the words 'red' and 'blue' can be freely exchanged in the wording of the proof. As a result, the use of "without loss of generality" is valid in this case.
See also
Up to
Mat
|
https://en.wikipedia.org/wiki/Neuromechanics
|
Neuromechanics is an interdisciplinary field that combines biomechanics and neuroscience to understand how the nervous system interacts with the skeletal and muscular systems to enable animals to move. In a motor task, like reaching for an object, neural commands are sent to motor neurons to activate a set of muscles, called muscle synergies. Given which muscles are activated and how they are connected to the skeleton, there will be a corresponding and specific movement of the body. In addition to participating in reflexes, neuromechanical processes may also be shaped through motor adaptation and learning.
Neuromechanics underlying behavior
Walking
The inverted pendulum theory of gait is a neuromechanical approach to understanding how humans walk. As the name of the theory implies, a walking human is modeled as an inverted pendulum consisting of a center of mass (COM) suspended above the ground via a support leg. As the inverted pendulum swings forward, ground reaction forces occur between the modeled leg and the ground. Importantly, the magnitude of the ground reaction forces depends on the COM position and size. The velocity vector of the center of mass is always perpendicular to the ground reaction force.
Walking consists of alternating single-support and double-support phases. The single-support phase occurs when one leg is in contact with the ground while the double-support phase occurs when two legs are in contact with the ground.
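A minimal numerical sketch of the single-support phase under the inverted pendulum model described above, assuming a rigid, massless leg of length L, simple Euler integration, and illustrative initial conditions (none of these values come from the source):

```python
# Inverted pendulum vaulting: theta is the lean angle from vertical, so
#     d^2(theta)/dt^2 = (g / L) * sin(theta).
import numpy as np

g, L = 9.81, 1.0          # gravity (m/s^2), leg length (m)
dt = 0.001                # integration step (s)
theta, omega = -0.2, 0.9  # initial lean angle (rad) and angular velocity (rad/s)

heights = []
for _ in range(500):                  # ~0.5 s of single support
    alpha = (g / L) * np.sin(theta)   # angular acceleration
    omega += alpha * dt               # explicit Euler integration
    theta += omega * dt
    heights.append(L * np.cos(theta)) # COM height above the ankle

print(f"COM height varies between {min(heights):.3f} m and {max(heights):.3f} m")
```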
Neurological influences
The inverted pendulum is stabilized by constant feedback from the brain and can operate even in the presence of sensory loss. In animals that have lost all sensory input to the moving limb, the variables produced by gait (center of mass acceleration, velocity of the animal, and position of the animal) remain the same as in animals with intact sensation.
During postural control, delayed feedback mechanisms are used in the temporal reproduction of task-level functions such as walking. The nervous system takes into a
|
https://en.wikipedia.org/wiki/Pectin
|
Pectin (from the Ancient Greek for "congealed" and "curdled") is a heteropolysaccharide, a structural acid contained in the primary lamella, in the middle lamella, and in the cell walls of terrestrial plants. Its principal chemical component is galacturonic acid (a sugar acid derived from galactose), which was isolated and described by Henri Braconnot in 1825. Commercially produced pectin is a white-to-light-brown powder, produced from citrus fruits for use as an edible gelling agent, especially in jams and jellies, dessert fillings, medications, and sweets; as a food stabiliser in fruit juices and milk drinks; and as a source of dietary fiber.
Biology
Pectin is composed of complex polysaccharides that are present in the primary cell walls of a plant, and are abundant in the green parts of terrestrial plants.
Pectin is the principal component of the middle lamella, where it binds cells. Pectin is deposited by exocytosis into the cell wall via vesicles produced in the Golgi apparatus. The amount, structure and chemical composition of pectin is different among plants, within a plant over time, and in various parts of a plant. Pectin is an important cell wall polysaccharide that allows primary cell wall extension and plant growth. During fruit ripening, pectin is broken down by the enzymes pectinase and pectinesterase, in which process the fruit becomes softer as the middle lamellae break down and cells become separated from each other. A similar process of cell separation caused by the breakdown of pectin occurs in the abscission zone of the petioles of deciduous plants at leaf fall.
Pectin is a natural part of the human diet, but does not contribute significantly to nutrition. The daily intake of pectin from fruits and vegetables can be estimated to be around 5 g if approximately 500 g of fruits and vegetables are consumed per day.
In human digestion, pectin binds to cholesterol in the gastrointestinal tract and slows glucose absorption by trapping carbohydrates. Pectin is
|
https://en.wikipedia.org/wiki/Von%20Baer%27s%20laws%20%28embryology%29
|
In developmental biology, von Baer's laws of embryology (or laws of development) are four rules proposed by Karl Ernst von Baer to explain the observed pattern of embryonic development in different species.
von Baer formulated the laws in his book On the Developmental History of Animals (Über Entwickelungsgeschichte der Thiere), published in 1828, while working at the University of Königsberg. He specifically intended to rebut Johann Friedrich Meckel's 1808 recapitulation theory. According to that theory, embryos pass through successive stages that represent the adult forms of less complex organisms in the course of development, ultimately reflecting the scala naturae (the great chain of being). von Baer believed that such linear development is impossible. He posited that instead of linear progression, embryos started from one or a few basic forms that are similar in different animals, and then developed in a branching pattern into increasingly different organisms. Defending his ideas, he was also opposed to Charles Darwin's 1859 theory of common ancestry and descent with modification, and particularly to Ernst Haeckel's revised recapitulation theory with its slogan "ontogeny recapitulates phylogeny". Darwin was however broadly supportive of von Baer's view of the relationship between embryology and evolution.
The laws
Von Baer described his laws in his book Über Entwickelungsgeschichte der Thiere. Beobachtung und Reflexion published in 1828. They are a series of statements generally summarised into four points, as translated by Thomas Henry Huxley in his Scientific Memoirs:
The more general characters of a large group appear earlier in the embryo than the more special characters.
From the most general forms the less general are developed, and so on, until finally the most special arises.
Every embryo of a given animal form, instead of passing through the other forms, rather becomes separated from them.
The embryo of a higher form never resembles any other form, but only its embryo.
Description
Von Baer
|
https://en.wikipedia.org/wiki/Multiplex%20baseband
|
In telecommunication, the term multiplex baseband has the following meanings:
In frequency-division multiplexing, the frequency band occupied by the aggregate of the signals in the line interconnecting the multiplexing and radio or line equipment.
In frequency division multiplexed carrier systems, at the input to any stage of frequency translation, the frequency band occupied.
For example, the output of a group multiplexer consists of a band of frequencies from 60 kHz to 108 kHz. This is the group-level baseband that results from combining 12 voice-frequency input channels, having a bandwidth of 4 kHz each, including guard bands. In turn, 5 groups are multiplexed into a super group having a baseband of 312 kHz to 552 kHz. This baseband, however, does not represent a group-level baseband. Ten super groups are in turn multiplexed into one master group, the output of which is a baseband that may be used to modulate a microwave-frequency carrier.
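The frequency-plan arithmetic in this example can be reproduced in a few lines (a trivial sketch; the variable names are my own):

```python
# 12 voice channels of 4 kHz each form a 48 kHz group (60-108 kHz), 5 groups
# form a 240 kHz supergroup (312-552 kHz), and 10 supergroups form a mastergroup.

channel_bw_khz = 4                      # one voice channel, guard bands included
group_bw = 12 * channel_bw_khz          # 48 kHz  -> occupies 60-108 kHz
supergroup_bw = 5 * group_bw            # 240 kHz -> occupies 312-552 kHz
mastergroup_bw = 10 * supergroup_bw     # 2400 kHz of aggregate baseband

print(group_bw, 108 - 60)               # 48 48
print(supergroup_bw, 552 - 312)         # 240 240
print(mastergroup_bw)                   # 2400
```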
Multiplexing
Signal processing
|
https://en.wikipedia.org/wiki/Intraspecific%20competition
|
Intraspecific competition is an interaction in population ecology, whereby members of the same species compete for limited resources. This leads to a reduction in fitness for both individuals, but the more fit individual survives and is able to reproduce.
By contrast, interspecific competition occurs when members of different species compete for a shared resource. Members of the same species have rather similar requirements for resources, whereas different species have a smaller contested resource overlap, resulting in intraspecific competition generally being a stronger force than interspecific competition.
Individuals can compete for food, water, space, light, mates, or any other resource which is required for survival or reproduction. The resource must be limited for competition to occur; if every member of the species can obtain a sufficient amount of every resource then individuals do not compete and the population grows exponentially. Prolonged exponential growth is rare in nature because resources are finite and so not every individual in a population can survive, leading to intraspecific competition for the scarce resources.
When resources are limited, an increase in population size reduces the quantity of resources available for each individual, reducing the per capita fitness in the population. As a result, the growth rate of a population slows as intraspecific competition becomes more intense, making it a negatively density-dependent process. The falling population growth rate as population increases can be modelled effectively with the logistic growth model. The rate of change of population density eventually falls to zero, the point ecologists have termed the carrying capacity (K). However, a population can only grow to a very limited number within an environment. The carrying capacity of an environment, defined by the variable K, is the maximum number of individuals or species an environment can sustain and support over a longer period of time. The r
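A minimal sketch of the logistic growth model referred to above, with illustrative values for the intrinsic growth rate r and carrying capacity K (not taken from the source):

```python
# Logistic growth: dN/dt = r * N * (1 - N / K). Growth slows as N approaches K.

r, K = 0.5, 1000.0      # intrinsic growth rate (per day) and carrying capacity
N, dt = 10.0, 0.1       # initial population size and time step (days)

for _ in range(300):
    dN = r * N * (1 - N / K) * dt   # density-dependent growth increment
    N += dN

print(round(N))          # approaches the carrying capacity K = 1000
```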
|
https://en.wikipedia.org/wiki/Enriques%E2%80%93Kodaira%20classification
|
In mathematics, the Enriques–Kodaira classification is a classification of compact complex surfaces into ten classes. For each of these classes, the surfaces in the class can be parametrized by a moduli space. For most of the classes the moduli spaces are well understood, but for the class of surfaces of general type the moduli spaces seem too complicated to describe explicitly, though some components are known.
Max Noether began the systematic study of algebraic surfaces, and Guido Castelnuovo proved important parts of the classification. Federigo Enriques described the classification of complex projective surfaces, and Kunihiko Kodaira later extended the classification to include non-algebraic compact surfaces. The analogous classification of surfaces in positive characteristic was begun by David Mumford and completed by Enrico Bombieri and Mumford; it is similar to the characteristic 0 projective case, except that one also gets singular and supersingular Enriques surfaces in characteristic 2, and quasi-hyperelliptic surfaces in characteristics 2 and 3.
Statement of the classification
The Enriques–Kodaira classification of compact complex surfaces states that every nonsingular minimal compact complex surface is of exactly one of the 10 types listed on this page; in other words, it is one of the rational, ruled (genus > 0), type VII, K3, Enriques, Kodaira, torus, hyperelliptic, properly elliptic, or general type surfaces.
For the 9 classes of surfaces other than general type, there is a fairly complete description of what all the surfaces look like (which for class VII depends on the global spherical shell conjecture, still unproved in 2009). For surfaces of general type not much is known about their explicit classification, though many examples have been found.
The classification of algebraic surfaces in positive characteristic (, ) is similar to that of algebraic surfaces in characteristic 0, except that there are no Kodaira surfaces or surfaces of type VII, and there are some extra families of Enriques surfaces in characterist
|
https://en.wikipedia.org/wiki/Cytoplasmic%20hybrid
|
A cytoplasmic hybrid (or cybrid, a portmanteau of the two words) is a eukaryotic cell line produced by the fusion of a whole cell with a cytoplast. Cytoplasts are enucleated cells. This enucleation can be effected by simultaneous application of centrifugal force and treatment of the cell with an agent that disrupts the cytoskeleton. A special case of cybrid formation involves the use of rho-zero cells as the whole cell partner in the fusion. Rho-zero cells are cells which have been depleted of their own mitochondrial DNA by prolonged incubation with ethidium bromide, a chemical which inhibits mitochondrial DNA replication. The rho-zero cells do retain mitochondria and can grow in rich culture medium with certain supplements. They do retain their own nuclear genome. A cybrid is then a hybrid cell which combines the nuclear genes from one cell with the mitochondrial genes from another cell. This powerful tool makes it possible to dissociate the contribution of the mitochondrial genes from that of the nuclear genes.
Cybrids are valuable in mitochondrial research and have been used to provide suggestive evidence of mitochondrial involvement in Alzheimer's disease, Parkinson's disease, and other conditions.
Legal issues
Research utilizing cybrid embryos has been hotly contested due to the ethical implications of further cybrid research. In 2008, the House of Lords passed the Human Fertilisation and Embryology Act 2008, which allows the creation of mixed human-animal embryos for medical purposes only. Such cybrids are 99.9% human and 0.1% animal. A cybrid may be kept for a maximum of 14 days, owing to the development of the brain and spinal cord, after which time the cybrid must be destroyed. During the two-week period, stem cells may be harvested from the cybrid for research or medical purposes. Under no circumstances may a cybrid be implanted into a human uterus.
|
https://en.wikipedia.org/wiki/WSSUS%20model
|
The WSSUS (Wide-Sense Stationary Uncorrelated Scattering) model provides a statistical description of the transmission behavior of wireless channels. "Wide-sense stationarity" means that the second-order moments of the channel are stationary, i.e. they depend only on the time difference, while "uncorrelated scattering" means that contributions arriving with different delays τ, from different scatterers, are uncorrelated.
Modelling of mobile channels as WSSUS (wide sense stationary uncorrelated scattering) has become popular among specialists. The model was introduced by Phillip A. Bello in 1963.
A commonly used description of time variant channel applies the set of Bello functions and the theory of stochastic processes.
|
https://en.wikipedia.org/wiki/Lebombo%20bone
|
The Lebombo bone is a bone tool made of a baboon fibula with incised markings discovered in Border Cave in the Lebombo Mountains located between South Africa and Eswatini. Changes in the section of the notches indicate the use of different cutting edges, which the bone's discoverer, Peter Beaumont, views as evidence for their having been made, like other markings found all over the world, during participation in rituals.
The bone is between 43,000 and 42,000 years old, according to 24 radiocarbon datings. This is far older than the Ishango bone with which it is sometimes confused. Other notched bones are 80,000 years old but it is unclear if the notches are merely decorative or if they bear a functional meaning.
The bone has been conjectured to be a tally stick. According to The Universal Book of Mathematics the Lebombo bone's 29 notches suggest "it may have been used as a lunar phase counter, in which case African women may have been the first mathematicians, because keeping track of menstrual cycles requires a lunar calendar". However, the bone is broken at one end, so the 29 notches may or may not be the total number. In the cases of other notched bones since found globally, there has been no consistent notch tally, many being in the 1–10 range.
See also
History of mathematics
Tally sticks
|
https://en.wikipedia.org/wiki/Location%20information%20server
|
The location information server, or LIS, is a network node originally defined in the National Emergency Number Association i2 network architecture that addresses the intermediate solution for providing E911 service for users of VoIP telephony. The LIS is the node that determines the location of the VoIP terminal.
Beyond the NENA architecture and VoIP, the LIS is capable of providing location information to any IP device within its served access network.
The role of the LIS
Distributed systems for locating people and equipment will be at the heart of tomorrow's active offices. Computer and communications systems continue to proliferate in the office and home. Systems are varied and complex, involving wireless networks and mobile computers. However, systems are underused because the choices of control mechanisms and application interfaces are too diverse. It is therefore pertinent to consider which mechanisms might allow the user to manipulate systems in simple and ubiquitous ways, and how computers can be made more aware of the facilities in their surroundings. Knowledge of the location of people and equipment within an organization is such a mechanism. Annotating a resource database with location information allows location-based heuristics for control and interaction to be constructed. This approach is particularly attractive because location techniques can be devised that are physically unobtrusive and do not rely on explicit user action. The article describes the technology of a system for locating people and equipment, and the design of a distributed system service supporting access to that information. The application interfaces made possible by or that benefit from this facility are presented
Location determination
The method used to determine the location of a device in an access network varies between the different types of networks. For a wired network, such as Ethernet or DSL a wiremap method is common. In wiremap location determination, the locat
|
https://en.wikipedia.org/wiki/Link%20level
|
In computer networking, the link level is, in the hierarchical structure of a primary or secondary station, the conceptual level of control or data processing logic that controls the data link.
Note: Link-level functions provide an interface between the station high-level logic and the data link. Link-level functions include (a) transmit bit injection and receive bit extraction, (b) address and control field interpretation, (c) command response generation, transmission and interpretation, and (d) frame check sequence computation and interpretation.
|
https://en.wikipedia.org/wiki/Resource%20Location%20and%20Discovery%20Framing
|
Resource Location and Discovery (RELOAD) is a peer-to-peer (P2P) signalling protocol for use on the Internet. A P2P signalling protocol provides its clients with an abstract storage and messaging service between a set of cooperating peers that form the overlay network. RELOAD is designed to support a peer-to-peer SIP network, but can be utilized by other applications with similar requirements by defining new usages that specify the kinds of data that must be stored for a particular application. RELOAD defines a security model based on a certificate enrollment service that provides unique identities. NAT traversal is a fundamental service of the protocol. RELOAD also allows access from "client" nodes that do not need to route traffic or store data for others.
|
https://en.wikipedia.org/wiki/Ad%20hoc%20network
|
An ad hoc network refers to technologies that allow network communications on an ad hoc basis. Associated technologies include:
Wireless ad hoc network
Mobile ad hoc network
Vehicular ad hoc network
Intelligent vehicular ad hoc network
Protocols associated with ad hoc networking
Ad hoc On-Demand Distance Vector Routing
Ad Hoc Configuration Protocol
Smart phone ad hoc network
Ad hoc wireless distribution service
|
https://en.wikipedia.org/wiki/Quantum%20Aspects%20of%20Life
|
Quantum Aspects of Life, a book published in 2008 with a foreword by Roger Penrose, explores the open question of the role of quantum mechanics at molecular scales of relevance to biology. The book contains chapters written by various world-experts from a 2003 symposium and includes two debates from 2003 to 2004; giving rise to a mix of both sceptical and sympathetic viewpoints. The book addresses questions of quantum physics, biophysics, nanoscience, quantum chemistry, mathematical biology, complexity theory, and philosophy that are inspired by the 1944 seminal book What Is Life? by Erwin Schrödinger.
Contents
Foreword by Roger Penrose
Section 1: Emergence and Complexity
Chapter 1: "A Quantum Origin of Life?" by Paul C. W. Davies
Chapter 2: "Quantum Mechanics and Emergence" by Seth Lloyd
Section 2: Quantum Mechanisms in Biology
Chapter 3: "Quantum Coherence and the Search for the First Replicator" by Jim Al-Khalili and Johnjoe McFadden
Chapter 4: "Ultrafast Quantum Dynamics in Photosynthesis" by Alexandra Olaya-Castro, Francesca Fassioli Olsen, Chiu Fan Lee, and Neil F. Johnson
Chapter 5: "Modeling Quantum Decoherence in Biomolecules" by Jacques Bothma, Joel Gilmore, and Ross H. McKenzie
Section 3: The Biological Evidence
Chapter 6: "Molecular Evolution: A Role for Quantum Mechanics in the Dynamics of Molecular Machines that Read and Write DNA" by Anita Goel
Chapter 7: "Memory Depends on the Cytoskeleton, but is it Quantum?" by Andreas Mershin and Dimitri V. Nanopoulos
Chapter 8: "Quantum Metabolism and Allometric Scaling Relations in Biology" by Lloyd Demetrius
Chapter 9: "Spectroscopy of the Genetic Code" by Jim D. Bashford and Peter D. Jarvis
Chapter 10: "Towards Understanding the Origin of Genetic Languages" by Apoorva D. Patel
Section 4: Artificial Quantum Life
Chapter 11: "Can Arbitrary Quantum Systems Undergo Self-Replication?" by Arun K. Pati and Samuel L. Braunstein
Chapter 12: "A Semi-Quantum Version of the Game of Life" by Adrian P. Flitne
|
https://en.wikipedia.org/wiki/Algorism
|
Algorism is the technique of performing basic arithmetic by writing numbers in place value form and applying a set of memorized rules and facts to the digits. One who practices algorism is known as an algorist. This positional notation system has largely superseded earlier calculation systems that used a different set of symbols for each numerical magnitude, such as Roman numerals, and in some cases required a device such as an abacus.
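As a hedged illustration of the kind of memorized, digit-wise rule an algorist applies (the function below is my own sketch, not a procedure recorded in any historical treatise), here is column-by-column decimal addition with a carry, which works precisely because the numerals are written in place-value form:

```python
def add_digits(a: str, b: str) -> str:
    """Add two non-negative decimal numerals digit by digit, right to left."""
    a, b = a.zfill(len(b)), b.zfill(len(a))   # align the columns
    carry, out = 0, []
    for da, db in zip(reversed(a), reversed(b)):
        s = int(da) + int(db) + carry
        out.append(str(s % 10))               # digit written in this column
        carry = s // 10                       # digit carried to the next column
    if carry:
        out.append(str(carry))
    return "".join(reversed(out))

print(add_digits("478", "359"))   # 837
```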
Etymology
The word algorism comes from the name Al-Khwārizmī (c. 780–850), a Persian mathematician, astronomer, geographer and scholar in the House of Wisdom in Baghdad, whose name means "the native of Khwarezm", which is now in modern-day Uzbekistan. He wrote a treatise in Arabic in the 9th century, which was translated into Latin in the 12th century under the title Algoritmi de numero Indorum. This title means "Algoritmi on the numbers of the Indians", where "Algoritmi" was the translator's Latinization of Al-Khwarizmi's name. Al-Khwarizmi was the most widely read mathematician in Europe in the late Middle Ages, primarily through his other book, the Algebra. In late medieval Latin, algorismus, the corruption of his name, simply meant the "decimal number system", which is still the meaning of modern English algorism. During the 17th century, the French form of the word – but not its meaning – was changed to algorithm, following the model of the word logarithm, this form alluding to the ancient Greek ἀριθμός (arithmos, "number"). English adopted the French form very soon afterwards, but it wasn't until the late 19th century that "algorithm" took on the meaning that it has in modern English. In English, it was first used about 1230 and then by Chaucer in 1391. Another early use of the word is from 1240, in a manual titled Carmen de Algorismo composed by Alexandre de Villedieu. It begins thus:
which translates as:
The word algorithm also derives from algorism, a generalization of the meaning to any set of rules specifying a computational procedure. Occasiona
|
https://en.wikipedia.org/wiki/Parameter%20space
|
The parameter space is the space of possible parameter values that define a particular mathematical model, often a subset of finite-dimensional Euclidean space. Often the parameters are inputs of a function, in which case the technical term for the parameter space is domain of a function. The ranges of values of the parameters may form the axes of a plot, and particular outcomes of the model may be plotted against these axes to illustrate how different regions of the parameter space produce different types of behavior in the model.
In statistics, parameter spaces are particularly useful for describing parametric families of probability distributions. They also form the background for parameter estimation. In the case of extremum estimators for parametric models, a certain objective function is maximized or minimized over the parameter space. Theorems of existence and consistency of such estimators require some assumptions about the topology of the parameter space. For instance, compactness of the parameter space, together with continuity of the objective function, suffices for the existence of an extremum estimator.
Examples
A simple model of health deterioration after developing lung cancer could include the two parameters gender and smoker/non-smoker, in which case the parameter space is the following set of four possibilities: {(male, smoker), (male, non-smoker), (female, smoker), (female, non-smoker)}.
The logistic map has one parameter, r, which can take any positive value. The parameter space is therefore positive real numbers.
For some values of r, this function ends up cycling round a few values, or fixed on one value. These long-term values can be plotted against r in a bifurcation diagram to show the different behaviours of the function for different values of r.
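A short sketch of how this slice of the parameter space can be explored numerically (the sampling range, initial value, and iteration counts are arbitrary choices of mine):

```python
# For each r, iterate x -> r*x*(1-x), discard a transient, record the long-term
# values, and plot them against r to obtain the bifurcation diagram.
import numpy as np
import matplotlib.pyplot as plt

rs = np.linspace(2.5, 4.0, 1000)      # slice of the parameter space
points_r, points_x = [], []

for r in rs:
    x = 0.5
    for _ in range(500):              # discard transient behaviour
        x = r * x * (1 - x)
    for _ in range(100):              # record long-term values
        x = r * x * (1 - x)
        points_r.append(r)
        points_x.append(x)

plt.plot(points_r, points_x, ",k")
plt.xlabel("parameter r")
plt.ylabel("long-term values of x")
plt.show()
```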
In a sine wave model the parameters are amplitude A > 0, angular frequency ω > 0, and phase φ ∈ S1. Thus the parameter space is $\mathbb{R}^{+} \times \mathbb{R}^{+} \times S^{1}$.
In complex dynamics, the parameter space is the complex plane C = { z = x + y i : x, y ∈ R }, where i² = −1.
The famous Mandelbrot
|
https://en.wikipedia.org/wiki/Positive-real%20function
|
Positive-real functions, often abbreviated to PR function or PRF, are a kind of mathematical function that first arose in electrical network synthesis. They are complex functions, Z(s), of a complex variable, s. A rational function is defined to have the PR property if it has a positive real part and is analytic in the right half of the complex plane and takes on real values on the real axis.
In symbols the definition is,
$\operatorname{Re}[Z(s)] \ge 0 \quad \text{for} \quad \operatorname{Re}(s) \ge 0,$
with $Z(s)$ real when $s$ is real.
In electrical network analysis, Z(s) represents an impedance expression and s is the complex frequency variable, often expressed in terms of its real and imaginary parts,
$s = \sigma + i\omega,$
in which terms the PR condition can be stated:
$\operatorname{Re}[Z(\sigma + i\omega)] \ge 0 \quad \text{for} \quad \sigma \ge 0,$
with $Z(\sigma)$ real.
The importance to network analysis of the PR condition lies in the realisability condition. Z(s) is realisable as a one-port rational impedance if and only if it meets the PR condition. Realisable in this sense means that the impedance can be constructed from a finite (hence rational) number of discrete ideal passive linear elements (resistors, inductors and capacitors in electrical terminology).
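A rough numerical sanity check of the PR conditions can be written in a few lines; the sample impedance below is my own choice (the driving-point impedance of a 1 F capacitor in parallel with a series 1 Ω–1 H branch), not one taken from the source:

```python
# Test the PR conditions on a grid of points in the closed right half-plane.
import numpy as np

def Z(s):
    return (s + 1) / (s**2 + s + 1)   # impedance of a simple RLC one-port

sigma = np.linspace(0.0, 5.0, 200)            # Re(s) >= 0
omega = np.linspace(-50.0, 50.0, 400)         # Im(s)
S, W = np.meshgrid(sigma, omega)
grid = S + 1j * W

print("real on the real axis:", np.allclose(Z(sigma).imag, 0))
print("Re Z >= 0 on the grid :", np.all(Z(grid).real >= -1e-12))
```

Both checks pass for this example, consistent with the fact that it is built from passive elements; a function such as Z(s) = (s − 1)/(s + 1), by contrast, fails the sign condition on the imaginary axis.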
Definition
The term positive-real function was originally defined by Otto Brune to describe any function Z(s) which
is rational (the quotient of two polynomials),
is real when s is real
has positive real part when s has a positive real part
Many authors strictly adhere to this definition by explicitly requiring rationality, or by restricting attention to rational functions, at least in the first instance. However, a similar, more general condition, not restricted to rational functions, had earlier been considered by Cauer, and some authors ascribe the term positive-real to this type of condition, while others consider it to be a generalization of the basic definition.
History
The condition was first proposed by Wilhelm Cauer (1926) who determined that it was a necessary condition. Otto Brune (1931) coined the term positive-real for the condition and proved that it was both necessary and sufficient for realisability.
Properties
The sum of two
|
https://en.wikipedia.org/wiki/Outline%20of%20trigonometry
|
The following outline is provided as an overview of and topical guide to trigonometry:
Trigonometry – branch of mathematics that studies the relationships between the sides and the angles in triangles. Trigonometry defines the trigonometric functions, which describe those relationships and have applicability to cyclical phenomena, such as waves.
Basics
Geometry – mathematics concerned with questions of shape, size, the relative position of figures, and the properties of space. Geometry is used extensively in trigonometry.
Angle – the angle is the figure formed by two rays, called the sides of the angle, sharing a common endpoint, called the vertex of the angle. Angles formed by two rays lie in a plane, but this plane does not have to be a Euclidean plane.
Ratio – a ratio indicates how many times one number contains another
Content of trigonometry
Trigonometry
Trigonometric functions
Trigonometric identities
Euler's formula
Scholars
Archimedes
Aristarchus
Aryabhata
Bhaskara I
Claudius Ptolemy
Euclid
Hipparchus
Madhava of Sangamagrama
Ptolemy
Pythagoras
Regiomontanus
History
Aristarchus's inequality
Bhaskara I's sine approximation formula
Greek astronomy
Indian astronomy
Jyā, koti-jyā and utkrama-jyā
Madhava's sine table
Ptolemy's table of chords
Rule of marteloio
Āryabhaṭa's sine table
Fields
Uses of trigonometry
Acoustics
Architecture
Astronomy
Biology
Cartography
Chemistry
Civil engineering
Computer graphics
Cryptography
Crystallography
Economics
Electrical engineering
Electronics
Game development
Geodesy
Mechanical engineering
Medical imaging
Meteorology
Music theory
Number theory
Oceanography
Optics
Pharmacy
Phonetics
Physical science
Probability theory
Seismology
Statistics
Surveying
Physics
Abbe sine condition
Greninger chart
Phasor
Snell's law
Astronomy
Equant
Parallax
Dialing scales
Chemistry
Greninger chart
Geography, geodesy, and land surveying
Hansen's problem
Sn
|
https://en.wikipedia.org/wiki/Multilink%20striping
|
Multilink striping is a type of data striping used in telecommunications to achieve higher throughput or increase the resilience of a network connection by data aggregation over multiple network links simultaneously.
Multipath routing and multilink striping are often used synonymously. However, there are some differences. When applied to end-hosts, multilink striping requires multiple physical interfaces and access to multiple networks at once. On the other hand, multiple routing paths can be obtained with a single end-host interface, either within the network, or, in case of a wireless interface and multiple neighboring nodes, at the end-host itself.
See also
RFC 1990, The PPP Multilink Protocol (MP)
Link aggregation
Computer networking
|
https://en.wikipedia.org/wiki/List%20of%207400-series%20integrated%20circuits
|
The following is a list of 7400-series digital logic integrated circuits. In the mid-1960s, the original 7400-series integrated circuits were introduced by Texas Instruments with the prefix "SN" to create the name SN74xx. Due to the popularity of these parts, other manufacturers released pin-to-pin compatible logic devices and kept the 7400 sequence number as an aid to identification of compatible parts. However, other manufacturers use different prefixes and suffixes on their part numbers.
Overview
Some TTL logic parts were made with an extended military-specification temperature range. These parts are prefixed with 54 instead of 74 in the part number.
A short-lived 64 prefix on Texas Instruments parts indicated an industrial temperature range; this prefix had been dropped from the TI literature by 1973. Most recent 7400-series parts are fabricated in CMOS or BiCMOS technology rather than TTL. Surface-mount parts with a single gate (often in a 5-pin or 6-pin package) are prefixed with 741G instead of 74.
Some manufacturers released some 4000-series equivalent CMOS circuits with a 74 prefix, for example, the 74HC4066 was a replacement for the 4066 with slightly different electrical characteristics (different power-supply voltage ratings, higher frequency capabilities, lower "on" resistances in analog switches, etc.). See List of 4000-series integrated circuits.
Conversely, the 4000-series has "borrowed" from the 7400 series such as the CD40193 and CD40161 being pin-for-pin functional replacements for 74C193 and 74C161.
Older TTL parts made by manufacturers such as Signetics, Motorola, Mullard and Siemens may have different numeric prefix and numbering series entirely, such as in the European FJ family FJH101 is an 8-input NAND gate like a 7430.
A few alphabetic characters to designate a specific logic subfamily may immediately follow the 74 or 54 in the part number, e.g., 74LS74 for low-power Schottky. Some CMOS parts such as 74HCT74 for high-speed CMOS wit
|
https://en.wikipedia.org/wiki/Software%20engineering%20professionalism
|
Software engineering professionalism is a movement to make software engineering a profession, with aspects such as degree and certification programs, professional associations, professional ethics, and government licensing. The field is a licensed discipline in Texas in the United States (Texas Board of Professional Engineers, since 2013), in Engineers Australia (course accreditation since 2001, not licensing), and in many provinces in Canada.
History
In 1993 the IEEE and ACM began a joint effort called JCESEP, which evolved into SWECC in 1998, to explore making software engineering into a profession. The ACM pulled out of SWECC in May 1999, objecting to its support for the Texas professionalization efforts of having state licenses for software engineers. The ACM determined that the state of knowledge and practice in software engineering was too immature to warrant licensing, and that licensing would give false assurances of competence even if the body of knowledge were mature. The IEEE continued to support making software engineering a branch of traditional engineering.
In Canada the Canadian Information Processing Society established the Information Systems Professional certification process. Also, by the late 1990s (1999 in British Columbia) the discipline of software engineering as a professional engineering discipline was officially created. This has caused some disputes between the provincial engineering associations and companies who call their developers software engineers, even though these developers have not been licensed by any engineering association.
In 1999, the Panel of Software Engineering was formed as part of the settlement between Engineering Canada and the Memorial University of Newfoundland over the school's use of the term "software engineering" in the name of a computer science program. Concerns were raised that inappropriate use of the name "software engineering" to describe non-engineering programs could lead to student and public confusion, a
|
https://en.wikipedia.org/wiki/List%20of%20topologies
|
The following is a list of named topologies or topological spaces, many of which are counterexamples in topology and related branches of mathematics. This is not a list of properties that a topology or topological space might possess; for that, see List of general topology topics and Topological property.
Discrete and indiscrete
Discrete topology − All subsets are open.
Indiscrete topology, chaotic topology, or trivial topology − Only the empty set and the whole space are open.
Cardinality and ordinals
Cocountable topology
Given a topological space $(X, \tau)$, the cocountable extension topology on $X$ is the topology having as a subbasis the union of $\tau$ and the family of all subsets of $X$ whose complements in $X$ are countable.
Cofinite topology
Double-pointed cofinite topology
Ordinal number topology
Pseudo-arc
Ran space
Tychonoff plank
Finite spaces
Discrete two-point space − The simplest example of a totally disconnected discrete space.
Either–or topology
Finite topological space
Pseudocircle − A finite topological space on 4 elements that fails to satisfy any separation axiom besides T0. However, from the viewpoint of algebraic topology, it has the remarkable property that it is indistinguishable from the circle
Sierpiński space, also called the connected two-point set − A 2-point set with the particular point topology
Integers
Arens–Fort space − A Hausdorff, regular, normal space that is not first-countable or compact. It has an element for which no sequence in the complement of that element converges to it, although there is a sequence in the complement for which the element is a cluster point.
Arithmetic progression topologies
The Baire space − $\mathbb{N}^{\mathbb{N}}$ with the product topology, where $\mathbb{N}$ denotes the natural numbers endowed with the discrete topology. It is the space of all sequences of natural numbers.
Divisor topology
Partition topology
Deleted integer topology
Odd–even topology
Fractals and Cantor set
Apollonian gasket
Cantor set − A subset of the closed interval $[0, 1]$ with remarkable properties.
Cantor dust
Cantor space
|