https://en.wikipedia.org/wiki/Shift%20register%20lookup%20table
|
A shift register lookup table, also shift register LUT or SRL, is a component in digital circuitry. It is essentially a shift register of variable length: the length of the SRL is set by driving address pins high or low, and it can be changed dynamically if necessary.
The SRL component is used in FPGA devices.
The SRL can be used as a programmable delay element.
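As a behavioural illustration only (a software sketch, not HDL, and not tied to any particular FPGA vendor's primitive), the idea can be modelled as a shift register whose effective length, and therefore its delay in clock cycles, is selected by an address value:

from collections import deque

class SRL:
    """Behavioural model of a shift register LUT used as a programmable delay."""
    def __init__(self, max_length=16):
        self.taps = deque([0] * max_length, maxlen=max_length)

    def shift(self, bit, address):
        """Shift one bit in and return the tap selected by address
        (address = desired delay in clock cycles minus one)."""
        out = self.taps[address]
        self.taps.appendleft(bit)
        return out

srl = SRL()
data_in = [1, 0, 1, 1, 0, 0, 0, 0]
data_out = [srl.shift(b, address=3) for b in data_in]
print(data_out)   # the input pattern reappears 4 cycles later: [0, 0, 0, 0, 1, 0, 1, 1]

Changing the address on the fly changes the delay, mirroring the dynamic length adjustment described above.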
See also
Lookup table
Shift register
|
https://en.wikipedia.org/wiki/Fernando%20Zalamea
|
Fernando Zalamea Traba (Bogotá, 28 February 1959) is a Colombian mathematician, essayist, critic, philosopher and popularizer, known for his contributions to the philosophy of mathematics as the creator of the synthetic philosophy of mathematics. He is the author of around twenty books and is one of the world's leading experts on the mathematical and philosophical work of Alexander Grothendieck, as well as on the logical work of Charles S. Peirce.
Currently, he is a full professor in the Department of Mathematics of the National University of Colombia, where he has established a mathematical school, primarily through his ongoing seminar on the epistemology, history and philosophy of mathematics, which he has conducted for eleven years at the university. He is also known for his creative, critical, and constructive teaching of mathematics. Zalamea has supervised approximately 50 thesis projects at the undergraduate, master's and doctoral levels in fields including mathematics, philosophy, logic, category theory, semiology, medicine and culture. Since 2018, he has been an honorary member of the Colombian Academy of Exact, Physical and Natural Sciences. In 2016, he was recognized as one of the 100 most outstanding contemporary interdisciplinary global minds by "100 Global Minds, the most daring cross-disciplinary thinkers in the world", being the only Latin American included in this recognition.
|
https://en.wikipedia.org/wiki/List%20of%20PSPACE-complete%20problems
|
Here are some of the more commonly known problems that are PSPACE-complete when expressed as decision problems. This list is in no way comprehensive.
Games and puzzles
Generalized versions of:
Amazons
Atomix
Checkers if a draw is forced after a polynomial number of non-jump moves
Dyson Telescope Game
Cross Purposes
Geography
Two-player game version of Instant Insanity
Ko-free Go
Ladder capturing in Go
Gomoku
Hex
Konane
Lemmings
Node Kayles
Poset Game
Reversi
River Crossing
Rush Hour
Finding optimal play in Mahjong solitaire
Sokoban
Super Mario Bros.
Black Pebble game
Black-White Pebble game
Acyclic pebble game
One-player pebble game
Token on acyclic directed graph games:
Logic
Quantified boolean formulas
First-order logic of equality
Provability in intuitionistic propositional logic
Satisfaction in modal logic S4
First-order theory of the natural numbers under the successor operation
First-order theory of the natural numbers under the standard order
First-order theory of the integers under the standard order
First-order theory of well-ordered sets
First-order theory of binary strings under lexicographic ordering
First-order theory of a finite Boolean algebra
Stochastic satisfiability
Linear temporal logic satisfiability and model checking
Lambda calculus
Type inhabitation problem for simply typed lambda calculus
Automata and language theory
Circuit theory
Integer circuit evaluation
Automata theory
Word problem for linear bounded automata
Word problem for quasi-realtime automata
Emptiness problem for a nondeterministic two-way finite state automaton
Equivalence problem for nondeterministic finite automata
Word problem and emptiness problem for non-erasing stack automata
Emptiness of intersection of an unbounded number of deterministic finite automata
A generalized version of Langton's Ant
Minimizing nondeterministic finite automata
Formal languages
Word problem for context-sensitive language
Intersect
|
https://en.wikipedia.org/wiki/List%20of%20states%20of%20matter
|
States of matter are distinguished by changes in the properties of matter associated with external factors like pressure and temperature. States are usually distinguished by a discontinuity in one of those properties: for example, raising the temperature of ice produces a discontinuity at 0°C, as energy goes into a phase transition, rather than temperature increase. The three classical states of matter are solid, liquid and gas. In the 20th century, however, increased understanding of the more exotic properties of matter resulted in the identification of many additional states of matter, none of which are observed in normal conditions.
Low-energy states of matter
Classical states
Solid: A solid holds a definite shape and volume without a container. The particles are held very close to each other.
Amorphous solid: A solid in which there is no long-range order of the positions of the atoms.
Crystalline solid: A solid in which atoms, molecules, or ions are packed in regular order.
Plastic crystal: A molecular solid with long-range positional order but with constituent molecules retaining rotational freedom.
Quasicrystal: A solid in which the positions of the atoms have long-range order, but this is not in a repeating pattern.
Liquid: A mostly non-compressible fluid. Able to conform to the shape of its container but retains a (nearly) constant volume independent of pressure.
Liquid crystal: Properties intermediate between liquids and crystals. Generally, able to flow like a liquid but exhibiting long-range order.
Gas: A compressible fluid. Not only will a gas take the shape of its container but it will also expand to fill the container.
Modern states
Plasma: Free charged particles, usually in equal numbers, such as ions and electrons. Unlike gases, plasma may self-generate magnetic fields and electric currents and respond strongly and collectively to electromagnetic forces. Plasma is very uncommon on Earth (except for the ionosphere), although it is the mo
|
https://en.wikipedia.org/wiki/List%20of%20logic%20symbols
|
In logic, a set of symbols is commonly used to express logical representation. The following table lists many common symbols, together with their name, how they should be read out loud, and the related field of mathematics. Additionally, the subsequent columns contain an informal explanation, a short example, the Unicode location, the name for use in HTML documents, and the LaTeX symbol.
Basic logic symbols
Advanced and rarely used logical symbols
These symbols are sorted by their Unicode value:
Usage in various countries
Poland
In Poland, the universal quantifier is sometimes written ∧, and the existential quantifier as ∨. The same applies to Germany.
Japan
The ⇒ symbol is often used in text to mean "result" or "conclusion", as in "We examined whether to sell the product ⇒ We will not sell it". Also, the → symbol is often used to denote "changed to", as in the sentence "The interest rate changed. March 20% → April 21%".
See also
Józef Maria Bocheński
List of notation used in Principia Mathematica
List of mathematical symbols
Logic alphabet, a suggested set of logical symbols
Logical connective
Mathematical operators and symbols in Unicode
Non-logical symbol
Polish notation
Truth function
Truth table
Wikipedia:WikiProject Logic/Standards for notation
|
https://en.wikipedia.org/wiki/Biomedicine
|
Biomedicine (also referred to as Western medicine, mainstream medicine or conventional medicine) is a branch of medical science that applies biological and physiological principles to clinical practice. Biomedicine stresses standardized, evidence-based treatment validated through biological research, with treatment administered via formally trained doctors, nurses, and other such licensed practitioners.
Biomedicine also can relate to many other categories in health and biological related fields. It has been the dominant system of medicine in the Western world for more than a century.
It includes many biomedical disciplines and areas of specialty that typically contain the "bio-" prefix such as molecular biology, biochemistry, biotechnology, cell biology, embryology, nanobiotechnology, biological engineering, laboratory medical biology, cytogenetics, genetics, gene therapy, bioinformatics, biostatistics, systems biology, neuroscience, microbiology, virology, immunology, parasitology, physiology, pathology, anatomy, toxicology, and many others that generally concern life sciences as applied to medicine.
Overview
Biomedicine is the cornerstone of modern health care and laboratory diagnostics. It concerns a wide range of scientific and technological approaches: from in vitro diagnostics to in vitro fertilisation, from the molecular mechanisms of cystic fibrosis to the population dynamics of the HIV virus, from the understanding of molecular interactions to the study of carcinogenesis, from a single-nucleotide polymorphism (SNP) to gene therapy.
Biomedicine is based on molecular biology and combines all issues of developing molecular medicine into large-scale structural and functional relationships of the human genome, transcriptome, proteome, physiome and metabolome with the particular point of view of devising new technologies for prediction, diagnosis and therapy.
Biomedicine involves the study of (patho-) physiological processes with methods from biology and
|
https://en.wikipedia.org/wiki/Sombrero%20function
|
A sombrero function (sometimes called besinc function or jinc function) is the 2-dimensional polar coordinate analog of the sinc function, and is so-called because it is shaped like a sombrero hat. This function is frequently used in image processing. It can be defined through the Bessel function of the first kind of order one (J₁) as somb(ρ) = 2J₁(πρ)/(πρ), where ρ² = x² + y².
The normalization factor 2 makes somb(0) = 1. Sometimes the π factor is omitted, giving the following alternative definition: somb(ρ) = 2J₁(ρ)/ρ.
The factor of 2 is also often omitted, giving yet another definition and causing the function maximum to be 0.5: somb(ρ) = J₁(ρ)/ρ.
The Fourier transform of the 2D circle function (circ(r)) is a sombrero function. Thus a sombrero function also appears in the intensity profile of far-field diffraction through a circular aperture, known as an Airy disk.
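A quick numerical sketch of the first definition above (using SciPy; the helper name somb is local to this example, not a library routine):

import numpy as np
from scipy.special import j1  # Bessel function of the first kind, order 1

def somb(rho):
    """somb(rho) = 2*J1(pi*rho)/(pi*rho), with somb(0) = 1 at the removable singularity."""
    x = np.pi * np.atleast_1d(np.asarray(rho, dtype=float))
    out = np.ones_like(x)
    nonzero = x != 0
    out[nonzero] = 2.0 * j1(x[nonzero]) / x[nonzero]
    return out

print(somb(0.0))                 # [1.]  -> normalized peak value
print(somb([0.5, 1.0, 1.2197]))  # decays toward its first zero near rho ≈ 1.22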
|
https://en.wikipedia.org/wiki/Narada%20multicast%20protocol
|
The Narada multicast protocol is a set of specifications which can be used to implement overlay multicast functionality on computer networks.
It constructs an overlay tree from a redundantly meshed graph of nodes; source-specific shortest-path trees are then constructed from reverse paths. Group management is distributed equally across all nodes, because each overlay node keeps track of all group members through periodic heartbeats from every member. The discovery and tree-building mechanism is similar to DVMRP.
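As a loose illustration of the tree-building step (a sketch with hypothetical node names and latencies, using the networkx library; it is not an implementation of the full protocol), each source can derive its shortest-path tree from the overlay mesh, with a member's parent being the next hop on the reverse path toward the source:

import networkx as nx

# Hypothetical overlay mesh with per-link latencies as edge weights.
mesh = nx.Graph()
mesh.add_weighted_edges_from([
    ("A", "B", 10), ("B", "C", 5), ("A", "C", 20), ("C", "D", 5), ("B", "D", 15),
])

def source_tree(mesh, source):
    """Return {member: parent} for the shortest-path tree rooted at source."""
    paths = nx.shortest_path(mesh, source=source, weight="weight")
    # A member's parent is the penultimate hop on its path from the source,
    # i.e. the neighbour it expects data from (reverse-path forwarding).
    return {member: path[-2] for member, path in paths.items() if len(path) > 1}

print(source_tree(mesh, "A"))   # {'B': 'A', 'C': 'B', 'D': 'C'}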
External links
"An Evaluation of Three Application-Layer Multicast Protocols"
"Overlay Multicast & Content distribution"
|
https://en.wikipedia.org/wiki/Time%20Cube
|
Time Cube was a pseudoscientific personal web page founded in 1997 by the self-proclaimed "wisest man on earth," Otis Eugene "Gene" Ray. It was a self-published outlet for Ray's "theory of everything", also called "Time Cube," which polemically claims that all modern sciences are participating in a worldwide conspiracy to teach lies, by omitting his theory's alleged truth that each day actually consists of four days occurring simultaneously. Alongside these statements, Ray described himself as a "godlike being with superior intelligence who has absolute evidence and proof" for his views. Ray asserted repeatedly and variously that the academic world had not taken Time Cube seriously.
Ray died on March 18, 2015, at the age of 87. His website domain names expired in August 2015, and Time Cube was last archived by the Wayback Machine on January 12, 2016 (January 10–14).
Content
Style
The Time Cube website contained no home page. It consisted of a number of web pages that contained a single vertical centre-aligned column of body text in various sizes and colors, resulting in extremely long main pages. Finding any particular passage was almost impossible without manually searching.
A large amount of self-invented jargon is used throughout: some words and phrases are used frequently but never defined, apparently terms referring to the weakness of widely propagated ideas that Ray detests throughout the text; they are usually capitalized even when used as adjectives. In one paragraph, he claimed that his own wisdom "so antiquates known knowledge" that a psychiatrist examining his behavior diagnosed him with schizophrenia.
Various commentators have asserted that it is futile to analyze the text rationally, interpret meaningful proofs from the text, or test any claims.
Time Cube concept
Ray's personal model of reality, called "Time Cube", states that all of modern physics and education is wrong, and argues that, among many other things, Greenwich Time is a global
|
https://en.wikipedia.org/wiki/Artificial%20brain
|
An artificial brain (or artificial mind) is software and hardware with cognitive abilities similar to those of the animal or human brain.
Research investigating "artificial brains" and brain emulation plays three important roles in science:
An ongoing attempt by neuroscientists to understand how the human brain works, known as cognitive neuroscience.
A thought experiment in the philosophy of artificial intelligence, demonstrating that it is possible, at least in theory, to create a machine that has all the capabilities of a human being.
A long-term project to create machines exhibiting behavior comparable to that of animals with complex central nervous systems, such as mammals and, most particularly, humans. The ultimate goal of creating a machine exhibiting human-like behavior or intelligence is sometimes called strong AI.
An example of the first objective is the project reported by Aston University in Birmingham, England where researchers are using biological cells to create "neurospheres" (small clusters of neurons) in order to develop new treatments for diseases including Alzheimer's, motor neurone and Parkinson's disease.
The second objective is a reply to arguments such as John Searle's Chinese room argument, Hubert Dreyfus's critique of AI or Roger Penrose's argument in The Emperor's New Mind. These critics argued that there are aspects of human consciousness or expertise that can not be simulated by machines. One reply to their arguments is that the biological processes inside the brain can be simulated to any degree of accuracy. This reply was made as early as 1950, by Alan Turing in his classic paper "Computing Machinery and Intelligence".
The third objective is generally called artificial general intelligence by researchers. However, Ray Kurzweil prefers the term "strong AI". In his book The Singularity is Near, he focuses on whole brain emulation using conventional computing machines as an approach to implementing artificial brains, and claims (on groun
|
https://en.wikipedia.org/wiki/Approximate%20max-flow%20min-cut%20theorem
|
Approximate max-flow min-cut theorems are mathematical propositions in network flow theory. They deal with the relationship between maximum flow rate ("max-flow") and minimum cut ("min-cut") in a multi-commodity flow problem. The theorems have enabled the development of approximation algorithms for use in graph partition and related problems.
Multicommodity flow problem
A "commodity" in a network flow problem is a pair of source and sink nodes. In a multi-commodity flow problem, there are commodities, each with its own source , sink , and demand . The objective is to simultaneously route units of commodity from to for each , such that the total amount of all commodities passing through any edge is no greater than its capacity. (In the case of undirected edges, the sum of the flows in both directions cannot exceed the capacity of the edge).
Specifically, a 1-commodity (or single-commodity) flow problem is also known as a maximum flow problem. By the max-flow min-cut theorem of Ford and Fulkerson, the max-flow and min-cut are always equal in a 1-commodity flow problem.
Max-flow and min-cut
In a multicommodity flow problem, the max-flow is the maximum value of f, where f is the common fraction of each commodity that is routed, such that f·Dᵢ units of commodity i can be simultaneously routed for each i without violating any capacity constraints.
The min-cut is the minimum over all cuts of the ratio of the capacity of the cut to the demand of the cut.
Max-flow is always upper bounded by the min-cut for a multicommodity flow problem.
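Written with explicit notation (introduced here for clarity, not taken from the article), let f be the common routed fraction, S a cut, C(S) its capacity and D(S) the demand it separates. Then
max-flow = max { f : f·Dᵢ units of commodity i can be routed simultaneously for every i },   min-cut = min over all cuts S of C(S)/D(S),   and max-flow ≤ min-cut.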
Uniform multicommodity flow problem
In a uniform multicommodity flow problem, there is a commodity for every pair of nodes and the demand for every commodity is the same. (Without loss of generality, the demand for every commodity is set to one.) The underlying network and capacities are arbitrary.
Product multicommodity flow problem
In a product multicommodity flow problem, there is a nonnegative weight w(v) for each node v in the graph G. The demand for the commodity betwee
|
https://en.wikipedia.org/wiki/Quality%20of%20results
|
Quality of Results (QoR) is a term used in evaluating technological processes. It is generally represented as a vector of components, with the special case of a one-dimensional value serving as a synthetic measure.
History
The term was coined by the Electronic Design Automation (EDA) industry in the late 1980s. QoR was meant to be an indicator of the performance of integrated circuits (chips), and initially measured the area and speed of a chip. As the industry evolved, new chip parameters were considered for coverage by the QoR, illustrating new areas of focus for chip designers (for example power dissipation, power efficiency, routing overhead, etc.). Because of the broad scope of quality assessment, QoR eventually evolved into a generic vector representation comprising a number of different values, where the meaning of each vector value was explicitly specified in the QoR analysis document.
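As a loose, hypothetical illustration of that vector idea (not an EDA-standard format), a QoR record can simply pair each component with a documented meaning:

from dataclasses import dataclass

@dataclass
class QoR:
    clock_mhz: float    # achieved clock speed
    power_w: float      # power dissipation
    area_mm2: float     # silicon area
    rel_density: float  # normalized: combinational-logic area / simple-gate area

# Mirrors the {100 MHz, 1 W, 1 mm^2} style of example discussed below.
example = QoR(clock_mhz=100.0, power_w=1.0, area_mm2=1.0, rel_density=5.0)
print(example)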
Currently the term is gaining popularity in other sectors of technology, with each sector using its own appropriate components.
Current trends in EDA
Originally, the QoR was used to specify absolute values such as chip area, power dissipation, speed, etc. (for example, a QoR could be specified as a {100 MHz, 1W, 1 mm²} vector), and could only be used for comparing the different achievements of a single design specification. The current trend among designers is to include normalized values in the QoR vector, such that they will remain meaningful for a longer period of time (as technologies change), and/or across broad classes of design. For example, one often uses – as a QoR component – a number representing the ratio between the area required by a combinational logic block and the area required by a simple logic gate, this number being often referred to as "relative density of combinational logic". In this case, a relative density of five will generally be accepted as a good quality of result – relative density of combinational logic component – while a relative density of fifty will
|
https://en.wikipedia.org/wiki/Outline%20of%20category%20theory
|
The following outline is provided as an overview of and guide to category theory, the area of study in mathematics that examines in an abstract way the properties of particular mathematical concepts, by formalising them as collections of objects and arrows (also called morphisms, although this term also has a specific, non category-theoretical sense), where these collections satisfy certain basic conditions. Many significant areas of mathematics can be formalised as categories, and the use of category theory allows many intricate and subtle mathematical results in these fields to be stated, and proved, in a much simpler way than without the use of categories.
Essence of category theory
Category
Functor
Natural transformation
Branches of category theory
Homological algebra
Diagram chasing
Topos theory
Enriched category theory
Higher category theory
Categorical logic
Specific categories
Category of sets
Concrete category
Category of vector spaces
Category of graded vector spaces
Category of chain complexes
Category of finite dimensional Hilbert spaces
Category of sets and relations
Category of topological spaces
Category of metric spaces
Category of preordered sets
Category of groups
Category of abelian groups
Category of rings
Category of magmas
Category of medial magmas
Objects
Initial object
Terminal object
Zero object
Subobject
Group object
Magma object
Natural number object
Exponential object
Morphisms
Epimorphism
Monomorphism
Zero morphism
Normal morphism
Dual (category theory)
Groupoid
Image (category theory)
Coimage
Commutative diagram
Cartesian morphism
Slice category
Functors
Isomorphism of categories
Natural transformation
Equivalence of categories
Subcategory
Faithful functor
Full functor
Forgetful functor
Yoneda lemma
Representable functor
Functor category
Adjoint functors
Galois connection
Pontryagin duality
Affine scheme
Monad (category theory)
Comonad
Combinatorial species
E
|
https://en.wikipedia.org/wiki/Morse%20Micro
|
Morse Micro is a Sydney-based developer of Wi-Fi HaLow microprocessors: chips that enable high data rates with long range and low power consumption. Among Wi-Fi HaLow systems on a chip, Morse Micro's processors are reported to be the smallest, fastest and longest-range, with the lowest power use.
The main application of the technology is machine-to-machine communications. With the Internet of things expected to extend to 30 billion devices by 2025, this represents a steeply growing number of users of the technology. The founders plan to be part of "expanding Wi-Fi so it can go into everything, every smoke alarm, every camera."
The firm has its global HQ in Sydney, which is also its main base for R&D, with additional centres in India, China and the United States. As of 2022, Morse Micro was producing more semiconductors than any other Australian-based tech company.
Technology
After eight years' development, the company's Wi-Fi HaLow processor was reported to deliver 10 times the range of conventional Wi-Fi technology, and to be able to function for several years before needing a battery change.
Data rates and range
The microprocessor allows for a range of data rates, depending on the modulation and coding scheme (MCS) used. These run from as low as 150 kilobits per second using MCS10 with BPSK modulation, to a top rate of 4 megabits per second using MCS9 with 256-QAM (quadrature amplitude modulation).
The chip uses low-bandwidth wireless network protocols operating in the sub-1 GHz spectrum, while providing a communications range of 1,000 metres. In one field test, researchers found the technology could sustain high-speed data transmission between a device placed by the north end of Sydney Harbour Bridge and a device across the harbour at the Sydney Opera House. The company claims their chip provides 10 times the range, 100 times the area and 1000 times the volume of data offered by traditional Wi-Fi.
Connectivity and energy
To enable networked communications between machines, a sing
|
https://en.wikipedia.org/wiki/CMD640
|
CMD640, the CMD Technology Inc. product 0640, is an IDE interface chip for the PCI and VLB buses. The CMD640 provided some hardware acceleration, namely WDMA and read-ahead (prefetch) support.
CMD Technology Inc was acquired by Silicon Image Inc. in 2001.
Hardware bug
The original CMD640 has data corruption bugs, some of which remained in CMD646. The data corruption bug is similar to the bug affecting the contemporaneous PC Tech (a subsidiary of Zeos) RZ1000 chipset. Both chipsets were used on a number of motherboards, including those from Intel.
Modern operating systems work around this bug by prohibiting the aggressive acceleration mode, losing about 10% of the performance.
|
https://en.wikipedia.org/wiki/Period%20%28algebraic%20geometry%29
|
In algebraic geometry, a period is a number that can be expressed as an integral of an algebraic function over an algebraic domain. Sums and products of periods remain periods, such that the periods form a ring.
Maxim Kontsevich and Don Zagier gave a survey of periods and introduced some conjectures about them. Periods also arise in computing the integrals that arise from Feynman diagrams, and there has been intensive work trying to understand the connections.
Definition
A real number is a period if it is of the form
∫…∫_{P(x₁,…,xₙ) ≥ 0} Q(x₁,…,xₙ) dx₁ ⋯ dxₙ,
where P is a polynomial and Q a rational function on ℝⁿ with rational coefficients. A complex number is a period if its real and imaginary parts are periods.
An alternative definition allows P and Q to be algebraic functions; this looks more general, but is equivalent. The coefficients of the rational functions and polynomials can also be generalised to algebraic numbers because irrational algebraic numbers are expressible in terms of areas of suitable domains.
In the other direction, Q can be restricted to be the constant function 1 or −1, by replacing the integrand with an integral of ±1 over a region defined by a polynomial in additional variables. In other words, a (nonnegative) period is the volume of a region in ℝⁿ defined by a polynomial inequality.
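For instance (a standard example, not spelled out in this excerpt), π is a period, since it is the area of the unit disk: π = ∬_{x² + y² ≤ 1} dx dy, the volume of a region in ℝ² cut out by the single polynomial inequality x² + y² ≤ 1.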
Examples
Besides the algebraic numbers, the following numbers are known to be periods:
The natural logarithm of any positive algebraic number a, which is ∫₁ᵃ (1/x) dx
Elliptic integrals with rational arguments
All zeta constants (the Riemann zeta function of an integer) and multiple zeta values
Special values of hypergeometric functions at algebraic arguments
Γ(p/q)^q for natural numbers p and q.
An example of a real number that is not a period is given by Chaitin's constant Ω. Any other non-computable number also gives an example of a real number that is not a period. Currently there are no natural examples of computable numbers that have been proved not to be periods, however it is possible to construct artif
|
https://en.wikipedia.org/wiki/History%20of%20mathematical%20notation
|
The history of mathematical notation includes the commencement, progress, and cultural diffusion of mathematical symbols, and the conflict between methods of notation confronted in a notation's move to popularity or inconspicuousness. Mathematical notation comprises the symbols used to write mathematical equations and formulas. Notation generally implies a set of well-defined representations of quantities and operator symbols. The history includes Hindu–Arabic numerals, letters from the Roman, Greek, Hebrew, and German alphabets, and a host of symbols invented by mathematicians over the past several centuries.
The development of mathematical notation can be divided into stages:
The "rhetorical" stage is where calculations are performed by words and no symbols are used.
The "syncopated" stage is where frequently used operations and quantities are represented by symbolic syntactical abbreviations. From ancient times through the post-classical age, bursts of mathematical creativity were often followed by centuries of stagnation. As the early modern age opened and the worldwide spread of knowledge began, written examples of mathematical developments came to light.
The "symbolic" stage is where comprehensive systems of notation supersede rhetoric. Beginning in Italy in the 16th century, new mathematical developments, interacting with new scientific discoveries were made at an increasing pace that continues through the present day. This symbolic system was in use by medieval Indian mathematicians and in Europe since the middle of the 17th century, and has continued to develop in the contemporary era.
The area of study known as the history of mathematics is primarily an investigation into the origin of discoveries in mathematics and the focus here, the investigation into the mathematical methods and notation of the past.
Rhetorical stage
Although the history commences with that of the Ionian schools, there is no doubt that those Ancient Greeks who paid attention to i
|
https://en.wikipedia.org/wiki/List%20of%20physics%20journals
|
This is a list of physics journals with existing articles on Wikipedia. The list is organized by subfields of physics.
By subject
General
Astrophysics
Atomic, molecular, and optical physics
European Physical Journal D
Journal of Physics B
Laser Physics
Molecular Physics
Physical Review A
Plasmas
Measurement
Measurement Science and Technology
Metrologia
Review of Scientific Instruments
Nuclear and particle physics
Optics
Computational physics
Computational Materials Science
Computer Physics Communications
International Journal of Modern Physics C (computational physics, physical computations)
Journal of Computational Physics
Physical Review E, section E13
Communications in Computational Physics
Condensed matter and materials science
Low temperature physics
Journal of Low Temperature Physics
Low Temperature Physics
Chemical physics
Chemical Physics Letters
Journal of Chemical Physics
Journal of Physical Chemistry A
Journal of Physical Chemistry B
Journal of Physical Chemistry C
Journal of Physical Chemistry Letters
Physical Chemistry Chemical Physics
Soft matter physics
European Physical Journal E
Journal of Polymer Science Part B
Soft Matter
Medical physics
Australasian Physical & Engineering Sciences in Medicine
BMC Medical Physics
Bioelectromagnetics
Health Physics
Journal of Medical Physics
Magnetic Resonance in Medicine
Medical Physics
Physics in Medicine and Biology
Biological physics
Annual Review of Biophysics
Biochemical and Biophysical Research Communications
Biophysical Journal
Biophysical Reviews and Letters
Doklady Biochemistry and Biophysics
European Biophysics Journal
International Journal of Biological Macromolecules
Physical Biology
Radiation and Environmental Biophysics
Statistical and nonlinear physics
Theoretical and mathematical physics
Quantum information
Quantum
Journal of Quantum Information Science
International Journal of Quantum Information
npj Quantum Information
Geophysic
|
https://en.wikipedia.org/wiki/Double%20counting%20%28proof%20technique%29
|
In combinatorics, double counting, also called counting in two ways, is a combinatorial proof technique for showing that two expressions are equal by demonstrating that they are two ways of counting the size of one set. In this technique, described as "one of the most important tools in combinatorics", one describes a finite set from two perspectives, leading to two distinct expressions for the size of the set. Since both expressions equal the size of the same set, they equal each other.
Examples
Multiplication (of natural numbers) commutes
This is a simple example of double counting, often used when teaching multiplication to young children. In this context, multiplication of natural numbers is introduced as repeated addition, and is then shown to be commutative by counting, in two different ways, a number of items arranged in a rectangular grid. Suppose the grid has n rows and m columns. We first count the items by summing n rows of m items each, then a second time by summing m columns of n items each, thus showing that, for these particular values of n and m, n × m = m × n.
Forming committees
One example of the double counting method counts the number of ways in which a committee can be formed from n people, allowing any number of the people (even zero of them) to be part of the committee. That is, one counts the number of subsets that an n-element set may have. One method for forming a committee is to ask each person to choose whether or not to join it. Each person has two choices – yes or no – and these choices are independent of those of the other people. Therefore there are 2^n possibilities. Alternatively, one may observe that the size of the committee must be some number k between 0 and n. For each possible size k, the number of ways in which a committee of k people can be formed from n people is the binomial coefficient C(n, k).
Therefore the total number of possible committees is the sum of binomial coefficients over k = 0, 1, …, n. Equating the two expressions gives the identity C(n, 0) + C(n, 1) + ⋯ + C(n, n) = 2^n,
a special case of the binomial theorem.
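A quick numerical check of the identity (an illustrative sketch, not part of the article):

from itertools import combinations
from math import comb

n = 5
people = range(n)
# Count committees directly: every subset of the n people is one committee.
by_subsets = sum(1 for k in range(n + 1) for _ in combinations(people, k))
# Count by committee size: the sum of binomial coefficients C(n, k).
by_sizes = sum(comb(n, k) for k in range(n + 1))
assert by_subsets == by_sizes == 2 ** n   # both counts give 2^n = 32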
|
https://en.wikipedia.org/wiki/Outline%20of%20geometry
|
Geometry is a branch of mathematics concerned with questions of shape, size, relative position of figures, and the properties of space. Geometry is one of the oldest mathematical sciences.
Classical branches
Geometry
Analytic geometry
Differential geometry
Euclidean geometry
Non-Euclidean geometry
Projective geometry
Riemannian geometry
Contemporary branches
Absolute geometry
Affine geometry
Archimedes' use of infinitesimals
Birational geometry
Complex geometry
Combinatorial geometry
Computational geometry
Conformal geometry
Constructive solid geometry
Contact geometry
Convex geometry
Descriptive geometry
Digital geometry
Discrete geometry
Distance geometry
Elliptic geometry
Enumerative geometry
Epipolar geometry
Finite geometry
Geometry of numbers
Hyperbolic geometry
Incidence geometry
Information geometry
Integral geometry
Inversive geometry
Klein geometry
Lie sphere geometry
Numerical geometry
Ordered geometry
Parabolic geometry
Plane geometry
Quantum geometry
Ruppeiner geometry
Spherical geometry
Symplectic geometry
Synthetic geometry
Systolic geometry
Taxicab geometry
Toric geometry
Transformation geometry
Tropical geometry
History of geometry
History of geometry
Timeline of geometry
Babylonian geometry
Egyptian geometry
Ancient Greek geometry
Euclidean geometry
Pythagorean theorem
Euclid's Elements
Measurement of a Circle
Indian mathematics
Bakhshali manuscript
Modern geometry
History of analytic geometry
History of the Cartesian coordinate system
History of non-Euclidean geometry
History of topology
History of algebraic geometry
General geometry concepts
General concepts
Geometric progression — Geometric shape — Geometry — Pi — angular velocity — linear velocity — De Moivre's theorem — parallelogram rule — Pythagorean theorem — similar triangles — trigonometric identity — unit circle — Trapezoid — Triangle — Theorem — point — ray — plane — line — line segment
Measurements
Bearing
A
|
https://en.wikipedia.org/wiki/Greek%20letters%20used%20in%20mathematics%2C%20science%2C%20and%20engineering
|
Greek letters are used in mathematics, science, engineering, and other areas where mathematical notation is used, as symbols for constants, special functions, and, conventionally, for variables representing certain quantities. In these contexts, the capital letters and the small letters represent distinct and unrelated entities. Those Greek letters which have the same form as Latin letters are rarely used: capital A, B, E, Z, H, I, K, M, N, O, P, T, Y, X. Small ι, ο and υ are also rarely used, since they closely resemble the Latin letters i, o and u. Sometimes, font variants of Greek letters are used as distinct symbols in mathematics, in particular for ε/ϵ and π/ϖ. The archaic letter digamma (Ϝ/ϝ/ϛ) is sometimes used.
The Bayer designation naming scheme for stars typically uses the first Greek letter, α, for the brightest star in each constellation, and runs through the alphabet before switching to Latin letters.
In mathematical finance, the Greeks are the variables denoted by Greek letters used to describe the risk of certain investments.
Typography
The Greek letter forms used in mathematics are often different from those used in Greek-language text: they are designed to be used in isolation, not connected to other letters, and some use variant forms which are not normally used in current Greek typography.
The OpenType font format has the feature tag "mgrk" ("Mathematical Greek") to identify a glyph as representing a Greek letter to be used in mathematical (as opposed to Greek language) contexts.
The table below shows a comparison of Greek letters rendered in TeX and HTML.
The font used in the TeX rendering is an italic style. This is in line with the convention that variables should be italicized. As Greek letters are more often than not used as variables in mathematical formulas, a Greek letter appearing similar to the TeX rendering is more likely to be encountered in works involving mathematics.
Concepts represented by a Greek letter
Αα (alpha)
repr
|
https://en.wikipedia.org/wiki/Geometry%20template
|
A geometry template is a piece of clear plastic with cut-out shapes for use in mathematics and other subjects in primary school through secondary school. It also has various measurements on its sides to be used like a ruler. In Australia, popular brands include Mathomat and MathAid.
Brands
Mathomat and Mathaid
Mathomat is a trademark used for a plastic stencil developed in Australia in 1969 by Craig Young, who originally worked as an engineering tradesperson in the Government Aircraft Factories (GAF) in Melbourne before retraining and working as head of mathematics in a secondary school in Melbourne. Young designed Mathomat to address what he perceived as limitations of traditional mathematics drawing sets in classrooms, mainly caused by students losing parts of the sets. The Mathomat stencil combines a large number of geometric shape stencils with the functions of a technical drawing set (rulers, set squares, a protractor and circle stencils to replace a compass).
The template made use of polycarbonate – a new type of thermoplastic polymer when Mathomat first came out – which was strong and transparent enough to allow a large number of stencil shapes to be included in its design without breaking or tearing. The first template was exhibited in 1970 at a mathematics conference in Melbourne, along with a series of popular mathematics teaching lesson plans; it became an immediate success, with a large number of schools specifying it as a required student purchase. As of 2017, the stencil is widely specified in Australian schools, chiefly for students at early secondary school level. The manufacturing of Mathomat was taken over in 1989 by the W&G drawing instrument company, which had a factory in Melbourne for the manufacture of technical drawing instruments. Young also developed MathAid, which was initially produced by him when he was living in Ringwood, Victoria. He later sold the company.
W&G published a series of teacher resource books for Mathomat authored by
|
https://en.wikipedia.org/wiki/Heuristic
|
A heuristic, or heuristic technique, is any approach to problem solving or self-discovery that employs a practical method that is not guaranteed to be optimal, perfect, or rational, but is nevertheless sufficient for reaching an immediate, short-term goal or approximation in a search space. Where finding an optimal solution is impossible or impractical, heuristic methods can be used to speed up the process of finding a satisfactory solution. Heuristics can be mental shortcuts that ease the cognitive load of making a decision.
Examples that employ heuristics include using trial and error, a rule of thumb or an educated guess.
Heuristics are the strategies derived from previous experiences with similar problems. These strategies depend on using readily accessible, though loosely applicable, information to control problem solving in human beings, machines and abstract issues. When an individual applies a heuristic in practice, it generally performs as expected. However it can alternatively create systematic errors.
The most fundamental heuristic is trial and error, which can be used in everything from matching nuts and bolts to finding the values of variables in algebra problems. In mathematics, some common heuristics involve the use of visual representations, additional assumptions, forward/backward reasoning and simplification. Several commonly used heuristics of this kind appear in George Pólya's 1945 book How to Solve It.
In psychology, heuristics are simple, efficient rules, either learned or inculcated by evolutionary processes. These psychological heuristics have been proposed to explain how people make decisions, come to judgements, and solve problems. These rules typically come into play when people face complex problems or incomplete information. Researchers employ various methods to test whether people use these rules. The rules have been shown to work well under most circumstances, but in certain cases can lead to systematic errors or cognitive biases.
Hist
|
https://en.wikipedia.org/wiki/Outline%20of%20actuarial%20science
|
The following outline is provided as an overview of and topical guide to actuarial science:
Actuarial science – discipline that applies mathematical and statistical methods to assess risk in the insurance and finance industries.
What type of thing is actuarial science?
Actuarial science can be described as all of the following:
An academic discipline –
A branch of science –
An applied science –
A subdiscipline of statistics –
Essence of actuarial science
Actuarial science
Actuary
Actuarial notation
Fields in which actuarial science is applied
Mathematical finance
Insurance, especially:
Life insurance
Health insurance
Human resource consulting
History of actuarial science
History of actuarial science
General actuarial science concepts
Insurance
Health insurance
Life Insurance
Life insurance
Life insurer
Insurable interest
Insurable risk
Annuity
Life annuity
Perpetuity
New Business Strain
Zillmerisation
Financial reinsurance
Net premium valuation
Gross premium valuation
Embedded value
European Embedded Value
Stochastic modelling
Asset liability modelling
Non-life Insurance
Property insurance
Casualty insurance
Vehicle insurance
Ruin theory
Stochastic modelling
Risk and capital management in non-life insurance
Reinsurance
Reinsurance
Financial reinsurance
Reinsurance Actuarial Premium
Reinsurer
Investments & Asset Management
Dividend yield
PE ratio
Bond valuation
Yield to maturity
Cost of capital
Net asset value
Derivatives
Mathematics of Finance
Financial mathematics
Interest
Time value of money
Discounting
Present value
Future value
Net present value
Internal rate of return
Yield curve
Yield to maturity
Effective annual rate (EAR)
Annual percentage rate (APR)
Mortality
Force of mortality
Life table
Pensions
Pensions
Stochastic modelling
Other
Enterprise risk management
Fictional actuaries
Persons influential in the field of actuarial science
List of actuaries
See also
In
|
https://en.wikipedia.org/wiki/Strict
|
In mathematical writing, the term strict refers to the property of excluding equality and equivalence and often occurs in the context of inequality and monotonic functions. It is often attached to a technical term to indicate that the exclusive meaning of the term is to be understood. The opposite is non-strict, which is often understood to be the case but can be put explicitly for clarity. In some contexts, the word "proper" can also be used as a mathematical synonym for "strict".
Use
This term is commonly used in the context of inequalities — the phrase "strictly less than" means "less than and not equal to" (likewise "strictly greater than" means "greater than and not equal to"). More generally, a strict partial order, strict total order, and strict weak order exclude equality and equivalence.
When comparing numbers to zero, the phrases "strictly positive" and "strictly negative" mean "positive and not equal to zero" and "negative and not equal to zero", respectively. In the context of functions, the adverb "strictly" is used to modify the terms "monotonic", "increasing", and "decreasing".
On the other hand, sometimes one wants to specify the inclusive meanings of terms. In the context of comparisons, one can use the phrases "non-negative", "non-positive", "non-increasing", and "non-decreasing" to make it clear that the inclusive sense of the terms is being used.
The use of such terms and phrases helps avoid possible ambiguity and confusion. For instance, when reading the phrase "x is positive", it is not immediately clear whether x = 0 is possible, since some authors might use the term positive loosely to mean that x is not less than zero. Such an ambiguity can be mitigated by writing "x is strictly positive" for x > 0, and "x is non-negative" for x ≥ 0. (A precise term like non-negative is never used with the word negative in the wider sense that includes zero.)
The word "proper" is often used in the same way as "strict". For example, a "proper subset" of
|
https://en.wikipedia.org/wiki/Turing%27s%20proof
|
Turing's proof is a proof by Alan Turing, first published in January 1937 with the title "On Computable Numbers, with an Application to the Entscheidungsproblem". It was the second proof (after Church's theorem) of the negation of Hilbert's Entscheidungsproblem; that is, the conjecture that some purely mathematical yes–no questions can never be answered by computation; more technically, that some decision problems are "undecidable" in the sense that there is no single algorithm that infallibly gives a correct "yes" or "no" answer to each instance of the problem. In Turing's own words:
"what I shall prove is quite different from the well-known results of Gödel ... I shall now show that there is no general method which tells whether a given formula U is provable in K [Principia Mathematica]".
Turing followed this proof with two others. The second and third both rely on the first. All rely on his development of typewriter-like "computing machines" that obey a simple set of rules and his subsequent development of a "universal computing machine".
Summary of the proofs
In his proof that the Entscheidungsproblem can have no solution, Turing proceeded from two proofs that were to lead to his final proof. His first theorem is most relevant to the halting problem, the second is more relevant to Rice's theorem.
First proof: that no "computing machine" exists that can decide whether or not an arbitrary "computing machine" (as represented by an integer 1, 2, 3, . . .) is "circle-free" (i.e. goes on printing its number in binary ad infinitum): "...we have no general process for doing this in a finite number of steps" (p. 132, ibid.). Turing's proof, although it seems to use the "diagonal process", in fact shows that his machine (called H) cannot calculate its own number, let alone the entire diagonal number (Cantor's diagonal argument): "The fallacy in the argument lies in the assumption that B [the diagonal number] is computable" The proof does not require much mathematics.
Second proof: This one is perhaps more f
|
https://en.wikipedia.org/wiki/Divisibility%20rule
|
A divisibility rule is a shorthand and useful way of determining whether a given integer is divisible by a fixed divisor without performing the division, usually by examining its digits. Although there are divisibility tests for numbers in any radix, or base, and they are all different, this article presents rules and examples only for decimal, or base 10, numbers. Martin Gardner explained and popularized these rules in his September 1962 "Mathematical Games" column in Scientific American.
Divisibility rules for numbers 1–30
The rules given below transform a given number into a generally smaller number, while preserving divisibility by the divisor of interest. Therefore, unless otherwise noted, the resulting number should be evaluated for divisibility by the same divisor. In some cases the process can be iterated until the divisibility is obvious; for others (such as examining the last n digits) the result must be examined by other means.
For divisors with multiple rules, the rules are generally ordered first for those appropriate for numbers with many digits, then those useful for numbers with fewer digits.
To test the divisibility of a number by a power of 2 or a power of 5 (2^n or 5^n, in which n is a positive integer), one only needs to look at the last n digits of that number.
To test divisibility by any number expressed as a product of prime powers, we can separately test for divisibility by each prime to its appropriate power. For example, testing divisibility by 24 (24 = 8×3 = 2³×3) is equivalent to testing divisibility by 8 (2³) and 3 simultaneously, thus we need only show divisibility by 8 and by 3 to prove divisibility by 24.
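For instance, the composite-divisor rule above can be applied mechanically; the following sketch (an illustration, not from the article) tests divisibility by 24 using the rule for 8 (last three digits) and the rule for 3 (digit sum):

def divisible_by_8(n: int) -> bool:
    # Rule for 2^3: only the last three digits matter.
    return int(str(abs(n))[-3:]) % 8 == 0

def divisible_by_3(n: int) -> bool:
    # Rule for 3: the digit sum preserves divisibility by 3.
    return sum(int(d) for d in str(abs(n))) % 3 == 0

def divisible_by_24(n: int) -> bool:
    # 24 = 2^3 x 3, so test the two prime-power factors separately.
    return divisible_by_8(n) and divisible_by_3(n)

print(divisible_by_24(624))   # True: 624 = 24 x 26
print(divisible_by_24(628))   # False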
Step-by-step examples
Divisibility by 2
First, take any number (for this example it will be 376) and note the last digit in the number, discarding the other digits. Then take that digit (6) while ignoring the rest of the number and determine if it is divisible by 2. If it is divisible by 2, then the original number is divis
|
https://en.wikipedia.org/wiki/BullSequana
|
BullSequana is the brand name of a range of high performance computer systems produced by Atos.
The range includes
BullSequana S series - a modular compute platform optimised for AI and GPU-intensive tasks.
BullSequana X series - supercomputers which are claimed to operate at exascale
|
https://en.wikipedia.org/wiki/Pulse%20duration
|
In signal processing and telecommunication, pulse duration is the interval between the time, during the first transition, that the amplitude of the pulse reaches a specified fraction (level) of its final amplitude, and the time the pulse amplitude drops, on the last transition, to the same level.
The interval between the 50% points of the final amplitude is usually used to determine or define pulse duration, and this is understood to be the case unless otherwise specified. Other fractions of the final amplitude, e.g., 90% or 1/e, may also be used, as may the root mean square (rms) value of the pulse amplitude.
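As a rough numerical illustration of the 50%-level convention (a sketch with made-up sample data, not from the article):

import numpy as np

t = np.linspace(0.0, 10.0, 1001)           # time axis (arbitrary units)
pulse = np.exp(-((t - 5.0) / 1.5) ** 2)    # a smooth test pulse

level = 0.5 * pulse.max()                  # 50% of the final (peak) amplitude
above = np.where(pulse >= level)[0]        # samples at or above that level
duration = t[above[-1]] - t[above[0]]      # first-to-last 50% crossing interval
print(f"50%-level pulse duration ≈ {duration:.2f}")  # ≈ 2.50 for this pulse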
In radar, the pulse duration is the time the radar's transmitter is energized during each cycle.
|
https://en.wikipedia.org/wiki/List%20of%20graphs
|
This partial list of graphs contains definitions of graphs and graph families. For collected definitions of graph theory terms that do not refer to individual graph types, such as vertex and path, see Glossary of graph theory. For links to existing articles about particular kinds of graphs, see Category:Graphs. Some of the finite structures considered in graph theory have names, sometimes inspired by the graph's topology, and sometimes after their discoverer. A famous example is the Petersen graph, a concrete graph on 10 vertices that appears as a minimal example or counterexample in many different contexts.
Individual graphs
Highly symmetric graphs
Strongly regular graphs
A strongly regular graph on v vertices with degree k is usually denoted srg(v, k, λ, μ).
Symmetric graphs
A symmetric graph is one in which there is a symmetry (graph automorphism) taking any ordered pair of adjacent vertices to any other ordered pair; the Foster census lists all small symmetric 3-regular graphs. Every strongly regular graph is symmetric, but not vice versa.
Semi-symmetric graphs
Graph families
Complete graphs
The complete graph on n vertices is often called the n-clique and usually denoted Kₙ, from German komplett.
Complete bipartite graphs
The complete bipartite graph is usually denoted Kₙ,ₘ. For n = 1 see the section on star graphs. The graph K₂,₂ equals the 4-cycle C₄ (the square) introduced below.
Cycles
The cycle graph on n vertices is called the n-cycle and usually denoted Cₙ. It is also called a cyclic graph, a polygon or the n-gon. Special cases are the triangle C₃, the square C₄, and then several with Greek naming: the pentagon C₅, the hexagon C₆, etc.
Friendship graphs
The friendship graph Fn can be constructed by joining n copies of the cycle graph C3 with a common vertex.
Fullerene graphs
In graph theory, the term fullerene refers to any 3-regular, planar graph with all faces of size 5 or 6 (including the external face). It follows from Euler's polyhedron formula, V – E + F = 2 (where V, E, F indic
|
https://en.wikipedia.org/wiki/Animalia%20Paradoxa
|
(Latin for "contradictory animals"; cf. paradox) are the mythical, magical or otherwise suspect animals mentioned in the first five editions of Carl Linnaeus's seminal work under the header "Paradoxa". It lists fantastic creatures found in medieval bestiaries and some animals reported by explorers from abroad and explains why they are excluded from Systema Naturae. According to Swedish historian Gunnar Broberg, it was to offer a natural explanation and demystify the world of superstition. Paradoxa was dropped from Linnaeus' classification system as of the 6th edition (1748).
Paradoxa
These 10 taxa appear in the 1st to 5th editions:
Hydra: Linnaeus wrote: "Hydra: body of a snake, with two feet, seven necks and the same number of heads, lacking wings, preserved in Hamburg, similar to the description of the Hydra of the Apocalypse of St.John chapters 12 and 13. And it is provided by very many as a true species of animal, but falsely. Nature for itself and always the similar, never naturally makes multiple heads on one body. Fraud and artifice, as we ourselves saw [on it] teeth of a weasel, different from teeth of an Amphibian [or reptile], easily detected." See Carl Linnaeus#Doctorate. (Distinguish from the small real coelenterate Hydra (genus).)
Rana-Piscis: a South American frog which is significantly smaller than its tadpole stage; it was thus (incorrectly) reported to Linnaeus that the metamorphosis in this species went from 'frog to fish'. In the Paradoxa in the 1st edition of Systema Naturae, Linnaeus wrote "Frog-Fish or Frog Changing into Fish: is much against teaching. Frogs, like all Amphibia, delight in lungs and spiny bones. Spiny fish, instead of lungs, are equipped with gills. Therefore the laws of Nature will be against this change. If indeed a fish is equipped with gills, it will be separate from the Frog and Amphibia. If truly [it has] lungs, it will be a Lizard: for under all the sky it differs from Chondropterygii and Plagiuri." In the 10th editi
|
https://en.wikipedia.org/wiki/Experimental%20biology
|
Experimental biology is the set of approaches in the field of biology concerned with the conduct of experiments to investigate and understand biological phenomena. The term is opposed to theoretical biology, which is concerned with the mathematical modelling and abstraction of biological systems. Due to the complexity of the investigated systems, biology is primarily an experimental science. However, as a consequence of the modern increase in computational power, it is now becoming more feasible to find approximate solutions and validate mathematical models of complex living organisms.
The methods employed in experimental biology are numerous and of different natures, including molecular, biochemical, biophysical, microscopical and microbiological methods. See Category:Laboratory techniques for a list of biological experimental techniques.
|
https://en.wikipedia.org/wiki/Species
|
A species is often defined as the largest group of organisms in which any two individuals of the appropriate sexes or mating types can produce fertile offspring, typically by sexual reproduction. It is the basic unit of classification and a taxonomic rank of an organism, as well as a unit of biodiversity. Other ways of defining species include their karyotype, DNA sequence, morphology, behaviour, or ecological niche. In addition, paleontologists use the concept of the chronospecies since fossil reproduction cannot be examined.
The most recent rigorous estimate for the total number of species of eukaryotes is between 8 and 8.7 million. About 14% of these had been described by 2011.
All species (except viruses) are given a two-part name, a "binomial". The first part of a binomial is the genus to which the species belongs. The second part is called the specific name or the specific epithet (in botanical nomenclature, also sometimes in zoological nomenclature). For example, Boa constrictor is one of the species of the genus Boa, with constrictor being the species' epithet.
While the definitions given above may seem adequate at first glance, when looked at more closely they represent problematic species concepts. For example, the boundaries between closely related species become unclear with hybridisation, in a species complex of hundreds of similar microspecies, and in a ring species. Also, among organisms that reproduce only asexually, the concept of a reproductive species breaks down, and each clone is potentially a microspecies. Although none of these are entirely satisfactory definitions, and while the concept of species may not be a perfect model of life, it is still a useful tool to scientists and conservationists for studying life on Earth, regardless of the theoretical difficulties. If species were fixed and clearly distinct from one another, there would be no problem, but evolutionary processes cause species to change. This obliges taxonomists to decide,
|
https://en.wikipedia.org/wiki/List%20of%20minerals%20by%20optical%20properties
|
See also
List of minerals
|
https://en.wikipedia.org/wiki/Simultaneous%20multithreading
|
Simultaneous multithreading (SMT) is a technique for improving the overall efficiency of superscalar CPUs with hardware multithreading. SMT permits multiple independent threads of execution to better use the resources provided by modern processor architectures.
Details
The term multithreading is ambiguous, because not only can multiple threads be executed simultaneously on one CPU core, but also multiple tasks (with different page tables, different task state segments, different protection rings, different I/O permissions, etc.). Although running on the same core, they are completely separated from each other.
Multithreading is similar in concept to preemptive multitasking but is implemented at the thread level of execution in modern superscalar processors.
Simultaneous multithreading (SMT) is one of the two main implementations of multithreading, the other form being temporal multithreading (also known as super-threading). In temporal multithreading, only one thread of instructions can execute in any given pipeline stage at a time. In simultaneous multithreading, instructions from more than one thread can be executed in any given pipeline stage at a time. This is done without great changes to the basic processor architecture: the main additions needed are the ability to fetch instructions from multiple threads in a cycle, and a larger register file to hold data from multiple threads. The number of concurrent threads is decided by the chip designers. Two concurrent threads per CPU core are common, but some processors support up to eight concurrent threads per core.
Because it inevitably increases contention for shared resources, measuring or agreeing on the effectiveness of SMT can be difficult. However, measurements of the energy efficiency of SMT with parallel native and managed workloads on historical 130 nm to 32 nm Intel SMT (hyper-threading) implementations found that the 45 nm and 32 nm implementations are extremely energy efficient, even with in-order Atom processors. In
|
https://en.wikipedia.org/wiki/Delay-locked%20loop
|
In electronics, a delay-locked loop (DLL) is a pseudo-digital circuit similar to a phase-locked loop (PLL), with the main difference being the absence of an internal voltage-controlled oscillator, replaced by a delay line.
A DLL can be used to change the phase of a clock signal (a signal with a periodic waveform), usually to enhance the clock rise-to-data output valid timing characteristics of integrated circuits (such as DRAM devices). DLLs can also be used for clock and data recovery (CDR). From the outside, a DLL can be seen as a negative delay gate placed in the clock path of a digital circuit.
The main component of a DLL is a delay chain composed of many delay gates connected output-to-input. The input of the chain (and thus of the DLL) is connected to the clock that is to be negatively delayed. A multiplexer is connected to each stage of the delay chain; a control circuit automatically updates the selector of this multiplexer to produce the negative delay effect. The output of the DLL is the resulting, negatively delayed clock signal.
Another way to view the difference between a DLL and a PLL is that a DLL uses a variable phase (=delay) block, whereas a PLL uses a variable frequency block.
A DLL compares the phase of its last output with the input clock to generate an error signal which is then integrated and fed back as the control to all of the delay elements.
The integration allows the error to go to zero while keeping the control signal, and thus the delays, where they need to be for phase lock. Since the control signal directly impacts the phase this is all that is required.
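As a rough behavioural sketch of this first-order loop, the delay setting can be treated as the integrator state that accumulates the phase error until the error reaches zero. The numbers below (clock period, initial delay, loop gain) are illustrative assumptions, not values from any real device.

```python
# Minimal behavioural sketch of a DLL control loop; all values are assumed.
T = 10.0            # reference clock period, ns (assumed)
target_phase = T    # lock the last delay tap one full period after the input edge
delay = 3.0         # initial delay-line setting, ns (assumed)
gain = 0.2          # integrator gain, i.e. loop bandwidth (assumed)

for _ in range(40):
    error = target_phase - delay   # phase-detector output
    delay += gain * error          # the integrator accumulates the error into the control
print(f"settled delay = {delay:.4f} ns")   # converges towards 10 ns, the lock point
```

Because the control directly sets a delay (a phase), a single integration is enough for the error to settle at zero, which is the point the following comparison with a PLL makes.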
A PLL compares the phase of its oscillator with the incoming signal to generate an error signal which is then integrated to create a control signal for the voltage-controlled oscillator. The control signal impacts the oscillator's frequency, and phase is the integral of frequency, so a second integration is unavoidably performed by the oscillator itself.
In the Control Systems jargon, th
|
https://en.wikipedia.org/wiki/Dell%20M1000e
|
The Dell blade server products are built around their M1000e enclosure that can hold their server blades, an embedded EqualLogic iSCSI storage area network and I/O modules including Ethernet, Fibre Channel and InfiniBand switches.
Enclosure
The M1000e fits in a 19-inch rack and is 10 rack units high (44 cm), 17.6" (44.7 cm) wide and 29.7" (75.4 cm) deep. The empty blade enclosure weighs 44.5 kg while a fully loaded system can weigh up to 178.8 kg.
On the front the servers are inserted, while at the back the power supplies, fans and I/O modules are inserted together with the management module(s) (CMC or chassis management controller) and the KVM switch. A blade enclosure offers centralized management for the servers and I/O systems of the blade system. Most servers used in the blade system offer an iDRAC card, and one can connect to each server's iDRAC via the M1000e management system. It is also possible to connect a virtual KVM switch to have access to the main console of each installed server.
In June 2013 Dell introduced the PowerEdge VRTX, which is a smaller blade system that shares modules with the M1000e. The blade servers, although following the traditional naming strategy, e.g. M520, M620 (only blades supported), are not interchangeable between the VRTX and the M1000e: the blades differ in firmware and mezzanine connectors.
In 2018 Dell introduced the PowerEdge MX7000, a new MX enclosure model and the next generation of Dell enclosures.
The M1000e enclosure has a front-side and a back-side and thus all communication between the inserted blades and modules goes via the midplane, which has the same function as a backplane but has connectors at both sides where the front side is dedicated for server-blades and the back for I/O modules.
Midplane
The midplane is completely passive. The server-blades are inserted in the front side of the enclosure while all other components can be reached via the back.
The original midplane 1.0 capabilities are Fabric A - Ethernet
|
https://en.wikipedia.org/wiki/Semiconductor%20Chip%20Protection%20Act%20of%201984
|
The Semiconductor Chip Protection Act of 1984 (or SCPA) is an act of the US Congress that makes the layouts of integrated circuits legally protected upon registration, and hence illegal to copy without permission. It is an integrated circuit layout design protection law.
Background
Prior to 1984, it was not necessarily illegal to produce a competing chip with an identical layout. As the legislative history for the SCPA explained, patent and copyright protection for chip layouts (chip topographies) was largely unavailable. This led to considerable complaint by American chip manufacturers—notably, Intel, which, along with the Semiconductor Industry Association (SIA), took the lead in seeking remedial legislation—against what they termed "chip piracy." During the hearings that led to enactment of the SCPA, chip industry representatives asserted that a pirate could, for about $100,000 and in three to five months, copy a chip design that had cost its original manufacturer upwards of $1 million to design.
Enactment of US and other national legislation
In 1984 the United States enacted the Semiconductor Chip Protection Act of 1984 (the SCPA) to protect the topography of semiconductor chips. The SCPA is found in title 17, U.S. Code, sections 901-914 ( 17 U.S.C. §§ 901-914).
Japan and European Community (EC) countries soon followed suit and enacted their own, similar laws protecting the topography of semiconductor chips.
Chip topographies are also protected by TRIPS, an international treaty.
How the SCPA operates
Sui generis law
Although the U.S. SCPA is codified in title 17 (copyrights), the SCPA is not a copyright or patent law. Rather, it is a sui generis law resembling a utility model law or Gebrauchsmuster. It has some aspects of copyright law, some aspects of patent law, and in some ways, it is completely different from either. From Brooktree, ¶ 23:
The Semiconductor Chip Protection Act of 1984 was an innovative solution to this new problem of technology-based industry. While
|
https://en.wikipedia.org/wiki/Noise-domain%20reflectometry
|
Noise-domain reflectometry is a type of reflectometry where the reflectometer exploits existing data signals on wiring and does not have to generate any signals itself. Noise-domain reflectometry, like time-domain and spread-spectrum time domain reflectometers, is most often used in identifying the location of wire faults in electrical lines.
Time-domain reflectometers work by generating a signal and then sending that signal down the wireline and examining the reflected signal. Noise-domain reflectometers (NDRs) provide the benefit of locating wire faults without introducing an external signal because the NDR examines the existing signals on the line to identify wire faults. This technique is particularly useful in the testing of live wires where data integrity on the wires is critical. For example, NDRs can be used for monitoring aircraft wiring while in flight.
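One way to picture the NDR approach is as a correlation measurement: the existing data signal is cross-correlated with the waveform observed on the line, and a secondary correlation peak reveals the round-trip delay to an impedance discontinuity. The following sketch illustrates the idea on synthetic data; the propagation velocity, sampling rate and fault distance are assumed values, and real NDR hardware is considerably more involved.

```python
import numpy as np

rng = np.random.default_rng(0)
v_p = 2.0e8          # assumed propagation velocity on the cable, m/s
fs = 1.0e9           # assumed sampling rate, samples/s
fault_dist = 30.0    # metres to a hypothetical fault

# Existing data signal on the wire (NDR injects no test signal of its own).
n = 4096
signal = rng.standard_normal(n)

# Round-trip delay to the fault and back, in samples.
delay = int(round(2 * fault_dist / v_p * fs))   # 300 samples here

# Observed waveform: the signal plus an attenuated, delayed reflection.
observed = signal.copy()
observed[delay:] += 0.4 * signal[:-delay]

# Cross-correlate the observed waveform with the original signal;
# the secondary peak location gives the round-trip delay.
corr = np.correlate(observed, signal, mode="full")
lags = np.arange(-n + 1, n)
mask = lags > 10                     # ignore the zero-lag (direct) peak
est_delay = lags[mask][np.argmax(corr[mask])]
est_dist = est_delay / fs * v_p / 2
print(f"estimated fault distance: {est_dist:.1f} m")   # ~30 m
```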
See also
Spread-spectrum time-domain reflectometry
Time-domain reflectometry
|
https://en.wikipedia.org/wiki/Mesh%20analysis
|
Mesh analysis (or the mesh current method) is a method that is used to solve planar circuits for the currents (and indirectly the voltages) at any place in the electrical circuit. Planar circuits are circuits that can be drawn on a plane surface with no wires crossing each other. A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be applied to any circuit, planar or not. Mesh analysis and loop analysis both make use of Kirchhoff’s voltage law to arrive at a set of equations guaranteed to be solvable if the circuit has a solution. Mesh analysis is usually easier to use when the circuit is planar, compared to loop analysis.
Mesh currents and essential meshes
Mesh analysis works by arbitrarily assigning mesh currents in the essential meshes (also referred to as independent meshes). An essential mesh is a loop in the circuit that does not contain any other loop. Figure 1 labels the essential meshes with one, two, and three.
A mesh current is a current that loops around an essential mesh, and the equations are solved in terms of the mesh currents. A mesh current may not correspond to any physically flowing current, but the physical currents are easily found from the mesh currents. It is usual practice to have all the mesh currents loop in the same direction. This helps prevent errors when writing out the equations. The convention is to have all the mesh currents looping in a clockwise direction. Figure 2 shows the same circuit from Figure 1 with the mesh currents labeled.
Solving for mesh currents instead of directly applying Kirchhoff's current law and Kirchhoff's voltage law can greatly reduce the amount of calculation required. This is because there are fewer mesh currents than there are physical branch currents. In figure 2 for example, there are six branch currents but only three mesh currents.
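As a concrete sketch of the equations set up in the next section, consider a hypothetical two-mesh circuit in which a voltage source V1 drives mesh 1 through R1, R3 is shared between the two meshes, and R2 closes mesh 2. The component values are made up purely for illustration.

```python
import numpy as np

V1 = 10.0                      # volts (assumed)
R1, R2, R3 = 2.0, 4.0, 6.0     # ohms (assumed)

# KVL around each essential mesh, both mesh currents taken clockwise:
#   mesh 1: (R1 + R3)*i1 - R3*i2 = V1
#   mesh 2: -R3*i1 + (R2 + R3)*i2 = 0
A = np.array([[R1 + R3, -R3],
              [-R3,      R2 + R3]])
b = np.array([V1, 0.0])

i1, i2 = np.linalg.solve(A, b)
print(f"i1 = {i1:.3f} A, i2 = {i2:.3f} A")
# The physical current through the shared resistor R3 is the difference of
# the two mesh currents.
print(f"current through R3 = {i1 - i2:.3f} A")
```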
Setting up the equations
Each mesh produces one equation. These equations are the sum of the voltage drops in a comple
|
https://en.wikipedia.org/wiki/Soil%20seed%20bank
|
The soil seed bank is the natural storage of seeds, often dormant, within the soil of most ecosystems. The study of soil seed banks started in 1859 when Charles Darwin observed the emergence of seedlings using soil samples from the bottom of a lake. The first scientific paper on the subject was published in 1882 and reported on the occurrence of seeds at different soil depths. Weed seed banks have been studied intensely in agricultural science because of their important economic impacts; other fields interested in soil seed banks include forest regeneration and restoration ecology.
Henry David Thoreau wrote that the contemporary popular belief explaining the succession of a logged forest, specifically to trees of a dissimilar species to the trees cut down, was that seeds either spontaneously generated in the soil, or sprouted after lying dormant for centuries. However, he dismissed this idea, noting that heavy nuts unsuited for distribution by wind were distributed instead by animals.
Background
Many taxa have been classified according to the longevity of their seeds in the soil seed bank. Seeds of transient species remain viable in the soil seed bank only to the next opportunity to germinate, while seeds of persistent species can survive longer than the next opportunity—often much longer than one year. Species with seeds that remain viable in the soil longer than five years form the long-term persistent seed bank, while species whose seeds generally germinate or die within one to five years are called short-term persistent. A typical long-term persistent species is Chenopodium album (Lambsquarters); its seeds commonly remain viable in the soil for up to 40 years and in rare situations perhaps as long as 1,600 years. A species forming no soil seed bank at all (except the dry season between ripening and the first autumnal rains) is Agrostemma githago (Corncockle), which was formerly a widespread cereal weed.
Seed longevity
Longevity of seeds is very var
|
https://en.wikipedia.org/wiki/Resilience%20%28mathematics%29
|
In mathematical modeling, resilience refers to the ability of a dynamical system to recover from perturbations and return to its original stable steady state. It is a measure of the stability and robustness of a system in the face of changes or disturbances. If a system is not resilient enough, it is more susceptible to perturbations and can more easily undergo a critical transition. A common analogy used to explain the concept of resilience of an equilibrium is one of a ball in a valley. A resilient steady state corresponds to a ball in a deep valley, so any push or perturbation will very quickly lead the ball to return to the resting point where it started. On the other hand, a less resilient steady state corresponds to a ball in a shallow valley, so the ball will take a much longer time to return to the equilibrium after a perturbation.
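Near a stable steady state the ball-in-a-valley picture can be caricatured by linearising the dynamics to dx/dt = -k*x, where the curvature of the "valley" sets the recovery rate k. The sketch below, with arbitrary illustrative values, shows how a deeper valley (larger k) gives a much shorter return time after the same perturbation.

```python
# Minimal sketch of recovery from a perturbation for dx/dt = -k*x.
# k, x0, tol and dt are illustrative assumptions, not tied to any real system.
def recovery_time(k, x0=1.0, tol=0.01, dt=1e-3):
    """Time for a perturbation x0 to decay below tol under dx/dt = -k*x."""
    x, t = x0, 0.0
    while abs(x) > tol:
        x += -k * x * dt   # forward-Euler step of the linearised dynamics
        t += dt
    return t

print(recovery_time(k=2.0))   # deep valley: fast return (~2.3 time units)
print(recovery_time(k=0.2))   # shallow valley: roughly ten times slower
```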
The concept of resilience is particularly useful in systems that exhibit tipping points, whose study has a long history that can be traced back to catastrophe theory. While this theory was initially overhyped and fell out of favor, its mathematical foundation remains strong and is now recognized as relevant to many different systems.
History
In 1973, Canadian ecologist C. S. Holling proposed a definition of resilience in the context of ecological systems. According to Holling, resilience is "a measure of the persistence of systems and of their ability to absorb change and disturbance and still maintain the same relationships between populations or state variables". Holling distinguished two types of resilience: engineering resilience and ecological resilience. Engineering resilience refers to the ability of a system to return to its original state after a disturbance, such as a bridge that can be repaired after an earthquake. Ecological resilience, on the other hand, refers to the ability of a system to maintain its identity and function despite a disturbance, such as a forest that can regenerate after a wildfire while maintain
|
https://en.wikipedia.org/wiki/Biosphere
|
The biosphere (from Greek βίος bíos "life" and σφαῖρα sphaira "sphere"), also known as the ecosphere (from Greek οἶκος oîkos "environment" and σφαῖρα), is the worldwide sum of all ecosystems. It can also be termed the zone of life on Earth. The biosphere (which is technically a spherical shell) is virtually a closed system with regard to matter, with minimal inputs and outputs. Regarding energy, it is an open system, with photosynthesis capturing solar energy at a rate of around 100 terawatts. By the most general biophysiological definition, the biosphere is the global ecological system integrating all living beings and their relationships, including their interaction with the elements of the lithosphere, cryosphere, hydrosphere, and atmosphere. The biosphere is postulated to have evolved, beginning with a process of biopoiesis (life created naturally from matter, such as simple organic compounds) or biogenesis (life created from living matter), at least some 3.5 billion years ago.
In a general sense, biospheres are any closed, self-regulating systems containing ecosystems. This includes artificial, human-made biospheres, and potentially ones on other planets or moons.
Origin and use of the term
The term "biosphere" was coined in 1875 by geologist Eduard Suess, who defined it as the place on Earth's surface where life dwells.
While the concept has a geological origin, it is an indication of the effect of both Charles Darwin and Matthew F. Maury on the Earth sciences. The biosphere's ecological context comes from the 1920s (see Vladimir I. Vernadsky), preceding the 1935 introduction of the term "ecosystem" by Sir Arthur Tansley (see ecology history). Vernadsky defined ecology as the science of the biosphere. It is an interdisciplinary concept for integrating astronomy, geophysics, meteorology, biogeography, evolution, geology, geochemistry, hydrology and, generally speaking, all life and Earth sciences.
Narrow definition
Geochemists define the biosphere as
|
https://en.wikipedia.org/wiki/The%20COED%20Project
|
The COED Project, or the COmmunications and EDiting Project, was an innovative software project created by the Computer Division of NOAA, US Department of Commerce in Boulder, Colorado in the 1970s. This project was designed, purchased and implemented by the in-house computing staff rather than any official organization.
Intent
The computer division previously had a history of frequently replacing its mainframe computers. Starting with a CDC 1604, then a CDC 3600, a couple of CDC 3800s, and finally a CDC 6600. The department also had an XDS 940 timesharing system which would support up to 32 users on dial-up modems. Due to rapidly changing requirements for computer resources, it was expected that new systems would be installed on a regular basis, and the resultant strain on the users to adapt to each new system was perceived to be excessive. The COED project was the result of a study group convened to solve this problem.
The project was implemented by the computer specialists who were also responsible for the purchase, installation, and maintenance of all the computers in the division. COED was designed and implemented in long hours of overtime. The data communications aspect of the system was fully implemented and resulted in greatly improved access to the XDS 940 and CDC 6600 systems. It was also used as the front end of the Free University of Amsterdam's SARA system for many years.
Design
A complete networked system was a pair of Modcomps: one II handled up to 256 communication ports, and one IV handled the disks and file editing. The system was designed to be fully redundant. If one pair failed the other automatically took over. All computer systems in the network were kept time-synchronized so that all file dates/times would be accurate - synchronized to the National Bureau of Standards atomic clock, housed in the same building. Another innovation was asynchronous dynamic speed recognition. After a terminal connected to a port, the user would type a Carr
|
https://en.wikipedia.org/wiki/Junos%20OS
|
Junos OS (also known as Juniper Junos, Junos and JUNOS) is a FreeBSD-based network operating system used in Juniper Networks routing, switching and security devices.
Versioning
Junos OS was first made available on 7 July 1998, with new feature updates being released every quarter as of 2008. As of 2023, the latest version is Junos OS 23.2, released on 23 June 2023.
Architecture
The Junos operating system is primarily based on Linux and FreeBSD, with Linux running on bare metal and FreeBSD running in a QEMU virtual machine. Because FreeBSD is a Unix implementation, users can access a Unix shell and execute normal Unix commands. Junos runs on most or all Juniper hardware systems. After the acquisition of NetScreen by Juniper Networks, Juniper integrated ScreenOS security functions into its own Junos network operating system.
Junos OS has several architecture variations:
Junos OS is based on the FreeBSD operating system and can run as a guest virtual machine (VM) on a Linux VM host.
Junos OS Evolved, which runs native Linux and provides direct access to Linux utilities and operations.
Both operating systems use the same command-line interface (CLI), the same applications and features, and the same management and automation tools—but the Junos OS Evolved infrastructure has been entirely modernized to enable higher availability, accelerated deployment, greater innovation, and improved operational efficiencies.
Features
Junos SDK
Junos's ecosystem includes a Software Development Kit (SDK). The Juniper Developer Network (JDN) provides the Junos SDK to third-party developers who want to develop applications for Junos-powered devices such as Juniper Networks routers, switches, and service gateway systems. It provides a set of tools and application programming interfaces (APIs), including interfaces to Junos routing, firewall filter, UI and traffic services functions. Additionally, the Junos SDK is used to develop other Juniper products such as OpenFlow for Junos, and other traffic se
|
https://en.wikipedia.org/wiki/Jouanolou%27s%20trick
|
In algebraic geometry, Jouanolou's trick is a theorem that asserts, for an algebraic variety X, the existence of a surjection with affine fibers from an affine variety W to X. The variety W is therefore homotopy-equivalent to X, but it has the technically advantageous property of being affine. Jouanolou's original statement of the theorem required that X be quasi-projective over an affine scheme, but this has since been considerably weakened.
Jouanolou's construction
Jouanolou's original statement was:
If X is a scheme quasi-projective over an affine scheme, then there exists a vector bundle E over X and an affine E-torsor W.
By the definition of a torsor, W comes with a surjective map to X and is Zariski-locally on X an affine space bundle.
Jouanolou's proof used an explicit construction. Let S be an affine scheme and X = P^r, projective r-space over S. Interpret the affine space A^((r + 1)^2) over S as the space of (r + 1) × (r + 1) matrices over S. Within this affine space, there is a subvariety W consisting of idempotent matrices of rank one. The image of such a matrix is therefore a point in X, and the map that sends a matrix to the point corresponding to its image is the map claimed in the statement of the theorem. To show that this map has the desired properties, Jouanolou notes that there is a short exact sequence of vector bundles:
where the first map is defined by multiplication by a basis of sections of and the second map is the cokernel. Jouanolou then asserts that W is a torsor for .
Jouanolou deduces the theorem in general by reducing to the above case. If X is projective over an affine scheme S, then it admits a closed immersion into some projective space . Pulling back the variety W constructed above for along this immersion yields the desired variety W for X. Finally, if X is quasi-projective, then it may be realized as an open subscheme of a projective S-scheme. Blow up the complement of X to get , and let denote the inclusion morphism. The complement of X in is a Cartier div
|
https://en.wikipedia.org/wiki/Built-in%20self-test
|
A built-in self-test (BIST) or built-in test (BIT) is a mechanism that permits a machine to test itself. Engineers design BISTs to meet requirements such as:
high reliability
lower repair cycle times
or constraints such as:
limited technician accessibility
cost of testing during manufacture
The main purpose of BIST is to reduce the complexity, and thereby decrease the cost and reduce reliance upon external (pattern-programmed) test equipment. BIST reduces cost in two ways:
reduces test-cycle duration
reduces the complexity of the test/probe setup, by reducing the number of I/O signals that must be driven/examined under tester control.
Both lead to a reduction in hourly charges for automated test equipment (ATE) service.
Applications
BIST is commonly placed in weapons, avionics, medical devices, automotive electronics, complex machinery of all types, unattended machinery of all types, and integrated circuits.
Automotive
Automotive electronics test themselves to enhance safety and reliability. For example, most vehicles with antilock brakes test them once per safety interval. If the antilock brake system has a broken wire or other fault, the brake system reverts to operating as a normal brake system. Most automotive engine controllers incorporate a "limp mode" for each sensor, so that the engine will continue to operate if the sensor or its wiring fails. Another, more trivial example of a limp mode is that some cars test door switches, and automatically turn lights on using seat-belt occupancy sensors if the door switches fail.
Aviation
Almost all avionics now incorporate BIST. In avionics, the purpose is to isolate failing line-replaceable units, which are then removed and repaired elsewhere, usually in depots or at the manufacturer. Commercial aircraft only make money when they fly, so they use BIST to minimize the time on the ground needed for repair and to increase the level of safety of the system which contains BIST. Similar arguments apply to military ai
|
https://en.wikipedia.org/wiki/List%20of%20numerical%20analysis%20topics
|
This is a list of numerical analysis topics.
General
Validated numerics
Iterative method
Rate of convergence — the speed at which a convergent sequence approaches its limit
Order of accuracy — rate at which numerical solution of differential equation converges to exact solution
Series acceleration — methods to accelerate the speed of convergence of a series
Aitken's delta-squared process — most useful for linearly converging sequences
Minimum polynomial extrapolation — for vector sequences
Richardson extrapolation
Shanks transformation — similar to Aitken's delta-squared process, but applied to the partial sums
Van Wijngaarden transformation — for accelerating the convergence of an alternating series
Abramowitz and Stegun — book containing formulas and tables of many special functions
Digital Library of Mathematical Functions — successor of book by Abramowitz and Stegun
Curse of dimensionality
Local convergence and global convergence — whether you need a good initial guess to get convergence
Superconvergence
Discretization
Difference quotient
Complexity:
Computational complexity of mathematical operations
Smoothed analysis — measuring the expected performance of algorithms under slight random perturbations of worst-case inputs
Symbolic-numeric computation — combination of symbolic and numeric methods
Cultural and historical aspects:
History of numerical solution of differential equations using computers
Hundred-dollar, Hundred-digit Challenge problems — list of ten problems proposed by Nick Trefethen in 2002
International Workshops on Lattice QCD and Numerical Analysis
Timeline of numerical analysis after 1945
General classes of methods:
Collocation method — discretizes a continuous equation by requiring it only to hold at certain points
Level-set method
Level set (data structures) — data structures for representing level sets
Sinc numerical methods — methods based on the sinc function, sinc(x) = sin(x) / x
ABS methods
Error
Error analysis (mathematics)
Approximat
|
https://en.wikipedia.org/wiki/Operability
|
Operability is the ability to keep a piece of equipment, a system or a whole industrial installation in a safe and reliable functioning condition, according to pre-defined operational requirements.
In a computing systems environment with multiple systems this includes the ability of products, systems and business processes to work together to accomplish a common task such as finding and returning availability of inventory for flight.
For a gas turbine engine, operability addresses the installed aerodynamic operation of the engine to ensure that it operates with care-free throttle handling without compressor stall or surge or combustor flame-out. There must be no unacceptable loss of power or handling deterioration after ingesting birds, rain and hail or ingesting or accumulating ice. Design and development responsibilities include the components through which the thrust/power-producing flow passes, i.e. the intake, compressor, combustor, fuel system, turbine and exhaust. They also include the software in the computers which control the way the engine changes its speed in response to the actions of the pilot in selecting a start, selecting different idle settings and higher power ratings such as take-off, climb and cruise. The engine has to start to idle and accelerate and decelerate within agreed, or mandated, times while remaining within operating limits (shaft speeds, turbine temperature, combustor casing pressure) over the required aircraft operating envelope.
Operability is considered one of the ilities and is closely related to reliability, supportability and maintainability.
Operability also refers to whether or not a surgical operation can be performed to treat a patient with a reasonable degree of safety and chance of success.
|
https://en.wikipedia.org/wiki/List%20of%20small%20groups
|
The following list in mathematics contains the finite groups of small order up to group isomorphism.
Counts
For n = 1, 2, … the number of nonisomorphic groups of order n is
1, 1, 1, 2, 1, 2, 1, 5, 2, 2, 1, 5, 1, 2, 1, 14, 1, 5, 1, 5, ...
For labeled groups, see .
Glossary
Each group is named by the Small Groups library as Goi, where o is the order of the group, and i is the index used to label the group within that order.
Common group names:
Zn: the cyclic group of order n (the notation Cn is also used; it is isomorphic to the additive group of Z/nZ)
Dihn: the dihedral group of order 2n (often the notation Dn or D2n is used)
K4: the Klein four-group of order 4, same as Z2 × Z2 and Dih2
D2n: the dihedral group of order 2n, the same as Dihn (notation used in section List of small non-abelian groups)
Sn: the symmetric group of degree n, containing the n! permutations of n elements
An: the alternating group of degree n, containing the even permutations of n elements, of order 1 for n ≤ 1, and order n!/2 otherwise
Dicn or Q4n: the dicyclic group of order 4n
Q8: the quaternion group of order 8, also Dic2
The notations Zn and Dihn have the advantage that point groups in three dimensions Cn and Dn do not have the same notation. There are more isometry groups than these two, of the same abstract group type.
The notation G × H denotes the direct product of the two groups; Gn denotes the direct product of a group with itself n times. G ⋊ H denotes a semidirect product where H acts on G; this may also depend on the choice of action of H on G.
Abelian and simple groups are noted. (For groups of order less than 60, the simple groups are precisely the cyclic groups Zn, for prime n.) The equality sign ("=") denotes isomorphism.
The identity element in the cycle graphs is represented by the black circle. The lowest order for which the cycle graph does not uniquely represent a group is order 16.
In the lists of subgroups, the trivial group and the group itself are not listed. Where there are s
|
https://en.wikipedia.org/wiki/Journal%20of%20Mathematics%20and%20the%20Arts
|
The Journal of Mathematics and the Arts is a quarterly peer-reviewed academic journal that deals with relationship between mathematics and the arts.
The journal was established in 2007 and is published by Taylor & Francis. The editor-in-chief is Mara Alagic (Wichita State University, Kansas).
|
https://en.wikipedia.org/wiki/Integrated%20circuit%20layout%20design%20protection
|
Layout designs (topographies) of integrated circuits are a field in the protection of intellectual property.
In United States intellectual property law, a "mask work" is a two or three-dimensional layout or topography of an integrated circuit (IC or "chip"), i.e. the arrangement on a chip of semiconductor devices such as transistors and passive electronic components such as resistors and interconnections. The layout is called a mask work because, in photolithographic processes, the multiple etched layers within actual ICs are each created using a mask, called the photomask, to permit or block the light at specific locations, sometimes for hundreds of chips on a wafer simultaneously.
Because of the functional nature of the mask geometry, the designs cannot be effectively protected under copyright law (except perhaps as decorative art). Similarly, because individual lithographic mask works are not clearly protectable subject matter, they also cannot be effectively protected under patent law, although any processes implemented in the work may be patentable. So since the 1990s, national governments have been granting copyright-like exclusive rights conferring time-limited exclusivity to reproduction of a particular layout. Terms of integrated circuit rights are usually shorter than copyrights applicable on pictures.
International law
A diplomatic conference was held at Washington, D.C., in 1989, which adopted a Treaty on Intellectual Property in Respect of Integrated Circuits, also called the Washington Treaty or IPIC Treaty. The Treaty, signed at Washington on May 26, 1989, is open to member states of the United Nations (UN) World Intellectual Property Organization (WIPO) and to intergovernmental organizations meeting certain criteria. The Treaty has been incorporated by reference into the TRIPS Agreement of the World Trade Organization (WTO), subject to the following modifications: the term of protection is at least 10 (rather than eight) years from the date of f
|
https://en.wikipedia.org/wiki/Elmore%20delay
|
Elmore delay is a simple approximation to the delay through an RC network in an electronic system. It is often used in applications such as logic synthesis, delay calculation, static timing analysis, placement and routing, since it is simple to compute (especially in tree structured networks, which are the vast majority of signal nets within ICs) and is reasonably accurate. Even where it is not accurate, it is usually faithful, in the sense that reducing the Elmore delay will almost always reduce the true delay, so it is still useful in optimization.
Elmore delay can be thought of in several ways, all mathematically identical.
For tree structured networks, find the delay through each segment as the R (electrical resistance) times the downstream C (electrical capacitance). Sum the delays from the root to the sink.
Assume the output is a simple exponential, and find the exponential that has the same integral as the true response. This is also equivalent to moment matching with one moment, since the first moment is a pure exponential.
Find a one pole approximation to the true frequency response. This is a first-order Padé approximation.
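For the tree formulation, the computation is a short traversal: for each sink, sum over the segments on the root-to-sink path the segment resistance times the total capacitance downstream of that segment. The sketch below uses a tiny hypothetical RC tree with made-up resistance and capacitance values.

```python
# Minimal sketch of Elmore delay on a small, hypothetical RC tree.
# Each node records its parent, the resistance of the segment from its parent,
# and its own capacitance to ground. Values are illustrative only.
tree = {
    "root": {"parent": None,   "R": 0.0,   "C": 0.0},
    "a":    {"parent": "root", "R": 100.0, "C": 1e-12},
    "b":    {"parent": "a",    "R": 200.0, "C": 2e-12},   # sink 1
    "c":    {"parent": "a",    "R": 150.0, "C": 1e-12},   # sink 2
}

def downstream_cap(node):
    """Total capacitance of the subtree fed through `node` (including itself)."""
    total = tree[node]["C"]
    for child, data in tree.items():
        if data["parent"] == node:
            total += downstream_cap(child)
    return total

def elmore_delay(sink):
    """Sum, along the path root -> sink, of each segment's R times its downstream C."""
    delay, node = 0.0, sink
    while tree[node]["parent"] is not None:
        delay += tree[node]["R"] * downstream_cap(node)
        node = tree[node]["parent"]
    return delay

for sink in ("b", "c"):
    print(sink, elmore_delay(sink))   # e.g. sink "b": 200*2pF + 100*4pF = 0.8 ns
```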
There are many extensions to Elmore delay. It can be extended to upper and lower bounds, to include inductance as well as R and C, to be more accurate (higher order approximations) and so on. See delay calculation for more details and references.
See also
Delay calculation
Static timing analysis
William Cronk Elmore
|
https://en.wikipedia.org/wiki/List%20of%20polynomial%20topics
|
This is a list of polynomial topics, by Wikipedia page. See also trigonometric polynomial, list of algebraic geometry topics.
Terminology
Degree: The maximum of the degrees of the monomials.
Factor: An expression being multiplied.
Linear factor: A factor of degree one.
Coefficient: An expression multiplying one of the monomials of the polynomial.
Root (or zero) of a polynomial: Given a polynomial p(x), the x values that satisfy p(x) = 0 are called roots (or zeroes) of the polynomial p.
Graphing
End behaviour –
Concavity –
Orientation –
Tangency point –
Inflection point – Point where concavity changes.
Basics
Polynomial
Coefficient
Monomial
Polynomial long division
Synthetic division
Polynomial factorization
Rational function
Partial fraction
Partial fraction decomposition over R
Vieta's formulas
Integer-valued polynomial
Algebraic equation
Factor theorem
Polynomial remainder theorem
Elementary abstract algebra
See also Theory of equations below.
Polynomial ring
Greatest common divisor of two polynomials
Symmetric function
Homogeneous polynomial
Polynomial SOS (sum of squares)
Theory of equations
Polynomial family
Quadratic function
Cubic function
Quartic function
Quintic function
Sextic function
Septic function
Octic function
Completing the square
Abel–Ruffini theorem
Bring radical
Binomial theorem
Blossom (functional)
Root of a function
nth root (radical)
Surd
Square root
Methods of computing square roots
Cube root
Root of unity
Constructible number
Complex conjugate root theorem
Algebraic element
Horner scheme
Rational root theorem
Gauss's lemma (polynomial)
Irreducible polynomial
Eisenstein's criterion
Primitive polynomial
Fundamental theorem of algebra
Hurwitz polynomial
Polynomial transformation
Tschirnhaus transformation
Galois theory
Discriminant of a polynomial
Resultant
Elimination theory
Gröbner basis
Regular chain
Triangular decomposition
Sturm's theorem
Descartes' rule of signs
Carlitz–Wan conjecture
Po
|
https://en.wikipedia.org/wiki/Census%20of%20Marine%20Life
|
The Census of Marine Life was a 10-year, US $650 million scientific initiative, involving a global network of researchers in more than 80 nations, engaged to assess and explain the diversity, distribution, and abundance of life in the oceans. The world's first comprehensive Census of Marine Life — past, present, and future — was released in 2010 in London. Initially supported by funding from the Alfred P. Sloan Foundation, the project was successful in generating many times that initial investment in additional support and substantially increased the baselines of knowledge in often underexplored ocean realms, as well as engaging over 2,700 different researchers for the first time in a global collaborative community united in a common goal, and has been described as "one of the largest scientific collaborations ever conducted".
Project history
According to Jesse Ausubel, Senior Research Associate of the Program for the Human Environment of Rockefeller University and science advisor to the Alfred P. Sloan Foundation, the idea for a "Census of Marine Life" originated in conversations between himself and Dr. J. Frederick Grassle, an oceanographer and benthic ecology professor at Rutgers University, in 1996. Grassle had been urged to talk with Ausubel by former colleagues at the Woods Hole Oceanographic Institution and was at that time unaware that Ausubel was also a program manager at the Alfred P. Sloan Foundation, funders of a number of other large scale "public good" science-based projects such as the Sloan Digital Sky Survey. Ausubel was instrumental in persuading the Foundation to fund a series of "feasibility workshops" over the period 1997-1998 into how the project might be conducted, one result of these workshops being the broadening of the initial concept from a "Census of the Fishes" into a comprehensive "Census of Marine Life". Results from these workshops, plus associated invited contributions, formed the basis of a special issue of Oceanography magazine i
|
https://en.wikipedia.org/wiki/Reflected-wave%20switching
|
Reflected-wave switching is a signalling technique used in backplane computer buses such as PCI.
A backplane computer bus is a type of multilayer printed circuit board that has at least one (almost) solid layer of copper called the ground plane, and at least one layer of copper tracks that are used as wires for the signals. Each signal travels along a transmission line formed by its track and the narrow strip of ground plane directly beneath it. This structure is known in radio engineering as microstrip line.
Each signal travels from a transmitter to one or more receivers. Most computer buses use binary digital signals, which are sequences of pulses of fixed amplitude. In order to receive the correct data, the receiver must detect each pulse once, and only once. To ensure this, the designer must take the high-frequency characteristics of the microstrip into account.
When a pulse is launched into the microstrip by the transmitter, its amplitude depends on the ratio of the impedances of the transmitter and the microstrip. The impedance of the transmitter is simply its output resistance. The impedance of the microstrip is its characteristic impedance, which depends on its dimensions and on the materials used in the backplane's construction. As the leading edge of the pulse (the incident wave) passes the receiver, it may or may not have sufficient amplitude to be detected. If it does, then the system is said to use incident-wave switching. This is the system used in most computer buses predating PCI, such as the VME bus.
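The launch amplitude is simply a resistive divider between the driver's output resistance and the line's characteristic impedance; and, as discussed below, an unterminated (open) end reflects the wave with coefficient +1, doubling the step seen there, which is what reflected-wave switching relies on. The numbers in this sketch are illustrative only, not taken from any particular bus specification.

```python
# Illustrative launch amplitude and open-end reflection; all values assumed.
V_drive = 3.3      # driver output swing, volts
R_out   = 25.0     # driver output resistance, ohms
Z0      = 65.0     # characteristic impedance of the loaded microstrip, ohms

# The incident wave is set by the divider between the driver and the line.
V_incident = V_drive * Z0 / (R_out + Z0)

# With no termination, the open end reflects with coefficient +1,
# so the voltage at the far end steps to twice the incident amplitude.
gamma_open = 1.0
V_far_end = V_incident * (1 + gamma_open)

print(f"incident wave   : {V_incident:.2f} V")   # ~2.38 V, may not trip receivers
print(f"after reflection: {V_far_end:.2f} V")    # ~4.77 V (clamped in practice)
```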
When the pulse reaches the end of the microstrip, its behaviour depends on the circuit conditions at this point. If the microstrip is correctly terminated (usually with a combination of resistors), the pulse is absorbed and its energy is converted to heat. This is the case in an incident-wave switching bus. If, on the other hand, there is no termination at the end of the microstrip, and the pulse encounters an open circuit, it is reflec
|
https://en.wikipedia.org/wiki/List%20of%20formal%20systems
|
This is a list of formal systems, also known as logical calculi.
Mathematical
Domain relational calculus, a calculus for the relational data model
Functional calculus, a way to apply various types of functions to operators
Join calculus, a theoretical model for distributed programming
Lambda calculus, a formulation of the theory of reflexive functions that has deep connections to computational theory
Matrix calculus, a specialized notation for multivariable calculus over spaces of matrices
Modal μ-calculus, a common temporal logic used by formal verification methods such as model checking
Pi-calculus, a formulation of the theory of concurrent, communicating processes that was invented by Robin Milner
Predicate calculus, specifies the rules of inference governing the logic of predicates
Propositional calculus, specifies the rules of inference governing the logic of propositions
Refinement calculus, a way of refining models of programs into efficient programs
Rho calculus, introduced as a general means to uniformly integrate rewriting and lambda calculus
Tuple calculus, a calculus for the relational data model, inspired the SQL language
Umbral calculus, the combinatorics of certain operations on polynomials
Vector calculus (also called vector analysis), comprising specialized notations for multivariable analysis of vectors in an inner-product space
Other formal systems
Musical notation has also been described as a formal system.
See also
Formal systems
|
https://en.wikipedia.org/wiki/Cline%20%28biology%29
|
In biology, a cline (from the Greek κλίνειν klinein, meaning "to lean") is a measurable gradient in a single characteristic (or biological trait) of a species across its geographical range. First coined by Julian Huxley in 1938, the cline usually has a genetic (e.g. allele frequency, blood type), or phenotypic (e.g. body size, skin pigmentation) character. Clines can show smooth, continuous gradation in a character, or they may show more abrupt changes in the trait from one geographic region to the next.
A cline refers to a spatial gradient in a specific, singular trait, rather than a collection of traits; a single population can therefore have as many clines as it has traits, at least in principle. Additionally, Huxley recognised that these multiple independent clines may not act in concordance with each other. For example, it has been observed that in Australia, birds generally become smaller the further towards the north of the country they are found. In contrast, the intensity of their plumage colouration follows a different geographical trajectory, being most vibrant where humidity is highest and becoming less vibrant further into the arid centre of the country.
Because of this, clines were defined by Huxley as being an "auxiliary taxonomic principle"; that is, clinal variation in a species is not awarded taxonomic recognition in the way subspecies or species are.
While the terms "ecotype" and "cline" are sometimes used interchangeably, they do in fact differ in that "ecotype" refers to a population which differs from other populations in a number of characters, rather than the single character that varies amongst populations in a cline.
Drivers and the evolution of clines
Clines are often cited to be the result of two opposing drivers: selection and gene flow (also known as migration). Selection causes adaptation to the local environment, resulting in different genotypes or phenotypes being favoured in different environments. This diversifying force is c
|
https://en.wikipedia.org/wiki/Transistor%20model
|
Transistors are simple devices with complicated behavior. In order to ensure the reliable operation of circuits employing transistors, it is necessary to scientifically model the physical phenomena observed in their operation using transistor models. There exists a variety of different models that range in complexity and in purpose. Transistor models divide into two major groups: models for device design and models for circuit design.
Models for device design
The modern transistor has an internal structure that exploits complex physical mechanisms. Device design requires a detailed understanding of how device manufacturing processes such as ion implantation, impurity diffusion, oxide growth, annealing, and etching affect device behavior. Process models simulate the manufacturing steps and provide a microscopic description of device "geometry" to the device simulator. "Geometry" does not mean readily identified geometrical features such as a planar or wrap-around gate structure, or raised or recessed forms of source and drain (see Figure 1 for a memory device with some unusual modeling challenges related to charging the floating gate by an avalanche process). It also refers to details inside the structure, such as the doping profiles after completion of device processing.
With this information about what the device looks like, the device simulator models the physical processes taking place in the device to determine its electrical behavior in a variety of circumstances: DC current–voltage behavior, transient behavior (both large-signal and small-signal), dependence on device layout (long and narrow versus short and wide, or interdigitated versus rectangular, or isolated versus proximate to other devices). These simulations tell the device designer whether the device process will produce devices with the electrical behavior needed by the circuit designer, and is used to inform the process designer about any necessary process improvements. Once the process gets close
|
https://en.wikipedia.org/wiki/Cut-through%20switching
|
In computer networking, cut-through switching, also called cut-through forwarding, is a method for packet switching systems, wherein the switch starts forwarding a frame (or packet) before the whole frame has been received, normally as soon as the destination address and outgoing interface have been determined. Compared to store and forward, this technique reduces latency through the switch and relies on the destination devices for error handling. Pure cut-through switching is only possible when the speed of the outgoing interface is equal to or higher than the incoming interface speed.
Adaptive switching dynamically selects between cut-through and store and forward behaviors based on current network conditions.
Cut-through switching is closely associated with wormhole switching.
Use in Ethernet
When cut-through switching is used in Ethernet the switch is not able to verify the integrity of an incoming frame before forwarding it.
The technology was developed by Kalpana, the company that introduced the first Ethernet switch.
The primary advantage of cut-through Ethernet switches, compared to store-and-forward Ethernet switches, is lower latency.
Cut-through Ethernet switches can support an end-to-end network delay latency of about ten microseconds.
End-to-end application latencies below 3 microseconds require specialized hardware such as InfiniBand.
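The latency difference comes largely from how much of the frame the switch must buffer before it can begin transmitting. A back-of-the-envelope sketch of the serialization component only, assuming a hypothetical 1 Gbit/s port and a 1500-byte frame:

```python
# Rough forwarding-wait comparison; link speed and frame size are assumptions.
link_bps = 1_000_000_000   # 1 Gbit/s port (assumed)
frame_bytes = 1500         # full-size Ethernet payload frame (assumed)

def forwarding_wait(bytes_needed, bps=link_bps):
    """Time the switch must receive before it can start forwarding, in seconds."""
    return bytes_needed * 8 / bps

print(f"store-and-forward: {forwarding_wait(frame_bytes) * 1e6:.1f} us")  # whole frame
print(f"fragment-free    : {forwarding_wait(64) * 1e6:.2f} us")           # first 64 bytes
print(f"cut-through      : {forwarding_wait(14) * 1e6:.3f} us")           # ~Ethernet header only
```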
A cut-through switch will forward corrupted frames, whereas a store and forward switch will drop them. Fragment free is a variation on cut-through switching that partially addresses this problem by assuring that collision fragments are not forwarded. Fragment free will hold the frame until the first 64 bytes are read from the source to detect a collision before forwarding. This is only useful if there is a chance of a collision on the source port.
The theory here is that frames that are damaged by collisions are often shorter than the minimum valid Ethernet frame size of 64 bytes. With a fragment-free buffer the fir
|
https://en.wikipedia.org/wiki/97.5th%20percentile%20point
|
In probability and statistics, the 97.5th percentile point of the standard normal distribution is a number commonly used for statistical calculations. The approximate value of this number is 1.96, meaning that 95% of the area under a normal curve lies within approximately 1.96 standard deviations of the mean. Because of the central limit theorem, this number is used in the construction of approximate 95% confidence intervals. Its ubiquity is due to the arbitrary but common convention of using confidence intervals with 95% probability in science and frequentist statistics, though other probabilities (90%, 99%, etc.) are sometimes used. This convention seems particularly common in medical statistics, but is also common in other areas of application, such as earth sciences, social sciences and business research.
There is no single accepted name for this number; it is also commonly referred to as the "standard normal deviate", "normal score" or "Z score" for the 97.5 percentile point, the .975 point, or just its approximate value, 1.96.
If X has a standard normal distribution, i.e. X ~ N(0,1), then P(X > 1.96) ≈ 0.025 and P(X < 1.96) ≈ 0.975,
and as the normal distribution is symmetric, P(−1.96 < X < 1.96) ≈ 0.95.
One notation for this number is z.975. From the probability density function of the standard normal distribution, the exact value of z.975 is determined by (1/√(2π)) ∫_{−∞}^{z.975} e^(−x²/2) dx = 0.975.
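The value can be reproduced with Python's standard library, for example, and agrees with the more precise figures quoted in the History section below.

```python
from statistics import NormalDist

# 97.5th percentile point of the standard normal distribution,
# computed with the standard library (no external packages needed).
nd = NormalDist(mu=0.0, sigma=1.0)
z = nd.inv_cdf(0.975)
print(z)                          # 1.959963984540054

# Sanity check: about 95% of the mass lies within ±z of the mean.
print(nd.cdf(z) - nd.cdf(-z))     # ≈ 0.95
```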
History
The use of this number in applied statistics can be traced to the influence of Ronald Fisher's classic textbook, Statistical Methods for Research Workers, first published in 1925:
In Table 1 of the same work, he gave the more precise value 1.959964.
In 1970, the value truncated to 20 decimal places was calculated to be
1.95996 39845 40054 23552...
The commonly used approximate value of 1.96 is therefore accurate to better than one part in 50,000, which is more than adequate for applied work.
Some people even use the value of 2 in the place of 1.96, reporting a 95.4% confidence interval as a 95% confidence interval. This is not recommended but is occasionally seen.
|
https://en.wikipedia.org/wiki/Golden%20angle
|
In geometry, the golden angle is the smaller of the two angles created by sectioning the circumference of a circle according to the golden ratio; that is, into two arcs such that the ratio of the length of the smaller arc to the length of the larger arc is the same as the ratio of the length of the larger arc to the full circumference of the circle.
Algebraically, let a+b be the circumference of a circle, divided into a longer arc of length a and a smaller arc of length b such that b / a = a / (a + b).
The golden angle is then the angle subtended by the smaller arc of length b. It measures approximately 137.5077640500378546463487 ...° or in radians 2.39996322972865332 ... .
The name comes from the golden angle's connection to the golden ratio φ; the exact value of the golden angle is 360°(2 − φ) = 360°/φ² or, equivalently, 180°(3 − √5), where the equivalences follow from well-known algebraic properties of the golden ratio.
As its sine and cosine are transcendental numbers, the golden angle cannot be constructed using a straightedge and compass.
Derivation
The golden ratio is equal to φ = a/b given the conditions above.
Let ƒ be the fraction of the circumference subtended by the golden angle, or equivalently, the golden angle divided by the angular measurement of the circle.
The golden angle subtends the smaller arc of length b, so ƒ = b/(a + b). But since (a + b)/a = a/b = φ,
it follows that ƒ = 1/φ².
This is equivalent to saying that φ² golden angles can fit in a circle.
The fraction of a circle occupied by the golden angle is therefore ƒ = 1/φ² ≈ 0.381966.
The golden angle g can therefore be numerically approximated in degrees as g ≈ 360° × 0.381966 ≈ 137.508°,
or in radians as g ≈ 2π × 0.381966 ≈ 2.39996.
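These values can be checked directly from the golden ratio, for example:

```python
import math

# Compute the golden angle from the golden ratio.
phi = (1 + math.sqrt(5)) / 2      # golden ratio
f = 1 / phi**2                    # fraction of the circle in the smaller arc

print(360 * f)                    # 137.50776405003785... degrees
print(2 * math.pi * f)            # 2.399963229728653... radians
```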
Golden angle in nature
The golden angle plays a significant role in the theory of phyllotaxis; for example, the golden angle is the angle separating the florets on a sunflower. Analysis of the pattern shows that it is highly sensitive to the angle separating the individual primordia, with the Fibonacci angle giving the parastichy with optimal packing density.
Mathematical modelling of a plausible physical mechanism for floret development has shown the pattern arising spontaneousl
|
https://en.wikipedia.org/wiki/I2P
|
The Invisible Internet Project (I2P) is an anonymous network layer (implemented as a mix network) that allows for censorship-resistant, peer-to-peer communication. Anonymous connections are achieved by encrypting the user's traffic (by using end-to-end encryption), and sending it through a volunteer-run network of roughly 55,000 computers distributed around the world. Given the high number of possible paths the traffic can transit, a third party watching a full connection is unlikely. The software that implements this layer is called an "I2P router", and a computer running I2P is called an "I2P node". I2P is free and open sourced, and is published under multiple licenses.
Technical design
I2P has been beta software since it started in 2003 as a fork of Freenet. The software's developers emphasize that bugs are likely to occur in the beta version and that peer review has been insufficient to date. However, they believe the code is now reasonably stable and well-developed, and more exposure can help the development of I2P.
The network is strictly message-based, like IP, but a library is available to allow reliable streaming communication on top of it (similar to Non-blocking IO-based TCP, although from version 0.6, a new Secure Semi-reliable UDP transport is used). All communication is end-to-end encrypted (in total, four layers of encryption are used when sending a message) through garlic routing, and even the end points ("destinations") are cryptographic identifiers (essentially a pair of public keys), so that neither senders nor recipients of messages need to reveal their IP address to the other side or to third-party observers.
Although many developers had been a part of the Invisible IRC Project (IIP) and Freenet communities, significant differences exist between their designs and concepts. IIP was an anonymous centralized IRC server. Freenet is a censorship-resistant distributed data store. I2P is an anonymous peer-to-peer distributed communicatio
|
https://en.wikipedia.org/wiki/TCP%20Gender%20Changer
|
TCP Gender Changer is a method in computer networking for making an internal TCP/IP based network server accessible beyond its protective firewall.
Mechanism
It consists of two nodes: one resides on the internal local area network, where it can access the desired server, and the other node runs outside of the local area network, where the client can access it. These nodes are respectively called CC (Connect-Connect) and LL (Listen-Listen).
The nodes are so named because the Connect-Connect node initiates two connections: one to the Listen-Listen node and one to the actual server. The Listen-Listen node, by contrast, passively listens on two TCP/IP ports: one to receive a connection from CC and the other for an incoming connection from the client.
The CC node, which runs inside the network, establishes a control connection to LL and waits for LL's signal to open a connection to the internal server. Upon receiving a client connection, LL signals the CC node to connect to the server; once done, CC lets LL know the result, and if successful LL keeps the client connection, so the client and server can communicate while CC and LL relay the data back and forth.
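At the core of both nodes is simply bidirectional byte relaying between two established TCP connections. The sketch below shows only that hypothetical relay building block; it is not the actual TCP Gender Changer code, and it omits the LL-to-CC control signalling and the error handling a real implementation needs.

```python
import socket
import threading

def relay(src: socket.socket, dst: socket.socket) -> None:
    """Copy bytes from src to dst until src closes its side of the connection."""
    while True:
        data = src.recv(4096)
        if not data:
            break
        dst.sendall(data)
    dst.close()   # a production relay would handle half-close and errors more carefully

def bridge(client_sock: socket.socket, server_sock: socket.socket) -> None:
    """Relay traffic in both directions between an accepted client and the server."""
    threading.Thread(target=relay, args=(client_sock, server_sock), daemon=True).start()
    relay(server_sock, client_sock)
```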
Use cases
One of the cases where it can be very useful is to connect to a desktop machine behind a firewall running VNC, which would make the desktop remotely accessible over the network and beyond the firewall. Another useful scenario would be to create a VPN using PPP over SSH, or even simply using SSH to connect to an internal Unix based server.
See also
Firewall (computing)
LAN
Network Security
VPN
VNC
|
https://en.wikipedia.org/wiki/Angle-sensitive%20pixel
|
An angle-sensitive pixel (ASP) is a CMOS sensor with a sensitivity to incoming light that is sinusoidal in incident angle.
Principles of operation
ASPs are typically composed of two gratings (a diffraction grating and an analyzer grating) above a single photodiode. ASPs exploit the moiré effect and the Talbot effect to gain their sinusoidal light sensitivity. In a purely geometric (moiré) picture, treating light as particles, at certain incident angles the gaps in the diffraction and analyzer gratings line up, while at other incident angles light passed by the diffraction grating is blocked by the analyzer grating. The amount of light reaching the photodiode is therefore proportional to a sinusoidal function of incident angle, as the two gratings come in and out of phase with each other with shifting incident angle. The wave nature of light becomes important at small scales such as those in ASPs, meaning a pure-moiré model of ASP function is insufficient. However, at half-integer multiples of the Talbot depth, the periodicity of the diffraction grating is recapitulated, and the moiré effect is rescued. By building ASPs where the vertical separation between the gratings is approximately equal to a half-integer multiple of the Talbot depth, the sinusoidal sensitivity to incident angle is observed.
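As a rough illustration of what "sinusoidal in incident angle" means, the toy model below evaluates a response of the assumed form I(theta) = I0 * (1 + m * cos(b * theta + phase)). The modulation depth m, angular frequency b and phase are illustrative placeholders, not values taken from any particular device or from this text.

```python
# Toy model of a sinusoidal angular response; the parameterization and all constants
# are assumptions used only for illustration.
import numpy as np

def asp_response(theta_deg, i0=1.0, m=0.8, b=12.0, phase=0.0):
    """Relative photodiode signal versus incident angle theta (degrees)."""
    theta = np.radians(theta_deg)
    return i0 * (1.0 + m * np.cos(b * theta + phase))

angles = np.linspace(-30, 30, 7)
print(np.round(asp_response(angles), 3))
```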
Applications
ASPs can be used in miniature imaging devices. They do not require any focusing elements to achieve sinusoidal incident angle sensitivity, meaning that they can be deployed without a lens to image the near field, or the far field using a Fourier-complete planar Fourier capture array. They can also be used in conjunction with a lens, in which case they perform a depth-sensitive, physics-based wavelet transform of the far-away scene, allowing single-lens 3D photography similar to that of the Lytro camera.
See also
Planar Fourier capture array
|
https://en.wikipedia.org/wiki/FAO%20GM%20Foods%20Platform
|
The FAO GM Foods Platform is a web platform where participating countries can share information on their assessments of the safety of genetically modified (recombinant-DNA) foods and feeds based on the Codex Alimentarius. It also allows for sharing of assessments of low-level GMO contamination (LLP, low-level presence).
The platform was set up by the Food and Agriculture Organization of the United Nations, and was launched at the FAO headquarters in Rome on 1 July 2013. The information uploaded to the platform is freely available to be read.
|
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20classical%20mechanics
|
This is a list of mathematical topics in classical mechanics, by Wikipedia page. See also list of variational topics, correspondence principle.
Newtonian physics
Newton's laws of motion
Inertia,
Kinematics, rigid body
Momentum, kinetic energy
Parallelogram of force
Circular motion
Rotational speed
Angular speed
Angular momentum
torque
angular acceleration
moment of inertia
parallel axes rule
perpendicular axes rule
stretch rule
centripetal force, centrifugal force, Reactive centrifugal force
Laplace–Runge–Lenz vector
Euler's disk
elastic potential energy
Mechanical equilibrium
D'Alembert's principle
Degrees of freedom (physics and chemistry)
Frame of reference
Inertial frame of reference
Galilean transformation
Principle of relativity
Conservation laws
Conservation of momentum
Conservation of linear momentum
Conservation of angular momentum
Conservation of energy
Potential energy
Conservative force
Conservation of mass
Law of universal gravitation
Projectile motion
Kepler's laws of planetary motion
Escape velocity
Potential well
Weightlessness
Lagrangian point
N-body problem
Kolmogorov-Arnold-Moser theorem
Virial theorem
Gravitational binding energy
Speed of gravity
Newtonian limit
Hill sphere
Roche lobe
Roche limit
Hamiltonian mechanics
Phase space
Symplectic manifold
Liouville's theorem (Hamiltonian)
Poisson bracket
Poisson algebra
Poisson manifold
Antibracket algebra
Hamiltonian constraint
Moment map
Contact geometry
Analysis of flows
Nambu mechanics
Lagrangian mechanics
Action (physics)
Lagrangian
Euler–Lagrange equations
Noether's theorem
Classical mechanics
|
https://en.wikipedia.org/wiki/Mass%20versus%20weight
|
In common usage, the mass of an object is often referred to as its weight, though these are in fact different concepts and quantities. Nevertheless, one object will always weigh more than another with less mass if both are subject to the same gravity (i.e. the same gravitational field strength).
In scientific contexts, mass is the amount of "matter" in an object (though "matter" may be difficult to define), but weight is the force exerted on an object's matter by gravity. At the Earth's surface, an object whose mass is exactly one kilogram weighs approximately 9.81 newtons, the product of its mass and the gravitational field strength there. The object's weight is less on Mars, where gravity is weaker; more on Saturn, where gravity is stronger; and very small in space, far from significant sources of gravity, but it always has the same mass.
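As a quick check of the figure quoted above, weight is simply the product of mass and the local gravitational field strength:

```latex
W = m\,g = (1\ \mathrm{kg}) \times (9.81\ \mathrm{m/s^2}) \approx 9.81\ \mathrm{N}
```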
Material objects at the surface of the Earth have weight despite such sometimes being difficult to measure. An object floating freely on water, for example, does not appear to have weight since it is buoyed by the water. But its weight can be measured if it is added to water in a container which is entirely supported by and weighed on a scale. Thus, the "weightless object" floating in water actually transfers its weight to the bottom of the container (where the pressure increases). Similarly, a balloon has mass but may appear to have no weight or even negative weight, due to buoyancy in air. However the weight of the balloon and the gas inside it has merely been transferred to a large area of the Earth's surface, making the weight difficult to measure. The weight of a flying airplane is similarly distributed to the ground, but does not disappear. If the airplane is in level flight, the same weight-force is distributed to the surface of the Earth as when the plane was on the runway, but spread over a larger area.
A better scientific definition of mass is its description as being a measure of inertia, which is the tendency of an
|
https://en.wikipedia.org/wiki/Logic%20built-in%20self-test
|
Logic built-in self-test (or LBIST) is a form of built-in self-test (BIST) in which hardware and/or software is built into integrated circuits allowing them to test their own operation, as opposed to reliance on external automated test equipment.
Advantages
The main advantage of LBIST is the ability to test internal circuits having no direct connections to external pins, and thus unreachable by external automated test equipment. Another advantage is the ability to trigger the LBIST of an integrated circuit while running a built-in self test or power-on self test of the finished product.
Disadvantages
LBIST that requires additional circuitry (or read-only memory) increases the cost of the integrated circuit. LBIST that only requires temporary changes to programmable logic or rewritable memory avoids this extra cost, but requires more time to first program in the BIST and then to remove it and program in the final configuration. Another disadvantage of LBIST is the possibility that the on-chip testing hardware itself can fail; external automated test equipment tests the integrated circuit with known-good test circuitry.
Related technologies
Other, related technologies are MBIST (a BIST optimized for testing internal memory) and ABIST (either a BIST optimized for testing arrays or a BIST that is optimized for testing analog circuitry). The two uses may be distinguished by considering whether the integrated circuit being tested has an internal array or analog functions.
See also
Built-in self-test
Built-in test equipment
Design for test
Power-on self-test
External links
Built-in Self Test (BIST)
Hardware Diagnostic Self Tests
BIST for Analog Weenies
Integrated circuits
|
https://en.wikipedia.org/wiki/Paratype
|
In zoology and botany, a paratype is a specimen of an organism that helps define what the scientific name of a species and other taxon actually represents, but it is not the holotype (and in botany is also neither an isotype nor a syntype). Often there is more than one paratype. Paratypes are usually held in museum research collections.
The exact meaning of the term paratype when it is used in zoology is not the same as the meaning when it is used in botany. In both cases however, this term is used in conjunction with holotype.
Zoology
In zoological nomenclature, a paratype is officially defined as "Each specimen of a type series other than the holotype."
In turn, this definition relies on the definition of a "type series". A type series is the material (specimens of organisms) that was cited in the original publication of the new species or subspecies, and was not excluded from being type material by the author (this exclusion can be implicit, e.g., if an author mentions "paratypes" and then subsequently mentions "other material examined", the latter are not included in the type series), nor referred to as a variant, or only dubiously included in the taxon (e.g., a statement such as "I have before me a specimen which agrees in most respects with the remainder of the type series, though it may yet prove to be distinct" would exclude this specimen from the type series).
Thus, in a type series of five specimens, if one is the holotype, the other four will be paratypes.
A paratype may originate from a different locality than the holotype. A paratype cannot become a lectotype, though it is eligible (and often desirable) for designation as a neotype. The International Code of Zoological Nomenclature (ICZN) has not always required a type specimen, but any species or subspecies newly described after the end of 1999 must have a designated holotype or syntypes.
A related term is allotype, a term that indicates a specimen that exemplifies the opposite sex of the holoty
|
https://en.wikipedia.org/wiki/A%20calorie%20is%20a%20calorie
|
"Calorie In Calorie Out" is a tautology used to convey the thermodynamic concept that a "calorie" is a sufficient way to describe the energy content of food.
History
In 1878, German nutritionist Max Rubner crafted what he called the "isodynamic law". The law claims that the basis of nutrition is the exchange of energy, and was applied to the study of obesity in the early 1900s by Carl von Noorden. Von Noorden had two theories about what caused people to develop obesity. The first simply avowed Rubner's notion that "a calorie is a calorie". The second theorized that obesity development depends on how the body partitions calories for either use or storage. Since 1925, the calorie has been defined in terms of the joule; the current definition of the calorie was formally adopted in 1948.
The related concept of "calorie in, calorie out" might be contested, despite having become a commonly held belief in nutritionism.
Calorie counting
Calorie amounts found on food labels are based on the Atwater system. The accuracy of the system is disputed, although no real alternatives have been proposed. For example, a 2012 study by a USDA scientist concluded that the measured energy content of a sample of almonds was 32% lower than the estimated Atwater value. The driving mechanism behind caloric intake is absorption, which occurs largely in the small intestine and distributes nutrients to the circulatory and lymphatic capillaries by means of osmosis, diffusion and active transport. Fat, in particular, is emulsified by bile produced by the liver and stored in the gallbladder, from which it is released to the small intestine via the bile duct. A relatively lesser amount of absorption, composed primarily of water, occurs in the large intestine.
A kilocalorie is the equivalent of 1000 calories or one dietary Calorie, which contains 4184 joules of energy. The human body is a highly complex biochemical system that undergoes processes which regulate energy balance. The metabolic pathways for protein are
|
https://en.wikipedia.org/wiki/Energy%20efficient%20transformer
|
In a typical power distribution grid, electric transformer power loss typically contributes to about 40-50% of the total transmission and distribution loss. Energy efficient transformers are therefore an important means to reduce transmission and distribution loss. With the improvement of electrical steel (silicon steel) properties, the losses of a transformer in 2010 can be half that of a similar transformer in the 1970s. With new magnetic materials, it is possible to achieve even higher efficiency. The amorphous metal transformer is a modern example.
|
https://en.wikipedia.org/wiki/No%20free%20lunch%20theorem
|
In mathematical folklore, the "no free lunch" (NFL) theorem (sometimes pluralized) of David Wolpert and William Macready alludes to the saying "there is no such thing as a free lunch", that is, there are no easy shortcuts to success. It appeared in the 1997 paper "No Free Lunch Theorems for Optimization". Wolpert had previously derived no free lunch theorems for machine learning (statistical inference).
In 2005, Wolpert and Macready themselves indicated that the first theorem in their paper "state[s] that any two optimization algorithms are equivalent when their performance is averaged across all possible problems".
The "no free lunch" (NFL) theorem is an easily stated and easily understood consequence of theorems Wolpert and Macready actually prove. It is weaker than the proven theorems, and thus does not encapsulate them. Various investigators have extended the work of Wolpert and Macready substantively. In terms of how the NFL theorem is used in the context of the research area, the no free lunch in search and optimization is a field that is dedicated for purposes of mathematically analyzing data for statistical identity, particularly search and optimization.
While some scholars argue that NFL conveys important insight, others argue that NFL is of little relevance to machine learning research.
Example
Posit a toy universe that exists for exactly two days and on each day contains exactly one object, a square or a triangle. The universe has exactly four possible histories:
(square, triangle): the universe contains a square on day 1, and a triangle on day 2
(square, square)
(triangle, triangle)
(triangle, square)
Any prediction strategy that succeeds for history #2, by predicting a square on day 2 if there is a square on day 1, will fail on history #1, and vice versa. If all histories are equally likely, then any prediction strategy will score the same, with the same accuracy rate of 0.5.
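A brute-force check of this toy example is sketched below: it enumerates the four histories and every deterministic prediction rule (a guess for day 2 as a function of the day-1 observation), confirming that each rule averages an accuracy of 0.5 over the equally likely histories.

```python
# Enumerate the toy universe above and score every deterministic prediction strategy.
from itertools import product

shapes = ["square", "triangle"]
histories = list(product(shapes, repeat=2))          # the four possible histories

# a strategy maps the day-1 observation to a day-2 prediction
strategies = [dict(zip(shapes, guesses)) for guesses in product(shapes, repeat=2)]

for strat in strategies:
    hits = sum(strat[day1] == day2 for day1, day2 in histories)
    print(strat, "accuracy =", hits / len(histories))
# every strategy prints accuracy = 0.5
```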
Origin
Wolpert and Macready give two NFL theorems that are closely related to the
|
https://en.wikipedia.org/wiki/Outline%20of%20computer%20engineering
|
The following outline is provided as an overview of and topical guide to computer engineering:
Computer engineering – discipline that integrates several fields of electrical engineering and computer science required to develop computer hardware and software. Computer engineers usually have training in electronic engineering (or electrical engineering), software design, and hardware-software integration instead of only software engineering or electronic engineering. Computer engineers are involved in many hardware and software aspects of computing, from the design of individual microcontrollers, microprocessors, personal computers, and supercomputers, to circuit design. This field of engineering not only focuses on how computer systems themselves work, but also how they integrate into the larger picture.
Main articles on computer engineering
Computer
Computer architecture
Computer hardware
Computer software
Computer science
Engineering
Electrical engineering
Software engineering
History of computer engineering
General
Time line of computing 2400 BC - 1949 - 1950-1979 - 1980-1989 - 1990-1999 - 2000-2009
History of computing hardware up to third generation (1960s)
History of computing hardware from 1960s to current
History of computer hardware in Eastern Bloc countries
History of personal computers
History of laptops
History of software engineering
History of compiler writing
History of the Internet
History of the World Wide Web
History of video games
History of the graphical user interface
Timeline of computing
Timeline of operating systems
Timeline of programming languages
Timeline of artificial intelligence
Timeline of cryptography
Timeline of algorithms
Timeline of quantum computing
Product specific
Timeline of DOS operating systems
Classic Mac OS
History of macOS
History of Microsoft Windows
Timeline of the Apple II series
Timeline of Apple products
Timeline of file sharing
Timeline of OpenBSD
Hardware
Digital
|
https://en.wikipedia.org/wiki/List%20of%20Martin%20Gardner%20Mathematical%20Games%20columns
|
Over a period of 24 years (January 1957 – December 1980), Martin Gardner wrote 288 consecutive monthly "Mathematical Games" columns for Scientific American magazine. Over the following years, through June 1986, Gardner wrote 9 more columns, bringing his total to 297; during this period other authors wrote most of the columns. In 1981, Gardner's column alternated with a new column by Douglas Hofstadter called "Metamagical Themas" (an anagram of "Mathematical Games"). The table below lists Gardner's columns.
Twelve of Gardner's columns provided the cover art for that month's magazine, indicated by "[cover]" in the table with a hyperlink to the cover.
Other articles by Gardner
Gardner wrote 5 other articles for Scientific American. His flexagon article in December 1956 was in all but name the first article in the series of Mathematical Games columns and led directly to the series which began the following month. These five articles are listed below.
|
https://en.wikipedia.org/wiki/Negative%20feedback
|
Negative feedback (or balancing feedback) occurs when some function of the output of a system, process, or mechanism is fed back in a manner that tends to reduce the fluctuations in the output, whether caused by changes in the input or by other disturbances. A classic example of negative feedback is a heating system thermostat — when the temperature gets high enough, the heater is turned OFF. When the temperature gets too cold, the heat is turned back ON. In each case the "feedback" generated by the thermostat "negates" the trend.
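A minimal simulation of such a bang-bang thermostat is sketched below; the setpoints and heating and cooling rates are made-up values, chosen only to show the temperature settling into a narrow band around the setpoint rather than drifting away.

```python
# Toy bang-bang thermostat: the controller output opposes the temperature trend.
def simulate(steps=50, temp=15.0, low=19.0, high=21.0):
    heater_on = False
    for _ in range(steps):
        if temp >= high:
            heater_on = False      # too warm: feedback switches the heater OFF
        elif temp <= low:
            heater_on = True       # too cold: feedback switches the heater ON
        temp += 0.5 if heater_on else -0.3   # crude heating / heat-loss model
        print(f"temp={temp:5.2f}  heater={'ON' if heater_on else 'OFF'}")

simulate()
```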
The opposite tendency — called positive feedback — is when a trend is positively reinforced, creating amplification, such as the squealing "feedback" loop that can occur when a mic is brought too close to a speaker which is amplifying the very sounds the mic is picking up, or the runaway heating and ultimate meltdown of a nuclear reactor.
Whereas positive feedback tends to lead to instability via exponential growth, oscillation or chaotic behavior, negative feedback generally promotes stability. Negative feedback tends to promote a settling to equilibrium, and reduces the effects of perturbations. Negative feedback loops in which just the right amount of correction is applied with optimum timing, can be very stable, accurate, and responsive.
Negative feedback is widely used in mechanical and electronic engineering, and also within living organisms, and can be seen in many other fields from chemistry and economics to physical systems such as the climate. General negative feedback systems are studied in control systems engineering.
Negative feedback loops also play an integral role in maintaining the atmospheric balance in various systems on Earth. One such feedback system is the interaction between solar radiation, cloud cover, and planet temperature.
General description
In many physical and biological systems, qualitatively different influences can oppose each other. For example, in biochemistry, one set of chemicals drives the syst
|
https://en.wikipedia.org/wiki/Noiselet
|
Noiselets are functions which give the worst-case behavior for the Haar wavelet packet analysis. In other words, noiselets are totally incompressible by the Haar wavelet packet analysis. Like the canonical and Fourier bases, which have an incoherent property, noiselets are perfectly incoherent with the Haar basis. In addition, they have a fast algorithm for implementation, making them useful as a sampling basis for signals that are sparse in the Haar domain.
Definition
The mother basis function is defined as:
The family of noiselets is constructed recursively as follows:
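The recursive definition itself did not survive formatting in this copy. The sketch below implements the recursion from Coifman, Geshwind and Meyer's original noiselet construction, in which f1 is the indicator of [0, 1) and each step mixes rescaled copies with the complex weights 1 plus or minus i; it is assumed, not confirmed by this text, to match the definition intended above.

```python
# Sketch of the standard noiselet recursion (Coifman-Geshwind-Meyer); assumed to
# correspond to the stripped definition above.
def noiselet(n, x):
    """Evaluate the n-th noiselet f_n at a point x in [0, 1)."""
    if x < 0.0 or x >= 1.0:
        return 0.0 + 0.0j
    if n == 1:
        return 1.0 + 0.0j                       # f_1 is the indicator of [0, 1)
    half, rem = divmod(n, 2)
    if rem == 0:   # f_{2k}(x)   = (1-i) f_k(2x) + (1+i) f_k(2x-1)
        return (1 - 1j) * noiselet(half, 2 * x) + (1 + 1j) * noiselet(half, 2 * x - 1)
    else:          # f_{2k+1}(x) = (1+i) f_k(2x) + (1-i) f_k(2x-1)
        return (1 + 1j) * noiselet(half, 2 * x) + (1 - 1j) * noiselet(half, 2 * x - 1)

# sample f_4 at the midpoints of eight equal subintervals of [0, 1)
print([noiselet(4, (k + 0.5) / 8) for k in range(8)])
```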
Property of fn
is an orthogonal basis for , where is the space of all possible approximations at the resolution of functions in .
For each ,
Matrix construction of noiselets
Noiselets can be extended and discretized. The extended function is defined as follows:
Using the extended noiselet, we can generate the noiselet matrix, where n is a power of two:
Here ⊗ denotes the Kronecker product.
Suppose , we can find that is equal .
The elements of the noiselet matrices take discrete values from one of two four-element sets:
2D noiselet transform
2D noiselet transforms are obtained through the Kronecker product of 1D noiselet transform:
Applications
Noiselets have some properties that make them ideal for applications:
The noiselet matrix can be derived in .
Noiselets completely spread out the spectrum and are perfectly incoherent with Haar wavelets.
The noiselet matrix is conjugate symmetric and unitary.
The complementarity of wavelets and noiselets means that noiselets can be used in compressed sensing to reconstruct a signal (such as an image) which has a compact representation in wavelets. MRI data can be acquired in noiselet domain, and, subsequently, images can be reconstructed from undersampled data using compressive-sensing reconstruction.
|
https://en.wikipedia.org/wiki/The%20Swallow%27s%20Tail
|
The Swallow's Tail — Series of Catastrophes () was Salvador Dalí's last painting. It was completed in May 1983, as the final part of a series based on the mathematical catastrophe theory of René Thom.
Thom suggested that in four-dimensional phenomena, there are seven possible equilibrium surfaces, and therefore seven possible discontinuities, or "elementary catastrophes": fold, cusp, swallowtail, butterfly, hyperbolic umbilic, elliptic umbilic, and parabolic umbilic. "The shape of Dalí's Swallow's Tail is taken directly from Thom's four-dimensional graph of the same title, combined with a second catastrophe graph, the s-curve that Thom dubbed, 'the cusp'. Thom's model is presented alongside the elegant curves of a cello and the instrument's f-holes, which, especially as they lack the small pointed side-cuts of a traditional f-hole, equally connote the mathematical symbol for an integral in calculus: ∫."
In his speech Gala, Velázquez and the Golden Fleece, presented upon his 1979 induction into the prestigious Académie des Beaux-Arts of the Institut de France, Dalí described Thom's theory of catastrophes as "the most beautiful aesthetic theory in the world". He also recollected his first and only meeting with René Thom, at which Thom purportedly told Dalí that he was studying tectonic plates; this provoked Dalí to question Thom about the railway station at Perpignan, France (near the Spanish border), which the artist had declared in the 1960s to be the center of the universe.
Thom reportedly replied, "I can assure you that Spain pivoted precisely — not in the area of — but exactly there where the Railway Station in Perpignan stands today". Dalí was immediately enraptured by Thom's statement, influencing his painting Topological Abduction of Europe — Homage to René Thom, the lower left corner of which features an equation closely linked to the "swallow's tail": an illustration of the graph, and the term queue d'aronde. The seismic fracture that transver
|
https://en.wikipedia.org/wiki/Degeneracy%20%28mathematics%29
|
In mathematics, a degenerate case is a limiting case of a class of objects which appears to be qualitatively different from (and usually simpler than) the rest of the class, and the term degeneracy is the condition of being a degenerate case.
The definitions of many classes of composite or structured objects often implicitly include inequalities. For example, the angles and the side lengths of a triangle are supposed to be positive. The limiting cases, where one or several of these inequalities become equalities, are degeneracies. In the case of triangles, one has a degenerate triangle if at least one side length or angle is zero. Equivalently, it becomes a "line segment".
Often, the degenerate cases are the exceptional cases where changes to the usual dimension or the cardinality of the object (or of some part of it) occur. For example, a triangle is an object of dimension two, and a degenerate triangle is contained in a line, which makes its dimension one. This is similar to the case of a circle, whose dimension shrinks from two to zero as it degenerates into a point. As another example, the solution set of a system of equations that depends on parameters generally has a fixed cardinality and dimension, but cardinality and/or dimension may be different for some exceptional values, called degenerate cases. In such a degenerate case, the solution set is said to be degenerate.
For some classes of composite objects, the degenerate cases depend on the properties that are specifically studied. In particular, the class of objects may often be defined or characterized by systems of equations. In most scenarios, a given class of objects may be defined by several different systems of equations, and these different systems of equations may lead to different degenerate cases, while characterizing the same non-degenerate cases. This may be the reason for which there is no general definition of degeneracy, despite the fact that the concept is widely used and defined (if need
|
https://en.wikipedia.org/wiki/Toy%20theorem
|
In mathematics, a toy theorem is a simplified instance (special case) of a more general theorem, which can be useful in providing a handy representation of the general theorem, or a framework for proving the general theorem. One way of obtaining a toy theorem is by introducing some simplifying assumptions in a theorem.
In many cases, a toy theorem is used to illustrate the claim of a theorem, while in other cases, studying the proofs of a toy theorem (derived from a non-trivial theorem) can provide insight that would be hard to obtain otherwise.
Toy theorems can also have educational value. For example, after presenting a theorem (with, say, a highly non-trivial proof), one can sometimes give some assurance that the theorem really holds by proving a toy version of the theorem.
Examples
A toy theorem of the Brouwer fixed-point theorem is obtained by restricting the dimension to one. In this case, the Brouwer fixed-point theorem follows almost immediately from the intermediate value theorem.
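A minimal sketch of that one-dimensional argument, assuming the standard statement of the theorem on the unit interval:

```latex
\text{Let } f\colon [0,1] \to [0,1] \text{ be continuous, and set } g(x) = f(x) - x. \\
\text{Then } g(0) = f(0) \ge 0 \text{ and } g(1) = f(1) - 1 \le 0, \\
\text{so by the intermediate value theorem some } c \in [0,1] \text{ satisfies } g(c) = 0,
\text{ i.e. } f(c) = c.
```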
Another example of toy theorem is Rolle's theorem, which is obtained from the mean value theorem by equating the function values at the endpoints.
See also
Corollary
Fundamental theorem
Lemma (mathematics)
Toy model
|
https://en.wikipedia.org/wiki/TMS6100
|
The Texas Instruments TMS6100 is a 1 or 4-bit serial mask (factory)-programmed read-only memory IC. It is a companion chip to the TMS5100, CD2802, TMS5110, (rarely) TMS5200, and (rarely) TMS5220 speech synthesizer ICs, and was mask-programmed with LPC data required for a specific product. It holds 128Kib (16KiB) of data, and is mask-programmed with a start address for said data on a 16KiB boundary. It is also mask-programmable whether the /CE line needs to be high or low to activate, and also what the two (or four) 'internal' CE bits need to be set to activate, effectively making the total addressable area 18 bits. Finally, it is mask-programmable whether the bits are read out 1-bit serially or 4 at a time.
TMS6125
The TMS6125 is a smaller, 32Kib (4KiB) version of effectively the same chip, with some minor changes to the 'address load' command format to reflect its smaller size.
Texas Instruments calls both of these serial roms (TMS6100 and TMS6125) "VSM"s (Voice Synthesis Memory) on their datasheets and literature, and they will be referred to as such for the rest of this article.
Both VSMs use 'local addressing', meaning the chip keeps track of its own address pointer once loaded. Hence every bit in the chip can be sequentially read out, even though internally the chip stores data in 8-bit bytes.
(For the following section, CE stands for "Chip Enable" and is used as a way to enable one specific VSM)
Commands
The VSM supports 4 basic commands, based on two input pins called 'M0' and 'M1':
no operation/idle: this command tells the chip to 'do nothing' or 'continue doing what was being done before'.
load address: this command parallel-loads 4 bits from the data bus. To fully load an address, this command must be executed 5 times in sequence, for a load of a 20-bit block (LSB-first 14-bit address, 4 CE bits, and two unused bits, effectively 18 address bits) into the internal address pointer. On the TMS6125 the command must be executed 4 times instead, and o
|
https://en.wikipedia.org/wiki/Lillian%20Rosanoff%20Lieber
|
Lillian Rosanoff Lieber (July 26, 1886 in Nicolaiev, Russian Empire – July 11, 1986 in Queens, New York) was a Russian-American mathematician and popular author. She often teamed up with her illustrator husband, Hugh Gray Lieber, to produce works.
Life and career
Early life and education
Lieber was one of four children of Abraham H. and Clara (Bercinskaya) Rosanoff. Her brothers were Denver publisher Joseph Rosenberg, psychiatrist Aaron Rosanoff, and chemist Martin André Rosanoff. Aaron and Martin changed their names to sound more Russian, less Jewish. Lieber moved to the US with her family in 1891. She received her A.B. from Barnard College in 1908, her M.A. from Columbia University in 1911, and her Ph.D. (in chemistry) from Clark University in 1914, under Martin's direction; at Clark, Solomon Lefschetz was a classmate. She married Hugh Gray Lieber on October 27, 1926.
Career
After teaching at Hunter College from 1908 to 1910, and in the New York City high school system (1910–1912, 1914–1915), she became a Research Fellow at Bryn Mawr College from 1915 to 1917; she then went on to teach at Wells College from 1917 to 1918 as Instructor of Physics (also acting as head of the physics department), and at the Connecticut College for Women (1918 to 1920). She joined the mathematics department at Long Island University (LIU) in Brooklyn, New York (LIU Brooklyn) in 1934, became department chair in 1945 (taking over from Hugh when he became Professor, and Chair, of Art at LIU ), and was made a full professor in 1947, until her retirement in 1954; she was appointed director of LIU's Galois Institute of Mathematics (later the Galois Institute of Mathematics and Art) (named for Évariste Galois) in 1934. Over her career she published some 17 books, which were written in a unique, free-verse style and illustrated with whimsical line drawings by her husband. Her highly accessible writings were praised by no less than Albert Einstein, Cassius Jackson Keyser, Eric Temple Bell,
|
https://en.wikipedia.org/wiki/Shadow%20stack
|
In computer security, a shadow stack is a mechanism for protecting a procedure's stored return address, such as from a stack buffer overflow. The shadow stack itself is a second, separate stack that "shadows" the program call stack. In the function prologue, a function stores its return address to both the call stack and the shadow stack. In the function epilogue, a function loads the return address from both the call stack and the shadow stack, and then compares them. If the two records of the return address differ, then an attack is detected; the typical course of action is simply to terminate the program or alert system administrators about a possible intrusion attempt. A shadow stack is similar to stack canaries in that both mechanisms aim to maintain the control-flow integrity of the protected program by detecting attacks that tamper with the stored return address during an exploitation attempt.
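The toy model below mirrors the prologue/epilogue behaviour just described, using plain integers in place of machine-level return addresses and an explicit, simulated corruption; it illustrates only the checking logic, not any real compiler's or CPU's implementation.

```python
# Toy shadow-stack check: each "call" records the return address twice, each
# "return" compares the two copies and flags a mismatch.
call_stack = []     # normal stack: return address plus other frame data
shadow_stack = []   # shadow stack: return addresses only

def on_call(return_address, locals_):
    call_stack.append({"ret": return_address, "locals": locals_})   # prologue
    shadow_stack.append(return_address)

def on_return():
    frame = call_stack.pop()                                         # epilogue
    expected = shadow_stack.pop()
    if frame["ret"] != expected:
        raise RuntimeError("return address mismatch: possible stack overflow attack")
    return frame["ret"]

on_call(0x401000, locals_={"buf": "AAAA"})
call_stack[-1]["ret"] = 0xdeadbeef    # simulate an overflow overwriting the return address
try:
    on_return()
except RuntimeError as err:
    print("detected:", err)           # the shadow copy disagrees, so the attack is caught
```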
Shadow stacks can be implemented by recompiling programs with modified prologues and epilogues, by dynamic binary rewriting techniques to achieve the same effect, or with hardware support. Unlike the call stack, which also stores local program variables, passed arguments, spilled registers and other data, the shadow stack typically just stores a second copy of a function's return address.
Shadow stacks provide more protection for return addresses than stack canaries, which rely on the secrecy of the canary value and are vulnerable to non-contiguous write attacks. Shadow stacks themselves can be protected with guard pages or with information hiding, such that an attacker would also need to locate the shadow stack to overwrite a return address stored there.
Like stack canaries, shadow stacks do not protect stack data other than return addresses, and so offer incomplete protection against security vulnerabilities that result from memory safety errors.
In 2016, Intel announced upcoming hardware support for shadow stacks with their Control-flow Enforcement Tech
|
https://en.wikipedia.org/wiki/Fault%20coverage
|
Fault coverage refers to the percentage of some type of fault that can be detected during the test of any engineered system. High fault coverage is particularly valuable during manufacturing test, and techniques such as Design For Test (DFT) and automatic test pattern generation are used to increase it.
In electronics for example, stuck-at fault coverage is measured by sticking each pin of the hardware model at logic '0' and logic '1', respectively, and running the test vectors. If at least one of the outputs differs from what is to be expected, the fault is said to be detected. Conceptually, the total number of simulation runs is twice the number of pins (since each pin is stuck in one of two ways, and both faults should be detected). However, there are many optimizations that can reduce the needed computation. In particular, often many non-interacting faults can be simulated in one run, and each simulation can be terminated as soon as a fault is detected.
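The sketch below runs this kind of stuck-at simulation on a deliberately tiny circuit, a single 2-input AND gate, with an exhaustive vector set. The pin names and fault list are illustrative only; real tools operate on gate-level netlists and apply the optimizations mentioned above.

```python
# Toy stuck-at fault simulation: force each pin to 0 and to 1 in turn, re-run the
# test vectors, and count a fault as detected if any output differs from the good circuit.
from itertools import product

def circuit(a, b, stuck=None):
    """2-input AND gate; stuck is None or a (pin_name, value) pair forcing that pin."""
    pins = {"a": a, "b": b}
    if stuck:
        pins[stuck[0]] = stuck[1]
    y = pins["a"] & pins["b"]
    if stuck and stuck[0] == "y":
        y = stuck[1]
    return y

test_vectors = [(0, 0), (0, 1), (1, 0), (1, 1)]   # exhaustive here; usually a subset
faults = [(pin, v) for pin in ("a", "b", "y") for v in (0, 1)]

detected = sum(
    any(circuit(a, b) != circuit(a, b, stuck=fault) for a, b in test_vectors)
    for fault in faults
)
print(f"stuck-at fault coverage: {detected}/{len(faults)} = {detected/len(faults):.0%}")
```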
A fault coverage test passes when at least a specified percentage of all possible faults can be detected. If it does not pass, at least three options are possible. First, the designer can augment or otherwise improve the vector set, perhaps by using a more effective automatic test pattern generation tool. Second, the circuit may be re-defined for better fault detectability (improved controllability and observability). Third, the designer may simply accept the lower coverage.
Test coverage (computing)
The term test coverage used in the context of programming / software engineering, refers to measuring how much a software program has been exercised by tests. Coverage is a means of determining the rigour with which the question underlying the test has been answered. There are many kinds of test coverage:
code coverage
feature coverage,
scenario coverage,
screen item coverage,
requirements coverage,
model coverage.
Each of these coverage types assumes that some kind of baseline exists which defin
|
https://en.wikipedia.org/wiki/Glossary%20of%20Principia%20Mathematica
|
This is a list of the notation used in Alfred North Whitehead and Bertrand Russell's Principia Mathematica (1910–1913).
The second (but not the first) edition of Volume I has a list of notation used at the end.
Glossary
This is a glossary of some of the technical terms in Principia Mathematica that are no longer widely used or whose meaning has changed.
Symbols introduced in Principia Mathematica, Volume I
Symbols introduced in Principia Mathematica, Volume II
Symbols introduced in Principia Mathematica, Volume III
See also
Glossary of set theory
Notes
|
https://en.wikipedia.org/wiki/Orbifold%20notation
|
In geometry, orbifold notation (or orbifold signature) is a system, invented by the mathematician William Thurston and promoted by John Conway, for representing types of symmetry groups in two-dimensional spaces of constant curvature. The advantage of the notation is that it describes these groups in a way which indicates many of the groups' properties: in particular, it follows William Thurston in describing the orbifold obtained by taking the quotient of Euclidean space by the group under consideration.
Groups representable in this notation include the point groups on the sphere (), the frieze groups and wallpaper groups of the Euclidean plane (), and their analogues on the hyperbolic plane ().
Definition of the notation
The following types of Euclidean transformation can occur in a group described by orbifold notation:
reflection through a line (or plane)
translation by a vector
rotation of finite order around a point
infinite rotation around a line in 3-space
glide-reflection, i.e. reflection followed by translation.
All translations which occur are assumed to form a discrete subgroup of the group symmetries being described.
Each group is denoted in orbifold notation by a finite string made up from the following symbols:
positive integers
the infinity symbol, ∞
the asterisk, *
the symbol o (a solid circle in older documents), which is called a wonder and also a handle because it topologically represents a torus (1-handle) closed surface. Patterns repeat by two translations.
the symbol × (an open circle in older documents), which is called a miracle and represents a topological crosscap where a pattern repeats as a mirror image without crossing a mirror line.
A string written in boldface represents a group of symmetries of Euclidean 3-space. A string not written in boldface represents a group of symmetries of the Euclidean plane, which is assumed to contain two independent translations.
Each symbol corresponds to a distinct transformation:
an
|
https://en.wikipedia.org/wiki/Resilient%20control%20systems
|
In our modern society, computerized or digital control systems have been used to reliably automate many of the industrial operations that we take for granted, from the power plant to the automobiles we drive. However, the complexity of these systems and how the designers integrate them, the roles and responsibilities of the humans that interact with the systems, and the cyber security of these highly networked systems have led to a new paradigm in research philosophy for next-generation control systems. Resilient Control Systems consider all of these elements and those disciplines that contribute to a more effective design, such as cognitive psychology, computer science, and control engineering to develop interdisciplinary solutions. These solutions consider things such as how to tailor the control system operating displays to best enable the user to make an accurate and reproducible response, how to design in cybersecurity protections such that the system defends itself from attack by changing its behaviors, and how to better integrate widely distributed computer control systems to prevent cascading failures that result in disruptions to critical industrial operations. In the context of cyber-physical systems, resilient control systems are an aspect that focuses on the unique interdependencies of a control system, as compared to information technology computer systems and networks, due to its importance in operating our critical industrial operations.
Introduction
Originally intended to provide a more efficient mechanism for controlling industrial operations, the development of digital control systems allowed for flexibility in integrating distributed sensors and operating logic while maintaining a centralized interface for human monitoring and interaction. This ease of readily adding sensors and logic through software, which was once done with relays and isolated analog instruments, has led to wide acceptance and integration of these systems in all industries. Ho
|
https://en.wikipedia.org/wiki/Photonic%20integrated%20circuit
|
A photonic integrated circuit (PIC) or integrated optical circuit is a microchip containing two or more photonic components which form a functioning circuit. This technology detects, generates, transports, and processes light. Photonic integrated circuits utilize photons (or particles of light) as opposed to electrons that are utilized by electronic integrated circuits. The major difference between the two is that a photonic integrated circuit provides functions for information signals imposed on optical wavelengths typically in the visible spectrum or near infrared (850–1650 nm).
The most commercially utilized material platform for photonic integrated circuits is indium phosphide (InP), which allows for the integration of various optically active and passive functions on the same chip. Initial examples of photonic integrated circuits were simple 2-section distributed Bragg reflector (DBR) lasers, consisting of two independently controlled device sections – a gain section and a DBR mirror section. Consequently, all modern monolithic tunable lasers, widely tunable lasers, externally modulated lasers and transmitters, integrated receivers, etc. are examples of photonic integrated circuits. As of 2012, devices integrate hundreds of functions onto a single chip.
Pioneering work in this arena was performed at Bell Laboratories. The most notable academic centers of excellence of photonic integrated circuits in InP are the University of California at Santa Barbara, USA, the Eindhoven University of Technology and the University of Twente in the Netherlands.
A 2005 development showed that silicon can, even though it is an indirect bandgap material, still be used to generate laser light via the Raman nonlinearity. Such lasers are not electrically driven but optically driven and therefore still necessitate a further optical pump laser source.
History
Photonics is the science behind the detection, generation, and manipulation of photons. According to quantum mechanics and t
|
https://en.wikipedia.org/wiki/List%20of%20integrated%20circuit%20manufacturers
|
The following is an incomplete list of notable integrated circuit (i.e. microchip) manufacturers. Some are in business, others are defunct and some are Fabless.
0–9
3dfx Interactive (acquired by Nvidia in 2002)
A
Achronix
Actions Semiconductor
Adapteva
Agere Systems (now part of LSI Logic formerly part of Lucent, which was formerly part of AT&T)
Agilent Technologies (formerly part of Hewlett-Packard, spun off in 1999)
Airgo Networks (acquired by Qualcomm in 2006)
Alcatel
Alchip
Altera
Allegro Microsystems
Allwinner Technology
Alphamosaic (acquired by Broadcom in 2004)
AMD (Advanced Micro Devices; founded by ex-Fairchild employees)
Analog Devices
Apple Inc.
Applied Materials
Applied Micro Circuits Corporation (AMCC)
ARM
Asahi Kasei Microdevices (AKM)
AT&T
Atari
Atheros (acquired by Qualcomm in 2011)
ATI Technologies (Array Technologies Incorporated; acquired parts of Tseng Labs in 1997; in 2006, became a wholly owned subsidiary of AMD)
Atmel (co-founded by ex-Intel employee, now part of Microchip Technology)
Amkor Technology
ams AG (formerly known as austriamicrosystems AG and frequently still known as AMS (Austria Mikro Systeme))
B
Bourns, Inc.
Brite Semiconductor
Broadcom Corporation (acquired by Avago Technologies in 2016)
Broadcom Inc. (formerly Avago Technologies)
BroadLight
Burr-Brown Corporation (Acquired by Texas Instruments in 2000)
C
C-Cube Microsystems
Calxeda (re-emerged with Silver Lining Systems in 2014)
Cavium
CEITEC
Chips and Technologies (acquired by Intel in 1997)
CISC Semiconductor
Cirrus Logic
Corsair
Club 3D (Formerly Colour Power)
Commodore Semiconductor Group (formerly MOS Technologies)
Conexant (formerly Rockwell Semiconductor, acquired by Synaptics in 2017)
Crocus Technology
CSR plc (formerly Cambridge Silicon Radio)
Cypress Semiconductor (acquired by Infineon Technologies in 2019; now operating as a subsidiary of Infineon)
D
D-Wave Systems
Dallas Semiconductor (acquired by Maxim Integrated in 2001)
Dynex Semiconductor
|
https://en.wikipedia.org/wiki/Anatomy
|
Anatomy () is the branch of biology concerned with the study of the structure of organisms and their parts. Anatomy is a branch of natural science that deals with the structural organization of living things. It is an old science, having its beginnings in prehistoric times. Anatomy is inherently tied to developmental biology, embryology, comparative anatomy, evolutionary biology, and phylogeny, as these are the processes by which anatomy is generated, both over immediate and long-term timescales. Anatomy and physiology, which study the structure and function of organisms and their parts respectively, make a natural pair of related disciplines, and are often studied together. Human anatomy is one of the essential basic sciences that are applied in medicine.
Anatomy is a complex and dynamic field that is constantly evolving as new discoveries are made. In recent years, there has been a significant increase in the use of advanced imaging techniques, such as MRI and CT scans, which allow for more detailed and accurate visualizations of the body's structures.
The discipline of anatomy is divided into macroscopic and microscopic parts. Macroscopic anatomy, or gross anatomy, is the examination of an animal's body parts using unaided eyesight. Gross anatomy also includes the branch of superficial anatomy. Microscopic anatomy involves the use of optical instruments in the study of the tissues of various structures, known as histology, and also in the study of cells.
The history of anatomy is characterized by a progressive understanding of the functions of the organs and structures of the human body. Methods have also improved dramatically, advancing from the examination of animals by dissection of carcasses and cadavers (corpses) to 20th-century medical imaging techniques, including X-ray, ultrasound, and magnetic resonance imaging.
Etymology and definition
Derived from the Greek anatomē "dissection" (from anatémnō "I cut up, cut open" from ἀνά aná "up", and τέμνω té
|
https://en.wikipedia.org/wiki/Switch%20virtual%20interface
|
A switch virtual interface (SVI) represents a logical layer-3 interface on a switch.
VLANs divide broadcast domains in a LAN environment. Whenever hosts in one VLAN need to communicate with hosts in another VLAN, the traffic must be routed between them. This is known as inter-VLAN routing. On layer-3 switches it is accomplished by the creation of layer-3 interfaces (SVIs). Inter VLAN routing, in other words routing between VLANs, can be achieved using SVIs.
An SVI, or VLAN interface, is a virtual routed interface that connects a VLAN on the device to the Layer 3 router engine on the same device. Only one VLAN interface can be associated with a VLAN, and you need to configure a VLAN interface for a VLAN only when you want to route between VLANs or to provide IP host connectivity to the device through a virtual routing and forwarding (VRF) instance that is not the management VRF. When you enable VLAN interface creation, a switch creates a VLAN interface for the default VLAN (VLAN 1) to permit remote switch administration.
SVIs are generally configured for a VLAN for the following reasons:
Allow traffic to be routed between VLANs by providing a default gateway for the VLAN.
Provide fallback bridging (if required for non-routable protocols).
Provide Layer 3 IP connectivity to the switch.
Support bridging configurations and routing protocol.
Access Layer - 'Routed Access' Configuration (in lieu of Spanning Tree)
SVIs advantages include:
Much faster than router-on-a-stick, because everything is hardware-switched and routed.
No need for external links from the switch to the router for routing.
Not limited to one link. Layer 2 EtherChannels can be used between the switches to get more bandwidth.
Latency is much lower, because traffic does not need to leave the switch.
An SVI can also be known as a Routed VLAN Interface (RVI) by some vendors.
|
https://en.wikipedia.org/wiki/Grading%20%28tumors%29
|
In pathology, grading is a measure of the cell appearance in tumors and other neoplasms. Some pathology grading systems apply only to malignant neoplasms (cancer); others apply also to benign neoplasms. The neoplastic grading is a measure of cell anaplasia (reversion of differentiation) in the sampled tumor and is based on the resemblance of the tumor to the tissue of origin. Grading in cancer is distinguished from staging, which is a measure of the extent to which the cancer has spread.
Pathology grading systems classify the microscopic cell appearance abnormality and deviations in their rate of growth with the goal of predicting developments at tissue level (see also the 4 major histological changes in dysplasia).
Cancer is a disorder of cell life cycle alteration that leads (non-trivially) to excessive cell proliferation rates, typically longer cell lifespans and poor differentiation. The grade score (numerical: G1 up to G4) increases with the lack of cellular differentiation - it reflects how much the tumor cells differ from the cells of the normal tissue they have originated from (see 'Categories' below). Tumors may be graded on four-tier, three-tier, or two-tier scales, depending on the institution and the tumor type.
The histologic tumor grade score along with the metastatic (whole-body-level cancer-spread) staging are used to evaluate each specific cancer patient, develop their individual treatment strategy and to predict their prognosis. A cancer that is very poorly differentiated is called anaplastic.
Categories
Grading systems are also different for many common types of cancer, though following a similar pattern with grades being increasingly malignant over a range of 1 to 4. If no specific system is used, the following general grades are most commonly used, and recommended by the American Joint Commission on Cancer and other bodies:
GX Grade cannot be assessed
G1 Well differentiated (Low grade)
G2 Mode
|
https://en.wikipedia.org/wiki/Charge%20controller
|
A charge controller, charge regulator or battery regulator limits the rate at which electric current is added to or drawn from electric batteries, protecting against electrical overload and overcharging, and may protect against overvoltage. This prevents conditions that reduce battery performance or lifespan and may pose a safety risk. It may also prevent completely draining ("deep discharging") a battery, or perform controlled discharges, depending on the battery technology, to protect battery life.
The terms "charge controller" or "charge regulator" may refer to either a stand-alone device, or to control circuitry integrated within a battery pack, battery-powered device, and/or battery charger.
Stand-alone charge controllers
Charge controllers are sold to consumers as separate devices, often in conjunction with solar or wind power generators, for uses such as RV, boat, and off-the-grid home battery storage systems.
In solar applications, charge controllers may also be called solar regulators or solar charge controllers. Some charge controllers / solar regulators have additional features, such as a low voltage disconnect (LVD), a separate circuit which powers down the load when the batteries become overly discharged (some battery chemistries are such that over-discharge can ruin the battery).
A series charge controller or series regulator disables further current flow into batteries when they are full. A shunt charge controller or shunt regulator diverts excess electricity to an auxiliary or "shunt" load, such as an electric water heater, when batteries are full.
Simple charge controllers stop charging a battery when the battery voltage exceeds a set high level, and re-enable charging when the voltage drops back below that level. Pulse-width modulation (PWM) and maximum power point tracker (MPPT) technologies are more electronically sophisticated, adjusting charging rates depending on the battery's state of charge, to allow charging closer to the battery's maximum capacity.
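A minimal sketch of this simple on/off behaviour follows. A small hysteresis band is added (an assumption, to avoid rapid toggling around a single threshold), and the voltage thresholds are illustrative values for a nominal 12 V lead-acid battery, not vendor specifications.

```python
# Toy hysteresis charge controller: disable charging above a high threshold,
# re-enable once the voltage falls back below a lower threshold.
def update_charging(battery_voltage, charging, v_high=14.4, v_reenable=13.2):
    if battery_voltage >= v_high:
        return False          # battery full enough: disable charging
    if battery_voltage <= v_reenable:
        return True           # voltage dropped back: re-enable charging
    return charging           # in between: keep the previous state (hysteresis)

state = True
for v in (12.8, 13.9, 14.5, 14.1, 13.4, 13.1):
    state = update_charging(v, state)
    print(f"{v:.1f} V -> charging={state}")
```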
A charge con
|
https://en.wikipedia.org/wiki/Galactic%20algorithm
|
A galactic algorithm is one that outperforms other algorithms for problems that are sufficiently large, but where "sufficiently large" is so big that the algorithm is never used in practice. Galactic algorithms were so named by Richard Lipton and Ken Regan, because they will never be used on any data sets on Earth.
Possible use cases
Even if they are never used in practice, galactic algorithms may still contribute to computer science:
An algorithm, even if impractical, may show new techniques that may eventually be used to create practical algorithms.
Available computational power may catch up to the crossover point, so that a previously impractical algorithm becomes practical.
An impractical algorithm can still demonstrate that conjectured bounds can be achieved, or that proposed bounds are wrong, and hence advance the theory of algorithms. As Lipton states: Similarly, a hypothetical large but polynomial algorithm for the Boolean satisfiability problem, although unusable in practice, would settle the P versus NP problem, considered the most important open problem in computer science and one of the Millennium Prize Problems.
Examples
Integer multiplication
An example of a galactic algorithm is the fastest known way to multiply two numbers, which is based on a 1729-dimensional Fourier transform. It needs O(n log n) bit operations, but as the constants hidden by the big O notation are large, it is never used in practice. However, it also shows why galactic algorithms may still be useful. The authors state: "we are hopeful that with further refinements, the algorithm might become practical for numbers with merely billions or trillions of digits."
Matrix multiplication
The first improvement over brute-force matrix multiplication (which needs O(n^3) multiplications) was the Strassen algorithm: a recursive algorithm that needs only O(n^2.807) multiplications. This algorithm is not galactic and is used in practice. Further extensions of this, using sophisticated group theory, are the Coppers
|
https://en.wikipedia.org/wiki/Electronic%20hardware
|
Electronic hardware consists of interconnected electronic components which perform analog or logic operations on received and locally stored information to produce as output or store resulting new information or to provide control for output actuator mechanisms.
Electronic hardware can range from individual chips/circuits to distributed information processing systems. Well designed electronic hardware is composed of hierarchies of functional modules which inter-communicate via precisely defined interfaces.
Hardware logic is primarily a differentiation of the data processing circuitry from other more generalized circuitry. For example nearly all computers include a power supply which consists of circuitry not involved in data processing but rather powering the data processing circuits. Similarly, a computer may output information to a computer monitor or audio amplifier which is also not involved in the computational processes.
See also
Digital electronics
|
https://en.wikipedia.org/wiki/Circuit%20underutilization
|
Circuit underutilization, also chip underutilization, programmable circuit underutilization, gate underutilization or logic block underutilization, refers to the physically incomplete utilization of semiconductor-grade silicon on a standardized, mass-produced programmable chip, such as a gate array type ASIC, an FPGA, or a CPLD.
Gate array
In the example of a gate array, which may come in sizes of 5,000 or 10,000 gates, a design which utilizes even 5,001 gates would be required to use a 10,000 gate chip. This inefficiency results in underutilization of the silicon.
FPGA
Because field-programmable gate arrays are partitioned into logic blocks, simple designs that underutilize a single block suffer from gate underutilization, as do designs that overflow onto multiple blocks, such as designs that use wide gates. Additionally, the very generic architecture of FPGAs leads to high inefficiency; multiplexers occupy silicon real estate for programmable selection, along with an abundance of flip-flops to reduce setup and hold times, even if the design does not require them, resulting in roughly 40 times lower density than standard-cell ASICs.
See also
Circuit minimization
Don't-care condition
|
https://en.wikipedia.org/wiki/Mass%20action%20law%20%28electronics%29
|
In electronics and semiconductor physics, the law of mass action relates the concentrations of free electrons and electron holes under thermal equilibrium. It states that, under thermal equilibrium, the product of the free electron concentration and the free hole concentration is equal to the square of the intrinsic carrier concentration $n_i$, a constant at a given temperature. The intrinsic carrier concentration is a function of temperature.
The equation for the mass action law for semiconductors is $n p = n_i^2$.
Carrier concentrations
In semiconductors, free electrons and holes are the carriers that provide conduction. For cases where the number of carriers are much less than the number of band states, the carrier concentrations can be approximated by using Boltzmann statistics, giving the results below.
Electron concentration
The free-electron concentration n can be approximated by n = Nc exp(−(Ec − EF) / (kB T)),
where
Ec is the energy of the conduction band,
EF is the energy of the Fermi level,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
Nc is the effective density of states at the conduction band edge, given by Nc = 2 (2π m*e kB T / h²)^(3/2), with m*e being the electron effective mass and h being Planck's constant.
Hole concentration
The free-hole concentration p is given by the similar formula p = Nv exp(−(EF − Ev) / (kB T)),
where
EF is the energy of the Fermi level,
Ev is the energy of the valence band,
kB is the Boltzmann constant,
T is the absolute temperature in kelvins,
Nv is the effective density of states at the valence band edge, given by Nv = 2 (2π m*h kB T / h²)^(3/2), with m*h being the hole effective mass and h Planck's constant.
Mass action law
Using the carrier concentration equations given above, the mass action law can be stated as n·p = Nc Nv exp(−Eg / (kB T)) = ni²,
where Eg is the band gap energy given by Eg = Ec − Ev. The above equation holds even for lightly doped extrinsic semiconductors, as the product n·p is independent of the doping concentration.
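As a rough numerical illustration of these relations, the Python sketch below evaluates the mass action law for silicon at room temperature. The effective densities of states, band gap, and donor concentration are rounded textbook-style values assumed only for this example.

from math import exp, sqrt

k_B = 8.617e-5   # Boltzmann constant in eV/K
T   = 300.0      # absolute temperature in K
E_g = 1.12       # silicon band gap in eV (assumed rounded value)
N_c = 2.8e19     # effective density of states, conduction band edge (cm^-3)
N_v = 1.0e19     # effective density of states, valence band edge (cm^-3)

# Mass action law: n * p = Nc * Nv * exp(-Eg / (kB T)) = ni^2
ni_sq = N_c * N_v * exp(-E_g / (k_B * T))
ni = sqrt(ni_sq)
print(f"ni ~ {ni:.2e} cm^-3")   # same order of magnitude as the textbook ~1e10 cm^-3

# For an n-type sample doped with Nd = 1e16 donors/cm^3, n ~ Nd, and the hole
# concentration follows from the mass action law regardless of the doping level.
N_d = 1e16
n = N_d
p = ni_sq / n
print(f"n = {n:.2e}, p = {p:.2e}, n*p = {n * p:.2e} = ni^2 = {ni_sq:.2e}")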
See also
Law of mass action
|
https://en.wikipedia.org/wiki/CoRR%20hypothesis
|
The CoRR hypothesis states that the location of genetic information in cytoplasmic organelles permits regulation of its expression by the reduction-oxidation ("redox") state of its gene products.
CoRR is short for "co-location for redox regulation", itself a shortened form of "co-location (of gene and gene product) for (evolutionary) continuity of redox regulation of gene expression".
CoRR was put forward explicitly in 1993 in a paper in the Journal of Theoretical Biology with the title "Control of gene expression by redox potential and the requirement for chloroplast and mitochondrial genomes". The central concept had been outlined in a review of 1992. The term CoRR was introduced in 2003 in a paper in Philosophical Transactions of the Royal Society entitled "The function of genomes in bioenergetic organelles".
The problem
Chloroplasts and mitochondria
Chloroplasts and mitochondria are energy-converting organelles in the cytoplasm of eukaryotic cells. Chloroplasts in plant cells perform photosynthesis: the capture and conversion of the energy of sunlight. Mitochondria in both plant and animal cells perform respiration: the release of this stored energy when work is done. In addition to these key reactions of bioenergetics, chloroplasts and mitochondria each contain specialized and discrete genetic systems. These genetic systems enable chloroplasts and mitochondria to make some of their own proteins.
Both the genetic and energy-converting systems of chloroplasts and mitochondria are descended, with little modification, from those of the free-living bacteria that these organelles once were. The existence of these cytoplasmic genomes is consistent with, and counts as evidence for, the endosymbiont hypothesis. Most genes for proteins of chloroplasts and mitochondria are, however, now located on chromosomes in the nuclei of eukaryotic cells. There they code for protein precursors that are made in the cytosol for subsequent import into the organelles.
Why
|
https://en.wikipedia.org/wiki/Link%20flap
|
Link flap is a condition where a communications link alternates between up and down states. Link flap can be caused by end-station reboots, power-saving features, incorrect duplex configuration, or marginal connections and signal-integrity issues on the link.
|
https://en.wikipedia.org/wiki/Mail-sink
|
Smtp-sink is a utility program in the Postfix mail software package that implements a "black hole" function. It listens on the named host (or address) and port. It accepts Simple Mail Transfer Protocol (SMTP) messages from the network and discards them. The purpose is to support measurement of client performance. It is not SMTP protocol compliant.
Connections can be accepted on IPv4 or IPv6 endpoints, or on UNIX-domain sockets. IPv4 and IPv6 are the default. This program is the complement of the smtp-source(1) program.
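For illustration, the following is a minimal Python sketch of the same black-hole idea: an SMTP listener that acknowledges commands and discards any message data. It is not the Postfix smtp-sink program and, like it, is not a compliant SMTP implementation; the host, port, and greeting banner are assumptions chosen for the example.

import socket

HOST, PORT = "127.0.0.1", 2525   # assumed listening endpoint for the example

def handle(conn):
    def reply(line):
        conn.sendall((line + "\r\n").encode())
    reply("220 sink.example.com ESMTP discard service")   # assumed banner
    buf = b""
    in_data = False
    while True:
        chunk = conn.recv(4096)
        if not chunk:
            return
        buf += chunk
        while b"\r\n" in buf:
            line, buf = buf.split(b"\r\n", 1)
            if in_data:
                if line == b".":           # end of message body
                    in_data = False
                    reply("250 OK: message discarded")
                continue                   # discard message content
            cmd = line.decode(errors="replace").upper()
            if cmd.startswith(("HELO", "EHLO")):
                reply("250 sink.example.com")
            elif cmd.startswith(("MAIL FROM", "RCPT TO")):
                reply("250 OK")
            elif cmd.startswith("DATA"):
                in_data = True
                reply("354 End data with <CR><LF>.<CR><LF>")
            elif cmd.startswith("QUIT"):
                reply("221 Bye")
                return
            else:
                reply("250 OK")            # accept anything else, like a sink

with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as srv:
    srv.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    srv.bind((HOST, PORT))
    srv.listen()
    print(f"discarding mail on {HOST}:{PORT}")
    while True:
        conn, _ = srv.accept()
        with conn:
            handle(conn)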
See also
Tarpit (networking)
SMTP
|