source | text |
---|---|
https://en.wikipedia.org/wiki/Nitro%20Zeus
|
Nitro Zeus is the project name for a well-funded, comprehensive cyber attack plan created as a mitigation strategy after the Stuxnet malware campaign and its aftermath. Unlike Stuxnet, which was loaded onto a system after the design phase to affect its proper operation, Nitro Zeus's objectives are built into a system during the design phase, unbeknownst to the system users. This built-in capability allows a more assured and effective cyber attack against the system's users.
The information about its existence was raised during research and interviews carried out by Alex Gibney for his Zero Days documentary film. The proposed long term widespread infiltration of major Iranian systems would disrupt and degrade communications, power grid, and other vital systems as desired by the cyber attackers. This was to be achieved by electronic implants in Iranian computer networks. The project was seen as one pathway in alternatives to full-scale war.
See also
Kill Switch
Backdoor (computing)
Operation Olympic Games
|
https://en.wikipedia.org/wiki/Proof%20by%20infinite%20descent
|
In mathematics, a proof by infinite descent, also known as Fermat's method of descent, is a particular kind of proof by contradiction used to show that a statement cannot possibly hold for any number, by showing that if the statement were to hold for a number, then the same would be true for a smaller number, leading to an infinite descent and ultimately a contradiction. It is a method which relies on the well-ordering principle, and is often used to show that a given equation, such as a Diophantine equation, has no solutions.
Typically, one shows that if a solution to a problem existed, which in some sense was related to one or more natural numbers, it would necessarily imply that a second solution existed, which was related to one or more 'smaller' natural numbers. This in turn would imply a third solution related to smaller natural numbers, implying a fourth solution, therefore a fifth solution, and so on. However, there cannot be an infinity of ever-smaller natural numbers, and therefore by mathematical induction, the original premise—that any solution exists—is incorrect: its correctness produces a contradiction.
An alternative way to express this is to assume one or more solutions or examples exists, from which a smallest solution or example—a minimal counterexample—can then be inferred. Once there, one would try to prove that if a smallest solution exists, then it must imply the existence of a smaller solution (in some sense), which again proves that the existence of any solution would lead to a contradiction.
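As a standard illustration of the method (a textbook example, not taken from the article itself), the irrationality of the square root of two can be proved by infinite descent:

```latex
% Classical worked example: irrationality of $\sqrt{2}$ by infinite descent.
Suppose $\sqrt{2} = p/q$ for positive integers $p, q$, so that $p^2 = 2q^2$.
Then $p^2$ is even, hence $p$ is even; write $p = 2r$.
Substituting gives $4r^2 = 2q^2$, i.e.\ $q^2 = 2r^2$, so $(q, r)$ is another
solution of the same equation with $0 < q < p$.
Repeating the argument yields an infinite strictly decreasing sequence
$p > q > r > \cdots$ of positive integers, which is impossible.
Hence no such $p, q$ exist, and $\sqrt{2}$ is irrational.
```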
The earliest uses of the method of infinite descent appear in Euclid's Elements. A typical example is Proposition 31 of Book 7, in which Euclid proves that every composite integer is divided (in Euclid's terminology "measured") by some prime number.
The method was much later developed by Fermat, who coined the term and often used it for Diophantine equations. Two typical examples are showing the non-solvability of the Diophantine equation r² + s⁴ = t⁴ and prov
|
https://en.wikipedia.org/wiki/FPGA%20prototyping
|
Field-programmable gate array prototyping (FPGA prototyping), also referred to as FPGA-based prototyping, ASIC prototyping or system-on-chip (SoC) prototyping, is the method to prototype system-on-chip and application-specific integrated circuit designs on FPGAs for hardware verification and early software development.
Verification methods for hardware design as well as early software and firmware co-design have become mainstream. Prototyping SoC and ASIC designs with one or more FPGAs and electronic design automation (EDA) software has become a good method to do this.
Why prototyping is important
Running a SoC design on FPGA prototype is a reliable way to ensure that it is functionally correct. This is compared to designers only relying on software simulations to verify that their hardware design is sound. About a third of all current SoC designs are fault-free during first silicon pass, with nearly half of all re-spins caused by functional logic errors. A single prototyping platform can provide verification for hardware, firmware, and application software design functionality before the first silicon pass.
Time-to-market (TTM) is reduced by FPGA prototyping: in today's technology-driven society, new products are introduced rapidly, and failing to have a product ready at a given market window can cost a company a considerable amount of revenue. If a product is released too late for a market window, the product could be rendered useless, costing the company its investment in the product. After the design process, FPGAs are ready for production, while standard-cell ASICs take more than six months to reach production.
Development cost: Development cost of 90-nm ASIC/SoC design tape-out is around $20 million, with a mask set costing over $1 million alone. Development costs of 45-nm designs are expected to top $40 million. With increasing cost of mask sets, and the continuous decrease of IC size, minimizing the number of re-spins is vital to the deve
|
https://en.wikipedia.org/wiki/Annatto
|
Annatto is an orange-red condiment and food coloring derived from the seeds of the achiote tree (Bixa orellana), native to tropical parts of the Americas. It is often used to impart a yellow or orange color to foods, but sometimes also for its flavor and aroma. Its scent is described as "slightly peppery with a hint of nutmeg" and its flavor as "slightly nutty, sweet and peppery".
The color of annatto comes from various carotenoid pigments, mainly bixin and norbixin, found in the reddish waxy coating of the seeds. The condiment is typically prepared by grinding the seeds to a powder or paste. Similar effects can be obtained by extracting some of the color and flavor principles from the seeds with hot water, oil, or lard, which are then added to the food.
Annatto and its extracts are now widely used on an artisanal or industrial scale as a coloring agent in many processed food products, such as cheeses, dairy spreads, butter and margarine, custards, cakes and other baked goods, potatoes, snack foods, breakfast cereals, smoked fish, sausages, and more. In these uses, annatto is a natural alternative to synthetic food coloring compounds, but it has been linked to rare cases of food-related allergies. Annatto is of particular commercial value in the United States because the Food and Drug Administration considers colorants derived from it to be "exempt from certification".
History
The annatto tree B. orellana is believed to originate in tropical regions from Mexico to Brazil. It was probably not initially used as a food additive, but for other purposes such as ritual and decorative body painting (still an important tradition in many Brazilian native tribes, such as the Wari'), sunscreen, and insect repellent, and for medical purposes. It was used for Mexican manuscript painting in the 16th century.
Annatto has been traditionally used as both a coloring and flavoring agent in various cuisines from Latin America, the Caribbean, the Philippines, and other countries w
|
https://en.wikipedia.org/wiki/Eirpac
|
EIRPAC is Ireland's packet switched X.25 data network. It replaced Euronet in 1984. Eirpac uses the DNIC 2724. HEAnet first operated over 4.8 kbit/s X.25 Eirpac connections in 1985, and by 1991 most universities in Ireland used 64 kbit/s Eirpac VPN connections. Today Eirpac is owned and operated by Eircom, which no longer accepts new applications for Eirpac and makes no reference to it in the product offering on its website. Eircom began migrating existing customers to more capable forms of telecommunications in late April 2004.
In 2001 Eirpac had approximately 5,000 customers dialing in daily via switched virtual circuits although those numbers have been declining rapidly. Eirpac is still an important element for data transfer in Ireland with numerous banks (automatic teller machines), telecoms switches, pager systems and other networks that utilise permanent virtual circuits.
Connecting to Eirpac can be done using a simple AT-compatible modem. The dial-in number is 1511 plus the baud rate; for example, to connect at 28,800 bit/s one would dial ATDT 15112880. The user would then have to authenticate with their Eirpac NUI. The NUI (Network User Identification) consists of a name and password provided by Eir.
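For illustration only, a minimal Python sketch of this dial-in step using the pyserial library is shown below; the serial port name is an assumption, the NUI credentials are placeholders, and real access would of course require a valid Eirpac account.

```python
# Hypothetical sketch: dialling Eirpac over an AT-compatible modem with pyserial.
# The port name and any credentials are placeholders, not real values.
import serial

PORT = "/dev/ttyS0"                 # assumed serial port the modem is attached to
DIAL_STRING = b"ATDT15112880\r"     # 1511 + baud-rate digits, as described above

with serial.Serial(PORT, baudrate=28800, timeout=10) as modem:
    modem.write(b"ATZ\r")           # reset the modem
    print(modem.readline())         # expect b"OK"
    modem.write(DIAL_STRING)        # dial the Eirpac access number
    print(modem.readline())         # expect b"CONNECT ..." on success
    # After connecting, the user would authenticate with their Eirpac NUI
    # (name and password issued by Eir) before opening an X.25 call.
```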
Sources
External links
Official website
Computer networking
Internet in Ireland
|
https://en.wikipedia.org/wiki/List%20of%20Proton%20Synchrotron%20experiments
|
This is a list of past and current experiments at the CERN Proton Synchrotron (PS) facility since its commissioning in 1959. The PS was CERN's first synchrotron and the world's highest energy particle accelerator at the time. It served as the flagship of CERN until the 1980s when its main role became to provide injection beams to other machines such as the Super Proton Synchrotron.
The information is gathered from the INSPIRE-HEP database.
See also
Experiments
List of Super Proton Synchrotron experiments
List of Large Hadron Collider experiments
Facilities
CERN: European Organization for Nuclear Research
PS: Proton Synchrotron
SPS: Super Proton Synchrotron
ISOLDE: On-Line Isotope Mass Separator
ISR: Intersecting Storage Rings
LEP: Large Electron–Positron Collider
LHC: Large Hadron Collider
|
https://en.wikipedia.org/wiki/Welfare%20biology
|
Welfare biology is a proposed cross-disciplinary field of research to study the positive and negative well-being of sentient individuals in relation to their environment. Yew-Kwang Ng first advanced the field in 1995. Since then, its establishment has been advocated for by a number of writers, including philosophers, who have argued for the importance of creating the research field, particularly in relation to wild animal suffering. Some researchers have put forward examples of existing research that welfare biology could draw upon and suggested specific applications for the research's findings.
History
Welfare biology was first proposed by the welfare economist Yew-Kwang Ng, in his 1995 paper "Towards welfare biology: Evolutionary economics of animal consciousness and suffering". In the paper, Ng defines welfare biology as the "study of living things and their environment with respect to their welfare (defined as net happiness, or enjoyment minus suffering)." He also distinguishes between "affective" and "non-affective" sentients, affective sentients being individuals with the capacity for perceiving the external world and experiencing pleasure or pain, while non-affective sentients have the capacity for perception, with no corresponding experience; Ng argues that because the latter experience no pleasure or suffering, "[t]heir welfare is necessarily zero, just like nonsentients". He concludes, based on his modelling of evolutionary dynamics, that suffering dominates enjoyment in nature.
Matthew Clarke and Ng, in 2006, used Ng's welfare biology framework to analyse the costs, benefits and welfare implications of the culling of kangaroos—classified as affective sentients—in Puckapunyal, Australia. They concluded that while their discussion "may give some support to the culling of kangaroos or other animals in certain circumstances, a more preventive measure may be superior to the resort to culling". In the same year, Thomas Eichner and Rüdiger Pethig analyzed Ng's
|
https://en.wikipedia.org/wiki/Source%20transformation
|
Source transformation is the process of simplifying a circuit solution, especially with mixed sources, by transforming voltage sources into current sources, and vice versa, using Thévenin's theorem and Norton's theorem respectively.
Process
Performing a source transformation consists of using Ohm's law to take an existing voltage source in series with a resistance, and replacing it with a current source in parallel with the same resistance, or vice versa. The transformed sources are considered identical and can be substituted for one another in a circuit.
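As a minimal illustration (not taken from the article), the arithmetic of a source transformation for the resistive case can be written out in a few lines of Python:

```python
# Minimal sketch of a source transformation using Ohm's law.
# A voltage source V in series with resistance R is equivalent to
# a current source I = V / R in parallel with the same R, and vice versa.

def voltage_to_current_source(v_volts: float, r_ohms: float) -> tuple[float, float]:
    """Return (I, R) of the equivalent current (Norton) source."""
    return v_volts / r_ohms, r_ohms

def current_to_voltage_source(i_amps: float, r_ohms: float) -> tuple[float, float]:
    """Return (V, R) of the equivalent voltage (Thevenin) source."""
    return i_amps * r_ohms, r_ohms

# Example: a 12 V source in series with 4 ohms <-> a 3 A source in parallel with 4 ohms.
print(voltage_to_current_source(12.0, 4.0))   # (3.0, 4.0)
print(current_to_voltage_source(3.0, 4.0))    # (12.0, 4.0)
```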
Source transformations are not limited to resistive circuits. They can be performed on a circuit involving capacitors and inductors as well, by expressing circuit elements as impedances and sources in the frequency domain. In general, the concept of source transformation is an application of Thévenin's theorem to a current source, or Norton's theorem to a voltage source. However, this means that source transformation is bound by the same conditions as Thévenin's theorem and Norton's theorem; namely that the load behaves linearly, and does not contain dependent voltage or current sources.
Source transformations are used to exploit the equivalence of a real current source and a real voltage source, such as a battery. Application of Thévenin's theorem and Norton's theorem gives the quantities associated with the equivalence. Specifically, given a real current source, which is an ideal current source in parallel with an impedance Z, applying a source transformation gives an equivalent real voltage source, which is an ideal voltage source in series with the impedance. The impedance retains its value and the new voltage source has a value equal to the ideal current source's value times the impedance, according to Ohm's law, V = IZ. In the same way, an ideal voltage source in series with an impedance can be transformed into an ideal current source in parallel with the same impedance, where the new ideal current source has
|
https://en.wikipedia.org/wiki/Computer%20architecture
|
In computer engineering, computer architecture is a description of the structure of a computer system made from component parts. It can sometimes be a high-level description that ignores details of the implementation. At a more detailed level, the description may include the instruction set architecture design, microarchitecture design, logic design, and implementation.
History
The first documented computer architecture was in the correspondence between Charles Babbage and Ada Lovelace, describing the analytical engine. While building the computer Z1 in 1936, Konrad Zuse described in two patent applications for his future projects that machine instructions could be stored in the same storage used for data, i.e., the stored-program concept. Two other early and important examples are:
John von Neumann's 1945 paper, First Draft of a Report on the EDVAC, which described an organization of logical elements; and
Alan Turing's more detailed Proposed Electronic Calculator for the Automatic Computing Engine, also 1945 and which cited John von Neumann's paper.
The term "architecture" in computer literature can be traced to the work of Lyle R. Johnson and Frederick P. Brooks, Jr., members of the Machine Organization department in IBM's main research center in 1959. Johnson had the opportunity to write a proprietary research communication about the Stretch, an IBM-developed supercomputer for Los Alamos National Laboratory (at the time known as Los Alamos Scientific Laboratory). To describe the level of detail for discussing the luxuriously embellished computer, he noted that his description of formats, instruction types, hardware parameters, and speed enhancements were at the level of "system architecture", a term that seemed more useful than "machine organization".
Subsequently, Brooks, a Stretch designer, opened Chapter 2 of a book called Planning a Computer System: Project Stretch by stating, "Computer architecture, like other architecture, is the art of determining the
|
https://en.wikipedia.org/wiki/Programmable%20matter
|
Programmable matter is matter which has the ability to change its physical properties (shape, density, moduli, conductivity, optical properties, etc.) in a programmable fashion, based upon user input or autonomous sensing. Programmable matter is thus linked to the concept of a material which inherently has the ability to perform information processing.
History
Programmable matter is a term originally coined in 1991 by Toffoli and Margolus to refer to an ensemble of fine-grained computing elements arranged in space. Their paper describes a computing substrate that is composed of fine-grained compute nodes distributed throughout space which communicate using only nearest neighbor interactions. In this context, programmable matter refers to compute models similar to cellular automata and lattice gas automata. The CAM-8 architecture is an example hardware realization of this model. This function is also known as "digital referenced areas" (DRA) in some forms of self-replicating machine science.
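To give a flavour of the fine-grained, nearest-neighbour computing substrate described above, here is a small Python sketch of a one-dimensional cellular automaton. It is a generic illustration of that class of models, not the CAM-8 architecture or the authors' actual formulation.

```python
# Generic 1-D cellular automaton: each cell updates using only its nearest
# neighbours, loosely illustrating the "ensemble of fine-grained computing
# elements" idea (not the actual CAM-8 model).
def step(cells, rule):
    n = len(cells)
    return [
        rule(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])
        for i in range(n)
    ]

# Rule 110, written as a lookup on the (left, centre, right) neighbourhood.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

state = [0] * 31
state[15] = 1                       # single seed cell in the middle
for _ in range(15):
    print("".join("#" if c else "." for c in state))
    state = step(state, lambda l, c, r: RULE_110[(l, c, r)])
```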
In the early 1990s, there was a significant amount of work in reconfigurable modular robotics with a philosophy similar to programmable matter.
As semiconductor technology, nanotechnology, and self-replicating machine technology have advanced, the use of the term programmable matter has changed to reflect the fact that
it is possible to build an ensemble of elements which can be "programmed" to change their physical properties in reality, not just in simulation. Thus, programmable matter has come to mean "any bulk substance which can be programmed to change its physical properties."
In the summer of 1998, in a discussion on artificial atoms and programmable matter, Wil McCarthy and G. Snyder coined the term "quantum wellstone" (or simply "wellstone") to describe this hypothetical but plausible form of programmable matter. McCarthy has used the term in his fiction.
In 2002, Seth Goldstein and Todd Mowry started the claytronics project at Carnegie Mellon University to
|
https://en.wikipedia.org/wiki/Kendall%27s%20notation
|
In queueing theory, a discipline within the mathematical theory of probability, Kendall's notation (or sometimes Kendall notation) is the standard system used to describe and classify a queueing node. D. G. Kendall proposed describing queueing models using three factors written A/S/c in 1953 where A denotes the time between arrivals to the queue, S the service time distribution and c the number of service channels open at the node. It has since been extended to A/S/c/K/N/D where K is the capacity of the queue, N is the size of the population of jobs to be served, and D is the queueing discipline.
When the final three parameters are not specified (e.g. M/M/1 queue), it is assumed K = ∞, N = ∞ and D = FIFO.
First example: M/M/1 queue
An M/M/1 queue means that the time between arrivals is Markovian (M), i.e. the inter-arrival time follows an exponential distribution with parameter λ. The second M means that the service time is Markovian: it follows an exponential distribution with parameter μ. The last parameter is the number of service channels, which is one (1).
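As a small illustration (not part of the article), the standard closed-form steady-state results for an M/M/1 queue can be computed directly from λ and μ:

```python
# Minimal sketch: textbook steady-state formulas for an M/M/1 queue with
# arrival rate lam (lambda) and service rate mu, valid when lam < mu.
def mm1_metrics(lam: float, mu: float) -> dict:
    if lam >= mu:
        raise ValueError("queue is unstable unless lambda < mu")
    rho = lam / mu                 # server utilisation
    L = rho / (1 - rho)            # mean number of customers in the system
    W = 1 / (mu - lam)             # mean time spent in the system
    Wq = rho / (mu - lam)          # mean waiting time in the queue
    Lq = lam * Wq                  # mean number waiting (Little's law)
    return {"rho": rho, "L": L, "W": W, "Wq": Wq, "Lq": Lq}

# Example: lambda = 2 arrivals/min, mu = 3 services/min.
print(mm1_metrics(2.0, 3.0))
```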
Description of the parameters
In this section, we describe the parameters A/S/c/K/N/D from left to right.
A: The arrival process
A code describing the arrival process. The codes used are:
S: The service time distribution
This gives the distribution of time of the service of a customer. Some common notations are:
c: The number of servers
The number of service channels (or servers). The M/M/1 queue has a single server and the M/M/c queue c servers.
K: The number of places in the queue
The capacity of the queue, or the maximum number of customers allowed in the queue. When the number is at this maximum, further arrivals are turned away. If this number is omitted, the capacity is assumed to be unlimited, or infinite.
Note: This is sometimes denoted c + K where K is the buffer size, the number of places in the queue above the number of servers c.
N: The calling population
The size of calling source. The size of
|
https://en.wikipedia.org/wiki/Informal%20mathematics
|
Informal mathematics, also called naïve mathematics, has historically been the predominant form of mathematics at most times and in most cultures, and is the subject of modern ethno-cultural studies of mathematics. The philosopher Imre Lakatos in his Proofs and Refutations aimed to sharpen the formulation of informal mathematics, by reconstructing its role in nineteenth century mathematical debates and concept formation, opposing the predominant assumptions of mathematical formalism. Informality may not discern between statements given by inductive reasoning (as in approximations which are deemed "correct" merely because they are useful), and statements derived by deductive reasoning.
Terminology
Informal mathematics means any informal mathematical practices, as used in everyday life, or by aboriginal or ancient peoples, without historical or geographical limitation. Modern mathematics is exceptional from that point of view: it emphasizes formal and rigorous proofs of all statements from given axioms, and can therefore usefully be called formal mathematics. Informal practices are usually understood intuitively and justified with examples—there are no axioms. This is of direct interest in anthropology and psychology: it casts light on the perceptions and agreements of other cultures. It is also of interest in developmental psychology as it reflects a naïve understanding of the relationships between numbers and things. Another term used for informal mathematics is folk mathematics, which is ambiguous; the mathematical folklore article is dedicated to the usage of that term among professional mathematicians.
The field of naïve physics is concerned with similar understandings of physics. People use mathematics and physics in everyday life, without really understanding (or caring) how mathematical and physical ideas were historically derived and justified.
History
There has long been a standard account of the development of geometry in ancient Egypt, followed by Greek
|
https://en.wikipedia.org/wiki/Kiwi%20drive
|
A Kiwi drive is a holonomic drive system of three omni-directional wheels (such as omni wheels or Mecanum wheels), 120 degrees from each other, that enables movement in any direction using only three motors. This is in contrast with non-holonomic systems such as traditionally wheeled or tracked vehicles which cannot move sideways without turning first.
This drive system is similar to the Killough platform which achieves omni-directional travel using traditional non-omni-directional wheels in a three wheel configuration.
It is named after the kiwi, the flightless national bird of New Zealand.
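A brief Python sketch of the inverse kinematics that turns a desired body velocity into the three wheel speeds is given below. It is illustrative only: the wheel angles and sign conventions are assumptions, and real robots differ in mounting and motor orientation.

```python
# Sketch of kiwi-drive inverse kinematics: three omni wheels 120 degrees apart.
# Wheel angles and sign conventions are assumed for illustration.
import math

WHEEL_ANGLES = (90.0, 210.0, 330.0)   # assumed wheel positions, in degrees

def kiwi_wheel_speeds(vx: float, vy: float, omega: float, radius: float):
    """Return the three wheel surface speeds for a desired body velocity
    (vx, vy) and rotation rate omega, where `radius` is the distance from
    the robot centre to each wheel."""
    speeds = []
    for deg in WHEEL_ANGLES:
        a = math.radians(deg)
        # Component of the body velocity along the wheel's rolling direction,
        # plus the contribution of the body rotation.
        speeds.append(-math.sin(a) * vx + math.cos(a) * vy + radius * omega)
    return speeds

# Example: pure sideways translation requires no rotation of the robot body.
print(kiwi_wheel_speeds(vx=1.0, vy=0.0, omega=0.0, radius=0.15))
```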
|
https://en.wikipedia.org/wiki/Schnirelmann%20density
|
In additive number theory, the Schnirelmann density of a sequence of numbers is a way to measure how "dense" the sequence is. It is named after Russian mathematician Lev Schnirelmann, who was the first to study it.
Definition
The Schnirelmann density of a set of natural numbers A is defined as
σA = inf_{n ≥ 1} A(n)/n,
where A(n) denotes the number of elements of A not exceeding n and inf is the infimum.
The Schnirelmann density is well-defined even if the limit of A(n)/n as n → ∞ fails to exist (see upper and lower asymptotic density).
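Since the infimum runs over all n, it cannot be computed exactly by machine, but the following Python sketch (an illustration, not from the article) evaluates min A(n)/n over a finite range, which is an upper bound on, and often equal to, the density:

```python
# Numerical illustration: min over n = 1..N of A(n)/n.
# This is only an upper bound on the Schnirelmann density (the true
# definition takes the infimum over all n), but it shows the behaviour.
from fractions import Fraction

def finite_density(A, N):
    count = 0
    best = Fraction(1)
    for n in range(1, N + 1):
        if n in A:
            count += 1
        best = min(best, Fraction(count, n))
    return best

N = 1000
evens = set(range(2, N + 1, 2))
odds = set(range(1, N + 1, 2))
print(finite_density(evens, N))   # 0   (1 is missing, so the density is 0)
print(finite_density(odds, N))    # 1/2
```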
Properties
By definition, 0 ≤ A(n) ≤ n and σA ≤ A(n)/n for all n, and therefore 0 ≤ σA ≤ 1, and σA = 1 if and only if A contains every natural number. Furthermore, σA = 0 if and only if for every ε > 0 there exists an n with A(n) < εn.
Sensitivity
The Schnirelmann density is sensitive to the first values of a set: for every k, if k ∉ A then σA ≤ 1 − 1/k.
In particular, 1 ∉ A implies σA = 0, and 2 ∉ A implies σA ≤ 1/2.
Consequently, the Schnirelmann densities of the even numbers and the odd numbers, which one might expect to agree, are 0 and 1/2 respectively. Schnirelmann and Yuri Linnik exploited this sensitivity.
Schnirelmann's theorems
If we set G = {1², 2², 3², …}, then Lagrange's four-square theorem can be restated as σ(G ⊕ G ⊕ G ⊕ G) = 1. (Here the symbol A ⊕ B denotes the sumset of A ∪ {0} and B ∪ {0}.) It is clear that σG = 0. In fact, we still have σ(G ⊕ G) = 0, and one might ask at what point the sumset attains Schnirelmann density 1 and how it increases. It actually is the case that σ(G ⊕ G ⊕ G) = 5/6, and one sees that sumsetting once again yields a more populous set, namely all of ℕ. Schnirelmann further succeeded in developing these ideas into the following theorems, aiming towards additive number theory, and proving them to be a novel resource (if not greatly powerful) for attacking important problems such as Waring's problem and Goldbach's conjecture.
Theorem. Let A and B be subsets of ℕ. Then
σ(A ⊕ B) ≥ σA + σB − σA·σB.
Note that this is equivalent to 1 − σ(A ⊕ B) ≤ (1 − σA)(1 − σB). Inductively, we have the following generalization.
Corollary. Let A₁, A₂, …, A_k be a finite family of subsets of ℕ. Then
1 − σ(A₁ ⊕ A₂ ⊕ ⋯ ⊕ A_k) ≤ (1 − σA₁)(1 − σA₂)⋯(1 − σA_k).
The theorem provides the first insights on how sumsets accumulate. It seems unfortunate that its conclusion stops short of showing σ to be superadditive. Yet, Schnirelmann provided us with the following results, which sufficed for most of his purpose.
|
https://en.wikipedia.org/wiki/Tensor%20Processing%20Unit
|
Tensor Processing Unit (TPU) is an AI accelerator application-specific integrated circuit (ASIC) developed by Google for neural network machine learning, using Google's own TensorFlow software. Google began using TPUs internally in 2015, and in 2018 made them available for third party use, both as part of its cloud infrastructure and by offering a smaller version of the chip for sale.
Comparison to CPUs and GPUs
Compared to a graphics processing unit, TPUs are designed for a high volume of low precision computation (e.g. as little as 8-bit precision) with more input/output operations per joule, without hardware for rasterisation/texture mapping. The TPU ASICs are mounted in a heatsink assembly, which can fit in a hard drive slot within a data center rack, according to Norman Jouppi.
Different types of processors are suited for different types of machine learning models. TPUs are well suited for CNNs, while GPUs have benefits for some fully-connected neural networks, and CPUs can have advantages for RNNs.
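The following NumPy sketch illustrates in a generic way what low-precision, 8-bit arithmetic means: float values are quantised to int8 and a matrix product is accumulated in wider integers. It is an illustration of the idea only, not a description of the TPU's actual hardware pipeline.

```python
# Generic illustration of 8-bit (int8) matrix arithmetic of the kind AI
# accelerators exploit; this is NOT the TPU's actual implementation.
import numpy as np

def quantize_int8(x):
    """Symmetric per-tensor quantisation: float32 -> (int8 values, scale)."""
    scale = np.max(np.abs(x)) / 127.0
    q = np.clip(np.round(x / scale), -127, 127).astype(np.int8)
    return q, scale

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 8)).astype(np.float32)
b = rng.standard_normal((8, 3)).astype(np.float32)

qa, sa = quantize_int8(a)
qb, sb = quantize_int8(b)

# Multiply in int8, accumulate in int32, then rescale back to float.
acc = qa.astype(np.int32) @ qb.astype(np.int32)
approx = acc * (sa * sb)

print(np.max(np.abs(approx - a @ b)))   # small quantisation error
```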
History
The tensor processing unit was announced in May 2016 at Google I/O, when the company said that the TPU had already been used inside their data centers for over a year. The chip has been specifically designed for Google's TensorFlow framework, a symbolic math library which is used for machine learning applications such as neural networks. However, as of 2017 Google still used CPUs and GPUs for other types of machine learning. Other AI accelerator designs are appearing from other vendors also and are aimed at embedded and robotics markets.
Google's TPUs are proprietary. Some models are commercially available, and on February 12, 2018, The New York Times reported that Google "would allow other companies to buy access to those chips through its cloud-computing service." Google has said that they were used in the AlphaGo versus Lee Sedol series of man-machine Go games, as well as in the AlphaZero system, which produced Chess, Shogi and Go playing programs f
|
https://en.wikipedia.org/wiki/Hardware%20for%20artificial%20intelligence
|
Specialized computer hardware is often used to execute artificial intelligence (AI) programs faster, and with less energy, such as Lisp machines, neuromorphic engineering, event cameras, and physical neural networks. As of 2023, the market for AI hardware is dominated by GPUs.
Lisp machines
Lisp machines were developed in the late 1970s and early 1980s to make Artificial intelligence programs written in the programming language Lisp run faster.
Dataflow architecture
Dataflow architecture processors used for AI serve various purposes, with varied implementations like the polymorphic dataflow Convolution Engine by Kinara (formerly Deep Vision), structure-driven dataflow by Hailo, and dataflow scheduling by Cerebras.
Component hardware
AI accelerators
Since the 2010s, advances in computer hardware have led to more efficient methods for training deep neural networks that contain many layers of non-linear hidden units and a very large output layer. By 2019, graphics processing units (GPUs), often with AI-specific enhancements, had displaced central processing units (CPUs) as the dominant means to train large-scale commercial cloud AI. OpenAI estimated the hardware compute used in the largest deep learning projects from AlexNet (2012) to AlphaZero (2017), and found a 300,000-fold increase in the amount of compute needed, with a doubling-time trend of 3.4 months.
Artificial Intelligence Hardware Components
Central Processing Units (CPUs)
Every computer system is built on central processing units (CPUs). They manage tasks, perform computations, and execute instructions. Even though specialized hardware is more effective at handling AI workloads, CPUs are still essential for managing general computing tasks in AI systems.
Graphics Processing Units (GPUs)
AI has seen a dramatic transformation as a result of graphics processing units (GPUs). They are well suited to AI jobs that require handling massive quantities of data and intricate mathematical operations because of their
|
https://en.wikipedia.org/wiki/Operating%20point
|
The operating point is a specific point within the operating characteristic of a technical device. This point is reached as a result of the properties of the system and the outside influences and parameters. In electronic engineering, establishing an operating point is called biasing.
Wanted and unwanted operating points of a system
The operating point of a system is the intersection point of the torque-speed curves of the drive and the machine. Both devices are linked by a shaft, so their speed is always identical. The drive creates the torque which rotates both devices. The machine creates the counter-torque, e.g. by being a driven device which needs a permanent supply of energy, or a wheel turning against the static friction of the track.
The drive speed increases when the driving torque is higher than the counter-torque.
The drive speed decreases when the counter-torque is higher than the driving torque.
At the operating point, the driving torque and the counter-torque are balanced, so the speed does not change anymore.
A speed change in a stable operating point creates a torque change which acts against this change of speed.
A change in speed away from this stable operating point is only possible with a new control intervention, such as changing the load of the machine or the power of the drive; both change the torque because they change the characteristic curves. The drive-machine system then runs to a new operating point with a different speed and a different balance of torques.
Should the driving torque be higher than the counter-torque at every speed, the system does not have an operating point. The result will be that the speed increases up to the idle speed or even until destruction. Should the counter-torque be higher at every speed, the speed will decrease until the system stops.
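A small numerical sketch (with invented characteristic curves, purely for illustration) shows how the operating point can be found as the intersection of the two torque-speed curves:

```python
# Finding the operating point: the speed at which driving torque equals
# counter-torque. The curves below are invented for illustration only.
from scipy.optimize import brentq

def drive_torque(speed):
    # e.g. a motor whose torque falls linearly with speed (N*m vs rad/s)
    return 20.0 - 0.05 * speed

def counter_torque(speed):
    # e.g. a fan-like load whose torque grows with the square of speed
    return 2.0 + 0.0008 * speed ** 2

# At the operating point the torque difference is zero.
operating_speed = brentq(lambda w: drive_torque(w) - counter_torque(w), 0.0, 400.0)
print(f"operating point: {operating_speed:.1f} rad/s, "
      f"{drive_torque(operating_speed):.1f} N*m")
```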
Stable and unstable operating points
Even in the case of an unstable operating point, the law of the balance of the torques is always valid. But when the operating point is unstab
|
https://en.wikipedia.org/wiki/Mechanobiology
|
Mechanobiology is an emerging field of science at the interface of biology, engineering, chemistry and physics. It focuses on how physical forces and changes in the mechanical properties of cells and tissues contribute to development, cell differentiation, physiology, and disease. Cells experience mechanical forces and may interpret them to produce biological responses. The movement of joints, compressive loads on cartilage and bone during exercise, and shear stress on blood vessels during blood circulation are all examples of mechanical forces in human tissues. A major challenge in the field is understanding mechanotransduction—the molecular mechanisms by which cells sense and respond to mechanical signals. While medicine has typically looked for the genetic and biochemical basis of disease, advances in mechanobiology suggest that changes in cell mechanics, extracellular matrix structure, or mechanotransduction may contribute to the development of many diseases, including atherosclerosis, fibrosis, asthma, osteoporosis, heart failure, and cancer. There is also a strong mechanical basis for many generalized medical disabilities, such as lower back pain, foot and postural injury, deformity, and irritable bowel syndrome.
Load sensitive cells
Fibroblasts
Skin fibroblasts are vital in development and wound repair, and they are affected by mechanical cues such as tension, compression and shear stress. Fibroblasts synthesize structural proteins, some of which are mechanosensitive and form an integral part of the extracellular matrix (ECM), e.g. collagen types I, III, IV, V and VI, elastin and laminin. In addition to the structural proteins, fibroblasts make tumor necrosis factor alpha (TNF-α), transforming growth factor beta (TGF-β) and matrix metalloproteases, which play a role in tissue maintenance and remodeling.
Chondrocytes
Articular cartilage is the connective tissue that protects the bones of load-bearing joints such as the knee and shoulder by providing a lubric
|
https://en.wikipedia.org/wiki/Pulse%20%28signal%20processing%29
|
A pulse in signal processing is a rapid, transient change in the amplitude of a signal from a baseline value to a higher or lower value, followed by a rapid return to the baseline value.
Pulse shapes
Pulse shapes can arise out of a process called pulse-shaping. Optimum pulse shape depends on the application.
Rectangular pulse
These can be found in pulse waves, square waves, boxcar functions, and rectangular functions. In digital signals the up and down transitions between high and low levels are called the rising edge and the falling edge. In digital systems the detection of these edges, or the action taken in response, is termed edge-triggered (rising or falling, depending on which edge of the rectangular pulse is involved). A digital timing diagram is an example of a well-ordered collection of rectangular pulses.
Nyquist pulse
A Nyquist pulse is one which meets the Nyquist ISI criterion and is important in data transmission. An example of a pulse which meets this condition is the sinc function. The sinc pulse is of some significance in signal-processing theory but cannot be produced by a real generator for reasons of causality.
In 2013, Nyquist pulses were produced in an effort to reduce the size of pulses in optical fibers, which enables them to be packed 10 times more closely together, yielding a corresponding 10-fold increase in bandwidth. The pulses were more than 99 percent perfect and were produced using a simple laser and modulator.
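A short NumPy sketch (illustrative only) shows the defining property of the sinc pulse mentioned above: it passes through zero at every nonzero multiple of the symbol period T, which is exactly what the Nyquist ISI criterion requires.

```python
# Illustration: the sinc pulse is zero at every nonzero multiple of the
# symbol period T, so neighbouring symbols do not interfere (Nyquist ISI).
import numpy as np

T = 1.0                                  # symbol period (arbitrary units)
t = np.arange(-5, 6) * T                 # sampling instants k*T
pulse = np.sinc(t / T)                   # np.sinc(x) = sin(pi x)/(pi x)

for k, value in zip(range(-5, 6), pulse):
    print(f"t = {k:+d}T  ->  {value:+.3f}")   # 1.000 at t = 0, 0.000 elsewhere
```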
Dirac pulse
A Dirac pulse has the shape of the Dirac delta function. It has the properties of infinite amplitude and its integral is the Heaviside step function. Equivalently, it has zero width and an area under the curve of unity. This is another pulse that cannot be created exactly in real systems, but practical approximations can be achieved. It is used in testing, or theoretically predicting, the impulse response of devices and systems, particularly filters. Such responses yield a great deal of information about the system.
Gaussian
|
https://en.wikipedia.org/wiki/Spatial%20scale
|
Spatial scale is a specific application of the term scale for describing or categorizing (e.g. into orders of magnitude) the size of a space (hence spatial), or the extent at which a phenomenon or process occurs.
For instance, in physics an object or phenomenon can be called microscopic if it is too small to be visible. In climatology, a micro-climate is a climate which might occur in a valley, on a mountain, or near a lake shore. In statistics, a megatrend is a political, social, economic, environmental or technological trend which involves the whole planet or is expected to last a very long time. The concept is also used in geography, astronomy, and meteorology.
These divisions are somewhat arbitrary; where, on this table, mega- is assigned global scope, it may only apply continentally or even regionally in other contexts. The interpretations of meso- and macro- must then be adjusted accordingly.
See also
Astronomical units of length
Cosmic distance ladder
List of examples of lengths
Orders of magnitude (length)
Scale (analytical tool)
Scale (geography)
Scale (map)
Scale (ratio)
Location of Earth
|
https://en.wikipedia.org/wiki/Network%20agility
|
Network Agility is an architectural discipline for computer networking. It can be defined as:
The ability of network software and hardware to automatically control and configure itself and other network assets across any number of devices on a network.
With regard to network hardware, network agility is used when referring to automatic hardware configuration and reconfiguration of network devices, e.g. routers, switches, SNMP devices.
Network agility, as a software discipline, borrows from many fields, both technical and commercial.
On the technical side, network agility solutions leverage techniques from areas such as:
Service-oriented architecture (SOA)
Object-oriented design
Architectural patterns
Loosely coupled data streaming (e.g.: web services)
Iterative design
Artificial intelligence
Inductive scheduling
On-demand computing
Utility computing
Commercially, network agility is about solving real-world business problems using existing technology. It forms a three-way bridge between business processes, hardware resources, and software assets. In more detail, it takes as input:
the business processes – i.e. what the network must achieve in real business terms;
the hardware that resides within the network; and
the set of software assets that run on this hardware.
Much of this input can be obtained through automatic discovery – finding the hardware, its types and locations, software, licenses etc. The business processes can be inferred to a certain degree, but it is these processes that business managers need to be able to control and organize.
Software resources discovered on the network can take a variety of forms – some assets may be licensed software products, others as blocks of software service code that can be accessed via some service enterprise portal, such as (but not necessarily) web services. These services may reside in-house, or they may be 'on-demand' via an on-line subscription service. Indeed, the primary motivation of network
|
https://en.wikipedia.org/wiki/Ceibo%20emulator
|
A Ceibo emulator is an in-circuit emulator for microcontrollers and microprocessors.
These emulators use bond-out processors, which have internal signals brought out for the purpose of debugging. These signals provide information about the state of the processor that is otherwise unobtainable.
Supported microprocessors and microcontrollers include Atmel, Dallas Semiconductor, Infineon, Intel,
Microchip, NEC, Philips, STMicroelectronics and Winbond.
|
https://en.wikipedia.org/wiki/List%20of%20coordinate%20charts
|
This article contains a non-exhaustive list of coordinate charts for Riemannian manifolds and pseudo-Riemannian manifolds. Coordinate charts are mathematical objects of topological manifolds, and they have multiple applications in theoretical and applied mathematics. When a differentiable structure and a metric are defined, greater structure exists, and this allows the definition of constructs such as integration and geodesics.
Charts for Riemannian and pseudo-Riemannian surfaces
The following charts (with appropriate metric tensors) can be used in the stated classes of Riemannian and pseudo-Riemannian surfaces:
Radially symmetric surfaces:
Hyperspherical coordinates
Surfaces embedded in E3:
Monge chart
Certain minimal surfaces:
Asymptotic chart (see also asymptotic line)
Euclidean plane E2:
Cartesian chart
Sphere S2:
Spherical coordinates
Stereographic chart
Central projection chart
Axial projection chart
Mercator chart
Hyperbolic plane H2:
Polar chart
Stereographic chart (Poincaré model)
Upper half-space chart (Poincaré model)
Central projection chart (Klein model)
Mercator chart
AdS2 (or S1,1) and dS2 (or H1,1):
Central projection
Sn
Hopf chart
Hn
Upper half-space chart (Poincaré model)
Hopf chart
The following charts apply specifically to three-dimensional manifolds:
Axially symmetric manifolds:
Cylindrical chart
Parabolic chart
Hyperbolic chart
Toroidal chart
Three-dimensional Euclidean space E3:
Cartesian
Polar spherical chart
Cylindrical chart
Elliptical cylindrical, hyperbolic cylindrical, parabolic cylindrical charts
Parabolic chart
Hyperbolic chart
Prolate spheroidal chart (rational and trigonometric forms)
Oblate spheroidal chart (rational and trigonometric forms)
Toroidal chart
Cassini toroidal chart and Cassini bipolar chart
Three-sphere S3
Polar chart
Stereographic chart
Hopf chart
Hyperbolic three-space H3
Polar chart
Upper half space chart (Poincaré model)
Hopf chart
See also
Coordinate chart
Coordinate system
Metric tensor
List of mathemat
|
https://en.wikipedia.org/wiki/FITkit%20%28hardware%29
|
FITkit is a hardware platform used for educational purposes at the Brno University of Technology in the Czech Republic.
FITkit
The FITkit contains a low-power microcontroller, a field programmable gate array chip (FPGA) and a set of peripherals.
Utilizing advanced reconfigurable hardware, the FITkit may be modified to suit various tasks.
Configuration of the FPGA chip can be specified using the VHDL hardware description language (i.e. VHSIC hardware description language).
Software for the Microcontroller is written in C and compiled using the GNU Compiler Collection.
Configuration of the FPGA chip is synthesized from the source VHDL code using professional design tools, which are also available free of charge.
Use in education
The FITkit serves as an educational tool in several courses throughout the bachelor's and master's degree programmes. Students are expected to create an FPGA-based interpreter for a simple programming language (such as Brainfuck) as part of the Design of Computer Systems course.
Licensing
The project is developed as open source (for the software) and open core (for the hardware), under the BSD license.
Related projects
QDevKit, multiplatform development environment for FITkit (Linux, BSD and Microsoft Windows operating systems)
|
https://en.wikipedia.org/wiki/Quark%20model
|
In particle physics, the quark model is a classification scheme for hadrons in terms of their valence quarks—the quarks and antiquarks that give rise to the quantum numbers of the hadrons. The quark model underlies "flavor SU(3)", or the Eightfold Way, the successful classification scheme organizing the large number of lighter hadrons that were being discovered starting in the 1950s and continuing through the 1960s. It received experimental verification beginning in the late 1960s and is a valid effective classification of them to date. The model was independently proposed by physicists Murray Gell-Mann, who dubbed them "quarks" in a concise paper, and George Zweig, who suggested "aces" in a longer manuscript. André Petermann also touched upon the central ideas from 1963 to 1965, without as much quantitative substantiation. Today, the model has essentially been absorbed as a component of the established quantum field theory of strong and electroweak particle interactions, dubbed the Standard Model.
Hadrons are not really "elementary", and can be regarded as bound states of their "valence quarks" and antiquarks, which give rise to the quantum numbers of the hadrons. These quantum numbers are labels identifying the hadrons, and are of two kinds. One set comes from the Poincaré symmetry—JPC, where J, P and C stand for the total angular momentum, P-symmetry, and C-symmetry, respectively.
The other set is the flavor quantum numbers such as the isospin, strangeness, charm, and so on. The strong interactions binding the quarks together are insensitive to these quantum numbers, so variation of them leads to systematic mass and coupling relationships among the hadrons in the same flavor multiplet.
All quarks are assigned a baryon number of 1/3. Up, charm and top quarks have an electric charge of +2/3, while the down, strange, and bottom quarks have an electric charge of −1/3. Antiquarks have the opposite quantum numbers. Quarks are spin-1/2 particles, and thus fermions. Each quark
|
https://en.wikipedia.org/wiki/Disk%20controller
|
The disk controller is the controller circuit which enables the CPU to communicate with a hard disk, floppy disk or other kind of disk drive. It also provides an interface between the disk drive and the bus connecting it to the rest of the system.
Early disk controllers were identified by their storage methods and data encoding. They were typically implemented on a separate controller card. Modified frequency modulation (MFM) controllers were the most common type in small computers, used for both floppy disk and hard disk drives. Run length limited (RLL) controllers used a more efficient encoding to increase storage capacity by about 50%. Priam created a proprietary storage algorithm that could double the disk storage. Shugart Associates Systems Interface (SASI) was a predecessor to SCSI.
Modern disk controllers are integrated into the disk drive as peripheral controllers. For example, disks called "SCSI disks" have built-in SCSI controllers. In the past, before most SCSI controller functionality was implemented in a single chip, separate SCSI controllers interfaced disks to the SCSI bus.
These integrated peripheral controllers communicate with a host adapter in the host system over a standardized, high-level storage bus interface. The most common types of interfaces provided nowadays by host controllers are PATA (IDE) and Serial ATA for home use. High-end disks use Parallel SCSI, Fibre Channel or Serial Attached SCSI.
Disk controllers can also control the timing of access to flash memory which is not mechanical in nature (i.e. no physical disk).
Disk controller versus host adapter
The component that allows a computer to talk to a peripheral bus is the host adapter or host bus adapter (HBA, e.g. Advanced Host Controller Interface or AHCI). A disk controller allows a disk to talk to the same bus. Signals read by a disk read-and-write head are converted by a disk controller, then transmitted over the peripheral bus, then converted again by the host adapter into the suita
|
https://en.wikipedia.org/wiki/Automated%20ECG%20interpretation
|
Automated ECG interpretation is the use of artificial intelligence and pattern recognition software and knowledge bases to carry out automatically the interpretation, test reporting, and computer-aided diagnosis of electrocardiogram tracings obtained usually from a patient.
History
The first automated ECG programs were developed in the 1970s, when digital ECG machines were made possible by third-generation digital signal processing boards. Commercial models, such as those developed by Hewlett-Packard, incorporated these programs into clinically used devices.
During the 1980s and 1990s, extensive research was carried out by companies and by university labs in order to improve the accuracy rate, which was not very high in the first models. For this purpose, several signal databases with normal and abnormal ECGs were built by institutions such as MIT and used to test the algorithms and their accuracy.
Phases
A digital representation of each recorded ECG channel is obtained by means of an analog-to-digital converter and special data acquisition software or a digital signal processing (DSP) chip.
The resulting digital signal is processed by a series of specialized algorithms, which start by conditioning it, e.g. by removing noise and baseline variation.
Feature extraction: mathematical analysis is now performed on the clean signal of all channels to identify and measure a number of features that are important for interpretation and diagnosis; these constitute the input to AI-based programs. Such features include the peak amplitude, area under the curve, and displacement in relation to the baseline of the P, Q, R, S and T waves; the time delays between these peaks and valleys; heart rate (instantaneous and average); and many others. Some secondary processing, such as Fourier analysis and wavelet analysis, may also be performed to provide input to pattern recognition-based programs.
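For example, a minimal feature-extraction step of this kind (R-peak detection and heart-rate measurement) might look like the following Python sketch; the ECG array, sampling rate and thresholds are placeholders, and real interpreters use far more robust, validated algorithms.

```python
# Minimal sketch of one feature-extraction step: detect R peaks and derive
# heart rate. `ecg` and `fs` are assumed inputs; thresholds are illustrative.
import numpy as np
from scipy.signal import find_peaks

def r_peaks_and_heart_rate(ecg: np.ndarray, fs: float):
    # Require peaks to be prominent and at least 0.4 s apart (< 150 bpm).
    peaks, _ = find_peaks(ecg,
                          distance=int(0.4 * fs),
                          prominence=0.5 * np.max(np.abs(ecg)))
    rr_intervals = np.diff(peaks) / fs           # seconds between beats
    heart_rate = 60.0 / rr_intervals             # instantaneous bpm
    return peaks, heart_rate

# Example with a toy signal: a flat baseline with one spike per second.
fs = 250.0
ecg = np.zeros(int(10 * fs))
ecg[int(fs) // 2::int(fs)] = 1.0                 # one "R wave" per second
peaks, hr = r_peaks_and_heart_rate(ecg, fs)
print(len(peaks), hr.mean())                     # 10 beats, ~60 bpm
```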
Logical processing and pattern recognition, using rule-based expe
|
https://en.wikipedia.org/wiki/Alexander%E2%80%93Hirschowitz%20theorem
|
The Alexander–Hirschowitz theorem shows that a general collection of double points in projective space imposes independent conditions on homogeneous polynomials of a given degree (equivalently, on hypersurfaces of that degree), apart from a well-known list of exceptions. In this way, classical polynomial interpolation in several variables can be generalized to points with higher multiplicities.
|
https://en.wikipedia.org/wiki/Software%20calculator
|
A software calculator is a calculator that has been implemented as a computer program, rather than as a physical hardware device.
They are among the simpler interactive software tools and, as such, provide operations for the user to select one at a time. They can be used to perform any process that consists of a sequence of steps, each of which applies one of these operations, and they have no purpose beyond such processes, because the operations are the sole, or at least the primary, features of the calculator, rather than secondary features that support other functionality not normally described simply as calculation.
As a calculator, rather than a computer, they usually have a small set of relatively simple operations, perform short processes that are not compute intensive and do not accept large amounts of input data or produce many results.
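As a toy illustration of this "one operation at a time" model (not taken from the article), a minimal Python calculator that applies one simple operation per step to a running value might look like this:

```python
# Toy illustration of the calculator model: the user applies one simple
# operation at a time to a running value, with no other functionality.
import operator

OPERATIONS = {"+": operator.add, "-": operator.sub,
              "*": operator.mul, "/": operator.truediv}

def calculator():
    value = 0.0
    print("enter: <op> <number>  (e.g. '+ 3'), or 'q' to quit")
    while True:
        line = input(f"[{value}] > ").strip()
        if line == "q":
            return value
        op, number = line.split()
        value = OPERATIONS[op](value, float(number))

if __name__ == "__main__":
    calculator()
```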
Platforms
Software calculators are available for many different platforms, and they can be:
A program for, or included with an operating system.
A program implemented as server or client-side scripting (such as JavaScript) within a web page.
Embedded in a calculator watch.
Complex software may also have calculator-like dialogs, sometimes with the full calculator functionality, for entering data into the system.
History
Early years
Computers as we know them today first emerged in the 1940s and 1950s. The software that they ran was naturally used to perform calculations, but it was specially designed for a substantial application that was not limited to simple calculations. For example, the LEO computer was designed to run business application software such as payroll.
Software specifically to perform calculations as its main purpose was first written in the 1960s, and the first software package for general calculations to obtain widespread use was released in 1978. This was VisiCalc and it was called an interactive visible calculator, but it was actually a spreadsheet, and these are now not normally kno
|
https://en.wikipedia.org/wiki/Highly%20accelerated%20stress%20audit
|
HASA (highly accelerated stress audit) is a proven test method developed to find manufacturing/production process induced defects in electronics and electro-mechanical assemblies before those products are released to market. HASA is a form of HASS (highly accelerated stress screening) – a powerful testing tool for improving product reliability, reducing warranty costs and increasing customer satisfaction.
Since HASS levels are more aggressive than those of conventional screening tools, a proof-of-screen (POS) procedure is used to establish their effectiveness in revealing production-induced defects. A POS is vital to determine that the HASS stresses are capable of revealing production defects, but not so extreme as to remove significant life from the test item. Instituting HASS to screen the product is an excellent tool to maintain a high level of robustness, and it will reduce the test time required to screen a product, resulting in long-term savings. Ongoing HASS screening assures that any weak components or manufacturing process degradations are quickly detected and corrected. HASS is not intended to be a rigid process that has an endpoint. It is a dynamic process that may need modification or adjustment over the life of the product.
HASS aids in the detection of early life failures. HASA's primary purpose is to monitor manufacturing and prevent any defects from being introduced during the process. A carefully determined HASA sampling plan must be designed that will quickly signal when process quality has been degraded.
External links
cotsjournalonline.com – COTS Journal HALT/HASS Testing Goes Beyond the Norm
Electronic engineering
Quality management
Environmental testing
|
https://en.wikipedia.org/wiki/Infraspecific%20name
|
In botany, an infraspecific name is the scientific name for any taxon below the rank of species, i.e. an infraspecific taxon or infraspecies. A "taxon", plural "taxa", is a group of organisms to be given a particular name. The scientific names of botanical taxa are regulated by the International Code of Nomenclature for algae, fungi, and plants (ICN). This specifies a three part name for infraspecific taxa, plus a connecting term to indicate the rank of the name. An example of such a name is Astrophytum myriostigma subvar. glabrum, the name of a subvariety of the species Astrophytum myriostigma (bishop's hat cactus).
Names below the rank of species of cultivated kinds of plants and of animals are regulated by different codes of nomenclature and are formed somewhat differently.
Construction of infraspecific names
Article 24 of the ICN describes how infraspecific names are constructed. The order of the three parts of an infraspecific name is:
genus name, specific epithet, connecting term indicating the rank (not part of the name, but required), infraspecific epithet.
It is customary to italicize all three parts of such a name, but not the connecting term. For example:
Acanthocalycium klimpelianum var. macranthum
genus name = Acanthocalycium, specific epithet = klimpelianum, connecting term = var. (short for "varietas" or variety), infraspecific epithet = macranthum
Astrophytum myriostigma subvar. glabrum
genus name = Astrophytum, specific epithet = myriostigma, connecting term = subvar. (short for "subvarietas" or subvariety), infraspecific epithet = glabrum
The recommended abbreviations for ranks below species are:
subspecies - recommended abbreviation: subsp. (but "ssp." is also in use although not recognised by Art 26)
varietas (variety) - recommended abbreviation: var.
subvarietas (subvariety) - recommended abbreviation: subvar.
forma (form) - recommended abbreviation: f.
subforma (subform) - recommended abbreviation: subf.
Although the connecting t
|
https://en.wikipedia.org/wiki/Coherence%20%28physics%29
|
In physics, coherence expresses the potential for two waves to interfere. Two monochromatic beams from a single source always interfere. Physical sources are not strictly monochromatic: they may be partly coherent. Beams from different sources are mutually incoherent.
When interfering, two waves add together to create a wave of greater amplitude than either one (constructive interference) or subtract from each other to create a wave whose amplitude is smaller and may be zero (destructive interference), depending on their relative phase. Constructive and destructive interference are limit cases; two waves always interfere, even if the result of the addition is complicated or not remarkable.
Two waves with constant relative phase will be coherent. The amount of coherence can readily be measured by the interference visibility, which looks at the size of the interference fringes relative to the input waves (as the phase offset is varied); a precise mathematical definition of the degree of coherence is given by means of correlation functions. More generally, coherence describes the statistical similarity of a field (electromagnetic field, quantum wave packet etc.) at two points in space or time.
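The fringe-visibility measure mentioned above can be illustrated numerically (a generic sketch, not from the article): two equal-intensity waves with a given degree of mutual coherence are superposed, and the visibility V = (Imax − Imin)/(Imax + Imin) is read off the resulting intensity pattern.

```python
# Generic illustration of fringe visibility for two equal-intensity beams
# with a given degree of coherence g (0 = incoherent, 1 = fully coherent).
import numpy as np

def fringe_visibility(g: float) -> float:
    phase = np.linspace(0, 2 * np.pi, 1000)
    i1 = i2 = 1.0                               # equal beam intensities
    # Standard two-beam interference law with mutual coherence g.
    intensity = i1 + i2 + 2 * np.sqrt(i1 * i2) * g * np.cos(phase)
    return (intensity.max() - intensity.min()) / (intensity.max() + intensity.min())

for g in (1.0, 0.5, 0.0):
    print(g, round(fringe_visibility(g), 3))    # visibility equals g here
```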
Qualitative concept
Coherence controls the visibility or contrast of interference patterns. For example, the visibility of the double-slit experiment pattern requires that both slits be illuminated by a coherent wave. Large sources without collimation or sources that mix many different frequencies will have lower visibility.
Coherence contains several distinct concepts. Spatial coherence describes the correlation (or predictable relationship) between waves at different points in space, either lateral or longitudinal. Temporal coherence describes the correlation between waves observed at different moments in time. Both are observed in the Michelson–Morley experiment and Young's interference experiment. Once the fringes are obtained in the Michelson interferomete
|
https://en.wikipedia.org/wiki/Degeneracy%20%28biology%29
|
Within biological systems, degeneracy occurs when structurally dissimilar components/pathways can perform similar functions (i.e. are effectively interchangeable) under certain conditions, but perform distinct functions in other conditions. Degeneracy is thus a relational property that requires comparing the behavior of two or more components. In particular, if degeneracy is present in a pair of components, then there will exist conditions where the pair will appear functionally redundant but other conditions where they will appear functionally distinct.
Note that this use of the term has practically no relevance to the questionably meaningful concept of evolutionarily degenerate populations that have lost ancestral functions.
Biological examples
Examples of degeneracy are found in the genetic code, when many different nucleotide sequences encode the same polypeptide; in protein folding, when different polypeptides fold to be structurally and functionally equivalent; in protein functions, when overlapping binding functions and similar catalytic specificities are observed; in metabolism, when multiple, parallel biosynthetic and catabolic pathways may coexist.
More generally, degeneracy is observed in proteins of every functional class (e.g. enzymatic, structural, or regulatory), protein complex assemblies, ontogenesis, the nervous system, cell signalling (crosstalk) and numerous other biological contexts.
Contribution to robustness
Degeneracy contributes to the robustness of biological traits through several mechanisms. Degenerate components compensate for one another under conditions where they are functionally redundant, thus providing robustness against component or pathway failure. Because degenerate components are somewhat different, they tend to harbor unique sensitivities so that a targeted attack such as a specific inhibitor is less likely to present a risk to all components at once. There are numerous biological examples where degeneracy con
|
https://en.wikipedia.org/wiki/Dolbear%27s%20law
|
Dolbear's law states the relationship between the air temperature and the rate at which crickets chirp. It was formulated by Amos Dolbear and published in 1897 in an article called "The Cricket as a Thermometer". Dolbear's observations on the relation between chirp rate and temperature were preceded by an 1881 report by Margarette W. Brooks, although this paper went unnoticed until after Dolbear's publication.
Dolbear did not specify the species of cricket which he observed, although subsequent researchers assumed it to be the snowy tree cricket, Oecanthus niveus. However, the snowy tree cricket was misidentified as O. niveus in early reports and the correct scientific name for this species is Oecanthus fultoni.
The chirping of the more common field crickets is not as reliably correlated with temperature; their chirping rate varies depending on other factors such as age and mating success. In many cases, though, Dolbear's formula is a close enough approximation for field crickets, too.
Dolbear expressed the relationship as the following formula, which provides a way to estimate the temperature in degrees Fahrenheit from the number of chirps per minute:
This formula is accurate to within a degree or so when applied to the chirping of the field cricket.
Counting can be sped up by simplifying the formula and counting the number of chirps produced in 15 seconds ():
Reformulated to give the temperature in degrees Celsius (°C), it is:
A shortcut method for degrees Celsius is to count the number of chirps in 8 seconds () and add 5 (this is fairly accurate between 5 and 30°C):
The above formulae are expressed in terms of integers to make them easier to remember—they are not intended to be exact.
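The formulas themselves are not rendered above; a minimal sketch using the forms in which Dolbear's law is commonly quoted (assumed here, not transcribed from this article) is:

```python
def fahrenheit_from_chirps_per_minute(n60):
    """Dolbear's law in its commonly quoted form: T_F = 50 + (N60 - 40) / 4."""
    return 50 + (n60 - 40) / 4

def fahrenheit_from_15s_count(n15):
    """Shortcut form: count chirps for 15 seconds and add 40."""
    return n15 + 40

def celsius_from_8s_count(n8):
    """Shortcut described above: count chirps for 8 seconds and add 5."""
    return n8 + 5

# Example: a cricket chirping 120 times per minute (30 chirps per 15 s)
print(fahrenheit_from_chirps_per_minute(120))   # 70.0 degrees Fahrenheit
print(fahrenheit_from_15s_count(30))            # 70  (the two forms agree, since N60 = 4 * N15)
```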
In math classes
Math textbooks will sometimes cite this as a simple example of where mathematical models break down, because at temperatures outside of the range that crickets live in, the total of chirps is zero as the crickets are dead. You can apply algebra to the e
|
https://en.wikipedia.org/wiki/Electronic%20badge
|
An electronic badge (or electronic conference badge) is a gadget that is a replacement for a traditional paper-based badge or pass issued at public events. It is mainly handed out at computer (security) conferences and hacker events. Their main feature is to display the name of the attendee, but due to their electronic nature they can include a variety of software. The badges were originally a tradition at DEF CON, but spread across different events.
Examples
Hardware
SHA2017 badge, which included an e-ink screen and an ESP32
Card10 for CCCamp2019
Electromagnetic Field Camp badge
Software
The organization badge.team has developed a platform called "Hatchery" to publish and develop software for several badges.
|
https://en.wikipedia.org/wiki/Electronic%20color%20code
|
An electronic color code or electronic colour code (see spelling differences) is used to indicate the values or ratings of electronic components, usually for resistors, but also for capacitors, inductors, diodes and others. A separate code, the 25-pair color code, is used to identify wires in some telecommunications cables. Different codes are used for wire leads on devices such as transformers or in building wiring.
History
Before industry standards were established, each manufacturer used its own unique system for color coding or marking their components.
In the 1920s, the RMA resistor color code was developed by the Radio Manufacturers Association (RMA) as a marking code for fixed resistors. In 1930, the first radios with RMA color-coded resistors were built. Over many decades, as the organization's name changed (RMA, RTMA, RETMA, EIA), so did the name of the code; though known most recently as the EIA color code, all four name variations are found in books, magazines, catalogs, and other documents.
In 1952, it was standardized in IEC 62:1952 by the International Electrotechnical Commission (IEC) and since 1963 also published as EIA RS-279. Originally only meant to be used for fixed resistors, the color code was extended to also cover capacitors with IEC 62:1968. The code was adopted by many national standards like DIN 40825 (1973), BS 1852 (1974) and IS 8186 (1976). The current international standard defining marking codes for resistors and capacitors is IEC 60062:2016. In addition to the color code, these standards define a letter and digit code named RKM code for resistors and capacitors.
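As an illustration of how a four-band code is read (a sketch using the widely published digit, multiplier and tolerance values rather than anything specific to this article):

```python
# Decoding a standard four-band resistor colour code:
# two significant digits, a multiplier band and a tolerance band.
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "grey": 8, "white": 9}
MULTIPLIERS = {**{colour: 10 ** d for colour, d in DIGITS.items()},
               "gold": 0.1, "silver": 0.01}
TOLERANCES = {"brown": 1, "red": 2, "gold": 5, "silver": 10}   # percent

def decode_four_band(band1, band2, band3, band4):
    ohms = (10 * DIGITS[band1] + DIGITS[band2]) * MULTIPLIERS[band3]
    return ohms, TOLERANCES[band4]

ohms, tolerance = decode_four_band("yellow", "violet", "red", "gold")
print(f"{ohms:.0f} ohm, +/-{tolerance}%")    # 4700 ohm, +/-5%
```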
Color bands were used because they were easily and cheaply printed on tiny components. However, there were drawbacks, especially for color blind people. Overheating of a component or dirt accumulation may make it impossible to distinguish brown from red or orange. Advances in printing technology have now made printed numbers more practical on small co
|
https://en.wikipedia.org/wiki/Fast-scan%20cyclic%20voltammetry
|
Fast-scan cyclic voltammetry (FSCV) is cyclic voltammetry with a very high scan rate (up to ). Application of high scan rate allows rapid acquisition of a voltammogram within several milliseconds and ensures high temporal resolution of this electroanalytical technique. An acquisition rate of 10 Hz is routinely employed.
FSCV in combination with carbon-fiber microelectrodes became a very popular method for detection of neurotransmitters, hormones and metabolites in biological systems. Initially, FSCV was successfully used for detection of electrochemically active biogenic amines release in chromaffin cells (adrenaline and noradrenaline), brain slices (5-HT, dopamine, norepinephrine) and in vivo in anesthetized or awake and behaving animals (dopamine). Further refinements of the method have enabled detection of 5-HT, HA, norepinephrine, adenosine, oxygen, pH changes in vivo in rats and mice as well as measurement of dopamine and serotonin concentration in fruit flies.
Principles of FSCV
In fast-scan cyclic voltammetry (FSCV), a small carbon fiber electrode (micrometer scale) is inserted into living cells, tissue, or extracellular space. The electrode is then used to quickly raise and lower the voltage in a triangular wave fashion. When the voltage is in the correct range (typically ±1 Volt) the compound of interest will be repeatedly oxidized and reduced. This will result in a movement of electrons in solution that will ultimately create a small alternating current (nano amps scale). By subtracting the background current created by the probe from the resulting current, it is possible to generate a voltage vs. current plot that is unique to each compound. Since the time scale of the voltage oscillations is known, this can then be used to calculate a plot of the current in solution as a function of time. The relative concentrations of the compound may be calculated as long as the number of electrons transferred in each oxidation and reduction reaction is known.
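A toy numerical sketch of the background-subtraction step described above (all waveform parameters, peak positions and amplitudes below are invented for illustration; they are not taken from the article or from any instrument):

```python
import numpy as np

samples = 1000
ramp_up = np.linspace(-0.4, 1.0, samples // 2)
voltage = np.concatenate([ramp_up, ramp_up[::-1]])            # one triangular sweep (V)
rng = np.random.default_rng(1)

def measured_current(concentration):
    capacitive = 50.0 * np.gradient(voltage)                  # large background ~ dV/dt
    faradaic = concentration * np.exp(-((voltage - 0.6) ** 2) / 0.01)
    faradaic[samples // 2:] *= -0.5                           # crude reduction wave on the down-sweep
    noise = rng.normal(0.0, 0.01, voltage.size)
    return capacitive + faradaic + noise

background = measured_current(concentration=0.0)              # scan recorded before release
scan = measured_current(concentration=1.0)                    # scan recorded during release
voltammogram = scan - background                              # background-subtracted voltammogram
print("peak oxidation current (arbitrary units):",
      round(float(voltammogram[: samples // 2].max()), 2))
```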
|
https://en.wikipedia.org/wiki/Leuckart%27s%20law
|
Leuckart's law is an empirical law in zoology that states that the size of the eye of an animal is related to its maximum speed of movement; fast-moving animals have larger eyes, after allowing for the effects of body mass. The hypothesis was proposed by Rudolf Leuckart in 1876, and in older literature it is usually referred to as Leuckart's ratio.
The principle was initially applied to birds; it has also been applied to mammals.
Criticism
A study of 88 bird species, published in 2011, found no useful correlation between flight speed and eye size.
|
https://en.wikipedia.org/wiki/RNDIS
|
The Remote Network Driver Interface Specification (RNDIS) is a Microsoft proprietary protocol used mostly on top of USB. It provides a virtual Ethernet link to most versions of the Windows, Linux, and FreeBSD operating systems. Multiple revisions of a partial RNDIS specification are available from Microsoft, but Windows implementations have been observed to issue requests not included in that specification, and to have undocumented constraints.
The protocol is tightly coupled to Microsoft's programming interfaces and models, most notably the Network Driver Interface Specification (NDIS), which are alien to operating systems other than Windows. This complicates implementing RNDIS on non-Microsoft operating systems, but Linux, FreeBSD, NetBSD and OpenBSD implement RNDIS natively.
The USB Implementers Forum (USB-IF) defines at least three non-proprietary USB communications device class (USB CDC) protocols with comparable "virtual Ethernet" functionality; one of them (CDC-ECM) predates RNDIS and is widely used for interoperability with non-Microsoft operating systems, but does not work with Windows.
Most versions of Android include RNDIS USB functionality. For example, Samsung smartphones have the capability and use RNDIS over USB to operate as a virtual Ethernet card that will connect the host PC to the mobile or Wi-Fi network in use by the phone, effectively working as a mobile broadband modem or a wireless card, for mobile hotspot tethering.
Controversy
In 2022 it was suggested that support for RNDIS be removed from Linux, on the grounds that it is inherently and uncorrectably insecure in the presence of untrusted USB devices.
See also
Ethernet over USB
Qualcomm MSM Interface - A Qualcomm proprietary alternative
|
https://en.wikipedia.org/wiki/Problem%20solving%20environment
|
A problem solving environment (PSE) is a complete, integrated and specialised piece of computer software for solving one class of problems, combining automated problem-solving methods with human-oriented tools for guiding the problem resolution. A PSE may also assist users in formulating problems, selecting algorithms, simulating numerical values, and viewing and analysing results.
Purpose of PSE
Many PSEs were introduced in the 1990s. They use the language of the respective field and often employ modern graphical user interfaces. The goal is to make the software easy to use for specialists in fields other than computer science. PSEs are available for generic problems like data visualization or large systems of equations and for narrow fields of science or engineering like gas turbine design.
History
The first problem solving environments appeared a few years after the release of Fortran and Algol 60. It was thought that such systems, with their high-level languages, would eliminate the need for professional programmers; surprisingly, however, PSEs were accepted, and scientists used them to write their own programs.
The problem solving environment for parallel scientific computation was introduced in 1960, as the first organised collection of software with minor standardisation. In 1970, PSE research initially focused on providing a higher-level programming language than Fortran, and libraries and plotting packages appeared. Library development continued, and computational packages and graphical systems for data visualisation emerged. By the 1990s, hypertext and point-and-click interfaces had moved towards inter-operability. Finally, a "software parts" industry came into existence.
In recent decades, many PSEs have been developed to solve problems and to support users from different categories, including education, general programming, CSE software learning, job executing
|
https://en.wikipedia.org/wiki/List%20of%20algebraic%20geometry%20topics
|
This is a list of algebraic geometry topics, by Wikipedia page.
Classical topics in projective geometry
Affine space
Projective space
Projective line, cross-ratio
Projective plane
Line at infinity
Complex projective plane
Complex projective space
Plane at infinity, hyperplane at infinity
Projective frame
Projective transformation
Fundamental theorem of projective geometry
Duality (projective geometry)
Real projective plane
Real projective space
Segre embedding of a product of projective spaces
Rational normal curve
Algebraic curves
Conics, Pascal's theorem, Brianchon's theorem
Twisted cubic
Elliptic curve, cubic curve
Elliptic function, Jacobi's elliptic functions, Weierstrass's elliptic functions
Elliptic integral
Complex multiplication
Weil pairing
Hyperelliptic curve
Klein quartic
Modular curve
Modular equation
Modular function
Modular group
Supersingular primes
Fermat curve
Bézout's theorem
Brill–Noether theory
Genus (mathematics)
Riemann surface
Riemann–Hurwitz formula
Riemann–Roch theorem
Abelian integral
Differential of the first kind
Jacobian variety
Generalized Jacobian
Moduli of algebraic curves
Hurwitz's theorem on automorphisms of a curve
Clifford's theorem on special divisors
Gonality of an algebraic curve
Weil reciprocity law
Algebraic geometry codes
Algebraic surfaces
Enriques–Kodaira classification
List of algebraic surfaces
Ruled surface
Cubic surface
Veronese surface
Del Pezzo surface
Rational surface
Enriques surface
K3 surface
Hodge index theorem
Elliptic surface
Surface of general type
Zariski surface
Algebraic geometry: classical approach
Algebraic variety
Hypersurface
Quadric (algebraic geometry)
Dimension of an algebraic variety
Hilbert's Nullstellensatz
Complete variety
Elimination theory
Gröbner basis
Projective variety
Quasiprojective variety
Canonical bundle
Complete intersection
Serre duality
Spaltenstein variety
Arithmetic genus, geometric genus, irregularity
Tangent space, Zariski tangent space
Function field of an algebraic variet
|
https://en.wikipedia.org/wiki/Mathematical%20Alphanumeric%20Symbols
|
Mathematical Alphanumeric Symbols is a Unicode block comprising styled forms of Latin and Greek letters and decimal digits that enable mathematicians to denote different notions with different letter styles. The letters in various fonts often have specific, fixed meanings in particular areas of mathematics. By providing uniformity over numerous mathematical articles and books, these conventions help to read mathematical formulas. These also may be used to differentiate between concepts that share a letter in a single problem.
Unicode now includes many such symbols (in the range U+1D400–U+1D7FF). The rationale behind this is that it enables design and usage of special mathematical characters (fonts) that include all necessary properties to differentiate from other alphanumerics, e.g. in mathematics an italic "𝐴" can have a different meaning from a roman letter "A". Unicode originally included a limited set of such letter forms in its Letterlike Symbols block before completing the set of Latin and Greek letter forms in this block beginning in version 3.1.
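A small sketch of how the block is laid out in practice, using the bold ranges (whose starting code points are U+1D400 for the capital letters, U+1D41A for the small letters and U+1D7CE for the digits); the bold ranges are contiguous, whereas some other styles have holes that are filled from the Letterlike Symbols block, so a complete converter would need per-style exception tables:

```python
# Map plain ASCII letters and digits to the MATHEMATICAL BOLD forms of this block.
# Bold is the safe style to demonstrate: unlike italic (where e.g. small "h" lives
# at U+210E in Letterlike Symbols), the bold ranges used below have no holes.
BOLD_CAPITAL_A, BOLD_SMALL_A, BOLD_DIGIT_ZERO = 0x1D400, 0x1D41A, 0x1D7CE

def to_math_bold(text: str) -> str:
    out = []
    for ch in text:
        if "A" <= ch <= "Z":
            out.append(chr(BOLD_CAPITAL_A + ord(ch) - ord("A")))
        elif "a" <= ch <= "z":
            out.append(chr(BOLD_SMALL_A + ord(ch) - ord("a")))
        elif "0" <= ch <= "9":
            out.append(chr(BOLD_DIGIT_ZERO + ord(ch) - ord("0")))
        else:
            out.append(ch)          # leave everything else untouched
    return "".join(out)

print(to_math_bold("Vector v1"))    # prints the mathematical-bold form of "Vector v1"
```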
Unicode expressly recommends that these characters not be used in general text as a substitute for presentational markup; the letters are specifically designed to be semantically different from each other. Unicode does include a set of normal serif letters in the set. Still they have found some usage on social media, for example by people who want a stylized user name, and in email spam, in an attempt to bypass filters.
All these letter shapes may be manipulated with MathML's attribute mathvariant.
The introduction date of some of the more commonly used symbols can be found in the Table of mathematical symbols by introduction date.
Tables of styled letters and digits
These tables show all styled forms of Latin and Greek letters, symbols and digits in the Unicode Standard, with the normal unstyled forms of these characters shown with a cyan background (the basic unstyled letters may be serif or sans-serif depen
|
https://en.wikipedia.org/wiki/List%20of%20complex%20and%20algebraic%20surfaces
|
This is a list of named algebraic surfaces, compact complex surfaces, and families thereof, sorted according to their Kodaira dimension following Enriques–Kodaira classification.
Kodaira dimension −∞
Rational surfaces
Projective plane
Quadric surfaces
Cone (geometry)
Cylinder
Ellipsoid
Hyperboloid
Paraboloid
Sphere
Spheroid
Rational cubic surfaces
Cayley nodal cubic surface, a certain cubic surface with 4 nodes
Cayley's ruled cubic surface
Clebsch surface or Klein icosahedral surface
Fermat cubic
Monkey saddle
Parabolic conoid
Plücker's conoid
Whitney umbrella
Rational quartic surfaces
Châtelet surfaces
Dupin cyclides, inversions of a cylinder, torus, or double cone in a sphere
Gabriel's horn
Right circular conoid
Roman surface or Steiner surface, a realization of the real projective plane in real affine space
Tori, surfaces of revolution generated by a circle about a coplanar axis
Other rational surfaces in space
Boy's surface, a sextic realization of the real projective plane in real affine space
Enneper surface, a nonic minimal surface
Henneberg surface, a minimal surface of degree 15
Bour's minimal surface, a surface of degree 16
Richmond surfaces, a family of minimal surfaces of variable degree
Other families of rational surfaces
Coble surfaces
Del Pezzo surfaces, surfaces with an ample anticanonical divisor
Hirzebruch surfaces, rational ruled surfaces
Segre surfaces, intersections of two quadrics in projective 4-space
Unirational surfaces of characteristic 0
Veronese surface, the Veronese embedding of the projective plane into projective 5-space
White surfaces, the blow-up of the projective plane at points by the linear system of degree- curves through those points
Bordiga surfaces, the White surfaces determined by families of quartic curves
Non-rational ruled surfaces
Class VII surfaces
Vanishing second Betti number:
Hopf surfaces
Inoue surfaces; several other families discovered by Inoue have also been called "
|
https://en.wikipedia.org/wiki/Chipset
|
In a computer system, a chipset is a set of electronic components on one or more ULSI integrated circuits known as a "Data Flow Management System" that manages the data flow between the processor, memory and peripherals. It is usually found on the motherboard of computers. Chipsets are usually designed to work with a specific family of microprocessors. Because it controls communications between the processor and external devices, the chipset plays a crucial role in determining system performance.
Computers
In computing, the term chipset commonly refers to a set of specialized chips on a computer's motherboard or an expansion card. In personal computers, the first chipset for the IBM PC AT of 1984 was the NEAT chipset developed by Chips and Technologies for the Intel 80286 CPU.
In home computers, game consoles, and arcade hardware of the 1980s and 1990s, the term chipset was used for the custom audio and graphics chips. Examples include the Original Amiga chipset and Sega's System 16 chipset.
In x86-based personal computers, the term chipset often refers to a specific pair of chips on the motherboard: the northbridge and the southbridge. The northbridge links the CPU to very high-speed devices, especially RAM and graphics controllers, and the southbridge connects to lower-speed peripheral buses (such as PCI or ISA). In many modern chipsets, the southbridge contains some on-chip integrated peripherals, such as Ethernet, USB, and audio devices.
Motherboards and their chipsets often come from different manufacturers. Manufacturers of chipsets for x86 motherboards include AMD, Intel, VIA Technologies and Zhaoxin.
In the 1990s, a major designer and manufacturer of chipsets was VLSI Technology in Tempe, Arizona. The early Apple Power Macintosh PCs (that used the Motorola 68030 and 68040) had chipsets from VLSI Technology. Some of their innovations included the integration of PCI bridge logic, the GraphiCore 2D graphics accelerator and direct support for synchronous
|
https://en.wikipedia.org/wiki/Relativistic%20heat%20conduction
|
Relativistic heat conduction refers to the modelling of heat conduction (and similar diffusion processes) in a way compatible with special relativity. In special (and general) relativity, the usual heat equation for non-relativistic heat conduction must be modified, as it leads to faster-than-light signal propagation. Relativistic heat conduction, therefore, encompasses a set of models for heat propagation in continuous media (solids, fluids, gases) that are consistent with relativistic causality, namely the principle that an effect must be within the light-cone associated to its cause. Any reasonable relativistic model for heat conduction must also be stable, in the sense that differences in temperature propagate both slower than light and are damped over time (this stability property is intimately intertwined with relativistic causality).
Parabolic model (non-relativistic)
Heat conduction in a Newtonian context is modelled by the Fourier equation, namely a parabolic partial differential equation of the kind:
where θ is temperature, t is time, α = k/(ρ c) is thermal diffusivity, k is thermal conductivity, ρ is density, and c is specific heat capacity. The Laplace operator, ∇², is defined in Cartesian coordinates as
This Fourier equation can be derived by substituting Fourier’s linear approximation of the heat flux vector, q, as a function of temperature gradient,
into the first law of thermodynamics
where the del operator, ∇, is defined in 3D as
It can be shown that this definition of the heat flux vector also satisfies the second law of thermodynamics,
where s is specific entropy and σ is entropy production. This mathematical model is inconsistent with special relativity: the Green function associated to the heat equation (also known as heat kernel) has support that extends outside the light-cone, leading to faster-than-light propagation of information. For example, consider a pulse of heat at the origin; then according to Fourier equation, it is felt (i.
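The formulas referred to in this passage are not shown above; written out in the passage's own symbols, the standard non-relativistic relations being described are (a reconstruction of the usual textbook forms, not a quotation of the article's rendering):

```latex
\frac{\partial \theta}{\partial t} = \alpha\,\nabla^{2}\theta,
\qquad
\nabla^{2} = \frac{\partial^{2}}{\partial x^{2}}
           + \frac{\partial^{2}}{\partial y^{2}}
           + \frac{\partial^{2}}{\partial z^{2}},
\qquad
\mathbf{q} = -k\,\nabla\theta,
\qquad
\rho c\,\frac{\partial \theta}{\partial t} = -\nabla\cdot\mathbf{q},
\qquad
\nabla = \left(\frac{\partial}{\partial x},\;
               \frac{\partial}{\partial y},\;
               \frac{\partial}{\partial z}\right).
```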
|
https://en.wikipedia.org/wiki/Dynamic%20bandwidth%20allocation
|
Dynamic bandwidth allocation is a technique by which traffic bandwidth in a shared telecommunications medium can be allocated on demand and fairly between different users of that bandwidth. This is a form of bandwidth management, and is essentially the same thing as statistical multiplexing, where the sharing of a link adapts in some way to the instantaneous traffic demands of the nodes connected to the link.
Dynamic bandwidth allocation takes advantage of several attributes of shared networks:
all users are typically not connected to the network at one time
even when connected, users are not transmitting data (or voice or video) at all times
most traffic occurs in bursts—there are gaps between packets of information that can be filled with other user traffic
Different network protocols implement dynamic bandwidth allocation in different ways. These methods are typically defined in standards developed by standards bodies such as the ITU, IEEE, FSAN, or IETF. One example is defined in the ITU G.983 specification for passive optical network (PON).
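As a toy illustration of the general idea (not the algorithm of any of the standards named above), the following sketch grants each node an equal share of the link, caps every grant at what the node actually requested, and re-offers the capacity left over by lightly loaded nodes to the remaining ones (max-min fair sharing):

```python
def max_min_fair(capacity, demands):
    """Toy demand-driven allocation: equal shares, capped at each node's demand,
    with leftover capacity re-shared among the still-unsatisfied nodes."""
    grants = {node: 0.0 for node in demands}
    unsatisfied = dict(demands)
    while unsatisfied and capacity > 1e-9:
        share = capacity / len(unsatisfied)
        capped = {node: d for node, d in unsatisfied.items() if d <= share}
        if not capped:                        # nobody is capped: split evenly and stop
            for node in unsatisfied:
                grants[node] += share
            break
        for node, demand in capped.items():   # satisfy the light users in full...
            grants[node] += demand
            capacity -= demand
            del unsatisfied[node]
        # ...then loop so the leftover is re-shared among the heavier users
    return grants

print(max_min_fair(100.0, {"A": 10.0, "B": 60.0, "C": 80.0}))
# {'A': 10.0, 'B': 45.0, 'C': 45.0}
```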
See also
Statistical multiplexing
Channel access method
Dynamic channel allocation
Reservation ALOHA (R-ALOHA)
Telecommunications techniques
Computer networking
Radio resource management
|
https://en.wikipedia.org/wiki/Substrate%20presentation
|
Substrate presentation is a biological process that activates a protein. The protein is sequestered away from its substrate and then activated by release and exposure of the protein to its substrate. A substrate is typically the substance on which an enzyme acts but can also be a protein surface to which a ligand binds. The substrate is the material acted upon. In the case of an interaction with an enzyme, the protein or organic substrate typically changes chemical form. Substrate presentation differs from allosteric regulation in that the enzyme need not change its conformation to begin catalysis. Substrate presentation is best described for nanoscopic distances (<100 nm).
Examples
Amyloid Precursor Protein
Amyloid precursor protein (APP) is cleaved by beta and gamma secretase to yield a 40-42 amino acid peptide responsible for beta amyloid plaques associated with Alzheimer's disease. The enzymes are regulated by substrate presentation. The substrate APP is palmitoylated and moves in and out of GM1 lipid rafts in response to astrocyte cholesterol. Cholesterol delivered by apolipoprotein E (ApoE) drives APP to associate with GM1 lipid rafts. When cholesterol is low, the protein traffics to the disordered region and is cleaved by alpha secretase to produce a non-amylogenic product. The enzymes do not appear to respond to cholesterol, only the substrate moves.
Hydrophobicity drives the partitioning of molecules. In the cell, this gives rise to compartmentalization within the cell and within cell membranes. For lipid rafts, palmitoylation regulates raft affinity for the majority of integral raft proteins. Raft association is, in turn, regulated by cholesterol signaling.
Phospholipase D2
Phospholipase D2 (PLD2) is a well-defined example of an enzyme activated by substrate presentation. The enzyme is palmitoylated, causing it to traffic to GM1 lipid domains or "lipid rafts". The substrate of phospholipase D is phosphatidylcholine (PC) which is unsaturated and is of low abundance in li
|
https://en.wikipedia.org/wiki/Diskless%20shared-root%20cluster
|
A diskless shared-root cluster is a way to manage several machines at the same time. Instead of each having its own operating system (OS) on its local disk, there is only one image of the OS available on a server, and all the nodes use the same image. (SSI cluster = single-system image)
The simplest way to achieve this is to use a NFS server, configured to host the generic boot image for the SSI cluster nodes. (pxe + dhcp + tftp + nfs)
To ensure that there is no single point of failure, the NFS export for the boot image should be hosted on a two-node cluster.
The architecture of a diskless computer cluster makes it possible to separate servers and storage array. The operating system as well as the actual reference data (userfiles, databases or websites) are stored on the attached storage system in a centralized manner. Any server that acts as a cluster node can easily be exchanged on demand.
The additional abstraction layer between storage system and computing power eases the scale out of the infrastructure. Most notably the storage capacity, the computing power and the network bandwidth can be scaled independent from one another.
A similar technology can be found in VMScluster (OpenVMS) and TruCluster (Tru64 UNIX).
The open-source implementation of a diskless shared-root cluster is known as Open-Sharedroot.
Literature
Marc Grimme, Mark Hlawatschek, Thomas Merz: Data sharing with a Red Hat GFS storage cluster
Marc Grimme, Mark Hlawatschek: Der Diskless Shared-root Cluster, German whitepaper (PDF file; 1.1 MB)
Kenneth W. Preslan: Red Hat GFS 6.1 – Administrator’s Guide
|
https://en.wikipedia.org/wiki/Euler%20product
|
In number theory, an Euler product is an expansion of a Dirichlet series into an infinite product indexed by prime numbers. The original such product was given for the sum of all positive integers raised to a certain power as proven by Leonhard Euler. This series and its continuation to the entire complex plane would later become known as the Riemann zeta function.
Definition
In general, if is a bounded multiplicative function, then the Dirichlet series
is equal to
where the product is taken over prime numbers , and is the sum
In fact, if we consider these as formal generating functions, the existence of such a formal Euler product expansion is a necessary and sufficient condition that be multiplicative: this says exactly that is the product of the whenever factors as the product of the powers of distinct primes .
An important special case is that in which is totally multiplicative, so that is a geometric series. Then
as is the case for the Riemann zeta function, where , and more generally for Dirichlet characters.
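Since the formulas themselves are not reproduced above, here they are in standard notation (a reconstruction of the usual forms rather than the article's own rendering):

```latex
\sum_{n \ge 1} \frac{a(n)}{n^{s}} \;=\; \prod_{p\ \mathrm{prime}} P(p,s),
\qquad
P(p,s) \;=\; \sum_{k \ge 0} \frac{a(p^{k})}{p^{ks}} ;
% and, when a is totally multiplicative, each local factor is a geometric series:
P(p,s) \;=\; \frac{1}{1 - a(p)\,p^{-s}},
\qquad\text{so that}\qquad
\zeta(s) \;=\; \sum_{n \ge 1} n^{-s} \;=\; \prod_{p} \bigl(1 - p^{-s}\bigr)^{-1}.
```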
Convergence
In practice all the important cases are such that the infinite series and infinite product expansions are absolutely convergent in some region
that is, in some right half-plane in the complex numbers. This already gives some information, since the infinite product, to converge, must give a non-zero value; hence the function given by the infinite series is not zero in such a half-plane.
In the theory of modular forms it is typical to have Euler products with quadratic polynomials in the denominator here. The general Langlands philosophy includes a comparable explanation of the connection of polynomials of degree , and the representation theory for .
Examples
The following examples will use the notation for the set of all primes, that is:
The Euler product attached to the Riemann zeta function , also using the sum of the geometric series, is
while for the Liouville function , it is
Using their reciprocals, two Euler produc
|
https://en.wikipedia.org/wiki/DNA%20laddering
|
DNA laddering is a feature that can be observed when DNA fragments, resulting from apoptotic DNA fragmentation, are visualized after separation by gel electrophoresis; the separated fragments produce a characteristic ladder pattern. It was first described in 1980 by Andrew Wyllie at the University of Edinburgh medical school. DNA fragments can also be detected in cells that have undergone necrosis.
DNA degradation
DNA laddering is a distinctive feature of DNA degraded by caspase-activated DNase (CAD), which is a key event during apoptosis. CAD cleaves genomic DNA at internucleosomal linker regions, resulting in DNA fragments that are multiples of 180–185 base-pairs in length. Separation of the fragments by agarose gel electrophoresis and subsequent visualization, for example by ethidium bromide staining, results in a characteristic "ladder" pattern. A simple method of selective extraction of fragmented DNA from apoptotic cells without the presence of high molecular weight DNA sections, generating the laddering pattern, utilizes pretreatment of cells in ethanol.
Apoptosis and necrosis
While most of the morphological features of apoptotic cells are short-lived, DNA laddering can be used as final state read-out method and has therefore become a reliable method to distinguish apoptosis from necrosis. DNA laddering can also be used to see if cells underwent apoptosis in the presence of a virus. This is useful because it can help determine the effects a virus has on a cell.
DNA laddering can only be used to detect apoptosis during the later stages of apoptosis. This is due to DNA fragmentation taking place in a later stage of the apoptosis process. DNA laddering is used to test for apoptosis of many cells, and is not accurate at testing for only a few cells that committed apoptosis. To enhance the accuracy in testing for apoptosis, other assays are used along with DNA laddering such as TEM and TUNEL. With recen
|
https://en.wikipedia.org/wiki/FTOS
|
FTOS or Force10 Operating System is the firmware family used on Force10 Ethernet switches. It provides functionality similar to Cisco's NX-OS or Juniper's Junos. FTOS 10 runs on Debian.
As part of a re-branding strategy by Dell, FTOS will be renamed Dell Networking Operating System (DNOS) 9.x or above, while the legacy PowerConnect switches will use DNOS 6.x: see the separate article on DNOS.
Hardware Abstraction Layer
Three of the four product families from Dell Force10 use Broadcom Trident+ ASICs, but the company does not use the APIs from Broadcom: the developers at Force10 have written their own Hardware Abstraction Layer (HAL) so that FTOS can run on different hardware platforms with minimal impact on the firmware. Currently three of the four F10 switch families are based on the Broadcom Trident+ (while the fourth, the E-series, runs on self-developed ASICs); if the product developers want or need to use different hardware for new products, they only need to develop a HAL for that new hardware and the same firmware can run on it. This keeps the company flexible and not dependent on a specific hardware vendor, since it can use both third-party and self-designed ASICs and chipsets.
The human interface in FTOS, that is, the way network administrators configure and monitor their switches, is based on NetBSD, an implementation often used in embedded networking systems. NetBSD is a very stable, open-source OS running on many different hardware platforms. Choosing a proven technology with extended TCP functionality built into the core of the OS reduces the time needed to develop new products or to extend FTOS with new features.
Modular setup
FTOS is also modular where different parts of the OS run independently from each other within one switch: if one process would fail the impact on other processes on the switch are limited. This modular setup is also taken to the hardware level in some product-lines where a routing-module has three se
|
https://en.wikipedia.org/wiki/2%20%2B%202%20%3D%205
|
"Two plus two equals five" (2 + 2 = 5) is a mathematically incorrect phrase used in the 1949 dystopian novel Nineteen Eighty-Four by George Orwell. It appears as a possible statement of Ingsoc (English Socialism) philosophy, like the dogma "War is Peace", which the Party expects the citizens of Oceania to believe is true. In writing his secret diary in the year 1984, the protagonist Winston Smith ponders if the Inner Party might declare that "two plus two equals five" is a fact. Smith further ponders whether or not belief in such a consensus reality makes the lie true.
About the falsity of "two plus two equals five", in the Ministry of Love, the interrogator O'Brien tells the thought criminal Smith that control over physical reality is unimportant to the Party, provided the citizens of Oceania subordinate their real-world perceptions to the political will of the Party; and that, by way of doublethink: "Sometimes, Winston. [Sometimes it is four fingers.] Sometimes they are five. Sometimes they are three. Sometimes they are all of them at once".
As a theme and as a subject in the arts, the anti-intellectual slogan 2 + 2 = 5 pre-dates Orwell and has produced literature, such as Deux et deux font cinq (Two and Two Make Five), written in 1895 by Alphonse Allais, which is a collection of absurdist short stories; and the 1920 imagist art manifesto 2 × 2 = 5 by the poet Vadim Shershenevich, in the 20th century.
Self-evident truth and self-evident falsehood
In the 17th century, in the Meditations on First Philosophy, in which the Existence of God and the Immortality of the Soul are Demonstrated (1641), René Descartes said that the standard of truth is self-evidence of clear and distinct ideas. Despite the logician Descartes' understanding of "self-evident truth", the philosopher Descartes considered that the self-evident truth of "two plus two equals four" might not exist beyond the human mind; that there might not exist correspondence between abstract ideas and concret
|
https://en.wikipedia.org/wiki/Stamped%20circuit%20board
|
A stamped circuit board (SCB) is used to mechanically support and electrically connect electronic components using conductive pathways, tracks or traces etched from copper sheets laminated onto a non-conductive substrate. This technology is used for small circuits, for instance in the production of LEDs.
Similar to printed circuit boards this layer structure may comprise glass-fibre reinforced epoxy resin and copper. Basically, in the case of LED substrates three variations are possible:
the PCB (printed circuit board),
plastic-injection molding and
the SCB.
Using the SCB technology it is possible to structure and laminate the most widely differing material combinations in a reel-to-reel production process. As the layers are structured separately, improved design concepts are able to be implemented. Consequently, a far better and quicker heat dissipation from within the chip is achieved.
Production
Both the plastic and the metal are initially processed on separate reels, i.e., the materials are individually structured by stamping ("brought into form") in accordance with the requirements and then merged.
Advantages
The engineering, and in particular the choice of substrate, comes down to the particular application, the module design/substrate assembly, and the type and thickness of the material involved.
Taking these parameters it is possible to attain a good thermal management by using SCB technology, because rapid heat dissipation from beneath the chip means a longer service life for the system. Furthermore, SCB technology allows the material to be chosen to correspond to the pertinent requirements and then to optimize the design to arrive at a “perfect fit”.
|
https://en.wikipedia.org/wiki/Factorial%20code
|
Most real world data sets consist of data vectors whose individual components are not statistically independent. In other words, knowing the value of an element will provide information about the value of elements in the data vector. When this occurs, it can be desirable to create a factorial code of the data, i.e., a new vector-valued representation of each data vector such that it gets uniquely encoded by the resulting code vector (loss-free coding), but the code components are statistically independent.
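Operationally, "statistically independent code components" means that the joint distribution of the code factors into the product of its marginals, or equivalently that the sum of the component entropies equals the joint entropy (the gap between the two is the quantity driven towards zero in the entropy-based approach discussed below). A small sketch on made-up binary data:

```python
import numpy as np

def total_correlation(codes):
    """Sum of the component entropies minus the joint entropy, in bits.
    Zero exactly when the code components are statistically independent,
    i.e. when the code is factorial. `codes` is (n_samples, n_components)."""
    def entropy(columns):
        _, counts = np.unique(columns, axis=0, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())
    joint = entropy(codes)
    marginals = sum(entropy(codes[:, [j]]) for j in range(codes.shape[1]))
    return marginals - joint

rng = np.random.default_rng(0)
independent = rng.integers(0, 2, size=(10_000, 3))        # three independent bits
redundant = np.column_stack([independent[:, 0]] * 3)      # three copies of one bit
print(round(total_correlation(independent), 3))           # close to 0.0: factorial
print(round(total_correlation(redundant), 3))             # close to 2.0: highly redundant
```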
Later supervised learning usually works much better when the raw input data is first translated into such a factorial code. For example, suppose the final goal is to classify images with highly redundant pixels. A naive Bayes classifier will assume the pixels are statistically independent random variables and therefore fail to produce good results. If the data are first encoded in a factorial way, however, then the naive Bayes classifier will achieve its optimal performance (compare Schmidhuber et al. 1996).
To create factorial codes, Horace Barlow and co-workers suggested to minimize the sum of the bit entropies of the code components of binary codes (1989). Jürgen Schmidhuber (1992) re-formulated the problem in terms of predictors and binary feature detectors, each receiving the raw data as an input. For each detector there is a predictor that sees the other detectors and learns to predict the output of its own detector in response to the various input vectors or images. But each detector uses a machine learning algorithm to become as unpredictable as possible. The global optimum of this objective function corresponds to a factorial code represented in a distributed fashion across the outputs of the feature detectors.
Painsky, Rosset and Feder (2016, 2017) further studied this problem in the context of independent component analysis over finite alphabet sizes. Through a series of theorems they show that the factorial coding problem can be accurately solved
|
https://en.wikipedia.org/wiki/Promiscuous%20traffic
|
In computer networking, promiscuous traffic, or cross-talking, describes situations where a receiver configured to receive a particular data stream receives that data stream and others. Promiscuous traffic should not be confused with the promiscuous mode, which is a network card configuration.
In particular, in multicast socket networking, an example of promiscuous traffic is when a socket configured to listen on a specific multicast address group A with a specific port P, noted A:P, receives traffic from A:P but also from another multicast source. For instance, a socket is configured to receive traffic from the multicast group address 234.234.7.70, port 36000 (noted 234.234.7.70:36000), but receives traffic from both 234.234.7.70:36000 and 234.234.7.71:36000.
This type of promiscuous traffic, due to a lack of address filtering, has been a recurring issue with certain Unix and Linux kernels, but has never been reported on Microsoft Windows operating systems post Windows XP.
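A sketch of the commonly recommended mitigation on Linux and most Unix-like systems (reusing the group and port from the example above): bind the socket to the multicast group address itself rather than to the wildcard address, so that datagrams addressed to other groups sharing the port are dropped by the kernel. The exact behaviour of wildcard binds is kernel-dependent, which is why the issue keeps resurfacing.

```python
import socket
import struct

GROUP, PORT = "234.234.7.70", 36000        # the example group and port from the text

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)

# Binding to the group address (instead of "" / INADDR_ANY) asks the kernel to
# deliver only datagrams whose destination is this group, avoiding cross-talk
# from e.g. 234.234.7.71:36000. Note that this bind is not portable to Windows.
sock.bind((GROUP, PORT))

membership = struct.pack("4s4s", socket.inet_aton(GROUP), socket.inet_aton("0.0.0.0"))
sock.setsockopt(socket.IPPROTO_IP, socket.IP_ADD_MEMBERSHIP, membership)

data, sender = sock.recvfrom(65535)
print(f"received {len(data)} bytes from {sender}")
```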
Another form of promiscuous traffic occurs when two different applications happen to listen on the same group address. As the former type of promiscuous traffic (lack of address filtering) can be considered a bug at the operating system level, the latter reflects global configuration issues.
|
https://en.wikipedia.org/wiki/Classification%20of%20Clifford%20algebras
|
In abstract algebra, in particular in the theory of nondegenerate quadratic forms on vector spaces, the structures of finite-dimensional real and complex Clifford algebras for a nondegenerate quadratic form have been completely classified. In each case, the Clifford algebra is algebra isomorphic to a full matrix ring over R, C, or H (the quaternions), or to a direct sum of two copies of such an algebra, though not in a canonical way. Below it is shown that distinct Clifford algebras may be algebra-isomorphic, as is the case of Cl1,1(R) and Cl2,0(R), which are both isomorphic as rings to the ring of two-by-two matrices over the real numbers.
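To make the example explicit, one standard (but not unique) choice of generators realising both algebras inside the same matrix ring M2(R) is:

```latex
% Cl_{2,0}(R):  e_1^2 = e_2^2 = +1,\quad e_1 e_2 = -e_2 e_1
e_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
e_2 = \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix};
% Cl_{1,1}(R):  e_1^2 = +1,\ e_2^2 = -1,\quad e_1 e_2 = -e_2 e_1
e_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}, \qquad
e_2 = \begin{pmatrix} 0 & 1 \\ -1 & 0 \end{pmatrix}.
% In both cases \{1,\, e_1,\, e_2,\, e_1 e_2\} is a vector-space basis of M_2(R),
% exhibiting the ring isomorphisms mentioned above.
```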
The significance of this result is that the additional structure on a Clifford algebra relative to the "underlying" associative algebra — namely, the structure given by the grade involution automorphism and reversal anti-automorphism (and their composition, the Clifford conjugation) — is in general an essential part of its definition, not a procedural artifact of its construction as the quotient of a tensor algebra by an ideal. The category of Clifford algebras is not just a selection from the category of matrix rings, picking out those in which the ring product can be constructed as the Clifford product for some vector space and quadratic form. With few exceptions, "forgetting" the additional structure (in the category theory sense of a forgetful functor) is not reversible.
Continuing the example above: Cl1,1(R) and Cl2,0(R) share the same associative algebra structure, isomorphic to (and commonly denoted as) the matrix algebra M2(R). But they are distinguished by different choices of grade involution — of which two-dimensional subring, closed under the ring product, to designate as the even subring — and therefore of which of the various anti-automorphisms of M2(R) can accurately represent the reversal anti-automorphism of the Clifford algebra. These distinguished (anti-)automorphisms are structures on the tensor algebra whi
|
https://en.wikipedia.org/wiki/Electronic%20oscillation
|
Electronic oscillation is a repeating cyclical variation in voltage or current in an electrical circuit, resulting in a periodic waveform. The frequency of the oscillation in hertz is the number of times the cycle repeats per second.
The recurrence may be in the form of a varying voltage or a varying current. The waveform may be sinusoidal or some other shape when its magnitude is plotted against time. Electronic oscillation may be intentionally caused, as in devices designed as oscillators, or it may be the result of unintentional positive feedback from the output of an electronic device to its input. The latter appears often in feedback amplifiers (such as operational amplifiers) that do not have sufficient gain or phase margins. In this case, the oscillation often interferes with or compromises the amplifier's intended function, and is known as parasitic oscillation.
|
https://en.wikipedia.org/wiki/Automatic%20system%20recovery
|
Automatic system recovery is a device or process that detects a computer failure and attempts recovery. The device may make use of a Watchdog timer. This may also refer to a Microsoft recovery technology by the same name.
External links
HP ProLiant, Blade - Automatic Server Recovery (ASR), archived from the original on archive.org
How does ASR (Automatic System Recovery) detect server hang?
Embedded systems
|
https://en.wikipedia.org/wiki/Point-set%20registration
|
In computer vision, pattern recognition, and robotics, point-set registration, also known as point-cloud registration or scan matching, is the process of finding a spatial transformation (e.g., scaling, rotation and translation) that aligns two point clouds. The purpose of finding such a transformation includes merging multiple data sets into a globally consistent model (or coordinate frame), and mapping a new measurement to a known data set to identify features or to estimate its pose. Raw 3D point cloud data are typically obtained from Lidars and RGB-D cameras. 3D point clouds can also be generated from computer vision algorithms such as triangulation, bundle adjustment, and more recently, monocular image depth estimation using deep learning. For 2D point set registration used in image processing and feature-based image registration, a point set may be 2D pixel coordinates obtained by feature extraction from an image, for example corner detection. Point cloud registration has extensive applications in autonomous driving, motion estimation and 3D reconstruction, object detection and pose estimation, robotic manipulation, simultaneous localization and mapping (SLAM), panorama stitching, virtual and augmented reality, and medical imaging.
As a special case, registration of two point sets that only differ by a 3D rotation (i.e., there is no scaling and translation), is called the Wahba Problem and also related to the orthogonal procrustes problem.
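For the rigid (rotation-plus-translation) case with known point-to-point correspondences, the least-squares solution has the closed form used in the Kabsch / orthogonal Procrustes construction; a sketch with invented data (real registration pipelines such as ICP alternate this step with correspondence search):

```python
import numpy as np

def rigid_register(model, scene):
    """Least-squares R (rotation) and t (translation) with scene ~ model @ R.T + t,
    assuming the i-th model point corresponds to the i-th scene point (SVD solution)."""
    m_mean, s_mean = model.mean(axis=0), scene.mean(axis=0)
    H = (model - m_mean).T @ (scene - s_mean)        # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))           # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = s_mean - R @ m_mean
    return R, t

rng = np.random.default_rng(0)
model = rng.normal(size=(50, 3))                     # invented 3D "model" points
true_R, _ = np.linalg.qr(rng.normal(size=(3, 3)))    # random orthogonal matrix
true_R *= np.linalg.det(true_R)                      # force det = +1 (a proper rotation)
scene = model @ true_R.T + np.array([0.5, -1.0, 2.0])
R, t = rigid_register(model, scene)
print(np.allclose(R, true_R), np.allclose(t, [0.5, -1.0, 2.0]))   # True True
```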
Formulation
The problem may be summarized as follows:
Let be two finite size point sets in a finite-dimensional real vector space , which contain and points respectively (e.g., recovers the typical case of when and are 3D point sets). The problem is to find a transformation to be applied to the moving "model" point set such that the difference (typically defined in the sense of point-wise Euclidean distance) between and the static "scene" set is minimized. In other words, a mapping from to is desired which yie
|
https://en.wikipedia.org/wiki/List%20of%20curves%20topics
|
This is an alphabetical index of articles related to curves used in mathematics.
Acnode
Algebraic curve
Arc
Asymptote
Asymptotic curve
Barbier's theorem
Bézier curve
Bézout's theorem
Birch and Swinnerton-Dyer conjecture
Bitangent
Bitangents of a quartic
Cartesian coordinate system
Caustic
Cesàro equation
Chord (geometry)
Cissoid
Circumference
Closed timelike curve
concavity
Conchoid (mathematics)
Confocal
Contact (mathematics)
Contour line
Crunode
Cubic Hermite curve
Curvature
Curve orientation
Curve fitting
Curve-fitting compaction
Curve of constant width
Curve of pursuit
Curves in differential geometry
Cusp
Cyclogon
De Boor algorithm
Differential geometry of curves
Eccentricity (mathematics)
Elliptic curve cryptography
Envelope (mathematics)
Fenchel's theorem
Genus (mathematics)
Geodesic
Geometric genus
Great-circle distance
Harmonograph
Hedgehog (curve)
Hilbert's sixteenth problem
Hyperelliptic curve cryptography
Inflection point
Inscribed square problem
intercept, y-intercept, x-intercept
Intersection number
Intrinsic equation
Isoperimetric inequality
Jordan curve
Jordan curve theorem
Knot
Limit cycle
Linking coefficient
List of circle topics
Loop (knot)
M-curve
Mannheim curve
Meander (mathematics)
Mordell conjecture
Natural representation
Opisometer
Orbital elements
Osculating circle
Osculating plane
Osgood curve
Parallel (curve)
Parallel transport
Parametric curve
Bézier curve
Spline (mathematics)
Hermite spline
Beta spline
B-spline
Higher-order spline
NURBS
Perimeter
Pi
Plane curve
Pochhammer contour
Polar coordinate system
Prime geodesic
Projective line
Ray
Regular parametric representation
Reuleaux triangle
Ribaucour curve
Riemann–Hurwitz formula
Riemann–Roch theorem
Riemann surface
Road curve
Sato–Tate conjecture
secant
Singular solution
Sinuosity
Slope
Space curve
Spinode
Square wheel
Subtangent
Tacnode
Tangent
Tangent space
Tangential angle
Tor
|
https://en.wikipedia.org/wiki/Serotype
|
A serotype or serovar is a distinct variation within a species of bacteria or virus or among immune cells of different individuals. These microorganisms, viruses, or cells are classified together based on their surface antigens, allowing the epidemiologic classification of organisms to the subspecies level. A group of serovars with common antigens is called a serogroup or sometimes serocomplex.
Serotyping often plays an essential role in determining species and subspecies. The Salmonella genus of bacteria, for example, has been determined to have over 2600 serotypes. Vibrio cholerae, the species of bacteria that causes cholera, has over 200 serotypes, based on cell antigens. Only two of them have been observed to produce the potent enterotoxin that results in cholera: O1 and O139.
Serotypes were discovered by the American microbiologist Rebecca Lancefield in 1933.
Role in organ transplantation
The immune system is capable of discerning a cell as being 'self' or 'non-self' according to that cell's serotype. In humans, that serotype is largely determined by human leukocyte antigen (HLA), the human version of the major histocompatibility complex. Cells determined to be non-self are usually recognized by the immune system as foreign, causing an immune response, such as hemagglutination. Serotypes differ widely between individuals; therefore, if cells from one human (or animal) are introduced into another random human, those cells are often determined to be non-self because they do not match the self-serotype. For this reason, transplants between genetically non-identical humans often induce a problematic immune response in the recipient, leading to transplant rejection. In some situations, this effect can be reduced by serotyping both recipient and potential donors to determine the closest HLA match.
Human leukocyte antigens
Serotyping of Salmonella
The Kauffman–White classification scheme is the basis for naming the manifold serovars of Salmonella. To date, more
|
https://en.wikipedia.org/wiki/Security%20of%20the%20Java%20software%20platform
|
The Java platform provides a number of features designed for improving the security of Java applications. This includes enforcing runtime constraints through the use of the Java Virtual Machine (JVM), a security manager that sandboxes untrusted code from the rest of the operating system, and a suite of security APIs that Java developers can utilise. Despite this, criticism has been directed at the programming language, and Oracle, due to an increase in malicious programs that revealed security vulnerabilities in the JVM, which were subsequently not properly addressed by Oracle in a timely manner.
Security features
The JVM
The binary form of programs running on the Java platform is not native machine code but an intermediate bytecode. The JVM performs verification on this bytecode before running it to prevent the program from performing unsafe operations such as branching to incorrect locations, which may contain data rather than instructions. It also allows the JVM to enforce runtime constraints such as array bounds checking. This means that Java programs are significantly less likely to suffer from memory safety flaws such as buffer overflow than programs written in languages such as C which do not provide such memory safety guarantees.
The platform does not allow programs to perform certain potentially unsafe operations such as pointer arithmetic or unchecked type casts. It manages memory allocation and initialization and provides automatic garbage collection which in many cases (but not all) relieves the developer from manual memory management. This contributes to type safety and memory safety.
Security manager
The platform provides a security manager which allows users to run untrusted bytecode in a "sandboxed" environment designed to protect them from malicious or poorly written software by preventing the untrusted code from accessing certain platform features and APIs. For example, untrusted code might be prevented from reading or writing files on the loca
|
https://en.wikipedia.org/wiki/Foliicolous
|
Foliicolous refers to the growth habit of certain lichens, algae, and fungi that prefer to grow on the leaves of vascular plants. There have been about 700 species of foliicolous lichens identified, most of which are found in the tropics.
|
https://en.wikipedia.org/wiki/Digital%20storage%20oscilloscope
|
A digital storage oscilloscope (DSO) is an oscilloscope which stores and analyses the input signal digitally rather than using analog techniques. It is now the most common type of oscilloscope in use because of the advanced trigger, storage, display and measurement features which it typically provides.
The input analogue signal is sampled and then converted into a digital record of the amplitude of the signal at each sample time. The sampling frequency should be not less than the Nyquist rate to avoid aliasing. These digital values are then turned back into an analogue signal for display on a cathode ray tube (CRT), or transformed as needed for the various possible types of output—liquid crystal display, chart recorder, plotter or network interface.
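A toy numerical illustration of the Nyquist requirement (hypothetical numbers): a 9 kHz sine captured at only 10 kS/s produces exactly the same sample record as a 1 kHz sine of opposite phase, so the stored waveform is ambiguous.

```python
import numpy as np

fs = 10_000                                    # sample rate: 10 kS/s
t = np.arange(0.0, 0.002, 1.0 / fs)            # 2 ms of sample instants
captured = np.sin(2 * np.pi * 9_000 * t)       # 9 kHz input, above fs/2 = 5 kHz
alias = np.sin(2 * np.pi * 1_000 * t)          # 9 kHz folds down to |9 - 10| = 1 kHz
print(np.allclose(captured, -alias))           # True: the samples cannot tell them apart
```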
Digital storage oscilloscope costs vary widely; bench-top self-contained instruments (complete with displays) start at or even less, with high-performance models selling for tens of thousands of dollars. Small, pocket-size models, limited in function, may retail for as little as US$50.
Comparison with analog storage
The principal advantage over analog storage is that the stored traces are as bright, as sharply defined, and written as quickly as non-stored traces. Traces can be stored indefinitely or written out to some external data storage device and reloaded. This allows, for example, comparison of an acquired trace from a system under test with a standard trace acquired from a known-good system. Many models can display the waveform prior to the trigger signal.
Digital oscilloscopes usually analyze waveforms and provide numerical values as well as visual displays. These values typically include averages, maxima and minima, root mean square (RMS) and frequencies. They may be used to capture transient signals when operated in a single sweep mode, without the brightness and writing speed limitations of an analog storage oscilloscope.
The displayed trace can be manipulated after acquisition; a portion of t
|
https://en.wikipedia.org/wiki/Tertiary%20review
|
In software engineering, a tertiary review is a systematic review of systematic reviews. It is also referred to as a tertiary study in the software engineering literature. However, Umbrella review is the term more commonly used in medicine.
Kitchenham et al. suggest that methodologically there is no difference between a systematic review and a tertiary review. However, as the software engineering community has started performing tertiary reviews new concerns unique to tertiary reviews have surfaced. These include the challenge of quality assessment of systematic reviews, search validation and the additional risk of double counting.
Examples of Tertiary reviews in software engineering literature
Test quality
Machine Learning
Test-driven development
|
https://en.wikipedia.org/wiki/BiCMOS
|
Bipolar CMOS (BiCMOS) is a semiconductor technology that integrates two semiconductor technologies, those of the bipolar junction transistor and the CMOS (complementary metal–oxide–semiconductor) logic gate, into a single integrated circuit. In more recent times the bipolar processes have been extended to include high mobility devices using silicon–germanium junctions.
Bipolar transistors offer high speed, high gain, and low output impedance with relatively high power consumption per device, which are excellent properties for high-frequency analog amplifiers including low noise radio frequency (RF) amplifiers that only use a few active devices, while CMOS technology offers high input impedance and is excellent for constructing large numbers of low-power logic gates. In a BiCMOS process the doping profile and other process features may be tilted to favour either the CMOS or the bipolar devices. For example GlobalFoundries offer a basic 180 nm BiCMOS7WL process and several other BiCMOS processes optimized in various ways. These processes also include steps for the deposition of precision resistors, and high Q RF inductors and capacitors on-chip, which are not needed in a "pure" CMOS logic design.
BiCMOS is aimed at mixed-signal ICs, such as ADCs and complete software radio systems on a chip that need amplifiers, analog power management circuits, and logic gates on chip. BiCMOS has some advantages in providing digital interfaces. BiCMOS circuits use the characteristics of each type of transistor most appropriately. Generally this means that high-current circuits such as on-chip power regulators use metal–oxide–semiconductor field-effect transistors (MOSFETs) for efficient control, the 'sea of logic' uses conventional CMOS structures, while specialized very-high-performance circuits such as ECL dividers and LNAs use bipolar devices. Examples include RF oscillators, bandgap-based references and low-noise circuits.
The Pentium, Pentium Pro, and SuperS
|
https://en.wikipedia.org/wiki/Delay%20equalization
|
In signal processing, delay equalization corresponds to adjusting the relative phases of different frequencies to achieve a constant group delay, typically by adding an all-pass filter in series with an uncompensated filter. Machine-learning techniques have also been applied to the design of such filters.
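As a rough illustration of the idea (not taken from the article), the sketch below uses SciPy to design a low-pass filter, inspect its group delay, and cascade it with a first-order all-pass section. The all-pass leaves the magnitude response untouched while altering the phase, which is the degree of freedom a delay equalizer exploits; the filter order, cutoff and all-pass coefficient here are arbitrary example values, and a real equalizer would optimize several all-pass sections to flatten the total delay.

import numpy as np
from scipy import signal

# Uncompensated filter: a 4th-order Butterworth low-pass (normalized cutoff 0.3).
b_lp, a_lp = signal.butter(4, 0.3)
w, gd_lp = signal.group_delay((b_lp, a_lp))      # group delay in samples

# First-order all-pass section H(z) = (c + z^-1) / (1 + c*z^-1); |H(w)| = 1 for all w.
c = 0.5                                          # arbitrary coefficient for the sketch
b_ap, a_ap = [c, 1.0], [1.0, c]
_, gd_ap = signal.group_delay((b_ap, a_ap), w=w)

# Group delays of filters connected in series simply add.
gd_total = gd_lp + gd_ap
print("delay ripple, low-pass alone:", gd_lp.max() - gd_lp.min())
print("delay ripple, with all-pass: ", gd_total.max() - gd_total.min())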
|
https://en.wikipedia.org/wiki/Oscillator%20start-up%20timer
|
An oscillator start-up timer (OST) is a module used by some microcontrollers to keep the device in reset until the crystal oscillator is stable. When a crystal oscillator starts up, its frequency is not yet constant, so any clock derived from it would drift and cause timing errors. An oscillator start-up timer ensures that the device only operates once the oscillator generates a stable clock frequency.
The PIC microcontroller's oscillator start-up timer holds the device in reset for a 1024-oscillator-cycle delay to allow the oscillator to stabilize.
See also
Power-on reset
Brown-out reset
Watchdog timer
Low-voltage detect
|
https://en.wikipedia.org/wiki/Content-addressable%20memory
|
Content-addressable memory (CAM) is a special type of computer memory used in certain very-high-speed searching applications. It is also known as associative memory or associative storage and compares input search data against a table of stored data, and returns the address of matching data.
CAM is frequently used in networking devices where it speeds up forwarding information base and routing table operations. This kind of associative memory is also used in cache memory. In associative cache memory, both address and content are stored side by side. When the address matches, the corresponding content is fetched from cache memory.
History
Dudley Allen Buck invented the concept of content-addressable memory in 1955. Buck is credited with the idea of the recognition unit.
Hardware associative array
Unlike standard computer memory, random-access memory (RAM), in which the user supplies a memory address and the RAM returns the data word stored at that address, a CAM is designed such that the user supplies a data word and the CAM searches its entire memory to see if that data word is stored anywhere in it. If the data word is found, the CAM returns a list of one or more storage addresses where the word was found. Thus, a CAM is the hardware embodiment of what in software terms would be called an associative array.
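As a software analogy for the behaviour just described (purely illustrative, not a hardware model; the function name and memory contents are invented for the example), the sketch below searches every "address" of a small memory for a given data word and returns the matching addresses, which is what a CAM does in parallel in hardware.

def cam_search(memory, word):
    """Return every address whose stored word matches the search word.

    `memory` is a list indexed by address; a real CAM compares all
    locations simultaneously in hardware, here we simply scan them.
    """
    return [addr for addr, stored in enumerate(memory) if stored == word]

memory = [0xDEAD, 0xBEEF, 0xCAFE, 0xBEEF]
print(cam_search(memory, 0xBEEF))   # -> [1, 3]
print(cam_search(memory, 0x1234))   # -> []  (no match)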
A similar concept can be found in the data word recognition unit, as proposed by Dudley Allen Buck in 1955.
Standards
A major interface definition for CAMs and other network search engines was specified in an interoperability agreement called the Look-Aside Interface (LA-1 and LA-1B) developed by the Network Processing Forum. Numerous devices conforming to the interoperability agreement have been produced by Integrated Device Technology, Cypress Semiconductor, IBM, Broadcom and others. On December 11, 2007, the OIF published the serial look-aside (SLA) interface agreement.
Semiconductor implementations
CAM is much faster than RAM in data search applications
|
https://en.wikipedia.org/wiki/Fermentation
|
Fermentation is a metabolic process that produces chemical changes in organic substances through the action of enzymes. In biochemistry, it is narrowly defined as the extraction of energy from carbohydrates in the absence of oxygen. In food production, it may more broadly refer to any process in which the activity of microorganisms brings about a desirable change to a foodstuff or beverage. The science of fermentation is known as zymology.
In microorganisms, fermentation is the primary means of producing adenosine triphosphate (ATP) by the degradation of organic nutrients anaerobically.
Humans have used fermentation to produce foodstuffs and beverages since the Neolithic age. For example, fermentation is used for preservation in a process that produces lactic acid found in such sour foods as pickled cucumbers, kombucha, kimchi, and yogurt, as well as for producing alcoholic beverages such as wine and beer. Fermentation also occurs within the gastrointestinal tracts of all animals, including humans.
Industrial fermentation is a broader term used for the process of applying microbes for the large-scale production of chemicals, biofuels, enzymes, proteins and pharmaceuticals.
Definitions and etymology
Below are some definitions of fermentation ranging from informal, general usages to more scientific definitions.
Preservation methods for food via microorganisms (general use).
Any large-scale microbial process occurring with or without air (common definition used in industry, also known as industrial fermentation).
Any process that produces alcoholic beverages or acidic dairy products (general use).
Any energy-releasing metabolic process that takes place only under anaerobic conditions (somewhat scientific).
Any metabolic process that releases energy from a sugar or other organic molecule, does not require oxygen or an electron transport system, and uses an organic molecule as the final electron acceptor (most scientific).
The word "ferment" is derived from the
|
https://en.wikipedia.org/wiki/Size-asymmetric%20competition
|
Size-asymmetric competition refers to situations in which larger individuals exploit disproportionately greater amounts of resources when competing with smaller individuals. This type of competition is common among plants but also exists among animals. Size-asymmetric competition usually results from large individuals monopolizing the resource by "pre-emption", i.e. exploiting the resource before smaller individuals are able to obtain it. Size-asymmetric competition has major effects on population structure and diversity within ecological communities.
Definition of size asymmetry
Resource competition can vary from completely symmetric (all individuals receive the same amount of resources, irrespective of their size, known also as scramble competition) to perfectly size-symmetric (all individuals exploit the same amount of resource per unit biomass) to absolutely size-asymmetric (the largest individuals exploit all the available resource). The degree of size asymmetry can be described by the parameter θ in the following equation for the partition of the resource r among n individuals of sizes Bj.
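A common way to write this partition, consistent with the worked examples below (the notation here is a reconstruction rather than a quotation), is
\[
  r_i = \frac{B_i^{\theta}}{\sum_{j=1}^{n} B_j^{\theta}} \, r .
\]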
ri refers to the amount of resource consumed by individual i in the neighbourhood of j. When θ = 1, competition is perfectly size-symmetric, e.g. if a large individual is twice the size of its smaller competitor, the large individual will acquire twice the amount of that resource (i.e. both individuals will exploit the same amount of resource per biomass unit). When θ > 1, competition is size-asymmetric, e.g. if a large individual is twice the size of its smaller competitor and θ = 2, the large individual will acquire four times the amount of that resource (i.e. the large individual will exploit twice the amount of resource per biomass unit). As θ increases, competition becomes more size-asymmetric and larger plants get larger amounts of resource per unit biomass compared with smaller plants.
Differences in size-asymmetry among resources in plant communities
Competit
|
https://en.wikipedia.org/wiki/SCMOS
|
sCMOS (scientific Complementary Metal–Oxide–Semiconductor) sensors are a type of CMOS image sensor (CIS). These sensors are commonly used as components in specific observational scientific instruments, such as microscopes and telescopes. sCMOS image sensors offer extremely low noise, rapid frame rates, wide dynamic range, high quantum efficiency, high resolution, and a large field of view simultaneously in one image.
The sCMOS technology was launched in 2009 during the Laser World of Photonics fair in Munich. The companies Andor Technology, Fairchild Imaging and PCO Imaging developed the technology for image sensors as a joint venture.
Technical details
Prior to the introduction of the technology, scientists were limited to using either CCD or EMCCD cameras, both of which had their own set of technical limitations. While back-illuminated electron-multiplying CCD (EMCCD) cameras are optimal for purposes requiring the lowest noise and dark currents, sCMOS technology's higher pixel count and lower cost result in its use in a wide range of precision applications. sCMOS devices can capture data in a global-shutter “snapshot” mode over all the pixels or rectangular subsets of pixels, and can also operate in a rolling-shutter mode.
The cameras are available with monochrome or RGB sCMOS image sensors. With sCMOS, digital information for each frame is generated rapidly and with improved low-light image quality. The sCMOS sensor's low read noise and larger area provide a low-noise, large field-of-view (FOV) image that enables researchers to scan across a sample and capture high-quality images.
Some disadvantages (as of 2023) of sCMOS cameras compared with related technologies are:
sCMOS sensors tend to be more expensive than traditional CMOS sensors.
sCMOS sensors have a limited resolution compared to other types of sensors like CCD.
In practice
The New York University School of Medicine uses sCMOS cameras for their research. They were used t
|
https://en.wikipedia.org/wiki/-yllion
|
-yllion (pronounced ) is a proposal from Donald Knuth for the terminology and symbols of an alternate decimal superbase system. In it, he adapts the familiar English terms for large numbers to provide a systematic set of names for much larger numbers. In addition to providing an extended range, -yllion also dodges the long and short scale ambiguity of -illion.
Knuth's digit grouping is exponential instead of linear; each division doubles the number of digits handled, whereas the familiar system only adds three or six more. His system is basically the same as one of the ancient and now-unused Chinese numeral systems, in which units stand for 10^4, 10^8, 10^16, 10^32, ..., 10^(2^n), and so on (with the exception that the -yllion proposal does not use a word for thousand, which the original Chinese numeral system has). Today the corresponding Chinese characters are used for 10^4, 10^8, 10^12, 10^16, and so on.
Details and examples
In Knuth's -yllion proposal:
1 to 999 have their usual names.
1000 to 9999 are divided before the 2nd-last digit and named "foo hundred bar." (e.g. 1234 is "twelve hundred thirty-four"; 7623 is "seventy-six hundred twenty-three")
10^4 to 10^8 − 1 are divided before the 4th-last digit and named "foo myriad bar". Knuth also introduces at this level a grouping symbol (comma) for the numeral. So 382,1902 is "three hundred eighty-two myriad nineteen hundred two."
10^8 to 10^16 − 1 are divided before the 8th-last digit and named "foo myllion bar", and a semicolon separates the digits. So 1,0002;0003,0004 is "one myriad two myllion, three myriad four."
10^16 to 10^32 − 1 are divided before the 16th-last digit and named "foo byllion bar", and a colon separates the digits. So 12:0003,0004;0506,7089 is "twelve byllion, three myriad four myllion, five hundred six myriad seventy hundred eighty-nine."
etc.
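The punctuation pattern in the examples above follows a simple recursive rule: the largest power-of-two group of 4·2^k digits is split off from the right with the k-th separator (comma, semicolon, colon, ...). A small Python sketch of that grouping rule (separators only, not the spoken names; the separator chosen beyond the colon is a guess, since the examples stop there) might look like this:

def yllion_group(digits: str) -> str:
    """Insert -yllion-style separators into a string of decimal digits.

    Groups of 4 digits are joined by commas, groups of 8 by semicolons,
    groups of 16 by colons; the apostrophe beyond that is hypothetical.
    """
    separators = [",", ";", ":", "'"]
    n = len(digits)
    if n <= 4:
        return digits
    k = 0
    while 4 * 2 ** (k + 1) < n:                # largest group size smaller than n
        k += 1
    size = 4 * 2 ** k
    sep = separators[k] if k < len(separators) else "?"
    return yllion_group(digits[:-size]) + sep + yllion_group(digits[-size:])

print(yllion_group("3821902"))             # 382,1902
print(yllion_group("1000200030004"))       # 1,0002;0003,0004
print(yllion_group("120003000405067089"))  # 12:0003,0004;0506,7089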
Each new number name is the square of the previous one — therefore, each new name covers twice as many digits. Knuth continues borrowing the traditional names changing
|
https://en.wikipedia.org/wiki/Physical%20computing
|
Physical computing involves interactive systems that can sense and respond to the world around them. While this definition is broad enough to encompass systems such as smart automotive traffic control systems or factory automation processes, it is not commonly used to describe them. In a broader sense, physical computing is a creative framework for understanding human beings' relationship to the digital world. In practical use, the term most often describes handmade art, design or DIY hobby projects that use sensors and microcontrollers to translate analog input to a software system, and/or control electro-mechanical devices such as motors, servos, lighting or other hardware.
Physical computing intersects the range of activities often referred to in academia and industry as electrical engineering, mechatronics, robotics, computer science, and especially embedded development.
Examples
Physical computing is used in a wide variety of domains and applications.
Education
The advantage of physicality in education and playfulness has been reflected in diverse informal learning environments. The Exploratorium, a pioneer in inquiry based learning, developed some of the earliest interactive exhibitry involving computers, and continues to include more and more examples of physical computing and tangible interfaces as associated technologies progress.
Art
In the art world, projects that implement physical computing include the work of Scott Snibbe, Daniel Rozin, Rafael Lozano-Hemmer, Jonah Brucker-Cohen, and Camille Utterback.
Product design
Physical computing practices also exist in the product and interaction design sphere, where hand-built embedded systems are sometimes used to rapidly prototype new digital product concepts in a cost-efficient way. Firms such as IDEO and Teague are known to approach product design in this way.
Commercial applications
Commercial implementations range from consumer devices such as the Sony Eyetoy or games such as Dance Dance Revolution
|
https://en.wikipedia.org/wiki/Vinyl%20cutter
|
A vinyl cutter is an entry-level machine for making signs. Vector designs with patterns and letters are created on a computer, sent to the cutter through a USB or serial cable, and cut directly on a roll of vinyl mounted and fed into the machine. Vinyl cutters are mainly used to make signs, banners and advertisements. Advertisements seen on automobiles and vans are often made with vinyl-cut letters. While these machines were designed for cutting vinyl, they can also cut through computer and specialty papers, as well as thicker items like thin sheets of magnet.
In addition to the sign business, vinyl cutters are commonly used for apparel decoration. To decorate apparel, a vector design needs to be cut in mirror image, weeded, and then heat-applied using a commercial heat press or a hand iron for home use.
Some businesses use their vinyl cutter to produce both signs and custom apparel. Many crafters also have vinyl cutters for home use. These require little maintenance and the vinyl can be bought in bulk relatively cheaply.
Vinyl cutters are also often used by stencil artists to create single-use or reusable stencil art and lettering.
How it works
A vinyl cutter is a type of computer-controlled machine tool. The computer controls the movement of a sharp blade over the surface of the material as it would the nozzles of an ink-jet printer. This blade is used to cut out shapes and letters from sheets of thin self-adhesive plastic (vinyl). The vinyl can then be stuck to a variety of surfaces depending on the adhesive and type of material.
To cut out a design, a vector-based image must be created using vector drawing software. Some vinyl cutters are marketed to small in-home businesses and require download and use of a proprietary editing software. The design is then sent to the cutter where it cuts along the vector paths laid out in the design. The cutter is capable of moving the blade on an X and Y axis over the material, cutting it into the required shapes. The vinyl material comes i
|
https://en.wikipedia.org/wiki/General%20communication%20channel
|
The general communication channel (GCC), defined in ITU-T Recommendation G.709, is an in-band side channel used to carry transmission management and signaling information within optical transport network elements.
Two types of GCC are available:
GCC0 – two bytes within OTUk overhead. GCC0 is terminated at every 3R (re-shaping, re-timing, re-amplification) point and used to carry GMPLS signaling protocol and/or management information.
GCC1/2 – four bytes (two fields of two bytes each) within the ODUk overhead. These bytes are used for client end-to-end information and should not be modified by intermediate OTN equipment.
In contrast to SONET/SDH, where the data communication channel (DCC) has a constant data rate, the GCC data rate depends on the OTN line rate. For example, the GCC0 data rate in the case of OTU1 is ~333 kbit/s, and for OTU2 it is ~1.3 Mbit/s.
Computer networking
Optical Transport Network
|
https://en.wikipedia.org/wiki/Dye-and-pry
|
Dye-n-Pry, also called Dye And Pry, Dye and Pull, Dye Staining, or Dye Penetrant, is a destructive analysis technique used on surface mount technology (SMT) components to either perform failure analysis or inspect for solder joint integrity. It is an application of dye penetrant inspection.
Method
Dye-n-Pry is a useful technique in which a dye penetrant material is used to inspect for interconnect failures in integrated circuits (IC). This is most commonly done on solder joints for ball grid array (BGA) components, although in some cases it can be done with other components or samples. The component of interest is submerged in a dye material, such as red steel dye, and placed under vacuum. This allows the dye to flow underneath the component and into any cracks or defects. The dye is then dried in an oven (preferably overnight) to prevent smearing during separation, which could lead to false results. The part of interest is mechanically separated from the printed circuit board (PCB) and inspected for the presence of dye. Any fracture surface or interface will have dye present, indicating the presence of cracks or open circuits. IPC-TM-650 Method 2.4.53 specifies a process for dye-n-pry.
Use in failure analysis of electronics
Dye-n-Pry is a useful failure analysis technique to detect cracking or open circuits in BGA solder joints. This has some practical advantages over other destructive techniques, such as cross sectioning, as it can inspect a full ball grid array which may consist of hundreds of solder joints. Cross sectioning, on the other hand, may only be able to inspect a single row of solder joints and requires a better initial idea of the failure site.
Dye-n-pry can be useful for detecting several different failure modes, including pad cratering or solder joint fracture from mechanical drop/shock, thermal shock, or thermal cycling. This makes it a useful technique to incorporate into a reliability test plan as part of post-test failure inspection.
|
https://en.wikipedia.org/wiki/Meissel%E2%80%93Mertens%20constant
|
The Meissel–Mertens constant (named after Ernst Meissel and Franz Mertens), also referred to as Mertens constant, Kronecker's constant, Hadamard–de la Vallée-Poussin constant or the prime reciprocal constant, is a mathematical constant in number theory, defined as the limiting difference between the harmonic series summed only over the primes and the natural logarithm of the natural logarithm:
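A standard statement of this limit (the notation here is a reconstruction consistent with the following sentence) is
\[
  M = \lim_{n \to \infty} \left( \sum_{p \le n} \frac{1}{p} - \ln \ln n \right)
    = \gamma + \sum_{p} \left( \ln\!\left(1 - \frac{1}{p}\right) + \frac{1}{p} \right),
\]
with both sums taken over the primes p.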
Here γ is the Euler–Mascheroni constant, which has an analogous definition involving a sum over all integers (not just the primes).
The value of M is approximately
M ≈ 0.2614972128476427837554268386086958590516... .
Mertens' second theorem establishes that the limit exists.
The fact that there are two logarithms (log of a log) in the limit for the Meissel–Mertens constant may be thought of as a consequence of the combination of the prime number theorem and the limit of the Euler–Mascheroni constant.
In popular culture
The Meissel-Mertens constant was used by Google when bidding in the Nortel patent auction. Google posted three bids based on mathematical numbers: $1,902,160,540 (Brun's constant), $2,614,972,128 (Meissel–Mertens constant), and $3.14159 billion (π).
See also
Divergence of the sum of the reciprocals of the primes
Prime zeta function
|
https://en.wikipedia.org/wiki/Emulator
|
In computing, an emulator is hardware or software that enables one computer system (called the host) to behave like another computer system (called the guest). An emulator typically enables the host system to run software or use peripheral devices designed for the guest system.
Emulation refers to the ability of a computer program in an electronic device to emulate (or imitate) another program or device.
Many printers, for example, are designed to emulate HP LaserJet printers because so much software is written for HP printers. If a non-HP printer emulates an HP printer, any software written for a real HP printer will also run in the non-HP printer emulation and produce equivalent printing. Since at least the 1990s, many video game enthusiasts and hobbyists have used emulators to play classic arcade games from the 1980s using the games' original 1980s machine code and data, which is interpreted by a current-era system, and to emulate old video game consoles.
A hardware emulator is an emulator which takes the form of a hardware device. Examples include the DOS-compatible card installed in some 1990s-era Macintosh computers, such as the Centris 610 or Performa 630, that allowed them to run personal computer (PC) software programs and field-programmable gate array-based hardware emulators. The Church-Turing thesis implies that theoretically, any operating environment can be emulated within any other environment, assuming memory limitations are ignored. However, in practice, it can be quite difficult, particularly when the exact behavior of the system to be emulated is not documented and has to be deduced through reverse engineering. It also says nothing about timing constraints; if the emulator does not perform as quickly as it did using the original hardware, the software inside the emulation may run much more slowly (possibly triggering timer interrupts that alter behavior).
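To make the idea of interpreting a guest's machine code concrete, here is a deliberately tiny, invented instruction set and interpreter loop in Python; real emulators follow the same fetch-decode-execute pattern, only with the actual opcodes, registers and timing of the guest hardware (the opcodes below are made up for the example).

# A toy "guest" with one accumulator and three invented opcodes:
#   0x01 n  -> load immediate n into the accumulator
#   0x02 n  -> add n to the accumulator
#   0xFF    -> halt and return the accumulator
def run(program):
    acc, pc = 0, 0                       # guest register and program counter
    while True:
        opcode = program[pc]             # fetch
        if opcode == 0x01:               # decode + execute
            acc, pc = program[pc + 1], pc + 2
        elif opcode == 0x02:
            acc, pc = acc + program[pc + 1], pc + 2
        elif opcode == 0xFF:
            return acc
        else:
            raise ValueError(f"unknown opcode {opcode:#x} at {pc}")

print(run([0x01, 40, 0x02, 2, 0xFF]))    # -> 42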
Types
Most emulators just emulate a hardware architecture—if operating system firmware or
|
https://en.wikipedia.org/wiki/Interactive%20kiosk
|
An interactive kiosk is a computer terminal featuring specialized hardware and software that provides access to information and applications for communication, commerce, entertainment, or education.
By 2010, the largest bill-pay kiosk network was AT&T's, which allows phone customers to pay their phone bills. Verizon and Sprint have similar units for their customers.
Early interactive kiosks sometimes resembled telephone booths; the technology has since been embraced by retail, food service, and hospitality to improve customer service and streamline operations. Interactive kiosks are typically placed in high-foot-traffic settings such as shops, hotel lobbies, or airports.
The integration of technology allows kiosks to perform a wide range of functions, evolving into self-service kiosks. For example, kiosks may enable users to order from a shop's catalog when items are not in stock, check out a library book, look up information about products, issue a hotel key card, enter a public utility bill account number to perform an online transaction, or collect cash in exchange for merchandise. Customized components such as coin hoppers, bill acceptors, card readers, and thermal printers enable kiosks to meet the owner's specialized needs.
History
The first self-service, interactive kiosk was developed in 1977 at the University of Illinois at Urbana–Champaign by a pre-med student, Murray Lappe. The content was created on the PLATO computer system and accessible by the plasma touch-screen interface. The plasma display panel was invented at the University of Illinois by Donald L. Bitzer. Lappe's kiosk, called The Plato Hotline allowed students and visitors to find movies, maps, directories, bus schedules, extracurricular activities, and courses.
The first successful network of interactive kiosks used for commercial purposes was a project developed by the shoe retailer Florsheim Shoe Co., led by their executive VP, Harry Bock, installed circa 1985. The interactive kiosk
|
https://en.wikipedia.org/wiki/Free%20energy%20principle
|
The free energy principle is a theoretical framework suggesting that the brain reduces surprise or uncertainty by making predictions based on internal models and updating them using sensory input. It highlights the brain's objective of aligning its internal model with the external world to enhance prediction accuracy. This principle integrates Bayesian inference with active inference, where actions are guided by predictions and sensory feedback refines them. It has wide-ranging implications for comprehending brain function, perception, and action.
Overview
In biophysics and cognitive science, the free energy principle is a mathematical principle describing a formal account of the representational capacities of physical systems: that is, why things that exist look as if they track properties of the systems to which they are coupled.
It establishes that the dynamics of physical systems minimise a quantity known as surprisal (which is just the negative log probability of some outcome); or equivalently, its variational upper bound, called free energy. The principle is used especially in Bayesian approaches to brain function, but also some approaches to artificial intelligence; it is formally related to variational Bayesian methods and was originally introduced by Karl Friston as an explanation for embodied perception-action loops in neuroscience.
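As a purely numerical illustration of "free energy as a variational upper bound on surprisal" (a toy example, not Friston's formulation; the prior, likelihood and approximate posterior values are invented), the sketch below builds a two-state generative model and checks that the variational free energy F = E_q[ln q(s) − ln p(o, s)] is never smaller than the surprisal −ln p(o), with equality when q equals the exact posterior.

import numpy as np

prior = np.array([0.7, 0.3])                 # p(s) over two hidden states
likelihood = np.array([[0.9, 0.2],           # p(o|s): rows = observations,
                       [0.1, 0.8]])          #          columns = hidden states
o = 1                                        # the observation actually received

joint = likelihood[o] * prior                # p(o, s) for each state s
surprisal = -np.log(joint.sum())             # -ln p(o)
posterior = joint / joint.sum()              # exact p(s|o)

def free_energy(q):
    """Variational free energy F(q) = E_q[ln q(s) - ln p(o, s)]."""
    return np.sum(q * (np.log(q) - np.log(joint)))

q_guess = np.array([0.5, 0.5])               # an arbitrary approximate posterior
print(free_energy(q_guess) >= surprisal)               # True: F upper-bounds surprisal
print(np.isclose(free_energy(posterior), surprisal))   # True: the bound is tight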
The free energy principle models the behaviour of systems that are distinct from, but coupled to, another system (e.g., an embedding environment), where the degrees of freedom that implement the interface between the two systems is known as a Markov blanket. More formally, the free energy principle says that if a system has a "particular partition" (i.e., into particles, with their Markov blankets), then subsets of that system will track the statistical structure of other subsets (which are known as internal and external states or paths of a system).
The free energy principle is based on the Bayesian idea of the brain as
|
https://en.wikipedia.org/wiki/Nanobacterium
|
Nanobacterium ( , pl. nanobacteria ) is the unit or member name of a former proposed class of living organisms, specifically cell-walled microorganisms, now discredited, with a size much smaller than the generally accepted lower limit for life (about 200 nm for bacteria, like mycoplasma). Originally based on observed nano-scale structures in geological formations (including one meteorite), the status of nanobacteria was controversial, with some researchers suggesting they are a new class of living organism capable of incorporating radiolabeled uridine, and others attributing to them a simpler, abiotic nature. One skeptic dubbed them "the cold fusion of microbiology", in reference to a notorious episode of supposed erroneous science. The term "calcifying nanoparticles" (CNPs) has also been used as a conservative name regarding their possible status as a life form.
Research tends to agree that these structures exist, and appear to replicate in some way. However, the idea that they are living entities has now largely been discarded, and the particles are instead thought to be nonliving crystallizations of minerals and organic molecules.
1981–2000
In 1981 Francisco Torella and Richard Y. Morita described very small cells called ultramicrobacteria. Defined as being smaller than 300 nm, by 1982 MacDonell and Hood found that some could pass through a 200 nm membrane. Early in 1989, geologist Robert L. Folk found what he later identified as nannobacteria (written with double "n"), that is, nanoparticles isolated from geological specimens in travertine from hot springs of Viterbo, Italy. Initially searching for a bacterial cause for travertine deposition, scanning electron microscope examination of the mineral where no bacteria were detectable revealed extremely small objects which appeared to be biological. His first oral presentation elicited what he called "mostly a stony silence", at the 1992 Geological Society of America's annual convention. He proposed that nanoba
|
https://en.wikipedia.org/wiki/Flat%20network
|
A flat network is a computer network design approach that aims to reduce cost, maintenance and administration. Flat networks are designed to reduce the number of routers and switches on a computer network by connecting the devices to a single switch instead of separate switches. Unlike a hierarchical network design, the network is not physically separated using different switches.
The topology of a flat network is not segmented or separated into different broadcast areas by using routers. Some such networks may use network hubs or a mixture of hubs and switches, rather than switches and routers, to connect devices to each other. Generally, all devices on the network are a part of the same broadcast area.
Uses
Flat networks are typically used in homes or small businesses where network requirements are low. Home networks usually do not require intensive security, or separation, because the network is often used to provide multiple computers access to the Internet. In such cases, a complex network with many switches is not required. Flat networks are also generally easier to administer and maintain because less complex switches or routers are being used. Purchasing switches can be costly, so flat networks can be implemented to help reduce the amount of switches that need to be purchased.
Drawbacks
Flat networks provide some drawbacks, including:
Poor security – Because traffic travels through one switch, it is not possible to segment the networks into sections and prevent users from accessing certain parts of the network. It is easier for hackers to intercept data on the network.
No redundancy – Since there is usually one switch, or a few devices, it is possible for the switch to fail. Since there is no alternative path, the network will become inaccessible and computers may lose connectivity.
Scalability and speed – Connecting all the devices to one central switch, either directly or through hubs, increases the potential for collisions (due to hubs), reduced
|
https://en.wikipedia.org/wiki/Taxon
|
In biology, a taxon (back-formation from taxonomy; : taxa) is a group of one or more populations of an organism or organisms seen by taxonomists to form a unit. Although neither is required, a taxon is usually known by a particular name and given a particular ranking, especially if and when it is accepted or becomes established. It is very common, however, for taxonomists to remain at odds over what belongs to a taxon and the criteria used for inclusion, especially in the context of rank-based ("Linnaean") nomenclature (much less so under phylogenetic nomenclature). If a taxon is given a formal scientific name, its use is then governed by one of the nomenclature codes specifying which scientific name is correct for a particular grouping.
Initial attempts at classifying and ordering organisms (plants and animals) were presumably set forth long ago by hunter-gatherers, as suggested by the fairly sophisticated folk taxonomies. Much later, Aristotle, and later still European scientists such as Magnol, Tournefort and Carl Linnaeus (whose system appeared in Systema Naturae, 10th edition, 1758), as well as an unpublished work by Bernard and Antoine Laurent de Jussieu, contributed to this field. The idea of a unit-based system of biological classification was first made widely available in 1805 in the introduction of Jean-Baptiste Lamarck's Flore françoise and Augustin Pyramus de Candolle's Principes élémentaires de botanique. Lamarck set out a system for the "natural classification" of plants. Since then, systematists have continued to construct accurate classifications encompassing the diversity of life; today, a "good" or "useful" taxon is commonly taken to be one that reflects evolutionary relationships.
Many modern systematists, such as advocates of phylogenetic nomenclature, use cladistic methods that require taxa to be monophyletic (all descendants of some ancestor). Their basic unit, therefore, the clade is equivalent to the taxon, assuming that taxa should reflect evolutionary rela
|
https://en.wikipedia.org/wiki/Vector%20algebra%20relations
|
The following are important identities in vector algebra. Identities that involve the magnitude of a vector ‖A‖, or the dot product (scalar product) of two vectors A·B, apply to vectors in any dimension. Identities that use the cross product (vector product) A×B are defined only in three dimensions.
Magnitudes
The magnitude of a vector A can be expressed using the dot product: ‖A‖² = A·A.
In three-dimensional Euclidean space, the magnitude of a vector is determined from its three components using Pythagoras' theorem: ‖A‖² = Ax² + Ay² + Az².
Inequalities
The Cauchy–Schwarz inequality: |A·B| ≤ ‖A‖ ‖B‖
The triangle inequality: ‖A + B‖ ≤ ‖A‖ + ‖B‖
The reverse triangle inequality: ‖A − B‖ ≥ | ‖A‖ − ‖B‖ |
Angles
The vector product and the scalar product of two vectors define the angle between them, say θ: cos θ = A·B / (‖A‖ ‖B‖) and |sin θ| = ‖A×B‖ / (‖A‖ ‖B‖).
To satisfy the right-hand rule, for positive θ, vector B is counter-clockwise from A, and for negative θ it is clockwise.
The Pythagorean trigonometric identity then provides: ‖A×B‖² + (A·B)² = ‖A‖² ‖B‖².
If a vector A = (Ax, Ay, Az) makes angles α, β, γ with an orthogonal set of x-, y- and z-axes, then: cos α = Ax / ‖A‖,
and analogously for angles β, γ. Consequently: A = ‖A‖ (cos α x̂ + cos β ŷ + cos γ ẑ),
with unit vectors along the axis directions.
Areas and volumes
The area Σ of a parallelogram with sides A and B containing the angle θ is: Σ = A B sin θ,
which will be recognized as the magnitude of the vector cross product of the vectors A and B lying along the sides of the parallelogram. That is: Σ = ‖A × B‖.
(If A, B are two-dimensional vectors, this is equal to the determinant of the 2 × 2 matrix with rows A, B.) The square of this expression is: Σ² = Γ(A, B),
where Γ(A, B) is the Gram determinant of A and B defined by: Γ(A, B) = det [ A·A  A·B ; B·A  B·B ] = (A·A)(B·B) − (A·B)².
In a similar fashion, the squared volume V² of a parallelepiped spanned by the three vectors A, B, C is given by the Gram determinant of the three vectors: V² = Γ(A, B, C) = det [ A·A  A·B  A·C ; B·A  B·B  B·C ; C·A  C·B  C·C ].
Since A, B, C are three-dimensional vectors, this is equal to the square of the scalar triple product below.
This process can be extended to n-dimensions.
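These relations are easy to sanity-check numerically. The short NumPy sketch below verifies, for random three-dimensional vectors, that the Gram determinant of A and B equals ‖A×B‖² and that the Gram determinant of A, B, C equals the square of the scalar triple product; it is only an illustration of the identities above, and the random seed is arbitrary.

import numpy as np

rng = np.random.default_rng(0)
A, B, C = rng.standard_normal((3, 3))        # three random 3-D vectors

def gram(*vectors):
    """Gram determinant: determinant of the matrix of pairwise dot products."""
    V = np.stack(vectors)
    return np.linalg.det(V @ V.T)

area_sq = np.dot(np.cross(A, B), np.cross(A, B))   # ||A x B||^2
triple = np.dot(A, np.cross(B, C))                 # scalar triple product

print(np.isclose(gram(A, B), area_sq))             # True
print(np.isclose(gram(A, B, C), triple ** 2))      # True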
Addition and multiplication of vectors
Commutativity of addition: A + B = B + A.
Commutativity of scalar product: A·B = B·A.
Anticommutativity of cross product: A × B = −B × A.
|
https://en.wikipedia.org/wiki/Understory
|
In forestry and ecology, understory (American English), or understorey (Commonwealth English), also known as underbrush or undergrowth, includes plant life growing beneath the forest canopy without penetrating it to any great extent, but above the forest floor. Only a small percentage of light penetrates the canopy so understory vegetation is generally shade-tolerant. The understory typically consists of trees stunted through lack of light, other small trees with low light requirements, saplings, shrubs, vines and undergrowth. Small trees such as holly and dogwood are understory specialists.
In temperate deciduous forests, many understory plants start into growth earlier in the year than the canopy trees, to make use of the greater availability of light at that particular time of year. A gap in the canopy caused by the death of a tree stimulates the potential emergent trees into competitive growth as they grow upwards to fill the gap. These trees tend to have straight trunks and few lower branches. At the same time, the bushes, undergrowth, and plant life on the forest floor become denser. The understory experiences greater humidity than the canopy, and the shaded ground does not vary in temperature as much as open ground. This causes a proliferation of ferns, mosses, and fungi and encourages nutrient recycling, which provides favorable habitats for many animals and plants.
Understory structure
The understory is the underlying layer of vegetation in a forest or wooded area, especially the trees and shrubs growing between the forest canopy and the forest floor.
Plants in the understory comprise an assortment of seedlings and saplings of canopy trees together with specialist understory shrubs and herbs. Young canopy trees often persist in the understory for decades as suppressed juveniles until an opening in the forest overstory permits their growth into the canopy. In contrast understory shrubs complete their life cycles in the shade of the forest canopy. Some sma
|
https://en.wikipedia.org/wiki/Smooth%20maximum
|
In mathematics, a smooth maximum of an indexed family x1, ..., xn of numbers is a smooth approximation to the maximum function, meaning a parametric family of functions mα(x1, ..., xn) such that for every α the function mα is smooth, and the family converges to the maximum function as α → ∞. The concept of smooth minimum is similarly defined. In many cases, a single family approximates both: the maximum as the parameter goes to positive infinity and the minimum as the parameter goes to negative infinity; in symbols, mα → max as α → ∞ and mα → min as α → −∞. The term can also be used loosely for a specific smooth function that behaves similarly to a maximum, without necessarily being part of a parametrized family.
Examples
Boltzmann operator
For large positive values of the parameter α, the following formulation is a smooth, differentiable approximation of the maximum function. For negative values of the parameter that are large in absolute value, it approximates the minimum.
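The operator is usually written as the softmax-weighted average (reconstructed here in standard notation):
\[
  S_\alpha(x_1, \dots, x_n) = \frac{\sum_{i=1}^{n} x_i \, e^{\alpha x_i}}{\sum_{i=1}^{n} e^{\alpha x_i}} .
\]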
Sα has the following properties:
Sα → max(x1, ..., xn) as α → ∞
S0 is the arithmetic mean of its inputs
Sα → min(x1, ..., xn) as α → −∞
The gradient of Sα is closely related to softmax and is given by ∂Sα/∂xi = [e^(αxi) / Σj e^(αxj)] · [1 + α(xi − Sα(x1, ..., xn))].
This makes the softmax function useful for optimization techniques that use gradient descent.
This operator is sometimes called the Boltzmann operator, after the Boltzmann distribution.
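A minimal NumPy sketch of the operator above, checking its limiting behaviour numerically (illustrative only; the test values and the stabilizing shift are implementation choices, not part of the definition):

import numpy as np

def boltzmann_smoothmax(x, alpha):
    """Softmax-weighted average sum(x_i * e^(a*x_i)) / sum(e^(a*x_i)).

    Shifting the exponents by a constant does not change the result,
    but keeps the computation numerically stable for large |alpha|.
    """
    x = np.asarray(x, dtype=float)
    shift = x.max() if alpha >= 0 else x.min()
    w = np.exp(alpha * (x - shift))
    return float(np.sum(x * w) / np.sum(w))

x = [1.0, 2.0, 3.5]
print(boltzmann_smoothmax(x, 100))    # ~3.5    (approaches the maximum)
print(boltzmann_smoothmax(x, 0))      # ~2.1667 (arithmetic mean)
print(boltzmann_smoothmax(x, -100))   # ~1.0    (approaches the minimum)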
LogSumExp
Another smooth maximum is LogSumExp: LSEα(x1, ..., xn) = (1/α) log(exp(αx1) + ... + exp(αxn)).
This can also be normalized if the xi are all non-negative, yielding a function with domain [0, ∞)ⁿ and range [0, ∞): g(x1, ..., xn) = log(exp(x1) + ... + exp(xn) − (n − 1)).
The (n − 1) term corrects for the fact that exp(0) = 1 by canceling out all but one zero exponential, and log 1 = 0 if all xi are zero.
Mellowmax
The mellowmax operator is defined as follows: mmω(x) = (1/ω) log( (1/n) Σi exp(ω xi) ).
It is a non-expansive operator. As ω → ∞, it acts like a maximum. As ω → 0, it acts like an arithmetic mean. As ω → −∞, it acts like a minimum. This operator can be viewed as a particular instantiation of the quasi-arithmetic mean. It can also be derived from information theoretical principles as a way of regularizing policies with a cost function defined by KL divergence. The operator has previously been utilized in other
|
https://en.wikipedia.org/wiki/Filter%20%28signal%20processing%29
|
In signal processing, a filter is a device or process that removes some unwanted components or features from a signal. Filtering is a class of signal processing, the defining feature of filters being the complete or partial suppression of some aspect of the signal. Most often, this means removing some frequencies or frequency bands. However, filters do not exclusively act in the frequency domain; especially in the field of image processing many other targets for filtering exist. Correlations can be removed for certain frequency components and not for others without having to act in the frequency domain. Filters are widely used in electronics and telecommunication, in radio, television, audio recording, radar, control systems, music synthesis, image processing, computer graphics, and structural dynamics.
There are many different bases of classifying filters and these overlap in many different ways; there is no simple hierarchical classification. Filters may be:
non-linear or linear
time-variant or time-invariant, also known as shift invariance. If the filter operates in a spatial domain then the characterization is space invariance.
causal or non-causal: A filter is non-causal if its present output depends on future input. Filters processing time-domain signals in real time must be causal, but not filters acting on spatial domain signals or deferred-time processing of time-domain signals.
analog or digital
discrete-time (sampled) or continuous-time
passive or active type of continuous-time filter
infinite impulse response (IIR) or finite impulse response (FIR) type of discrete-time or digital filter.
Linear continuous-time filters
Linear continuous-time circuit is perhaps the most common meaning for filter in the signal processing world, and simply "filter" is often taken to be synonymous. These circuits are generally designed to remove certain frequencies and allow others to pass. Circuits that perform this function are generally linear in their response, or a
|
https://en.wikipedia.org/wiki/NesC
|
nesC (pronounced "NES-see") is a component-based, event-driven programming language used to build applications for the TinyOS platform. TinyOS is an operating environment designed to run on embedded devices used in distributed wireless sensor networks. nesC is built as an extension to the C programming language with components "wired" together to run applications on TinyOS. The name nesC is an abbreviation of "network embedded systems C".
Components and interfaces
nesC programs are built out of components, which are assembled ("wired") to form whole programs. Components have internal concurrency in the form of tasks. Threads of control may pass into a component through its interfaces. These threads are rooted either in a task or a hardware interrupt.
Interfaces may be provided or used by components. The provided interfaces are intended to represent the functionality that the component provides to its user, the used interfaces represent the functionality the component needs to perform its job.
In nesC, interfaces are bidirectional: They specify a set of functions to be implemented by the interface's provider (commands) and a set to be implemented by the interface's user (events). This allows a single interface to represent a complex interaction between components (e.g., registration of interest in some event, followed by a callback when that event happens). This is critical because all lengthy commands in TinyOS (e.g. send packet) are non-blocking; their completion is signaled through an event (send done). By specifying interfaces, a component cannot call the send command unless it provides an implementation of the sendDone event. Typically commands call downwards, i.e., from application components to those closer to the hardware, while events call upwards. Certain primitive events are bound to hardware interrupts.
Components are statically linked to each other via their interfaces. This increases runtime efficiency, encourages robust design, and allows for be
|
https://en.wikipedia.org/wiki/Excretion
|
Excretion is a process in which metabolic waste
is eliminated from an organism. In vertebrates this is primarily carried out by the lungs, kidneys, and skin. This is in contrast with secretion, where the substance may have specific tasks after leaving the cell. Excretion is an essential process in all forms of life. For example, in mammals, urine is expelled through the urethra, which is part of the excretory system. In unicellular organisms, waste products are discharged directly through the surface of the cell.
During life activities such as cellular respiration, several chemical reactions take place in the body. These are known as metabolism. These chemical reactions produce waste products such as carbon dioxide, water, salts, urea and uric acid. Accumulation of these wastes beyond a level inside the body is harmful to the body. The excretory organs remove these wastes. This process of removal of metabolic waste from the body is known as excretion.
Green plants excrete carbon dioxide and water as respiratory products. In green plants, the carbon dioxide released during respiration gets used during photosynthesis. Oxygen is a by-product generated during photosynthesis, and exits through stomata, root cell walls, and other routes. Plants can get rid of excess water by transpiration and guttation. It has been shown that the leaf acts as an 'excretophore' and, in addition to being a primary organ of photosynthesis, is also used as a method of excreting toxic wastes via diffusion. Other waste materials that are exuded by some plants — resin, saps, latex, etc. are forced from the interior of the plant by hydrostatic pressures inside the plant and by absorptive forces of plant cells. These latter processes do not need added energy; they act passively. However, during the pre-abscission phase, the metabolic levels of a leaf are high. Plants also excrete some waste substances into the soil around them.
In animals, the main excretory products are carbon dioxide, ammoni
|
https://en.wikipedia.org/wiki/Squaring%20the%20circle
|
Squaring the circle is a problem in geometry first proposed in Greek mathematics. It is the challenge of constructing a square with the area of a given circle by using only a finite number of steps with a compass and straightedge. The difficulty of the problem raised the question of whether specified axioms of Euclidean geometry concerning the existence of lines and circles implied the existence of such a square.
In 1882, the task was proven to be impossible, as a consequence of the Lindemann–Weierstrass theorem, which proves that pi (π) is a transcendental number.
That is, π is not the root of any polynomial with rational coefficients. It had been known for decades that the construction would be impossible if π were transcendental, but that fact was not proven until 1882. Approximate constructions with any given non-perfect accuracy exist, and many such constructions have been found.
Despite the proof that it is impossible, attempts to square the circle have been common in pseudomathematics (i.e. the work of mathematical cranks). The expression "squaring the circle" is sometimes used as a metaphor for trying to do the impossible.
The term quadrature of the circle is sometimes used as a synonym for squaring the circle. It may also refer to approximate or numerical methods for finding the area of a circle. In general, quadrature or squaring may also be applied to other plane figures.
History
Methods to calculate the approximate area of a given circle, which can be thought of as a precursor problem to squaring the circle, were known already in many ancient cultures. These methods can be summarized by stating the approximation to π that they produce. In around 2000 BCE, the Babylonian mathematicians used the approximation π ≈ 25/8 = 3.125, and at approximately the same time the ancient Egyptian mathematicians used π ≈ (16/9)² ≈ 3.16. Over 1000 years later, the Old Testament Books of Kings used the simpler approximation π ≈ 3. Ancient Indian mathematics, as recorded in the Shatapatha Brahmana and Shulba Sutras
|
https://en.wikipedia.org/wiki/Psammophyte
|
A psammophyte is a plant that grows in sandy and often unstable soils. Psammophytes are commonly found growing on beaches, deserts, and sand dunes. Because they thrive in these challenging or inhospitable habitats, psammophytes are considered extremophiles, and are further classified as a type of psammophile.
Etymology
The word "psammophyte" consists of two Greek roots, psamm-, meaning "sand", and -phyte, meaning "plant". The term "psammophyte" first entered English in the early twentieth century via German botanical terminology.
Description
Psammophytes are found in many different plant families, so may not share specific morphological or phytochemical traits. They also come in a variety of plant life-forms, including annual ephemerals, perennials, subshrubs, hemicryptophytes, and many others. What the many diverse psammophytes have in common is a resilience to harsh or rapidly fluctuating environmental factors, such as shifting soils, strong winds, intense sunlight exposure, or saltwater exposure, depending on the habitat. Psammophytes often have specialized traits, such as unusually tenacious or resilient roots that enable them to anchor and thrive despite various environmental stressors. Those growing in arid regions have evolved highly efficient physiological mechanisms that enable them to survive despite limited water availability.
Distribution and habitat
Psammophytes grow in regions all over the world and can be found on sandy, unstable soils of beaches, deserts, and sand dunes. In China's autonomous Inner Mongolia region, psammophytic woodlands are found in steppe habitats.
Ecology
Psammophytes often play an important ecological role by contributing some degree of soil stabilization in their sandy habitats. They can also play an important role in soil nutrient dynamics. Depending on the factors at play at a given site, psammophyte communities exhibit varying degrees of species diversity. For example, in the dunes of the Sahara Desert, psammophyte commu
|
https://en.wikipedia.org/wiki/List%20of%20countries%20by%20integrated%20circuit%20exports
|
The following is a list of countries by integrated circuit exports. Data is for 2019, in millions of United States dollars, as reported by International Trade Centre. Currently the top twenty countries are listed.
See also
List of flat panel display manufacturers
List of integrated circuit manufacturers
List of solid-state drive manufacturers
List of system on a chip suppliers
|
https://en.wikipedia.org/wiki/Self-organization
|
Self-organization, also called spontaneous order in the social sciences, is a process where some form of overall order arises from local interactions between parts of an initially disordered system. The process can be spontaneous when sufficient energy is available, not needing control by any external agent. It is often triggered by seemingly random fluctuations, amplified by positive feedback. The resulting organization is wholly decentralized, distributed over all the components of the system. As such, the organization is typically robust and able to survive or self-repair substantial perturbation. Chaos theory discusses self-organization in terms of islands of predictability in a sea of chaotic unpredictability.
Self-organization occurs in many physical, chemical, biological, robotic, and cognitive systems. Examples of self-organization include crystallization, thermal convection of fluids, chemical oscillation, animal swarming, neural circuits, and black markets.
Overview
Self-organization is realized in the physics of non-equilibrium processes, and in chemical reactions, where it is often characterized as self-assembly. The concept has proven useful in biology, from the molecular to the ecosystem level. Cited examples of self-organizing behaviour also appear in the literature of many other disciplines, both in the natural sciences and in the social sciences (such as economics or anthropology). Self-organization has also been observed in mathematical systems such as cellular automata. Self-organization is an example of the related concept of emergence.
Self-organization relies on four basic ingredients:
strong dynamical non-linearity, often (though not necessarily) involving positive and negative feedback
balance of exploitation and exploration
multiple interactions among components
availability of energy (to overcome the natural tendency toward entropy, or loss of free energy)
Principles
The cybernetician William Ross Ashby formulated the original p
|
https://en.wikipedia.org/wiki/Obligate
|
As an adjective, obligate means "by necessity" (antonym facultative) and is used mainly in biology in phrases such as:
Obligate aerobe, an organism that cannot survive without oxygen
Obligate anaerobe, an organism that cannot survive in the presence of oxygen
Obligate air-breather, a term used in fish physiology to describe those that respire entirely from the atmosphere
Obligate biped, an organism that must walk on two legs
Obligate carnivore, an organism dependent for survival on a diet of animal flesh.
Obligate chimerism, a kind of organism with two distinct sets of DNA, always
Obligate hibernation, a state of inactivity in which some organisms survive conditions of insufficiently available resources.
Obligate intracellular parasite, a parasitic microorganism that cannot reproduce without entering a suitable host cell
Obligate parasite, a parasite that cannot reproduce without exploiting a suitable host
Obligate photoperiodic plant, a plant that requires sufficiently long or short nights before it initiates flowering, germination or similar functions
Obligate symbionts, organisms that can only live together in a symbiosis
See also
Opportunism (biological)
Biology terminology
|
https://en.wikipedia.org/wiki/List%20of%20birds%20by%20flight%20heights
|
This is a list of birds by flight height.
Birds by flight height
See also
Organisms at high altitude
List of birds by flight speed
|
https://en.wikipedia.org/wiki/Smash%20and%20Grab%20%28biology%29
|
Smash and Grab is the name given to a technique, developed by Charles S. Hoffman and Fred Winston, used in molecular biology to rescue plasmids from yeast transformants into Escherichia coli (E. coli) in order to amplify and purify them. In addition, it can be used to prepare yeast genomic DNA (and DNA from tissue samples) for Southern blot analyses or polymerase chain reaction (PCR).
|
https://en.wikipedia.org/wiki/Generalized%20Lagrangian%20mean
|
In continuum mechanics, the generalized Lagrangian mean (GLM) is a formalism – developed by Andrews and McIntyre (1978) – to unambiguously split a motion into a mean part and an oscillatory part. The method gives a mixed Eulerian–Lagrangian description for the flow field, but appointed to fixed Eulerian coordinates.
Background
In general, it is difficult to decompose a combined wave–mean motion into a mean and a wave part, especially for flows bounded by a wavy surface: e.g. in the presence of surface gravity waves or near another undulating bounding surface (like atmospheric flow over mountainous or hilly terrain). However, this splitting of the motion into a wave and a mean part is often demanded in mathematical models, when the main interest is in the mean motion – slowly varying at scales much larger than those of the individual undulations. From a series of postulates, Andrews and McIntyre (1978) arrive at the (GLM) formalism to split the flow into a generalised Lagrangian mean flow and an oscillatory-flow part.
The GLM method does not suffer from the strong drawback of the Lagrangian specification of the flow field – following individual fluid parcels – that Lagrangian positions which are initially close gradually drift far apart. In the Lagrangian frame of reference, it therefore becomes often difficult to attribute Lagrangian-mean values to some location in space.
The specification of mean properties for the oscillatory part of the flow, like: Stokes drift, wave action, pseudomomentum and pseudoenergy – and the associated conservation laws – arise naturally when using the GLM method.
The GLM concept can also be incorporated into variational principles of fluid flow.
Notes
|
https://en.wikipedia.org/wiki/Collision%20detection
|
Collision detection is the computational problem of detecting the intersection of two or more objects. Collision detection is a classic issue of computational geometry and has applications in various computing fields, primarily in computer graphics, computer games, computer simulations, robotics and computational physics. Collision detection algorithms can be divided into operating on 2D and 3D objects.
Overview
In physical simulation, experiments such as playing billiards are conducted. The physics of bouncing billiard balls are well understood, under the umbrella of rigid body motion and elastic collisions. An initial description of the situation would be given, with a very precise physical description of the billiard table and balls, as well as initial positions of all the balls. Given a force applied to the cue ball (probably resulting from a player hitting the ball with their cue stick), we want to calculate the trajectories, precise motion and eventual resting places of all the balls with a computer program. A program to simulate this game would consist of several portions, one of which would be responsible for calculating the precise impacts between the billiard balls. This particular example also turns out to be ill conditioned: a small error in any calculation will cause drastic changes in the final position of the billiard balls.
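As a toy version of the billiard-ball impact calculation described above (a simplification that treats the balls as circles moving with constant velocities between impacts; the positions, velocities and radii are invented example values), the sketch below computes the time of first contact between two moving balls by solving a quadratic in their relative motion.

import numpy as np

def time_of_impact(p1, v1, p2, v2, r1, r2):
    """Return the earliest t >= 0 at which two moving circles touch, or None.

    Positions p and velocities v are 2-D vectors; the circles touch when
    |(p1 + v1*t) - (p2 + v2*t)| equals r1 + r2, which is quadratic in t.
    """
    dp = np.subtract(p1, p2)
    dv = np.subtract(v1, v2)
    r = r1 + r2
    a = np.dot(dv, dv)
    b = 2.0 * np.dot(dp, dv)
    c = np.dot(dp, dp) - r * r
    if c <= 0:                       # already overlapping
        return 0.0
    disc = b * b - 4 * a * c
    if a == 0 or disc < 0:           # no relative motion, or paths never meet
        return None
    t = (-b - np.sqrt(disc)) / (2 * a)
    return t if t >= 0 else None

# Two unit-radius balls approaching head-on at 1 unit/s each:
print(time_of_impact([0, 0], [1, 0], [10, 0], [-1, 0], 1, 1))  # 4.0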
Video games have similar requirements, with some crucial differences. While computer simulation needs to simulate real-world physics as precisely as possible, computer games need to simulate real-world physics in an acceptable way, in real time and robustly. Compromises are allowed, so long as the resulting simulation is satisfying to the game players.
Collision detection in computer simulation
Physical simulators differ in the way they react on a collision. Some use the softness of the material to calculate a force, which will resolve the collision in the following time steps like it is in reality. This is very CPU intensiv
|
https://en.wikipedia.org/wiki/FpgaC
|
FpgaC is a compiler for a subset of the C programming language, which produces digital circuits that will execute the compiled programs. The circuits may use FPGAs or CPLDs as the target processor for reconfigurable computing, or even ASICs for dedicated applications. FpgaC's goal is to be an efficient High Level Language (HLL) for reconfigurable computing, rather than a Hardware Description Language (HDL) for building efficient custom hardware circuits.
History
The historical roots of FpgaC are in the Transmogrifier C 3.1 (TMCC) HDL, a 1996 BSD licensed Open source offering from University of Toronto. TMCC is one of the first FPGA C compilers, with work starting in 1994 and presented at IEEE's FCCM95. This predated the evolution from the Handel language to Handel-C work done shortly afterward at Oxford University Computing Laboratory.
TMCC was renamed FpgaC for the initial SourceForge project release, with syntax modifications to start the evolution to ANSI C. Later development has removed all explicit HDL syntax from the language, and increased the subset of C supported. By capitalizing on ANSI C C99 extensions, the same functionality is now available by inference rather than non-standard language extensions. This shift away from non-standard HDL extensions was influenced in part by Streams-C from Los Alamos National Laboratory (now available commercially as Impulse C).
In the years that have followed, compiling ANSI C for execution as FPGA circuits has become a mainstream technology. Commercial FPGA C compilers are available from multiple vendors, and ANSI C based System Level Tools have gone mainstream for system description and simulation languages. FPGA based Reconfigurable Computing offerings from industry leaders like Altera, Silicon Graphics, Seymour Cray's SRC Computers, and Xilinx have capitalized on two decades of government and university reconfigurable computing research.
External links
Transmogrifier C Homepage
Oxford Handel-C
FPGA System Le
|