https://en.wikipedia.org/wiki/Tor%20%28network%29
Tor, short for The Onion Router, is free and open-source software for enabling anonymous communication. It directs Internet traffic via a free, worldwide, volunteer overlay network that consists of more than seven thousand relays. Using Tor makes it more difficult to trace a user's Internet activity. Tor protects personal privacy by concealing a user's location and usage from anyone performing network surveillance or traffic analysis. It protects the user's freedom and ability to communicate confidentially through IP address anonymity using Tor exit nodes. History The core principle of Tor, onion routing, was developed in the mid-1990s by United States Naval Research Laboratory employees, mathematician Paul Syverson, and computer scientists Michael G. Reed and David Goldschlag, to protect American intelligence communications online. Onion routing is implemented by means of encryption in the application layer of the communication protocol stack, nested like the layers of an onion. The alpha version of Tor, developed by Syverson and computer scientists Roger Dingledine and Nick Mathewson and then called The Onion Routing project (which was later given the acronym "Tor"), was launched on 20 September 2002. The first public release occurred a year later. In 2004, the Naval Research Laboratory released the code for Tor under a free license, and the Electronic Frontier Foundation (EFF) began funding Dingledine and Mathewson to continue its development. In 2006, Dingledine, Mathewson, and five others founded The Tor Project, a Massachusetts-based 501(c)(3) research-education nonprofit organization responsible for maintaining Tor. The EFF acted as The Tor Project's fiscal sponsor in its early years, and early financial supporters included the U.S. Bureau of Democracy, Human Rights, and Labor and International Broadcasting Bureau, Internews, Human Rights Watch, the University of Cambridge, Google, and Netherlands-based Stichting NLnet. Over the course of its existence,
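As a toy illustration of the nested ("onion") layering described above, and nothing more (this is not the Tor protocol, its key exchange, or its cell format), a message can be wrapped in several symmetric layers, here using Fernet keys from the third-party cryptography package as stand-ins for per-relay keys:

```python
# Toy illustration of layered ("onion") encryption only -- NOT the Tor protocol.
# Symmetric Fernet keys from the third-party `cryptography` package stand in for
# Tor's real circuit construction, key exchange, and cell format.
from cryptography.fernet import Fernet

relay_keys = [Fernet.generate_key() for _ in range(3)]   # entry, middle, exit

def wrap(message: bytes, keys) -> bytes:
    # The sender encrypts for the exit relay first, then wraps each earlier hop around it.
    for key in reversed(keys):
        message = Fernet(key).encrypt(message)
    return message

def unwrap(onion: bytes, keys) -> bytes:
    # Each relay in path order peels exactly one layer; only the exit sees the payload.
    for key in keys:
        onion = Fernet(key).decrypt(onion)
    return onion

onion = wrap(b"hello via three hops", relay_keys)
print(unwrap(onion, relay_keys))
```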
https://en.wikipedia.org/wiki/Syngameon
Syngameon refers to groups of taxa that frequently engage in natural hybridization and lack strong reproductive barriers that prevent interbreeding. Syngameons are more common in plants than animals, with approximately 25% of plant species and 10% of animal species producing natural hybrids. The best-known syngameons include the irises of the California Pacific Coast and the white oaks of the Eastern United States. Hybridization within a syngameon is typically not equally distributed among species, and a few species often dominate patterns of hybridization. The term syngameon comes from the root word syngamy, coined by Edward Bagnall Poulton to define groups that freely interbreed. He also coined the word asyngamy, referring to groups that do not freely interbreed (with the substantive noun forms Syngamy and Asyngamy). The term syngameon was first used by Johannes Paulus Lotsy, who used it to describe a habitually interbreeding community that was reproductively isolated from other habitually interbreeding communities. Syngameon was used interchangeably with the term species to describe groups of closely related individuals that interbreed to varying degrees. A more specific definition of syngameon has been given to groups of taxa that frequently engage in natural hybridization and lack strong morphological differences that could be used to define them. Taxa in syngameons may have separate species names, but evolutionary biologists often suggest they should be treated as a single species. Variation among species within a syngameon can be due to a number of factors related to their biogeography, ecology, phylogeny, reproductive biology, and genetics. Coenospecies The terms coenospecies and syngameon are both used to describe clusters of lineages that are morphologically distinct and lack strong isolation mechanisms. Coenospecies, first coined by Göte Turesson in 1922, refers to the total sum of possible combinations in a genotype compound, which includes hybridization tha
https://en.wikipedia.org/wiki/IC%20power-supply%20pin
IC power-supply pins denote the voltage and current supply terminals in electrical and electronics engineering and in integrated circuit design. Integrated circuits (ICs) have at least two pins that connect to the power rails of the circuit in which they are installed. These are known as the power-supply pins. However, the labeling of the pins varies by IC family and manufacturer. The double-subscript notation usually repeats the first letter of the terminal name used in a given IC family of transistors (e.g. a VDD supply for the drain terminal in FETs). The simplest labels are V+ and V−, but internal design and historical traditions have led to a variety of other labels being used. V+ and V− may also refer to the non-inverting (+) and inverting (−) voltage inputs of ICs like op amps. For power supplies, sometimes one of the supply rails is referred to as ground (abbreviated "GND"); positive and negative voltages are then relative to this ground. In digital electronics, negative voltages are seldom present, and the ground is nearly always the lowest voltage level. In analog electronics (e.g. an audio power amplifier) the ground can be a voltage level between the most positive and most negative voltage level. While double subscript notation, where subscripted letters denote the difference between two points, uses similar-looking placeholders with subscripts, the double-letter supply voltage subscript notation is not directly linked (though it may have been an influencing factor). BJTs ICs using bipolar junction transistors have VCC (+, positive) and VEE (−, negative) power-supply pins, though VCC is also often used for CMOS devices. In circuit diagrams and circuit analysis, there are long-standing conventions regarding the naming of voltages, currents, and some components. In the analysis of a bipolar junction transistor, for example, in a common-emitter configuration, the DC voltage at the collector, emitter, and base (with respect to ground) may be written
https://en.wikipedia.org/wiki/Regularization%20by%20spectral%20filtering
Spectral regularization is any of a class of regularization techniques used in machine learning to control the impact of noise and prevent overfitting. Spectral regularization can be used in a broad range of applications, from deblurring images to classifying emails into a spam folder and a non-spam folder. For instance, in the email classification example, spectral regularization can be used to reduce the impact of noise and prevent overfitting when a machine learning system is being trained on a labeled set of emails to learn how to tell spam and non-spam emails apart. Spectral regularization algorithms rely on methods that were originally defined and studied in the theory of ill-posed inverse problems, focusing on the inversion of a linear operator (or a matrix) that possibly has a bad condition number or an unbounded inverse. In this context, regularization amounts to substituting the original operator by a bounded operator called the "regularization operator" that has a condition number controlled by a regularization parameter, a classical example being Tikhonov regularization. To ensure stability, this regularization parameter is tuned based on the level of noise. The main idea behind spectral regularization is that each regularization operator can be described using spectral calculus as an appropriate filter on the eigenvalues of the operator that defines the problem, and the role of the filter is to "suppress the oscillatory behavior corresponding to small eigenvalues". Therefore, each algorithm in the class of spectral regularization algorithms is defined by a suitable filter function (which needs to be derived for that particular algorithm). Three of the most commonly used regularization algorithms for which spectral filtering is well-studied are Tikhonov regularization, Landweber iteration, and truncated singular value decomposition (TSVD). As for choosing the regularization parameter, examples of candidate methods to compute this p
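The following is a minimal sketch of the spectral-filtering view for a linear least-squares problem, contrasting the Tikhonov filter with truncated SVD; the matrix, noise level, and parameter values are illustrative assumptions, not taken from any particular reference.

```python
# Minimal sketch of spectral filtering (Tikhonov vs. truncated SVD), assuming a
# linear least-squares problem  A w ~= y  with a possibly ill-conditioned A.
import numpy as np

rng = np.random.default_rng(0)
n, d = 100, 50
A = rng.normal(size=(n, d)) @ np.diag(np.logspace(0, -6, d))  # ill-conditioned
w_true = rng.normal(size=d)
y = A @ w_true + 0.01 * rng.normal(size=n)                    # noisy observations

U, s, Vt = np.linalg.svd(A, full_matrices=False)
coef = U.T @ y                                                # coefficients in the spectral basis

def tikhonov(lam):
    # Filter g(s) = s / (s^2 + lam): damps components with small singular values.
    return Vt.T @ ((s / (s**2 + lam)) * coef)

def tsvd(k):
    # Filter g(s) = 1/s for the k largest singular values, 0 otherwise.
    g = np.where(np.arange(len(s)) < k, 1.0 / s, 0.0)
    return Vt.T @ (g * coef)

for lam in (1e-2, 1e-4, 1e-6):
    print(f"Tikhonov lam={lam:g}: error={np.linalg.norm(tikhonov(lam) - w_true):.3f}")
for k in (10, 25, 50):
    print(f"TSVD k={k}: error={np.linalg.norm(tsvd(k) - w_true):.3f}")
```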
https://en.wikipedia.org/wiki/Open-circuit%20time%20constant%20method
The open-circuit time constant (OCT) method is an approximate analysis technique used in electronic circuit design to determine the corner frequency of complex circuits. It is a special case of the zero-value time constant (ZVT) method for circuits whose reactive elements consist only of capacitors. The ZVT method itself is a special case of the general time- and transfer-constant (TTC) analysis, which allows full evaluation of the zeros and poles of any lumped LTI system with both inductors and capacitors as reactive elements, using time constants and transfer constants. The OCT method provides a quick evaluation, and identifies the largest contributions to time constants as a guide to circuit improvements. The basis of the method is the approximation that the corner frequency of the amplifier is determined by the term in the denominator of its transfer function that is linear in frequency. This approximation can be extremely inaccurate in some cases where a zero in the numerator lies nearby in frequency. The method also uses a simplified procedure for finding the term linear in frequency, based upon summing the RC products for each capacitor in the circuit, where the resistance R for a selected capacitor is the resistance found by inserting a test source at its site and setting all other capacitors to zero. Hence the name zero-value time constant technique. Example: Simple RC network Figure 1 shows a simple RC low-pass filter. Its transfer function is found using Kirchhoff's current law, writing one equation at the output node and one at the center node (where V1 is the voltage at the top of capacitor C1) and combining these relations to obtain the transfer function. The linear term in jω in this transfer function can be derived by the following method, which is an application of the open-circuit time constant method to this example. Set the signal source to zero. Select capacitor C2, replace it by a test voltage VX, and replace C1 by an open circuit. Then the
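A worked version of this example, assuming the common two-section ladder (source V_S, series R_1 to an internal node V_1 with C_1 to ground, then series R_2 to the output V_O with C_2 to ground); the component placement is an assumption, since the figure is not reproduced here:

$$\frac{V_O}{V_S}=\frac{1}{1+j\omega\,(R_1C_1+R_1C_2+R_2C_2)+(j\omega)^2\,R_1R_2C_1C_2}.$$

The open-circuit time constants are $\tau_1=R_1C_1$ (the resistance seen by $C_1$ with $C_2$ open-circuited) and $\tau_2=(R_1+R_2)C_2$ (the resistance seen by $C_2$ with $C_1$ open-circuited). Their sum $R_1C_1+R_1C_2+R_2C_2$ reproduces the linear term of the denominator, giving the corner-frequency estimate $\omega_{3\,\mathrm{dB}}\approx 1/(\tau_1+\tau_2)$.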
https://en.wikipedia.org/wiki/Footprint%20%28electronics%29
A footprint or land pattern is the arrangement of pads (in surface-mount technology) or through-holes (in through-hole technology) used to physically attach and electrically connect a component to a printed circuit board. The land pattern on a circuit board matches the arrangement of leads on a component. Component manufacturers often produce multiple pin-compatible product variants to allow systems integrators to change the exact component in use without changing the footprint on the circuit board. This can provide large cost savings for integrators, especially with dense BGA components where the footprint pads may be connected to multiple layers of the circuit board. Many component vendors provide footprints for their components, including Texas Instruments, and CUI. Other sources include third party libraries, such as SnapEDA. See also Surface-mount technology Through-hole technology Chip carrier List of integrated circuit packaging types IPC (standards body) JEDEC (standards body)
https://en.wikipedia.org/wiki/Irreversible%20circuit
An irreversible circuit is a circuit whose inputs cannot be reconstructed from its outputs. Such a circuit, of necessity, consumes energy. See also Reversible computing Integrated circuits
https://en.wikipedia.org/wiki/Flow%20graph%20%28mathematics%29
A flow graph is a form of digraph associated with a set of linear algebraic or differential equations: "A signal flow graph is a network of nodes (or points) interconnected by directed branches, representing a set of linear algebraic equations. The nodes in a flow graph are used to represent the variables, or parameters, and the connecting branches represent the coefficients relating these variables to one another. The flow graph is associated with a number of simple rules which enable every possible solution [related to the equations] to be obtained." Although this definition uses the terms "signal-flow graph" and "flow graph" interchangeably, the term "signal-flow graph" is most often used to designate the Mason signal-flow graph, Mason being the originator of this terminology in his work on electrical networks. Likewise, some authors use the term "flow graph" to refer strictly to the Coates flow graph. According to Henley & Williams: "The nomenclature is far from standardized, and...no standardization can be expected in the foreseeable future." A designation "flow graph" that includes both the Mason graph and the Coates graph, and a variety of other forms of such graphs appears useful, and agrees with Abrahams and Coverley's and with Henley and Williams' approach. A directed network – also known as a flow network – is a particular type of flow graph. A network is a graph with real numbers associated with each of its edges, and if the graph is a digraph, the result is a directed network. A flow graph is more general than a directed network, in that the edges may be associated with gains, branch gains or transmittances, or even functions of the Laplace operator s, in which case they are called transfer functions. There is a close relationship between graphs and matrices and between digraphs and matrices. "The algebraic theory of matrices can be brought to bear on graph theory to obtain results elegantly", and conversely, graph-theoretic approaches based upon fl
https://en.wikipedia.org/wiki/Empowerment%20%28artificial%20intelligence%29
Empowerment in the field of artificial intelligence formalises and quantifies (via information theory) the potential an agent perceives that it has to influence its environment. An agent which follows an empowerment-maximising policy acts to maximise future options (typically up to some limited horizon). Empowerment can be used as a (pseudo) utility function that depends only on information gathered from the local environment to guide action, rather than seeking an externally imposed goal, and is thus a form of intrinsic motivation. The empowerment formalism depends on a probabilistic model commonly used in artificial intelligence. An autonomous agent operates in the world by taking in sensory information and acting to change its state, or that of the environment, in a cycle of perceiving and acting known as the perception-action loop. Agent state and actions are modelled by random variables and indexed by time. The choice of action depends on the current state, and the future state depends on the choice of action, thus the perception-action loop unrolled in time forms a causal Bayesian network. Definition Empowerment is defined as the channel capacity of the actuation channel of the agent, and is formalised as the maximal possible information flow between the actions of the agent and the effect of those actions some time later. Empowerment can be thought of as the future potential of the agent to affect its environment, as measured by its sensors. In a discrete time model, empowerment can be computed for a given number of cycles into the future, which is referred to in the literature as 'n-step' empowerment. The unit of empowerment depends on the logarithm base. Base 2 is commonly used, in which case the unit is bits. Contextual Empowerment In general the choice of action (action distribution) that maximises empowerment varies from state to state. Knowing the empowerment of an agent in a specific state is useful, for example to construct an empowerment maximi
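A minimal sketch of 1-step empowerment for a small discrete world, computed as the channel capacity of the action-to-successor-state channel with the Blahut-Arimoto algorithm; the 3x3 grid world and its deterministic dynamics are assumptions made only for illustration.

```python
# 1-step empowerment = channel capacity of p(s'|a), computed with Blahut-Arimoto.
import numpy as np

def blahut_arimoto(p_y_given_x, iters=200):
    """Channel capacity (in bits) of a channel given as a |X| x |Y| row-stochastic matrix."""
    n_x, n_y = p_y_given_x.shape
    p_x = np.full(n_x, 1.0 / n_x)
    for _ in range(iters):
        # Posterior q(x|y) under the current input distribution.
        joint = p_x[:, None] * p_y_given_x
        q_x_given_y = joint / (joint.sum(axis=0, keepdims=True) + 1e-30)
        # Multiplicative update of p(x).
        log_r = (p_y_given_x * np.log(q_x_given_y + 1e-30)).sum(axis=1)
        p_x = np.exp(log_r - log_r.max())
        p_x /= p_x.sum()
    joint = p_x[:, None] * p_y_given_x
    p_y = joint.sum(axis=0)
    return (joint * np.log2((p_y_given_x + 1e-30) / (p_y + 1e-30))).sum()

# Deterministic 3x3 grid; actions = stay, up, down, left, right.
def one_step_empowerment(state, size=3):
    moves = [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)]
    channel = np.zeros((len(moves), size * size))
    for a, (dr, dc) in enumerate(moves):
        r = min(max(state[0] + dr, 0), size - 1)
        c = min(max(state[1] + dc, 0), size - 1)
        channel[a, r * size + c] = 1.0           # deterministic transition
    return blahut_arimoto(channel)

print(one_step_empowerment((1, 1)))  # centre: 5 reachable states -> log2(5) ~ 2.32 bits
print(one_step_empowerment((0, 0)))  # corner: 3 reachable states -> log2(3) ~ 1.58 bits
```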
https://en.wikipedia.org/wiki/The%20Machine%20%28computer%20architecture%29
The Machine is the name of an experimental computer made by Hewlett Packard Enterprise. It was created as part of a research project to develop a new type of computer architecture for servers. The design focused on a “memory centric computing” architecture, where NVRAM replaced traditional DRAM and disks in the memory hierarchy. The NVRAM was byte addressable and could be accessed from any CPU via a photonic interconnect. The aim of the project was to build and evaluate this new design. Hardware overview The Machine was a computer cluster with many individual nodes connected over a memory fabric. The fabric interconnect used VCSEL-based silicon photonics with a custom chip called the X1. Access to memory is non-uniform and may include multiple hops. The Machine was envisioned to be a rack-scale computer initially with 80 processors and 320 TB of fabric attached memory, with potential for scaling to more enclosures up to 32 ZB. The fabric attached memory is not cache coherent and requires software to be aware of this property. Since traditional locks need cache coherency, hardware was added to the bridges to do atomic operations at that level. Each node also has a limited amount of local private cache-coherent memory (256 GB). Storage and compute on each node had completely separate power domains. The whole fabric attached memory of The Machine is too large to be mapped into a processor's virtual address space (which was 48-bits wide). A way is needed to map windows of the fabric attached memory into processor memory. Therefore, communication between each node SoC and the memory pool goes through an FPGA-based “Z-bridge” component that manages memory mapping of the local SoC to the fabric attached memory. The Z-bridge deals with two different kinds of addresses: 53-bit logical Z addresses and 75-bit Z addresses, which allows addressing 8PB and 32ZB respectively. Each Z-bridge also contained a firewall to enforce access control. The interconnect protocol was develo
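As a quick check on those two address widths (assuming byte-addressable memory, as stated above, and binary prefixes):

$$2^{53}\,\mathrm{B}=2^{3}\cdot 2^{50}\,\mathrm{B}=8\,\mathrm{PiB}\approx 8\,\mathrm{PB},\qquad 2^{75}\,\mathrm{B}=2^{5}\cdot 2^{70}\,\mathrm{B}=32\,\mathrm{ZiB}\approx 32\,\mathrm{ZB}.$$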
https://en.wikipedia.org/wiki/Pixel%20Visual%20Core
The Pixel Visual Core (PVC) is a series of ARM-based system in package (SiP) image processors designed by Google. The PVC is a fully programmable image, vision and AI multi-core domain-specific architecture (DSA) for mobile devices and, in the future, for IoT. It first appeared in the Google Pixel 2 and 2 XL, which were introduced on October 19, 2017. It has also appeared in the Google Pixel 3 and 3 XL. Starting with the Pixel 4, this chip was replaced with the Pixel Neural Core. History Google previously used Qualcomm Snapdragon's CPU, GPU, IPU, and DSP to handle its image processing for its Google Nexus and Google Pixel devices. With the increasing importance of computational photography techniques, Google developed the Pixel Visual Core (PVC). Google claims the PVC uses less power than the CPU and GPU while still being fully programmable, unlike its tensor processing unit (TPU) application-specific integrated circuit (ASIC). Conventional mobile devices instead use an image signal processor (ISP), a fixed-function image-processing pipeline. In contrast, the PVC is flexibly programmable and not limited to image processing. The PVC in the Google Pixel 2 and 2 XL is labeled SR3HX X726C502. The PVC in the Google Pixel 3 and 3 XL is labeled SR3HX X739F030. With the PVC, the Pixel 2 and Pixel 3 obtained mobile DxOMark scores of 98 and 101, respectively. The latter was the top-ranked single-lens mobile DxOMark score, tied with the iPhone XR. Pixel Visual Core software A typical image-processing program for the PVC is written in Halide. Currently, it supports just a subset of the Halide programming language, without floating-point operations and with limited memory-access patterns. Halide is a domain-specific language that lets the user decouple the algorithm and the scheduling of its execution. In this way, the developer can write a program that is optimized for the target hardware architecture. Pixel Visual Core ISA The PVC has two types
https://en.wikipedia.org/wiki/Flip-flop%20%28electronics%29
In electronics, flip-flops and latches are circuits that have two stable states that can store state information – a bistable multivibrator. The circuit can be made to change state by signals applied to one or more control inputs and will output its state (often along with its logical complement too). It is the basic storage element in sequential logic. Flip-flops and latches are fundamental building blocks of digital electronics systems used in computers, communications, and many other types of systems. Flip-flops and latches are used as data storage elements to store a single bit (binary digit) of data; one of its two states represents a "one" and the other represents a "zero". Such data storage can be used for storage of state, and such a circuit is described as sequential logic in electronics. When used in a finite-state machine, the output and next state depend not only on its current input, but also on its current state (and hence, previous inputs). It can also be used for counting of pulses, and for synchronizing variably-timed input signals to some reference timing signal. The term flip-flop has historically referred generically to both level-triggered (asynchronous, transparent, or opaque) and edge-triggered (synchronous, or clocked) circuits that store a single bit of data using gates. Modern authors reserve the term flip-flop exclusively for edge-triggered storage elements and latches for level-triggered ones. The terms "edge-triggered", and "level-triggered" may be used to avoid ambiguity. When a level-triggered latch is enabled it becomes transparent, but an edge-triggered flip-flop's output only changes on a clock edge (either positive going or negative going). Different types of flip-flops and latches are available as integrated circuits, usually with multiple elements per chip. For example, 74HC75 is a quadruple transparent latch in the 7400 series. History The first electronic latch was invented in 1918 by the British physicists William Eccles
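A small behavioural sketch of that distinction, simulating a level-triggered D latch next to a positive-edge-triggered D flip-flop in software (the signal traces are made up for illustration; this is not a hardware description):

```python
# Behavioural models only: a transparent D latch vs. a positive-edge-triggered D flip-flop.
class DLatch:
    """Transparent while enable is high: the output follows D."""
    def __init__(self):
        self.q = 0
    def step(self, d, enable):
        if enable:
            self.q = d
        return self.q

class DFlipFlop:
    """Captures D only on a rising clock edge."""
    def __init__(self):
        self.q = 0
        self._prev_clk = 0
    def step(self, d, clk):
        if clk and not self._prev_clk:   # rising edge detected
            self.q = d
        self._prev_clk = clk
        return self.q

latch, ff = DLatch(), DFlipFlop()
d_signal = [0, 1, 0, 1, 1, 0, 0, 1]
clk      = [0, 1, 1, 0, 1, 1, 0, 1]
for d, c in zip(d_signal, clk):
    print(f"d={d} clk={c}  latch.q={latch.step(d, c)}  ff.q={ff.step(d, c)}")
```

On the third step (d falls while the clock is still high) the latch output follows D down, but the flip-flop holds its value until the next rising edge.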
https://en.wikipedia.org/wiki/Brewer%27s%20spent%20grain
Brewer's spent grain (BSG) or draff is a food waste byproduct of the brewing industry that makes up 85 percent of brewing waste. BSG is obtained as a mostly solid residue after wort production in the brewing process. The product is initially wet, with a short shelf-life, but can be dried and processed in various ways to preserve it. Because spent grain is widely available wherever beer is consumed and is frequently available at a low cost, many potential uses for this waste have been suggested and studied as a means of reducing its environmental impact, such as use as a food additive, animal feed, fertilizer or paper. Composition The majority of BSG is composed of barley malt grain husks in combination with parts of the pericarp and seed coat layers of the barley. Though the composition of BSG can vary, depending on the type of barley used, the way it was grown, and other factors, BSG is usually rich in cellulose, hemicelluloses, lignin, and protein. BSG is also naturally high in fiber, making it of great interest as a food additive, replacing low-fiber ingredients. Food additive The high protein and fiber content of BSG makes it an obvious target to add to human foods. BSG can be ground and then sifted into a powder that can increase fiber and protein content while decreasing caloric content, possibly replacing flour in baked goods and other foods, such as breadsticks, cookies, and even hot dogs. Some breweries that also operate restaurants re-use their BSG in recipes, such as in pizza crust. Grainstone is an Australian-based company that has developed a world-leading modular, energy-efficient process to convert BSG to a premium specialty flour, as well as a process to produce nutraceuticals, including protein isolate, soluble dietary fibre and antioxidants. Livestock feed The low cost and high availability of BSG has led to its use as livestock feed. BSG can be fed to livestock immediately in its wet stage or once processed and dried. The high protein conten
https://en.wikipedia.org/wiki/List%20of%20topology%20topics
In mathematics, topology (from the Greek words τόπος, 'place, location', and λόγος, 'study') is concerned with the properties of a geometric object that are preserved under continuous deformations, such as stretching, twisting, crumpling and bending, but not tearing or gluing. A topological space is a set endowed with a structure, called a topology, which allows defining continuous deformation of subspaces, and, more generally, all kinds of continuity. Euclidean spaces, and, more generally, metric spaces are examples of a topological space, as any distance or metric defines a topology. The deformations that are considered in topology are homeomorphisms and homotopies. A property that is invariant under such deformations is a topological property. Basic examples of topological properties are: the dimension, which allows distinguishing between a line and a surface; compactness, which allows distinguishing between a line and a circle; connectedness, which allows distinguishing a circle from two non-intersecting circles. The ideas underlying topology go back to Gottfried Leibniz, who in the 17th century envisioned the geometria situs and the analysis situs. Leonhard Euler's Seven Bridges of Königsberg problem and polyhedron formula are arguably the field's first theorems. The term topology was introduced by Johann Benedict Listing in the 19th century, although it was not until the first decades of the 20th century that the idea of a topological space was developed. This is a list of topology topics. See also: Topology glossary List of topologies List of general topology topics List of geometric topology topics List of algebraic topology topics List of topological invariants (topological properties) Publications in topology Topology and physics Quantum topology Topological defect Topological entropy in physics Topological order Topological quantum field theory Topological quantum number Topological string theory Topology of the universe Topology and dynamical systems Milnor–Thurston kneading theory Topological conjugacy Topological
https://en.wikipedia.org/wiki/Subspecies
In biological classification, subspecies (plural: subspecies) is a rank below species, used for populations that live in different areas and vary in size, shape, or other physical characteristics (morphology), but that can successfully interbreed. Not all species have subspecies, but for those that do there must be at least two. Subspecies is abbreviated subsp. or ssp. and the singular and plural forms are the same ("the subspecies is" or "the subspecies are"). In zoology, under the International Code of Zoological Nomenclature, the subspecies is the only taxonomic rank below that of species that can receive a name. In botany and mycology, under the International Code of Nomenclature for algae, fungi, and plants, other infraspecific ranks, such as variety, may be named. In bacteriology and virology, under standard bacterial nomenclature and virus nomenclature, there are recommendations but not strict requirements for recognizing other important infraspecific ranks. A taxonomist decides whether to recognize a subspecies. A common criterion for recognizing two distinct populations as subspecies rather than full species is their ability to interbreed, even if some male offspring may be sterile. In the wild, subspecies do not interbreed due to geographic isolation or sexual selection. The differences between subspecies are usually less distinct than the differences between species. Nomenclature The scientific name of a species is a binomial or binomen, and comprises two Latin words, the first denoting the genus and the second denoting the species. The scientific name of a subspecies is formed slightly differently in the different nomenclature codes. In zoology, under the International Code of Zoological Nomenclature (ICZN), the scientific name of a subspecies is termed a trinomen, and comprises three words, namely the binomen followed by the name of the subspecies. For example, the binomen for the leopard is Panthera pardus. The trinomen Panthera pardus fusca denotes
https://en.wikipedia.org/wiki/Basics%20of%20white%20flower%20colouration
White flower colour is related to the absence or reduction of the anthocyanidin content. Unlike other colours, white colour is not induced by pigments. Several white plant tissues are principally equipped with the complete machinery for anthocyanin biosynthesis, including the expression of regulatory genes. Nevertheless, they are unable to accumulate red or blue pigments, for example Dahlia 'Seattle' petals, which show a white tip. Several studies have revealed a further reduction of the anthocyanidin to colorless epicatechin by the enzyme anthocyanidin reductase (ANR). Cultivation & Modification of Colour Many external factors can influence colour: light, temperature, pH, sugars and metals. There is a method to turn petunia flowers from white to transparent. The petunia flower is immersed in a flask of water connected to a vacuum pump, after which the flower appears colourless. The white colour is produced by the air present in the vacuoles, which scatters the light; without this air the flower loses its white colour. There is an increasing interest in flower colour, since some colorations are currently unavailable in plants. Ornamental companies create new flower colours by classical breeding, mutation breeding, and biotechnological approaches. For example, white bracts in Poinsettia are obtained by high-frequency irradiation. See also Basics of blue flower colouration
https://en.wikipedia.org/wiki/Abuse%20of%20notation
In mathematics, abuse of notation occurs when an author uses a mathematical notation in a way that is not entirely formally correct, but which might help simplify the exposition or suggest the correct intuition (while possibly minimizing errors and confusion at the same time). However, since the concept of formal/syntactical correctness depends on both time and context, certain notations in mathematics that are flagged as abuse in one context could be formally correct in one or more other contexts. Time-dependent abuses of notation may occur when novel notations are introduced to a theory some time before the theory is first formalized; these may be formally corrected by solidifying and/or otherwise improving the theory. Abuse of notation should be contrasted with misuse of notation, which does not have the presentational benefits of the former and should be avoided (such as the misuse of constants of integration). A related concept is abuse of language or abuse of terminology, where a term (rather than a notation) is misused. Abuse of language is an almost synonymous expression for abuses that are non-notational by nature. For example, while the word representation properly designates a group homomorphism from a group G to GL(V), where V is a vector space, it is common to call V "a representation of G". Another common abuse of language consists in identifying two mathematical objects that are different, but canonically isomorphic. Other examples include identifying a constant function with its value, identifying a group (a set together with a binary operation) with the name of its underlying set, or identifying ℝ³ with the Euclidean space of dimension three equipped with a Cartesian coordinate system. Examples Structured mathematical objects Many mathematical objects consist of a set, often called the underlying set, equipped with some additional structure, such as a mathematical operation or a topology. It is a common abuse of notation to use the same notation for the underlying
https://en.wikipedia.org/wiki/Aerobiology
Aerobiology (from Greek ἀήρ, aēr, "air"; βίος, bios, "life"; and -λογία, -logia) is a branch of biology that studies the passive transport of organic particles, such as bacteria, fungal spores, very small insects, pollen grains and viruses. Aerobiologists have traditionally been involved in the measurement and reporting of airborne pollen and fungal spores as a service to those with allergies. However, aerobiology is a varied field, relating to environmental science, plant science, meteorology, phenology, and climate change. Overview The first mention of "aerobiology" was made by Fred Campbell Meier in the 1930s. The particles, which can be described as aeroplankton, generally range in size from nanometers to micrometers, which makes them challenging to detect. Aerosolization is the process by which small, light particles become suspended in moving air. Once airborne as bioaerosols, these pollen grains and fungal spores can be transported across an ocean, or even travel around the globe. Due to the high quantities of microbes and the ease of dispersion, Martinus Beijerinck once said "Everything is everywhere, the environment selects". This means that aeroplankton are everywhere and have been everywhere, and environmental factors alone determine which remain. Aeroplankton are found in significant quantities even in the atmospheric boundary layer (ABL). The effects on climate and cloud chemistry of these atmospheric populations are still under review. NASA and other research agencies are studying how long these bioaerosols can remain afloat and how they can survive in such extreme climates. The conditions of the upper atmosphere are similar to the climate on Mars' surface, and the microbes found are helping redefine the conditions which can support life. Dispersal of particles The process of dispersal of aerobiological particles has three steps: removal from source, dispersion through air, and deposition to rest. The particle geometry and environment affect all three p
https://en.wikipedia.org/wiki/Mercury%20Systems
Mercury Systems is a technology company serving the aerospace and defense industry. It designs, develops and manufactures open architecture computer hardware and software products, including secure embedded processing modules and subsystems, avionics mission computers and displays, rugged secure computer servers, and trusted microelectronics components, modules and subsystems. Mercury sells its products to defense prime contractors, the US government and original equipment manufacturer (OEM) commercial aerospace companies. Mercury is based in Andover, Massachusetts, with more than 2300 employees and annual revenues of approximately US$988 million for its fiscal year ended June 30, 2022. History Founded on July 14, 1981 as Mercury Computer Systems by Jay Bertelli. Went public on the Nasdaq stock exchange on January 30, 1998, listed under the symbol MRCY. In July 2005, Mercury Computer Systems acquired Echotek Corporation for approximately US$49 million. In January 2011, Mercury Computer Systems acquired LNX Corporation. In December, 2011, Mercury Computer Systems acquired KOR Electronics for US$70 million, In August 2012, Mercury Computer Systems acquired Micronetics for US$74.9 million. In November 2012, the company changed its name from Mercury Computer Systems to Mercury Systems. In December 2015, Mercury Systems acquired Lewis Innovative Technologies, Inc. (LIT). In November 2016, Mercury Systems acquired Creative Electronic Systems for US$38 million. In April 2017, Mercury Systems acquired Delta Microwave, LLC (“Delta”) for US$40.5 million, enabling the Company to expand into the satellite communications (SatCom), datalinks and space launch markets. In July 2017, Mercury Systems acquired Richland Technologies, LLC (RTL), increasing the Company's market penetration in commercial aerospace, defense platform management, C4I, and mission computing. In January 2019, Mercury Systems acquired GECO Avionics, LLC for US$36.5 million. In December 2020, Mercur
https://en.wikipedia.org/wiki/Field%20Programmable%20Nanowire%20Interconnect
Field Programmable Nanowire Interconnect (often abbreviated FPNI) is a computer architecture developed by Hewlett-Packard. It is a defect-tolerant architecture that builds on the results of the Teramac experiment. Technology The design combines a nanoscale crossbar switch structure with conventional CMOS to create a hybrid chip that is simpler to fabricate and offers greater flexibility in the choice of nanoscale devices. The FPNI improves on a field-programmable gate array (FPGA) architecture by lifting the configuration bit and associated components out of the semiconductor plane and replacing them in the interconnect with nonvolatile switches, which decreases both the area and power consumption of the circuit while providing up to eight times the density at less cost. This is an example of a more comprehensive strategy for improving the efficiency of existing semiconductor technology: placing a level of intelligence and configurability in the interconnect can have a profound effect on integrated circuit performance, and can be used to significantly extend Moore's Law without having to shrink the transistors. External links http://www.iop.org/EJ/abstract/0957-4484/18/3/035204 Nanotechnology journal, Issue 3 (24 January 2007): Nano/CMOS architectures using a field-programmable nanowire interconnect Gate arrays
https://en.wikipedia.org/wiki/Lanczos%20resampling
Lanczos filtering and Lanczos resampling are two applications of a mathematical formula. It can be used as a low-pass filter or used to smoothly interpolate the value of a digital signal between its samples. In the latter case, it maps each sample of the given signal to a translated and scaled copy of the Lanczos kernel, which is a sinc function windowed by the central lobe of a second, longer, sinc function. The sum of these translated and scaled kernels is then evaluated at the desired points. Lanczos resampling is typically used to increase the sampling rate of a digital signal, or to shift it by a fraction of the sampling interval. It is often used also for multivariate interpolation, for example to resize or rotate a digital image. It has been considered the "best compromise" among several simple filters for this purpose. The filter is named after its inventor, Cornelius Lanczos. Definition Lanczos kernel The effect of each input sample on the interpolated values is defined by the filter's reconstruction kernel L(x), called the Lanczos kernel. It is the normalized sinc function sinc(x), windowed (multiplied) by the Lanczos window, or sinc window, which is the central lobe of a horizontally stretched sinc function sinc(x/a) for −a ≤ x ≤ a. Equivalently, L(x) = sinc(x) sinc(x/a) if −a ≤ x ≤ a, and L(x) = 0 otherwise. The parameter a is a positive integer, typically 2 or 3, which determines the size of the kernel. The Lanczos kernel has 2a − 1 lobes: a positive one at the center, and alternating negative and positive lobes on each side. Interpolation formula Given a one-dimensional signal with samples s_i, for integer values of i, the value S(x) interpolated at an arbitrary real argument x is obtained by the discrete convolution of those samples with the Lanczos kernel: S(x) = Σ s_i L(x − i), summed over i from ⌊x⌋ − a + 1 to ⌊x⌋ + a, where a is the filter size parameter, and ⌊x⌋ is the floor function. The bounds of this sum are such that the kernel is zero outside of them. Properties As long as the parameter a is a positive integer, the Lanczos kernel is continuous everywhere, and its derivative is defined and con
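A minimal sketch of the kernel and the one-dimensional interpolation formula above (with a = 3 and an illustrative test signal; out-of-range samples are simply treated as zero):

```python
# 1-D Lanczos interpolation (a = 3 here); the sample array is illustrative.
import numpy as np

def lanczos_kernel(x, a=3):
    """L(x) = sinc(x) * sinc(x/a) for |x| < a, 0 otherwise (np.sinc is the normalized sinc)."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < a, np.sinc(x) * np.sinc(x / a), 0.0)

def lanczos_interpolate(samples, x, a=3):
    """Value of the signal at real position x, given samples s[i] at integer positions i."""
    samples = np.asarray(samples, dtype=float)
    i0 = int(np.floor(x))
    total = 0.0
    for i in range(i0 - a + 1, i0 + a + 1):
        if 0 <= i < len(samples):            # out-of-range samples treated as zero
            total += samples[i] * lanczos_kernel(x - i, a)
    return total

s = np.sin(np.linspace(0.0, 2.0 * np.pi, 16))              # a coarsely sampled sine
print(lanczos_interpolate(s, 7.5))                          # value halfway between samples 7 and 8
print([round(lanczos_interpolate(s, i), 6) for i in (4, 5)])  # at integer points it reproduces s[i]
```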
https://en.wikipedia.org/wiki/Superquadrics
In mathematics, the superquadrics or super-quadrics (also superquadratics) are a family of geometric shapes defined by formulas that resemble those of ellipsoids and other quadrics, except that the squaring operations are replaced by arbitrary powers. They can be seen as the three-dimensional relatives of the superellipses. The term may refer to the solid object or to its surface, depending on the context. The equations below specify the surface; the solid is specified by replacing the equality signs by less-than-or-equal signs. The superquadrics include many shapes that resemble cubes, octahedra, cylinders, lozenges and spindles, with rounded or sharp corners. Because of their flexibility and relative simplicity, they are popular geometric modeling tools, especially in computer graphics. They have become important geometric primitives widely used in computer vision, robotics, and physical simulation. Some authors, such as Alan Barr, define "superquadrics" as including both the superellipsoids and the supertoroids. In the modern computer vision literature, the terms superquadric and superellipsoid are used interchangeably, since superellipsoids are the most representative and widely used shapes among the superquadrics. The geometrical properties of superquadrics and methods for their recovery from range images and point clouds are covered comprehensively in the computer vision literature. Useful tools and algorithms for superquadric visualization, sampling, and recovery are available as open-source software. Formulas Implicit equation The surface of the basic superquadric is given by |x|^r + |y|^s + |z|^t = 1, where r, s, and t are positive real numbers that determine the main features of the superquadric. Namely, when r, s, and t are all: less than 1: a pointy octahedron modified to have concave faces and sharp edges. exactly 1: a regular octahedron. between 1 and 2: an octahedron modified to have convex faces, blunt edges and blunt corners. exactly 2: a sphere. greater than 2: a cube modified to have rounded edges an
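A minimal sketch of the implicit (inside/outside) test that follows from this equation; the exponent and point choices are illustrative:

```python
# Basic superquadric implicit function f(x, y, z) = |x|^r + |y|^s + |z|^t.
# The surface is f = 1, the solid is f <= 1.
import numpy as np

def superquadric_f(x, y, z, r, s, t):
    return np.abs(x) ** r + np.abs(y) ** s + np.abs(z) ** t

def classify(point, r, s, t):
    f = superquadric_f(*point, r, s, t)
    return "inside" if f < 1 else ("on surface" if np.isclose(f, 1) else "outside")

# r = s = t = 2 gives the unit sphere; large equal exponents approach a rounded cube.
print(classify((0.5, 0.5, 0.5), 2, 2, 2))      # inside the sphere (f = 0.75)
print(classify((0.8, 0.8, 0.8), 10, 10, 10))   # inside the rounded cube (f ~ 0.32)
print(classify((0.8, 0.8, 0.8), 2, 2, 2))      # outside the sphere (f = 1.92)
```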
https://en.wikipedia.org/wiki/Bifidobacterium%20animalis
Bifidobacterium animalis is a gram-positive, anaerobic, rod-shaped bacterium of the Bifidobacterium genus which can be found in the large intestines of most mammals, including humans. Bifidobacterium animalis and Bifidobacterium lactis were previously described as two distinct species. Presently, both are considered B. animalis with the subspecies Bifidobacterium animalis subsp. animalis and Bifidobacterium animalis subsp. lactis. Both old names B. animalis and B. lactis are still used on product labels, as this species is frequently used as a probiotic. In most cases, which subspecies is used in the product is not clear. Trade names Several companies have attempted to trademark particular strains, and as a marketing technique, have invented scientific-sounding names for the strains. Danone (Dannon in the United States) markets the subspecies strain as Bifidus Digestivum (UK), Bifidus Regularis (US and Mexico), Bifidobacterium Lactis or B.L. Regularis (Canada), DanRegularis (Brazil), Bifidus Actiregularis (Argentina, Austria, Belgium, Bulgaria, Chile, Czech Republic, France, Germany, Greece, Hungary, Israel, Italy, Kazakhstan, Netherlands, Portugal, Romania, Russia, South Africa, Spain and the UK), and Bifidus Essensis in the Middle East (and formerly in Hungary, Bulgaria, Romania and The Netherlands) through Activia from Safi Danone KSA. Chr. Hansen A/S from Denmark has a similar claim on a strain of Bifidobacterium animalis subsp. lactis, marketed under the trademark BB-12. Lidl lists "Bifidobacterium BB-12" in its "Proviact" yogurt. Bifidobacterium lactis Bl-04 and Bi-07 are strains from DuPont's Danisco FloraFIT range. They are used in many dietary probiotic supplements. Theralac contains the strains Bifidobacterium lactis BI-07 and Bifidobacterium lactis BL-34 (also called BI-04) in its probiotic capsule. Bifidobacterium lactis HN019 is a strain from Fonterra licensed to DuPont, which markets it as HOWARU Bifido. It is sold in a variety of commercial
https://en.wikipedia.org/wiki/Random-access%20memory
Random-access memory (RAM; ) is a form of electronic computer memory that can be read and changed in any order, typically used to store working data and machine code. A random-access memory device allows data items to be read or written in almost the same amount of time irrespective of the physical location of data inside the memory, in contrast with other direct-access data storage media (such as hard disks, CD-RWs, DVD-RWs and the older magnetic tapes and drum memory), where the time required to read and write data items varies significantly depending on their physical locations on the recording medium, due to mechanical limitations such as media rotation speeds and arm movement. RAM contains multiplexing and demultiplexing circuitry, to connect the data lines to the addressed storage for reading or writing the entry. Usually more than one bit of storage is accessed by the same address, and RAM devices often have multiple data lines and are said to be "8-bit" or "16-bit", etc. devices. In today's technology, random-access memory takes the form of integrated circuit (IC) chips with MOS (metal–oxide–semiconductor) memory cells. RAM is normally associated with volatile types of memory where stored information is lost if power is removed. The two main types of volatile random-access semiconductor memory are static random-access memory (SRAM) and dynamic random-access memory (DRAM). Non-volatile RAM has also been developed and other types of non-volatile memories allow random access for read operations, but either do not allow write operations or have other kinds of limitations on them. These include most types of ROM and a type of flash memory called NOR-Flash. Use of semiconductor RAM dated back to 1965, when IBM introduced the monolithic (single-chip) 16-bit SP95 SRAM chip for their System/360 Model 95 computer, and Toshiba used discrete DRAM memory cells for its 180-bit Toscal BC-1411 electronic calculator, both based on bipolar transistors. While it offered hi
https://en.wikipedia.org/wiki/List%20of%20mathematical%20topics%20in%20relativity
This is a list of mathematical topics in relativity, by Wikipedia page. Special relativity Foundational issues principle of relativity speed of light faster-than-light biquaternion conjugate diameters four-vector four-acceleration four-force four-gradient four-momentum four-velocity hyperbolic orthogonality hyperboloid model light-like Lorentz covariance Lorentz group Lorentz transformation Lorentz–FitzGerald contraction hypothesis Minkowski diagram Minkowski space Poincaré group proper length proper time rapidity relativistic wave equations relativistic mass split-complex number unit hyperbola world line General relativity black holes no-hair theorem Hawking radiation Hawking temperature Black hole entropy charged black hole rotating black hole micro black hole Schwarzschild black hole Schwarzschild metric Schwarzschild radius Reissner–Nordström black hole Immirzi parameter closed timelike curve cosmic censorship hypothesis chronology protection conjecture Einstein–Cartan theory Einstein's field equation geodesic gravitational redshift Penrose–Hawking singularity theorems Pseudo-Riemannian manifold stress–energy tensor worm hole Cosmology anti-de Sitter space Ashtekar variables Batalin–Vilkovisky formalism Big Bang Cauchy horizon cosmic inflation cosmic microwave background cosmic variance cosmological constant dark energy dark matter de Sitter space Friedmann–Lemaître–Robertson–Walker metric horizon problem large-scale structure of the cosmos Randall–Sundrum model warped geometry Weyl curvature hypothesis Relativity Mathematics
https://en.wikipedia.org/wiki/Geography%20of%20food
The geography of food is a field of human geography. It focuses on patterns of food production and consumption on the local to global scale. Tracing these complex patterns helps geographers understand the unequal relationships between developed and developing countries in relation to the innovation, production, transportation, retail and consumption of food. It is also a topic that is becoming increasingly charged in the public eye. The movement to reconnect the 'space' and 'place' in the food system is growing, spearheaded by the research of geographers. History Spatial variations in food production and consumption practices have been noted for thousands of years. In fact, Plato commented on the destructive nature of agriculture when he referred to the soil erosion from the mountainsides surrounding Athens, stating "[In previous years] Athens yielded far more abundant produce. In comparison of what then was, there are remaining only the bones of the wasted body; all the richer and softer parts of the soil having fallen away, and the mere skeleton of the land being left". Societies beyond those of ancient Greece have struggled under the pressure to feed expanding populations. The people of Easter Island, the Maya of Central America and most recently the inhabitants of Montana have been experiencing similar difficulties in production due to several interconnecting factors related to land and resource management. These events have been extensively studied by geographers and other interested parties (the study of food has not been confined to a single discipline, and has received attention from a huge range of diverse sources). Modern geographers initially focused on food as an economic activity, especially in terms of agricultural geography. It was not until recently that geographers have turned their attention to food in a wider sense: "The emergence of an agro-food geography that seeks to examine issues along the food chain or within systems of food provision deri
https://en.wikipedia.org/wiki/Covariant%20formulation%20of%20classical%20electromagnetism
The covariant formulation of classical electromagnetism refers to ways of writing the laws of classical electromagnetism (in particular, Maxwell's equations and the Lorentz force) in a form that is manifestly invariant under Lorentz transformations, in the formalism of special relativity using rectilinear inertial coordinate systems. These expressions both make it simple to prove that the laws of classical electromagnetism take the same form in any inertial coordinate system, and also provide a way to translate the fields and forces from one frame to another. However, this is not as general as Maxwell's equations in curved spacetime or non-rectilinear coordinate systems. This article uses the classical treatment of tensors and the Einstein summation convention throughout, and the Minkowski metric has the form diag(+1, −1, −1, −1). Where the equations are specified as holding in a vacuum, one could instead regard them as the formulation of Maxwell's equations in terms of total charge and current. For a more general overview of the relationships between classical electromagnetism and special relativity, including various conceptual implications of this picture, see Classical electromagnetism and special relativity. Covariant objects Preliminary four-vectors Lorentz tensors of the following kinds may be used in this article to describe bodies or particles: four-displacement: x^α = (ct, x, y, z). Four-velocity: U^α = γ(u)(c, u), where γ(u) is the Lorentz factor at the 3-velocity u. Four-momentum: p^α = (E/c, p) = m0 U^α, where p is the 3-momentum, E is the total energy, and m0 is the rest mass. Four-gradient: ∂_α = (∂/∂(ct), ∇). The d'Alembertian operator is denoted □ = ∂^α ∂_α. The signs in the following tensor analysis depend on the convention used for the metric tensor. The convention used here is (+ − − −), corresponding to the Minkowski metric tensor η_{αβ} = diag(+1, −1, −1, −1). Electromagnetic tensor The electromagnetic tensor is the combination of the electric and magnetic fields into a covariant antisymmetric tensor F_{αβ} = ∂_α A_β − ∂_β A_α whose entries are B-field quantities, and the result of raising its indices is the contravariant tensor F^{μν}, where E is the elec
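For reference, a commonly used explicit form of these two tensors, assuming SI units, the (+ − − −) signature used above, and field components E = (E_x, E_y, E_z), B = (B_x, B_y, B_z); sign placement varies between conventions, so treat this as one standard choice rather than the article's own matrix:

$$F_{\alpha\beta}=\begin{pmatrix}0 & E_x/c & E_y/c & E_z/c\\ -E_x/c & 0 & -B_z & B_y\\ -E_y/c & B_z & 0 & -B_x\\ -E_z/c & -B_y & B_x & 0\end{pmatrix},\qquad F^{\mu\nu}=\begin{pmatrix}0 & -E_x/c & -E_y/c & -E_z/c\\ E_x/c & 0 & -B_z & B_y\\ E_y/c & B_z & 0 & -B_x\\ E_z/c & -B_y & B_x & 0\end{pmatrix}.$$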
https://en.wikipedia.org/wiki/Johnjoe%20McFadden
Johnjoe McFadden (born 17 May 1956) is an Anglo-Irish scientist, academic and writer. He is Professor of Molecular Genetics at the University of Surrey, United Kingdom. Life McFadden was born in Donegal, Ireland but raised in the UK. He holds joint British and Irish Nationality. He obtained his BSc in Biochemistry University of London in 1977 and his PhD at Imperial College London in 1982. He went on to work on human genetic diseases and then infectious diseases, at St Mary's Hospital Medical School, London (1982–84) and St George's Hospital Medical School, London (1984–88) and then at the University of Surrey in Guildford, UK. For more than a decade, McFadden has researched the genetics of microbes such as the agents of tuberculosis and meningitis and invented a test for the diagnosis of meningitis. He has published more than 100 articles in scientific journals on subjects as wide-ranging as bacterial genetics, tuberculosis, idiopathic diseases and computer modelling of evolution. He has contributed to more than a dozen books and has edited a book on the genetics of mycobacteria. He produced a widely reported artificial life computer model which modelled evolution in organisms. McFadden has lectured extensively in the UK, Europe, the US and Japan and his work has been featured on radio, television and national newspaper articles particularly for the Guardian. His present post, which he has held since 2001, is Professor of Molecular Genetics at the University of Surrey. Living in London, he is married and has one son. Quantum evolution McFadden wrote the popular science book, Quantum Evolution. The book examines the role of quantum mechanics in life, evolution and consciousness. The book has been described as offering an alternative evolutionary mechanism, beyond the neo-Darwinian framework. The book received positive reviews by Kirkus Reviews and Publishers Weekly. It was negatively reviewed in the journal Heredity by evolutionary biologist Wallace Arthur. W
https://en.wikipedia.org/wiki/Gain%20compression
Gain compression is a reduction in differential or slope gain caused by nonlinearity of the transfer function of the amplifying device. This nonlinearity may be caused by heat due to power dissipation or by overdriving the active device beyond its linear region. It is a large-signal phenomenon of circuits. Relevance Gain compression is relevant in any system with a wide dynamic range, such as audio or RF. It is more common in tube circuits than transistor circuits, due to topology differences, possibly causing the differences in audio performance called "valve sound". The front-end RF amps of radio receivers are particularly susceptible to this phenomenon when overloaded by a strong unwanted signal. Audio effects A tube radio or tube amplifier will increase in volume to a point, and then as the input signal extends beyond the linear range of the device, the effective gain is reduced, altering the shape of the waveform. The effect is also present in transistor circuits. The extent of the effect depends on the topology of the amplifier. Differences between clipping and compression Clipping, as a form of signal compression, differs from the operation of the typical studio audio level compressor, in which gain compression is not instantaneous (delayed in time via attack and release settings). Clipping destroys any audio information which is over a certain threshold. Compression and limiting, change the shape of the entire waveform, not just the shape of the waveform above the threshold. This is why it is possible to limit and compress with very high ratios without causing distortion. Limiting or clipping Gain is a linear operation. Gain compression is not linear and, as such, its effect is one of distortion, due to the nonlinearity of the transfer characteristic which also causes a loss of 'slope' or 'differential' gain. So the output is less than expected using the small signal gain of the amplifier. In clipping, the signal is abruptly limited to a cert
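A small numerical sketch of the distinction, using a hard clip and a smooth tanh curve as a generic stand-in for a compressive transfer characteristic (not a model of any particular tube or transistor stage):

```python
# Hard clipping versus a smooth compressive curve (tanh as a generic stand-in).
import numpy as np

t = np.linspace(0.0, 1.0, 1000)
x = 2.0 * np.sin(2.0 * np.pi * 5.0 * t)        # drive the stage well beyond unity

hard = np.clip(x, -1.0, 1.0)                    # abrupt limiting: flat tops appear
soft = np.tanh(x)                               # gradual gain reduction as level rises

# Differential (small-signal) gain vs. effective gain at the peak of the drive signal.
eps = 1e-3
small_signal_gain = (np.tanh(eps) - np.tanh(-eps)) / (2.0 * eps)   # ~1.0 near zero
effective_gain_soft = soft.max() / x.max()                          # tanh(2)/2 ~ 0.48
effective_gain_hard = hard.max() / x.max()                          # ~0.5 at this drive level
print(small_signal_gain, effective_gain_soft, effective_gain_hard)
```

The effective gain at the peak is well below the small-signal gain in both cases, which is the gain compression described above; the clipped waveform simply reaches that reduced gain by abruptly truncating everything above the threshold.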
https://en.wikipedia.org/wiki/Trombi%E2%80%93Varadarajan%20theorem
In mathematics, the Trombi–Varadarajan theorem, introduced by , gives an isomorphism between a certain space of spherical functions on a semisimple Lie group, and a certain space of holomorphic functions defined on a tubular neighborhood of the dual of a Cartan subalgebra.
https://en.wikipedia.org/wiki/Mel-frequency%20cepstrum
In sound processing, the mel-frequency cepstrum (MFC) is a representation of the short-term power spectrum of a sound, based on a linear cosine transform of a log power spectrum on a nonlinear mel scale of frequency. Mel-frequency cepstral coefficients (MFCCs) are coefficients that collectively make up an MFC. They are derived from a type of cepstral representation of the audio clip (a nonlinear "spectrum-of-a-spectrum"). The difference between the cepstrum and the mel-frequency cepstrum is that in the MFC, the frequency bands are equally spaced on the mel scale, which approximates the human auditory system's response more closely than the linearly-spaced frequency bands used in the normal spectrum. This frequency warping can allow for better representation of sound, for example, in audio compression that might potentially reduce the transmission bandwidth and the storage requirements of audio signals. MFCCs are commonly derived as follows: Take the Fourier transform of (a windowed excerpt of) a signal. Map the powers of the spectrum obtained above onto the mel scale, using triangular overlapping windows or alternatively, cosine overlapping windows. Take the logs of the powers at each of the mel frequencies. Take the discrete cosine transform of the list of mel log powers, as if it were a signal. The MFCCs are the amplitudes of the resulting spectrum. There can be variations on this process, for example: differences in the shape or spacing of the windows used to map the scale, or addition of dynamics features such as "delta" and "delta-delta" (first- and second-order frame-to-frame difference) coefficients. The European Telecommunications Standards Institute in the early 2000s defined a standardised MFCC algorithm to be used in mobile phones. Applications MFCCs are commonly used as features in speech recognition systems, such as the systems which can automatically recognize numbers spoken into a telephone. MFCCs are also increasingly finding uses in mus
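A minimal sketch of those four steps for a single frame, using NumPy only; the frame length, filter count, and test tone are illustrative choices rather than any standardized parameter set:

```python
# MFCCs for one frame: window + FFT, mel filter bank, log, DCT.
import numpy as np

def hz_to_mel(f):  return 2595.0 * np.log10(1.0 + f / 700.0)
def mel_to_hz(m):  return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sr, n_mels=26, n_ceps=13):
    # 1. Windowed Fourier transform -> power spectrum.
    spectrum = np.fft.rfft(frame * np.hamming(len(frame)))
    power = np.abs(spectrum) ** 2
    # 2. Triangular filter bank, equally spaced on the mel scale.
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((len(frame) + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, len(power)))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    # 3. Log of the mel-band energies.
    log_mel = np.log(fbank @ power + 1e-10)
    # 4. DCT of the log energies; keep the lowest coefficients as the MFCCs.
    n = np.arange(n_mels)
    dct = np.cos(np.pi * np.outer(np.arange(n_ceps), 2 * n + 1) / (2 * n_mels))
    return dct @ log_mel

sr = 16000
tone = np.sin(2 * np.pi * 440.0 * np.arange(512) / sr)   # one 512-sample frame of a 440 Hz tone
print(mfcc_frame(tone, sr).round(2))
```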
https://en.wikipedia.org/wiki/Biomedical%20waste
Biomedical waste or hospital waste is any kind of waste containing infectious (or potentially infectious) materials generated during the treatment of humans or animals as well as during research involving biologics. It may also include waste associated with the generation of biomedical waste that visually appears to be of medical or laboratory origin (e.g. packaging, unused bandages, infusion kits etc.), as well as research laboratory waste containing biomolecules or organisms that are mainly restricted from environmental release. As detailed below, discarded sharps are considered biomedical waste whether they are contaminated or not, due to the possibility of being contaminated with blood and their propensity to cause injury when not properly contained and disposed of. Biomedical waste is a type of biowaste. Biomedical waste may be solid or liquid. Examples of infectious waste include discarded blood, sharps, unwanted microbiological cultures and stocks, identifiable body parts (including those as a result of amputation), other human or animal tissue, used bandages and dressings, discarded gloves, other medical supplies that may have been in contact with blood and body fluids, and laboratory waste that exhibits the characteristics described above. Waste sharps include potentially contaminated used (and unused discarded) needles, scalpels, lancets and other devices capable of penetrating skin. Biomedical waste is generated from biological and medical sources and activities, such as the diagnosis, prevention, or treatment of diseases. Common generators (or producers) of biomedical waste include hospitals, health clinics, nursing homes, emergency medical services, medical research laboratories, offices of physicians, dentists, veterinarians, home health care and morgues or funeral homes. In healthcare facilities (i.e. hospitals, clinics, doctor's offices, veterinary hospitals and clinical laboratories), waste with these characteristics may alternatively be called medical
https://en.wikipedia.org/wiki/Processor%20register
A processor register is a quickly accessible location available to a computer's processor. Registers usually consist of a small amount of fast storage, although some registers have specific hardware functions, and may be read-only or write-only. In computer architecture, registers are typically addressed by mechanisms other than main memory, but may in some cases be assigned a memory address e.g. DEC PDP-10, ICT 1900. Almost all computers, whether load/store architecture or not, load items of data from a larger memory into registers where they are used for arithmetic operations, bitwise operations, and other operations, and are manipulated or tested by machine instructions. Manipulated items are then often stored back to main memory, either by the same instruction or by a subsequent one. Modern processors use either static or dynamic RAM as main memory, with the latter usually accessed via one or more cache levels. Processor registers are normally at the top of the memory hierarchy, and provide the fastest way to access data. The term normally refers only to the group of registers that are directly encoded as part of an instruction, as defined by the instruction set. However, modern high-performance CPUs often have duplicates of these "architectural registers" in order to improve performance via register renaming, allowing parallel and speculative execution. Modern x86 design acquired these techniques around 1995 with the releases of Pentium Pro, Cyrix 6x86, Nx586, and AMD K5. When a computer program accesses the same data repeatedly, this is called locality of reference. Holding frequently used values in registers can be critical to a program's performance. Register allocation is performed either by a compiler in the code generation phase, or manually by an assembly language programmer. Size Registers are normally measured by the number of bits they can hold, for example, an "8-bit register", "32-bit register", "64-bit register", or even more. In some instruc
https://en.wikipedia.org/wiki/Teller%20assist%20unit
Teller Assist Units (TAU), also known as Automatic Teller Safes (ATS) or Teller Cash Dispensers (TCD), are devices used in retail banking for the disbursement of money at a bank teller wicket or a centralized area. Other areas of application of TAUs include the automation of starting and reconciling teller or cashier drawers (tills) in retail, check cashing, payday loan / advance, grocery, and casino operations. Cash supplies are held in a vault or safe. Disbursements and acceptance of money take place by means of inputting information through a separate computer to the cash dispensing mechanism inside the vault, which is similar in construction to an automatic teller machine vault. A TAU provides a secure and auditable way for tellers to handle large amounts of cash without undue risk from robbery. Some TAUs can be networked and monitored remotely from a central location, thereby reducing oversight and management resources. Special security considerations TAUs may delay the dispensing of large amounts of money for up to several minutes to discourage bank robberies. It is, however, very likely that someone present on the premises has the means to open the cash vault of the device. TAUs may be accessed by keys, combination, or a mix of the two. Construction A TAU consists of: A vault Cash handling mechanism Alarm sensors In the TAU's cash handling mechanisms are several money cartridges. These can be equipped with different cash notes or coinage. The input into the controlling computer makes it possible for the unit to disburse the correct amounts. Notes are tested to ensure that they are removed correctly from the cartridges and that no surplus notes are removed. False disbursements are possible, although very rare. Modern TAUs can also be used for depositing and recycling banknotes. They use bill validation technology to help ensure the authenticity and fitness of the received cash before it is accepted and recycled to be presented to the customer. Differences from
https://en.wikipedia.org/wiki/Dependent%20and%20independent%20variables
Dependent and independent variables are variables in mathematical modeling, statistical modeling and experimental sciences. Dependent variables are studied under the supposition or demand that they depend, by some law or rule (e.g., by a mathematical function), on the values of other variables. Independent variables, in turn, are not seen as depending on any other variable in the scope of the experiment in question. In this sense, some common independent variables are time, space, density, mass, fluid flow rate, and previous values of some observed value of interest (e.g. human population size) to predict future values (the dependent variable). Of the two, it is always the dependent variable whose variation is being studied, by altering inputs, also known as regressors in a statistical context. In an experiment, any variable that can be attributed a value without attributing a value to any other variable is called an independent variable. Models and experiments test the effects that the independent variables have on the dependent variables. Sometimes, even if their influence is not of direct interest, independent variables may be included for other reasons, such as to account for their potential confounding effect. In pure mathematics In mathematics, a function is a rule for taking an input (in the simplest case, a number or set of numbers) and providing an output (which may also be a number). A symbol that stands for an arbitrary input is called an independent variable, while a symbol that stands for an arbitrary output is called a dependent variable. The most common symbol for the input is , and the most common symbol for the output is ; the function itself is commonly written . It is possible to have multiple independent variables or multiple dependent variables. For instance, in multivariable calculus, one often encounters functions of the form , where is a dependent variable and and are independent variables. Functions with multiple outputs are often ref
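As a small illustration of the statistical usage described above, the sketch below treats x as the independent variable (the regressor) and y as the dependent variable whose variation is studied; the linear law and noise level are arbitrary choices made only for this example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Independent variable: set directly by the experimenter (here, evenly spaced values).
x = np.linspace(0, 10, 50)

# Dependent variable: assumed to depend on x through some law, plus measurement noise.
y = 2.5 * x + 1.0 + rng.normal(scale=1.5, size=x.size)

# Recover the assumed law y = a*x + b from the data; x plays the role of the regressor.
a, b = np.polyfit(x, y, 1)
print(f"estimated slope a = {a:.2f}, intercept b = {b:.2f}")
```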
https://en.wikipedia.org/wiki/Software%20engine
A software engine is a core component of a complex software system. Alternate phrases include "software core" and "software core engine", or just "core engine". Definitions The word "engine" is a metaphor drawn from the car engine: a software engine is likewise a complex subsystem. There is no formal guideline for what should be called an engine, but the term has become ingrained in the software industry. Notable examples are database engines, graphics engines, physics engines, search engines, plotting engines, and game engines. Moreover, a web browser has two components referred to as engines: the browser engine and the JavaScript engine. Classically, an engine is something packaged as a library, such as ".sa", ".so", or ".dll", that provides functionality to the software that loads or embeds it. Engines may produce graphics, such as the Python matplotlib or the Objective-C Core Plot. Engines do not in and of themselves generally have standalone user interfaces: they are not applications. A distinguishing characteristic of an engine is its presentation as an API. Examples Engines may be used to produce higher-level services that are applications, and the application developers or the management may choose to call the service an "engine". In the context of the packaging of software components, "engine" means one thing. In the context of advertising an online service, "engine" may mean something entirely different. In the arena of "core software development", an engine is a software module that might be included in other software using a package manager such as NuGet for C#, Pipenv for Python, and Swift Package Manager for the Swift language. One seeming outlier is a search engine, such as Google Search, because it is a standalone service provided to end users. However, for the search provider, the engine is part of a distributed computing system that can encompass many data centres throughout the world. The word "engine" is evolving along with the evolutio
https://en.wikipedia.org/wiki/Virtual%20network%20interface
A virtual network interface (VNI) is an abstract virtualized representation of a computer network interface that may or may not correspond directly to a network interface controller. Operating system level It is common for the operating system kernel to maintain a table of virtual network interfaces in memory. This may allow the system to store and operate on such information independently of the physical interface involved (or even whether it is a direct physical interface or for instance a tunnel or a bridged interface). It may also allow processes on the system to interact concerning network connections in a more granular fashion than simply to assume a single amorphous "Internet" (of unknown capacity or performance). W. Richard Stevens, in volume 2 of his treatise entitled TCP/IP Illustrated, refers to the kernel's Virtual Interface Table in his discussion of multicast routing. For example, a multicast router may operate differently on interfaces that represent tunnels than on physical interfaces (e.g. it may only need to collect membership information for physical interfaces). Thus the virtual interface may need to divulge some specifics to the user, such as whether or not it represents a physical interface directly. In addition to allowing user space applications to refer to abstract network interface connections, in some systems a virtual interface framework may allow processes to better coordinate the sharing of a given physical interface (beyond the default operating system behavior) by hierarchically subdividing it into abstract interfaces with specified bandwidth limits and queueing models. This can imply restriction of the process, e.g. by inheriting a limited branch of such a hierarchy from which it may not stray. This extra layer of network abstraction is often unnecessary, and may have a minor performance penalty. However, it is also possible to use such a layer of abstraction to work around a performance bottleneck, indeed even to bypass the k
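As a tiny, hedged illustration of the interface table mentioned above, the following sketch simply asks the operating system for its currently known network interfaces; the name-prefix heuristic for guessing which ones are virtual (lo, tun, br, veth, ...) is an assumption made for the example, not part of any standard.

```python
import socket

# Heuristic only: common name prefixes for virtual interfaces on Linux-like systems.
VIRTUAL_PREFIXES = ("lo", "tun", "tap", "br", "veth", "docker")

# socket.if_nameindex() is available on Unix-like systems (Python 3.3+).
for index, name in socket.if_nameindex():
    kind = "virtual?" if name.startswith(VIRTUAL_PREFIXES) else "physical?"
    print(f"{index:>2}  {name:<12} {kind}")
```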
https://en.wikipedia.org/wiki/MinutePhysics
MinutePhysics is an educational YouTube channel created by Henry Reich in 2011. The channel's videos use whiteboard animation to explain physics-related topics. Early videos on the channel were approximately one minute long. , the channel has over 5.6 million subscribers. Background and video content MinutePhysics was created by Henry Reich in 2011. Reich attended Grinnell College, where he studied mathematics and physics. He then attended the Perimeter Institute for Theoretical Physics, where he earned his Master's degree in theoretical physics from the institute's Perimeter Scholars International program. The video content on MinutePhysics deals with concepts in physics. Examples of videos Reich has uploaded onto the channel include one dealing with the concept of "touch" in regards to electromagnetism. Another deals with the concept of dark matter. The most viewed MinutePhysics video, with more than 17 million views, discusses whether it is more suitable to walk or to run when trying to avoid rain. Reich also has uploaded a series of three videos explaining the Higgs Boson. In March 2020, Reich produced a video that explained exponential projection of statistics as data is being collected, using the evolving record related to COVID-19 data. Collaborations MinutePhysics has collaborated with Vsauce, as well as the director of the Perimeter Institute for Theoretical Physics, Neil Turok, and Destin Sandlin of Smarter Every Day. MinutePhysics also has made two videos that were narrated by Neil deGrasse Tyson and one video narrated by Tom Scott. The channel also collaborated with physicist Sean M. Carroll in a five-part video series on time and entropy and with Grant Sanderson on a video about a lost lecture of physicist Richard Feynman, as well as a video about Bell's Theorem. In 2015, Reich collaborated with Randall Munroe on a video titled "How To Go To Space", which was animated similarly to the style found in Munroe's webcomic xkcd. Google tapped Reich for th
https://en.wikipedia.org/wiki/Signal%20subspace
In signal processing, signal subspace methods are empirical linear methods for dimensionality reduction and noise reduction. These approaches have attracted significant interest and investigation recently in the context of speech enhancement, speech modeling, and speech classification research. The signal subspace is also used in radio direction finding using the MUSIC algorithm. Essentially the methods represent the application of a principal components analysis (PCA) approach to ensembles of observed time-series obtained by sampling, for example sampling an audio signal. Such samples can be viewed as vectors in a high-dimensional vector space over the real numbers. PCA is used to identify a set of orthogonal basis vectors (basis signals) which capture as much as possible of the energy in the ensemble of observed samples. The vector space spanned by the basis vectors identified by the analysis is then the signal subspace. The underlying assumption is that information in speech signals is almost completely contained in a small linear subspace of the overall space of possible sample vectors, whereas additive noise is typically distributed through the larger space isotropically (for example when it is white noise). By projecting a sample on a signal subspace, that is, keeping only the component of the sample that is in the signal subspace defined by linear combinations of the first few most energized basis vectors, and throwing away the rest of the sample, which is in the remainder of the space orthogonal to this subspace, a certain amount of noise filtering is then obtained. Signal subspace noise-reduction can be compared to Wiener filter methods. There are two main differences: The basis signals used in Wiener filtering are usually harmonic sine waves, into which a signal can be decomposed by Fourier transform. In contrast, the basis signals used to construct the signal subspace are identified empirically, and may for example be chirps, or particular character
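A minimal sketch of the projection idea described above, under simplifying assumptions (a synthetic two-tone signal, consecutive frames as the ensemble, and an arbitrary choice of frame length and subspace rank): frames are stacked into a matrix, an empirical basis is found by SVD, and each frame is projected onto the few most energetic basis vectors.

```python
import numpy as np

rng = np.random.default_rng(1)

# A clean two-tone signal plus isotropic white noise.
n, frame, k = 4000, 80, 6
t = np.arange(n) / 8000.0
clean = np.sin(2 * np.pi * 300 * t) + 0.5 * np.sin(2 * np.pi * 620 * t)
noisy = clean + 0.8 * rng.standard_normal(n)

# Ensemble of observed sample vectors: consecutive frames as rows of X.
X = noisy.reshape(-1, frame)
C = clean.reshape(-1, frame)

# Empirical basis signals via SVD; keep the k most energetic ones as the
# signal subspace and keep only each frame's component inside that subspace.
_, _, Vt = np.linalg.svd(X, full_matrices=False)
basis = Vt[:k]                       # (k, frame) orthonormal basis signals
denoised = (X @ basis.T) @ basis     # projection of every frame onto the subspace

print("mean squared error before:", float(np.mean((X - C) ** 2)))
print("mean squared error after: ", float(np.mean((denoised - C) ** 2)))
```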
https://en.wikipedia.org/wiki/Regeneration%20%28ecology%29
In ecology, regeneration is the ability of an ecosystem (specifically, the environment and its living population) to renew and recover from damage. It is a kind of biological regeneration. Regeneration refers to ecosystems replenishing what is being eaten, disturbed, or harvested. Regeneration's biggest force is photosynthesis, which transforms solar energy and nutrients into plant biomass. Resilience to minor disturbances is one characteristic feature of healthy ecosystems. Following major (lethal) disturbances, such as a fire or pest outbreak in a forest, an immediate return to the previous dynamic equilibrium will not be possible. Instead, pioneering species will occupy, compete for space, and establish themselves in the newly opened habitat. This new growth of seedlings and the community assembly process is known as regeneration in ecology. As ecological succession sets in, a forest will slowly regenerate towards its former state within the succession (climax or any intermediate stage), provided that all outer parameters (climate, soil fertility, availability of nutrients, animal migration paths, air pollution or the absence thereof, etc.) remain unchanged. In certain regions like Australia, natural wildfire is a necessary condition for a cyclically stable ecosystem with cyclic regeneration. Artificial disturbances While natural disturbances are usually fully compensated for by the rules of ecological succession, human interference can significantly alter the regenerative homeostatic faculties of an ecosystem, up to a degree at which self-healing will not be possible. For regeneration to occur, active restoration must be attempted. See also Bush regeneration Biocapacity Ecological stability Ecoscaping Forest ecology Net Primary Productivity Pioneer species Regenerative design Regenerative agriculture Soil regeneration
https://en.wikipedia.org/wiki/Network-In-a-Box
A Network-In-a-Box (NIB) is the combination, into a single device (a 'box'), of multiple components of a computer network that are traditionally provided as separate devices. Examples In 2021, the company Genie launched a 5G Network-In-a-Box to run as an on-premises service. In August 2021, Tecore Networks launched a 5G Network-In-a-Box, which also supported 3GPP and LTE. History In 2014, an open-source hardware Network-In-a-Box based on OpenBTS was deployed in West Papua, Indonesia.
https://en.wikipedia.org/wiki/Mixed%20criticality
A mixed criticality system is a system containing computer hardware and software that can execute several applications of different criticality, such as safety-critical and non-safety-critical, or of different Safety Integrity Levels (SIL). Applications of different criticality are engineered to different levels of assurance, with high-criticality applications being the most costly to design and verify. These kinds of systems are typically embedded in a machine such as an aircraft whose safety must be ensured. Principle Traditional safety-critical systems had to be tested and certified in their entirety to show that they were safe to use. However, many such systems are composed of a mixture of safety-critical and non-critical parts, as for example when an aircraft contains a passenger entertainment system that is isolated from the safety-critical flight systems. Some issues to address in mixed criticality systems include real-time behaviour, memory isolation, and data and control coupling. Computer scientists have developed techniques for handling systems which thus have mixed criticality, but there are many challenges remaining, especially for multi-core hardware. Priority and Criticality A common source of error is confusing priority assignment with criticality management. While priority defines an order between different tasks or messages to be transmitted inside a system, criticality defines classes of messages which can have different parameters depending on the current use case. For example, in the case of crash avoidance or obstacle anticipation, camera sensors can suddenly emit messages more often, and so create an overload in the system. This is where mixed criticality comes into play: selecting which messages the system must still guarantee in such overload cases. Research projects EU funded research projects on mixed criticality include: MultiPARTES DREAMS PROXIMA CONTREX SAFURE CERTAINTY VIRTICAL T-CREST
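To make the priority-versus-criticality distinction above concrete, here is a toy, hedged sketch (it does not correspond to any of the cited projects or to a real scheduler): under overload, messages are first filtered by a criticality threshold, and only then does priority decide the serving order.

```python
from dataclasses import dataclass

@dataclass
class Message:
    name: str
    priority: int      # ordering inside the system (lower value = served first)
    criticality: int   # assurance class (higher value = must be guaranteed)

def schedule(messages, overloaded, min_criticality=2):
    """Toy rule: criticality decides WHAT survives an overload,
    priority only decides IN WHICH ORDER the survivors are served."""
    if overloaded:
        messages = [m for m in messages if m.criticality >= min_criticality]
    return sorted(messages, key=lambda m: m.priority)

queue = [
    Message("camera_burst", priority=1, criticality=1),   # frequent, low assurance
    Message("brake_command", priority=2, criticality=3),  # must always be delivered
    Message("infotainment", priority=3, criticality=0),
]

print([m.name for m in schedule(queue, overloaded=False)])
print([m.name for m in schedule(queue, overloaded=True)])  # low-criticality traffic shed
```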
https://en.wikipedia.org/wiki/Liquefaction
In materials science, liquefaction is a process that generates a liquid from a solid or a gas or that generates a non-liquid phase which behaves in accordance with fluid dynamics. It occurs both naturally and artificially. As an example of the latter, a "major commercial application of liquefaction is the liquefaction of air to allow separation of the constituents, such as oxygen, nitrogen, and the noble gases." Another is the conversion of solid coal into a liquid form usable as a substitute for liquid fuels. Geology In geology, soil liquefaction refers to the process by which water-saturated, unconsolidated sediments are transformed into a substance that acts like a liquid, often in an earthquake. Soil liquefaction was blamed for building collapses in the city of Palu, Indonesia in October 2018. In a related phenomenon, liquefaction of bulk materials in cargo ships may cause a dangerous shift in the load. Physics and chemistry In physics and chemistry, the phase transitions from solid and gas to liquid (melting and condensation, respectively) may be referred to as liquefaction. The melting point (sometimes called liquefaction point) is the temperature and pressure at which a solid becomes a liquid. In commercial and industrial situations, the process of condensing a gas to liquid is sometimes referred to as liquefaction of gases. Coal Coal liquefaction is the production of liquid fuels from coal using a variety of industrial processes. Dissolution Liquefaction is also used in commercial and industrial settings to refer to mechanical dissolution of a solid by mixing, grinding or blending with a liquid. Food preparation In kitchen or laboratory settings, solids may be chopped into smaller parts sometimes in combination with a liquid, for example in food preparation or laboratory use. This may be done with a blender, or liquidiser in British English. Irradiation Liquefaction of silica and silicate glasses occurs on electron beam irradiation of nanos
https://en.wikipedia.org/wiki/Abstract%20structure
An abstract structure is an abstraction, such as a geometric space or a set structure, or a hypostatic abstraction, that is defined by a set of mathematical theorems and laws, properties and relationships in a way that is logically, if not always historically, independent of the structure of contingent experiences, for example those involving physical objects. Abstract structures are studied not only in logic and mathematics but in the fields that apply them, such as computer science and computer graphics, and in the studies that reflect on them, such as philosophy (especially the philosophy of mathematics). Indeed, modern mathematics has been defined in a very general sense as the study of abstract structures (by the Bourbaki group: see discussion there, at algebraic structure and also structure). An abstract structure may be represented (perhaps with some degree of approximation) by one or more physical objects; this is called an implementation or instantiation of the abstract structure. But the abstract structure itself is defined in a way that is not dependent on the properties of any particular implementation. An abstract structure has a richer structure than a concept or an idea. An abstract structure must include precise rules of behaviour which can be used to determine whether a candidate implementation actually matches the abstract structure in question, and it must be free from contradictions. Thus we may debate how well a particular government fits the concept of democracy, but there is no room for debate over whether a given sequence of moves is or is not a valid game of chess (for example Kasparovian approaches). Examples A sorting algorithm is an abstract structure, but a recipe is not, because it depends on the properties and quantities of its ingredients. A simple melody is an abstract structure, but an orchestration is not, because it depends on the properties of particular instruments. Euclidean geometry is an abstract structure, but t
https://en.wikipedia.org/wiki/Kaid%C4%81%20glyphs
are a set of pictograms once used in the Yaeyama Islands of southwestern Japan. The word kaidā was taken from Yonaguni, and most studies on the pictographs focused on Yonaguni Island. However, there is evidence for their use in Yaeyama's other islands, most notably on Taketomi Island. They were used primarily for tax notices, thus were closely associated with the poll tax imposed on Yaeyama by Ryūkyū on Okinawa Island, which was in turn dominated by Satsuma Domain on Southern Kyushu. Etymology Sudō (1944) hypothesized that the etymology of kaidā was , which meant "government office" in Satsuma Domain. This term was borrowed by Ryūkyū on Okinawa and also by the bureaucrats of Yaeyama (karja: in Modern Ishigaki). Standard Japanese /j/ regularly corresponds to /d/ in Yonaguni, and /r/ is often dropped when surrounded by vowels. This theory is in line with the primary impetus for Kaidā glyphs, taxation. History Immediately after conquering Ryūkyū, Satsuma conducted a land survey in Okinawa in 1609 and in Yaeyama in 1611. By doing so, Satsuma decided the amount of tribute to be paid annually by Ryūkyū. Following that, Ryūkyū imposed a poll tax on Yaeyama in 1640. A fixed quota was allocated to each island and then was broken up into each community. Finally, quotas were set for the individual islanders, adjusted only by age and gender. Community leaders were notified of quotas in the government office on Ishigaki. They checked the calculation using warazan (barazan in Yaeyama), a straw-based method of calculation and recording numerals that was reminiscent of Incan Quipu. After that, the quota for each household was written on a wooden plate called . That was where Kaidā glyphs were used. Although sōrō-style Written Japanese had the status of administrative language, the remote islands had to rely on pictograms to notify illiterate peasants. According to a 19th-century document cited by the Yaeyama rekishi (1954), an official named Ōhama Seiki designed "perfect ideograp
https://en.wikipedia.org/wiki/Codon%20reassignment
Codon reassignment is the biological process by which the genetic code of a cell is changed in response to the environment. It may be caused by alternative tRNA aminoacylation, in which the cell modifies the target amino acid of a particular type of transfer RNA. This process has been identified in bacteria, yeast and human cancer cells. In human cancer cells, codon reassignment can be triggered by tryptophan depletion, resulting in proteins in which tryptophan is substituted by phenylalanine. See also Expanded genetic code
https://en.wikipedia.org/wiki/Software%20engineering%20demographics
Software engineers form part of the workforce around the world. There are an estimated 26.9 million professional software engineers in the world as of 2022, up from 21 million in 2016. By Country United States In 2022, there were an estimated 4.4 million professional software engineers in North America. There are 152 million people employed in the US workforce, making software engineers 2.54% of the total workforce. The total above is an increase compared to around 3.87 million software engineers employed in 2016. Summary Based on data from the U.S. Bureau of Labor Statistics from 2002, about 612,000 software engineers worked in the U.S. about one out of every 200 workers. There were 55% to 60% as many software engineers as all traditional engineers. This comparison holds whether one compares the number of practitioners, managers, educators, or technicians/programmers. Software engineering had 612,000 practitioners; 264,790 managers, 16,495 educators, and 457,320 programmers. Software Engineers Vs. Traditional Engineers The following two tables compare the number of software engineers (611,900 in 2002) versus the number of traditional engineers (1,157,020 in 2002). There are another 1,500,000 people in system analysis, system administration, and computer support, many of whom might be called software engineers. Many systems analysts manage software development teams and analysis is an important software engineering role, so many of them might be considered software engineers in the near future. This means that the number of software engineers may actually be much higher. It's important to note that the number of software engineers declined by 5-to-10 percent from 2000 to 2002. Computer Managers Versus Construction and Engineering Managers Computer and information system managers (264,790) manage software projects, as well as computer operations. Similarly, Construction and engineering managers (413,750) oversee engineering projects, manufacturing plants
https://en.wikipedia.org/wiki/Software%20requirements
Software requirements for a system are the description of what the system should do, the service or services that it provides and the constraints on its operation. The IEEE Standard Glossary of Software Engineering Terminology defines a requirement as: A condition or capability needed by a user to solve a problem or achieve an objective. A condition or capability that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document. A documented representation of a condition or capability as in 1 or 2. The activities related to working with software requirements can broadly be broken down into elicitation, analysis, specification, and management. Note that the term software requirements is also used in software release notes to describe which prerequisite software packages are required for a given piece of software to be built, installed, or used. Elicitation Elicitation is the gathering and discovery of requirements from stakeholders and other sources. A variety of techniques can be used, such as joint application design (JAD) sessions, interviews, document analysis, focus groups, etc. Elicitation is the first step of requirements development. Analysis Analysis is the logical breakdown that proceeds from elicitation. Analysis involves reaching a richer and more precise understanding of each requirement and representing sets of requirements in multiple, complementary ways. Requirements triage, or prioritization of requirements, is another activity which often follows analysis. This relates to the planning phase of Agile software development (e.g. planning poker), although the practice may differ depending on the context and nature of the project, its requirements, and the product or service being built. Specification Specification involves representing and storing the collected requirements knowledge in a persistent and well-organized fashion that facilitates effective communication and chan
https://en.wikipedia.org/wiki/Redundancy%20principle%20%28biology%29
The redundancy principle in biology expresses the need for many copies of the same entity (cells, molecules, ions) to fulfill a biological function. Examples are numerous: disproportionate numbers of spermatozoa during fertilization compared to one egg, large numbers of neurotransmitters released during neuronal communication compared to the number of receptors, large numbers of calcium ions released during transients in cells, and many more in molecular and cellular transduction or gene activation and cell signaling. This redundancy is particularly relevant when the sites of activation are physically separated from the initial position of the molecular messengers. The redundancy is often generated for the purpose of resolving the time constraint of fast-activating pathways. It can be expressed in terms of the theory of extreme statistics to determine its laws and quantify how the shortest paths are selected. The main goal is to estimate these large numbers from physical principles and mathematical derivations. When a large distance separates the source and the target (a small activation site), the redundancy principle explains that this geometrical gap can be compensated for by large numbers. Had nature used fewer copies than normal, activation would have taken a much longer time, as finding a small target by chance is a rare event and falls into narrow escape problems. Molecular rate The time for the fastest particles to reach a target in the context of redundancy depends on the number of copies and on the local geometry of the target. In most cases, this time defines the rate of activation. This rate should be used instead of the classical Smoluchowski rate, which describes the mean arrival time, but not the fastest. The statistics of the minimal time to activation set kinetic laws in biology, which can be quite different from the ones associated with average times. Physical models Stochastic process The motion of a particle located at position can be described by the Smoluchowski's limit
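The effect of redundancy on the time of the fastest arrival can be illustrated with a crude simulation; the sketch below is an assumption-laden toy (one-dimensional Brownian motion, an arbitrary target distance, and a fixed time step), not the narrow-escape analysis itself, but it shows the qualitative point: the more copies are released, the sooner the first one arrives.

```python
import numpy as np

rng = np.random.default_rng(0)

def median_fastest_arrival(copies, trials=100, distance=5.0, dt=0.01, max_steps=30_000):
    """Median (over trials) of the time at which the FIRST of `copies`
    one-dimensional Brownian particles, all released at the origin, reaches `distance`."""
    x = np.zeros((trials, copies))
    t_hit = np.full(trials, np.inf)              # inf = not yet arrived within the horizon
    for step in range(1, max_steps + 1):
        x += np.sqrt(dt) * rng.standard_normal(x.shape)
        arrived = (x >= distance).any(axis=1)
        t_hit = np.where(np.isinf(t_hit) & arrived, step * dt, t_hit)
        if np.isfinite(t_hit).all():
            break
    return float(np.median(t_hit))

for n in (1, 10, 100, 1000):
    print(f"{n:>5} copies: median time of the fastest arrival ~ {median_fastest_arrival(n):7.2f}")
```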
https://en.wikipedia.org/wiki/List%20of%20conjectures
This is a list of notable mathematical conjectures. Open problems The following conjectures remain open. The (incomplete) column "cites" lists the number of results for a Google Scholar search for the term, in double quotes . Conjectures now proved (theorems) The conjecture terminology may persist: theorems often enough may still be referred to as conjectures, using the anachronistic names. Deligne's conjecture on 1-motives Goldbach's weak conjecture (proved in 2013) Sensitivity conjecture (proved in 2019) Disproved (no longer conjectures) The conjectures in following list were not necessarily generally accepted as true before being disproved. Atiyah conjecture (not a conjecture to start with) Borsuk's conjecture Chinese hypothesis (not a conjecture to start with) Doomsday conjecture Euler's sum of powers conjecture Ganea conjecture Generalized Smith conjecture Hauptvermutung Hedetniemi's conjecture, counterexample announced 2019 Hirsch conjecture (disproved in 2010) Intersection graph conjecture Kelvin's conjecture Kouchnirenko's conjecture Mertens conjecture Pólya conjecture, 1919 (1958) Ragsdale conjecture Schoenflies conjecture (disproved 1910) Tait's conjecture Von Neumann conjecture Weyl–Berry conjecture Williamson conjecture In mathematics, ideas are supposedly not accepted as fact until they have been rigorously proved. However, there have been some ideas that were fairly accepted in the past but which were subsequently shown to be false. The following list is meant to serve as a repository for compiling a list of such ideas. The idea of the Pythagoreans that all numbers can be expressed as a ratio of two whole numbers. This was disproved by one of Pythagoras' own disciples, Hippasus, who showed that the square root of two is what we today call an irrational number. One story claims that he was thrown off the ship in which he and some other Pythagoreans were sailing because his discovery was too heretical. Euclid's parallel
https://en.wikipedia.org/wiki/Security%20testing
Security testing is a process intended to detect flaws in the security mechanisms of an information system and, as such, help enable it to protect data and maintain functionality as intended. Due to the logical limitations of security testing, passing the security testing process is not an indication that no flaws exist or that the system adequately satisfies the security requirements. Typical security requirements may include specific elements of confidentiality, integrity, authentication, availability, authorization and non-repudiation. The actual security requirements tested depend on the security requirements implemented by the system. Security testing as a term has a number of different meanings and can be completed in a number of different ways. As such, a security taxonomy helps us to understand these different approaches and meanings by providing a base level to work from. Confidentiality A security measure which protects against the disclosure of information to parties other than the intended recipient; it is by no means the only way of ensuring the security of the information. Integrity Integrity of information refers to protecting information from being modified by unauthorized parties. A related measure is intended to allow the receiver to determine that the information provided by a system is correct. Integrity schemes often use some of the same underlying technologies as confidentiality schemes, but they usually involve adding information to a communication, to form the basis of an algorithmic check, rather than encoding all of the communication. Such a check can also confirm that the correct information is transferred from one application to another. Authentication This might involve confirming the identity of a person, tracing the origins of an artifact, ensuring that a product is what its packaging and labelling claims to be, or assuring that a computer program is a trusted one. Authorization The process of determining that a requester is allowed to receive a service or perform an operation.
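As a small illustration of the integrity idea above (adding information to a communication to form the basis of an algorithmic check, rather than encoding the whole communication), the sketch below uses a keyed hash from Python's standard library; the key and messages are placeholders invented for the example.

```python
import hashlib
import hmac

SECRET_KEY = b"example-shared-key"   # placeholder; a real key must be agreed and protected out of band

def tag(message: bytes) -> str:
    # The extra information sent alongside the communication: a keyed digest of the payload.
    return hmac.new(SECRET_KEY, message, hashlib.sha256).hexdigest()

def verify(message: bytes, received_tag: str) -> bool:
    # The receiver recomputes the digest and compares it in constant time.
    return hmac.compare_digest(tag(message), received_tag)

msg = b"transfer 100 units to account 42"
t = tag(msg)
print(verify(msg, t))                                    # True: content unmodified
print(verify(b"transfer 999 units to account 42", t))    # False: modification detected
```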
https://en.wikipedia.org/wiki/Power-on%20reset
A power-on reset (PoR, POR) generator is a microcontroller or microprocessor peripheral that generates a reset signal when power is applied to the device. It ensures that the device starts operating in a known state. PoR generator In VLSI devices, the power-on reset (PoR) is an electronic device incorporated into the integrated circuit that detects the power applied to the chip and generates a reset impulse that goes to the entire circuit, placing it into a known state. A simple PoR uses the charging of a capacitor, in series with a resistor, to measure a time period during which the rest of the circuit is held in a reset state. A Schmitt trigger may be used to deassert the reset signal cleanly, once the rising voltage of the RC network passes the threshold voltage of the Schmitt trigger. The resistor and capacitor values should be determined so that the charging of the RC network takes long enough that the supply voltage will have stabilised by the time the threshold is reached. One of the issues with using an RC network to generate the PoR pulse is the sensitivity of the R and C values to the power-supply ramp characteristics. When the power supply ramp is rapid, the R and C values can be calculated so that the time to reach the switching threshold of the Schmitt trigger is enough to apply a long enough reset pulse. When the power supply ramp itself is slow, the RC network tends to get charged up along with the power-supply ramp-up. So by the time the input Schmitt stage is powered up and ready, the input voltage from the RC network would already have crossed the Schmitt trigger point. This means that there might not be a reset pulse supplied to the core of the VLSI. Power-on reset on IBM mainframes On an IBM mainframe, a power-on reset (POR) is a sequence of actions that the processor performs either due to a POR request from the operator or as part of turning on power. The operator requests a POR for configuration changes that cannot be recognized by a simple Syste
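The RC sizing rule described above follows from the capacitor charging curve V(t) = Vdd(1 - e^(-t/RC)) for an ideal supply step, so the time to reach the Schmitt-trigger threshold is t = -RC ln(1 - Vth/Vdd). The sketch below just evaluates that formula; the component values and voltages are arbitrary examples, not taken from any datasheet.

```python
import math

def por_delay(r_ohms: float, c_farads: float, vdd: float, vth: float) -> float:
    """Time for an RC node charging toward vdd (ideal step assumed) to cross
    the Schmitt-trigger threshold vth:  t = -R * C * ln(1 - vth / vdd)."""
    return -r_ohms * c_farads * math.log(1 - vth / vdd)

# Example: 100 kOhm, 100 nF, 3.3 V supply, 1.6 V rising threshold
# -> a reset pulse of roughly 6.6 ms after an ideal power step.
print(f"{por_delay(100e3, 100e-9, 3.3, 1.6) * 1e3:.1f} ms")
```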
https://en.wikipedia.org/wiki/List%20of%20cholesterol%20in%20foods
This list consists of common foods with their cholesterol content recorded in milligrams per 100 grams (3.5 ounces) of food. Functions Cholesterol is a sterol, a steroid-like lipid made by animals, including humans. The human body makes one-eighth to one-fourth teaspoons of pure cholesterol daily. A cholesterol level of 5.5 millimoles per litre or below is recommended for an adult. A rise of cholesterol in the body can lead to a condition called atherosclerosis, in which excessive cholesterol is deposited in artery walls. This condition blocks the blood flow to vital organs, which can result in high blood pressure or stroke. Cholesterol is not always bad. It is a vital part of the cell membrane and a precursor to substances such as brain matter and some sex hormones. There are some types of cholesterol which are beneficial to the heart and blood vessels. High-density lipoprotein is commonly called "good" cholesterol. These lipoproteins help in the removal of cholesterol from the cells, which is then transported back to the liver, where it is disintegrated and excreted as waste or broken down into parts. Cholesterol content of various foods See also Nutrition Plant stanol ester Fatty acid
https://en.wikipedia.org/wiki/List%20of%20types%20of%20systems%20theory
This list of types of systems theory gives an overview of different types of systems theory, which are mentioned in scientific book titles or articles. The following more than 40 types of systems theory are all explicitly named systems theory and represent a unique conceptual framework in a specific field of science. Systems theory has been formalized since the 1950s, and a long set of specialized systems theories and cybernetics exists. In the beginning, general systems theory was developed by Ludwig von Bertalanffy to overcome the over-specialisation of modern times and to serve as a worldview based on holism. The systems theories of today are closer to traditional specialisation than to holism, relying on interdependencies and a division of labour among specialists in different fields. A Abstract systems theory (also see: formal system) Action Theory Adaptive systems theory (also see: complex adaptive system) Applied general systems theory (also see: general systems theory) Applied multidimensional systems theory Archaeological systems theory (also see: Systems theory in archaeology) Systems theory in anthropology Associated systems theory B Behavioral systems theory Biochemical systems theory Biomatrix systems theory Body system C Complex adaptive systems theory (also see: complex adaptive system) Complex systems theory (also see: complex systems) Computer-aided systems theory Conceptual systems theory (also see: conceptual system) Control systems theory (also see: control system) Critical systems theory (also see: critical systems thinking, and critical theory) Cultural Agency Theory D Developmental systems theory Distributed parameter systems theory Dynamical systems theory E Ecological systems theory (also see: ecosystem, ecosystem ecology) Economic systems theory (also see: economic system) Electric energy systems theory F Family systems theory (also see: systemic therapy) Fuzzy systems theory (also see: fuzzy logic) G General sys
https://en.wikipedia.org/wiki/Dell%20PowerConnect
The current portfolio of PowerConnect switches is now being offered as part of the Dell Networking brand: the information on this page is an overview of all current and past PowerConnect switches as of August 2013, but any updates on the current portfolio will be detailed on the Dell Networking page. PowerConnect was a Dell series of network switches. The PowerConnect "classic" switches are based on Broadcom or Marvell Technology Group fabric and firmware. Dell acquired Force10 Networks in 2011 to expand its data center switch products. Dell also offers the PowerConnect M-series, which are switches for the M1000e blade-server enclosure, and the PowerConnect W-series, which is a Wi-Fi platform based on Aruba Networks technology. Starting in 2013, Dell re-branded its networking portfolio as Dell Networking, which covers both the legacy PowerConnect products and the Force10 products. Product line The Dell PowerConnect line is marketed for business computer networking. The switches connect computers and servers in small to medium-sized networks using Ethernet. The brand name was first announced in July 2001, as traditional personal computer sales were declining. By September 2002 Cisco Systems had cancelled a reseller agreement with Dell. Previously under storage business general manager Darren Thomas, in September 2010 Dario Zamarian was named to head networking platforms within Dell. PowerConnect switches are available in pre-configured web-managed models as well as more expensive managed models. There is not a single underlying operating system: the models with a product number up to 5500 run on a proprietary OS made by Marvell, while the Broadcom-powered switches run on an OS based on VxWorks. With the introduction of the 8100 series, the switches run on DNOS, the Dell Networking Operating System, which is based on a Linux kernel for DNOS 5.x and 6.x. Via the PowerConnect W-series, Dell offers a range of Aruba WiFi products. The PowerConnect-J (Juniper Networks) and B (Brocade) series are no longer
https://en.wikipedia.org/wiki/Content%20centric%20networking
In contrast to IP-based, host-oriented, Internet architecture, Content-Centric Networking (CCN) emphasizes content by making it directly addressable and routable. Endpoints communicate based on named data instead of IP addresses. CCN is characterized by the basic exchange of content request messages (called "Interests") and content return messages (called "Content Objects"). It is considered an information-centric networking (ICN) architecture. The goals of CCN are to provide a more secure, flexible, and scalable network, thereby addressing the Internet's modern-day requirements for protected content distribution on a massive scale to a diverse set of end devices. CCN embodies a security model that explicitly secures individual pieces of content rather than securing the connection or "pipe." It provides additional flexibility using data names instead of host names (IP addresses). Additionally, named and secured content resides in distributed caches automatically populated on demand or selectively pre-populated. When requested by name, CCN delivers named content to the user from the nearest cache, traversing fewer network hops, eliminating redundant requests, and consuming fewer resources overall. CCN began as a research project at the Palo Alto Research Center (PARC) in 2007. The first software release (CCNx 0.1) was made available in 2009. CCN is the ancestor of related approaches, including named data networking. CCN technology and its open-source code base were acquired by Cisco in February 2017. History The principles behind information-centric networks were first described in the original 17 rules of Ted Nelson's Project Xanadu in 1979. In 2002, Brent Baccala submitted an Internet-Draft differentiating between connection-oriented and data-oriented networking and suggested that the Internet web architecture was rapidly becoming more data-oriented. In 2006, the DONA project at UC Berkeley and ICSI proposed an information-centric network architecture, which i
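A toy, hedged sketch of the Interest / Content Object exchange and the on-path caching described above; the class and method names and the single-hop 'network' are inventions for illustration and are not the CCNx API.

```python
class CCNNode:
    """Toy forwarder: answers Interests from its content store, otherwise
    forwards them upstream and caches the returned Content Object."""

    def __init__(self, name, upstream=None):
        self.name = name
        self.upstream = upstream
        self.content_store = {}          # named, cached Content Objects

    def publish(self, content_name, data):
        self.content_store[content_name] = data

    def interest(self, content_name):
        if content_name in self.content_store:
            print(f"{self.name}: cache hit for {content_name}")
            return self.content_store[content_name]
        print(f"{self.name}: miss, forwarding Interest upstream")
        data = self.upstream.interest(content_name)
        self.content_store[content_name] = data      # populate the cache on the way back
        return data

producer = CCNNode("producer")
producer.publish("/videos/talk/segment1", b"...bytes...")
edge = CCNNode("edge-router", upstream=producer)

edge.interest("/videos/talk/segment1")   # travels to the producer and is cached at the edge
edge.interest("/videos/talk/segment1")   # the repeat request is served from the nearest cache
```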
https://en.wikipedia.org/wiki/Chemical%20biology
Chemical biology is a scientific discipline between the fields of chemistry and biology. The discipline involves the application of chemical techniques, analysis, and often small molecules produced through synthetic chemistry, to the study and manipulation of biological systems. In contrast to biochemistry, which involves the study of the chemistry of biomolecules and regulation of biochemical pathways within and between cells, chemical biology deals with chemistry applied to biology (synthesis of biomolecules, the simulation of biological systems, etc.). Introduction Some forms of chemical biology attempt to answer biological questions by studying biological systems at the chemical level. In contrast to research using biochemistry, genetics, or molecular biology, where mutagenesis can provide a new version of the organism, cell, or biomolecule of interest, chemical biology probes systems in vitro and in vivo with small molecules that have been designed for a specific purpose or identified on the basis of biochemical or cell-based screening (see chemical genetics). Chemical biology is one of several interdisciplinary sciences that tend to differ from older, reductionist fields and whose goals are to achieve a description of scientific holism. Chemical biology has scientific, historical and philosophical roots in medicinal chemistry, supramolecular chemistry, bioorganic chemistry, pharmacology, genetics, biochemistry, and metabolic engineering. Systems of interest Enrichment techniques for proteomics Chemical biologists work to improve proteomics through the development of enrichment strategies, chemical affinity tags, and new probes. Samples for proteomics often contain many peptide sequences and the sequence of interest may be highly represented or of low abundance, which creates a barrier for their detection. Chemical biology methods can reduce sample complexity by selective enrichment using affinity chromatography. This involves targeting a peptide with a di
https://en.wikipedia.org/wiki/Immunodiffusion
Immunodiffusion is a diagnostic test which involves diffusion through a substance such as agar which is generally soft gel agar (2%) or agarose (2%), used for the detection of antibodies or antigen. The commonly known types are: Single diffusion in one dimension (Oudin procedure) Double diffusion in one dimension (Oakley Fulthorpe procedure) Single diffusion in two dimensions (radial immunodiffusion or Mancini method) Double diffusion in two dimensions (Ouchterlony double immunodiffusion)
https://en.wikipedia.org/wiki/Sorting%20network
In computer science, comparator networks are abstract devices built up of a fixed number of "wires", carrying values, and comparator modules that connect pairs of wires, swapping the values on the wires if they are not in a desired order. Such networks are typically designed to perform sorting on fixed numbers of values, in which case they are called sorting networks. Sorting networks differ from general comparison sorts in that they are not capable of handling arbitrarily large inputs, and in that their sequence of comparisons is set in advance, regardless of the outcome of previous comparisons. In order to sort larger amounts of inputs, new sorting networks must be constructed. This independence of comparison sequences is useful for parallel execution and for implementation in hardware. Despite the simplicity of sorting nets, their theory is surprisingly deep and complex. Sorting networks were first studied circa 1954 by Armstrong, Nelson and O'Connor, who subsequently patented the idea. Sorting networks can be implemented either in hardware or in software. Donald Knuth describes how the comparators for binary integers can be implemented as simple, three-state electronic devices. Batcher, in 1968, suggested using them to construct switching networks for computer hardware, replacing both buses and the faster, but more expensive, crossbar switches. Since the 2000s, sorting nets (especially bitonic mergesort) are used by the GPGPU community for constructing sorting algorithms to run on graphics processing units. Introduction A sorting network consists of two types of items: comparators and wires. The wires are thought of as running from left to right, carrying values (one per wire) that traverse the network all at the same time. Each comparator connects two wires. When a pair of values, traveling through a pair of wires, encounter a comparator, the comparator swaps the values if and only if the top wire's value is greater or equal to the bottom wire's value. In
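As a minimal sketch of these ideas (a standard five-comparator network for four wires, verified here with the zero-one principle, i.e. by checking every 0/1 input), the comparator sequence below is fixed in advance and independent of the data, as described above; nothing in it is tied to a particular hardware realisation.

```python
from itertools import product

# A classic sorting network for four wires: five comparators applied in a fixed order.
NETWORK_4 = [(0, 1), (2, 3), (0, 2), (1, 3), (1, 2)]

def apply_network(values, network=NETWORK_4):
    v = list(values)
    for top, bottom in network:
        if v[top] > v[bottom]:           # comparator: swap if the pair is out of order
            v[top], v[bottom] = v[bottom], v[top]
    return v

# Zero-one principle: if the network sorts every sequence of 0s and 1s,
# it sorts every sequence of arbitrary comparable values.
assert all(apply_network(bits) == sorted(bits) for bits in product((0, 1), repeat=4))

print(apply_network([42, 7, 99, 7]))     # [7, 7, 42, 99]
```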
https://en.wikipedia.org/wiki/Microarchitecture%20simulation
Microarchitecture simulation is an important technique in computer architecture research and computer science education. It is a tool for modeling the design and behavior of a microprocessor and its components, such as the ALU, cache memory, control unit, and data path, among others. The simulation allows researchers to explore the design space as well as to evaluate the performance and efficiency of novel microarchitecture features. For example, several microarchitecture components, such as branch predictors, re-order buffers, and trace caches, went through numerous simulation cycles before they became common components of contemporary microprocessors. In addition, the simulation also enables educators to teach computer organization and architecture courses with hands-on experience. For system-level simulation of computer hardware, please refer to full system simulation. Classification Microarchitecture simulation can be classified into multiple categories according to input type and level of detail. Specifically, the input can be a trace collected from an execution of a program on a real microprocessor (so-called trace-driven simulation) or a program itself (so-called execution-driven simulation). A trace-driven simulation reads a fixed sequence of trace records from a file as an input. These trace records usually represent memory references, branch outcomes, or specific machine instructions, among others. While a trace-driven simulation is known to be comparatively fast and its results are highly reproducible, it also requires a very large storage space. On the other hand, an execution-driven simulation reads a program and simulates the execution of machine instructions on the fly. A program file is typically several orders of magnitude smaller than a trace file. However, execution-driven simulation is much slower than trace-driven simulation because it has to process each instruction one-by-one and update all statuses of the microarchitecture compo
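A toy, hedged example of the trace-driven style described above: here the 'trace' is just a short list of memory addresses, and the model is a direct-mapped cache with made-up parameters; real simulators track far more microarchitectural state.

```python
def simulate_direct_mapped_cache(trace, num_sets=4, block_bytes=16):
    """Trace-driven simulation: replay a fixed sequence of memory references
    through a direct-mapped cache model and count hits and misses."""
    tags = [None] * num_sets
    hits = misses = 0
    for addr in trace:
        block = addr // block_bytes
        index, tag = block % num_sets, block // num_sets
        if tags[index] == tag:
            hits += 1
        else:
            misses += 1
            tags[index] = tag            # fill the line on a miss
    return hits, misses

# A tiny synthetic trace (memory-reference records only).
trace = [0x00, 0x04, 0x08, 0x40, 0x44, 0x00, 0x104, 0x04]
print(simulate_direct_mapped_cache(trace))   # (3, 5) hits and misses for this trace
```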
https://en.wikipedia.org/wiki/Feigenbaum%20constants
In mathematics, specifically bifurcation theory, the Feigenbaum constants are two mathematical constants which both express ratios in a bifurcation diagram for a non-linear map. They are named after the physicist Mitchell J. Feigenbaum. History Feigenbaum originally related the first constant to the period-doubling bifurcations in the logistic map, but also showed it to hold for all one-dimensional maps with a single quadratic maximum. As a consequence of this generality, every chaotic system that corresponds to this description will bifurcate at the same rate. Feigenbaum made this discovery in 1975, and he officially published it in 1978. The first constant The first Feigenbaum constant is the limiting ratio of each bifurcation interval to the next between every period doubling, of a one-parameter map where is a function parameterized by the bifurcation parameter . It is given by the limit where are discrete values of at the th period doubling. Names Feigenbaum constant Feigenbaum bifurcation velocity delta Value 30 decimal places : = A simple rational approximation is: , which is correct to 5 significant values (when rounding). For more precision use , which is correct to 7 significant values. Is approximately equal to , with an error of 0.0047% Illustration Non-linear maps To see how this number arises, consider the real one-parameter map Here is the bifurcation parameter, is the variable. The values of for which the period doubles (e.g. the largest value for with no period-2 orbit, or the largest with no period-4 orbit), are , etc. These are tabulated below: {| class="wikitable" |- ! ! Period ! Bifurcation parameter () ! Ratio |- | 1 || 2 || 0.75 || — |- | 2 || 4 || 1.25 || — |- | 3 || 8 || || 4.2337 |- | 4 || 16 || || 4.5515 |- | 5 || 32 || || 4.6458 |- | 6 || 64 || || 4.6639 |- | 7 || 128 || || 4.6682 |- | 8 || 256 || || 4.6689 |- |} The ratio in the last column converges to the first Feigenbaum constant. The same numb
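As a numerical sketch of how the first constant can be estimated (an assumption-laden toy rather than a rigorous computation): instead of the bifurcation points themselves, the code locates the superstable parameters of the logistic map, whose spacings are commonly stated to shrink by the same ratio δ, using Newton's method with the derivative with respect to the parameter iterated alongside the orbit.

```python
def orbit_and_derivative(r, n_doublings):
    """Value and d/dr of the 2**n_doublings-th iterate of the logistic map
    x -> r*x*(1-x), started at the critical point x = 1/2."""
    x, dx = 0.5, 0.0
    for _ in range(2 ** n_doublings):
        x, dx = r * x * (1 - x), x * (1 - x) + r * (1 - 2 * x) * dx
    return x, dx

# Superstable parameters R_n: the critical point returns to itself after 2**n steps.
R = [2.0, 1 + 5 ** 0.5]          # R_0 and R_1 are known in closed form
delta = 4.4                      # rough starting estimate, refined as we go
for n in range(2, 12):
    r = R[-1] + (R[-1] - R[-2]) / delta      # extrapolated initial guess
    for _ in range(50):                      # Newton iteration on f^(2^n)(1/2) = 1/2
        fx, dfx = orbit_and_derivative(r, n)
        step = (fx - 0.5) / dfx
        r -= step
        if abs(step) < 1e-13:
            break
    R.append(r)
    delta = (R[-2] - R[-3]) / (R[-1] - R[-2])
    print(f"n = {n:>2}   R_n = {r:.10f}   delta estimate = {delta:.7f}")
```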
https://en.wikipedia.org/wiki/Critical%20distance%20%28animals%29
Critical distance for an animal is the distance a human or an aggressor animal has to approach in order to trigger a defensive attack of the first animal. The concept was introduced by Swiss zoologist Heini Hediger in 1954, along with other space boundaries for an animal, such as flight distance (run boundary), critical distance (attack boundary), personal space (distance separating members of non-contact species, as a pair of swans), and social distance (intraspecies communication distance). Hediger developed and applied these distance concepts in the context of designing zoos. As the critical distance is smaller than the flight distance, there are only a few scenarios in the wild when the critical distance may be encroached. As an example, critical distance may be reached if an animal noticed an intruder too late or the animal was "cornered" to a place of no escape. Edward T. Hall, a cultural anthropologist, reasoned that in humans the flight distance and critical distance have been eliminated in human reactions, and thus proceeded to determine modified criteria for space boundaries in human interactions. See also Escape distance Fight-or-flight response Flight zone Personal space Territoriality
https://en.wikipedia.org/wiki/EPROM
An EPROM (rarely EROM), or erasable programmable read-only memory, is a type of programmable read-only memory (PROM) chip that retains its data when its power supply is switched off. Computer memory that can retrieve stored data after a power supply has been turned off and back on is called non-volatile. It is an array of floating-gate transistors individually programmed by an electronic device that supplies higher voltages than those normally used in digital circuits. Once programmed, an EPROM can be erased by exposing it to strong ultraviolet light source (such as from a mercury-vapor lamp). EPROMs are easily recognizable by the transparent fused quartz (or on later models resin) window on the top of the package, through which the silicon chip is visible, and which permits exposure to ultraviolet light during erasing. Operation Development of the EPROM memory cell started with investigation of faulty integrated circuits where the gate connections of transistors had broken. Stored charge on these isolated gates changes their threshold voltage. Following the invention of the MOSFET (metal–oxide–semiconductor field-effect transistor) by Mohamed Atalla and Dawon Kahng at Bell Labs, presented in 1960, Frank Wanlass studied MOSFET structures in the early 1960s. In 1963, he noted the movement of charge through oxide onto a gate. While he did not pursue it, this idea would later become the basis for EPROM technology. In 1967, Dawon Kahng and Simon Min Sze at Bell Labs proposed that the floating gate of a MOSFET could be used for the cell of a reprogrammable ROM (read-only memory). Building on this concept, Dov Frohman of Intel invented EPROM in 1971, and was awarded in 1972. Frohman designed the Intel 1702, a 2048-bit EPROM, which was announced by Intel in 1971. Each storage location of an EPROM consists of a single field-effect transistor. Each field-effect transistor consists of a channel in the semiconductor body of the device. Source and drain contacts are ma
https://en.wikipedia.org/wiki/Digifant%20engine%20management%20system
The Digifant engine management system is an electronic engine control unit (ECU), which monitors and controls the fuel injection and ignition systems in petrol engines, designed by Volkswagen Group, in cooperation with Robert Bosch GmbH. Digifant is the outgrowth of the Digijet fuel injection system first used on water-cooled Volkswagen A2 platform-based models. History Digifant was introduced in 1986 on the 2.1 litre Volkswagen Type 2 (T3) (Vanagon in the US) engine. This system combined digital fuel control as used in the earlier Digi-Jet systems with a new digital ignition system. The combination of fuel injection control and ignition control is the reason for the name "Digifant II" on the first version produced. Digifant as used in Volkswagen Golf and Volkswagen Jetta models simplified several functions, and added knock sensor control to the ignition system. Other versions of Digifant appeared on the Volkswagen Fox, Corrado, Volkswagen Transporter (T4) (known as the Eurovan in North America), as well as 1993 and later production versions of the rear-engined Volkswagen Beetle, sold only in Mexico. Lower-power versions (without a knock sensor), supercharged, and 16-valve variants were produced. Nearly exclusive to the European market, Volkswagen AG subsidiary Audi AG also used the Digifant system, namely in its 2.0 E variants of the Audi 80 and Audi 100. Digifant is an engine management system designed originally to take advantage of the first generation of newly developed digital signal processing circuits. Production changes and updates were made to keep the system current with the changing California and federal emissions requirements. Updates were also made to allow integration of other vehicle systems into the scope of engine operation. Changes in circuit technology, design and processing speed along with evolving emissions standards, resulted in the development of new engine management systems. These new system incorporated adaptive learning fuz
https://en.wikipedia.org/wiki/Computer%20security%20incident%20management
In the fields of computer security and information technology, computer security incident management involves the monitoring and detection of security events on a computer or computer network, and the execution of proper responses to those events. Computer security incident management is a specialized form of incident management, the primary purpose of which is the development of a well understood and predictable response to damaging events and computer intrusions. Incident management requires a process and a response team which follows this process. This definition of computer security incident management follows the standards and definitions described in the National Incident Management System (NIMS). The incident coordinator manages the response to an emergency security incident. In a Natural Disaster or other event requiring response from Emergency services, the incident coordinator would act as a liaison to the emergency services incident manager. Overview Computer security incident management is an administrative function of managing and protecting computer assets, networks and information systems. These systems continue to become more critical to the personal and economic welfare of our society. Organizations (public and private sector groups, associations and enterprises) must understand their responsibilities to the public good and to the welfare of their memberships and stakeholders. This responsibility extends to having a management program for “what to do, when things go wrong.” Incident management is a program which defines and implements a process that an organization may adopt to promote its own welfare and the security of the public. Components of an incident Events An event is an observable change to the normal behavior of a system, environment, process, workflow or person (components). There are three basic types of events: Normal—a normal event does not affect critical components or require change controls prior to the implementation
https://en.wikipedia.org/wiki/Bachelor%20of%20Science%20in%20Human%20Biology
Several universities have designed interdisciplinary courses with a focus on human biology at the undergraduate level. There is a wide variation in emphasis ranging from business, social studies, public policy, healthcare and pharmaceutical research. Americas Human Biology major at Stanford University, Palo Alto (since 1970) Stanford's Human Biology Program is an undergraduate major; it integrates the natural and social sciences in the study of human beings. It is interdisciplinary and policy-oriented and was founded in 1970 by a group of Stanford faculty (Professors Dornbusch, Ehrlich, Hamburg, Hastorf, Kennedy, Kretchmer, Lederberg, and Pittendrigh). It is a very popular major and alumni have gone to post-graduate education, medical school, law, business and government. Human and Social Biology (Caribbean) Human and Social Biology is a Level 4 & 5 subject in the secondary and post-secondary schools in the Caribbean and is optional for the Caribbean Secondary Education Certification (CSEC) which is equivalent to Ordinary Level (O-Level) under the British school system. The syllabus centers on structure and functioning (anatomy, physiology, biochemistry) of human body and the relevance to human health with Caribbean-specific experience. The syllabus is organized under five main sections: Living organisms and the environment, life processes, heredity and variation, disease and its impact on humans, the impact of human activities on the environment. Human Biology Program at University of Toronto The University of Toronto offers an undergraduate program in Human Biology that is jointly offered by the Faculty of Arts & Science and the Faculty of Medicine. The program offers several major and specialist options in: human biology, neuroscience, health & disease, global health, and fundamental genetics and its applications. Asia BSc (Honours) Human Biology at All India Institute of Medical Sciences, New Delhi (1980–2002) BSc (honours) Human Biology at AIIMS (New
https://en.wikipedia.org/wiki/Stochastic%20portfolio%20theory
Stochastic portfolio theory (SPT) is a mathematical theory for analyzing stock market structure and portfolio behavior introduced by E. Robert Fernholz in 2002. It is descriptive as opposed to normative, and is consistent with the observed behavior of actual markets. Normative assumptions, which serve as a basis for earlier theories like modern portfolio theory (MPT) and the capital asset pricing model (CAPM), are absent from SPT. SPT uses continuous-time random processes (in particular, continuous semi-martingales) to represent the prices of individual securities. Processes with discontinuities, such as jumps, have also been incorporated into the theory. Stocks, portfolios and markets SPT considers stocks and stock markets, but its methods can be applied to other classes of assets as well. A stock is represented by its price process, usually in the logarithmic representation. In the case the market is a collection of stock-price processes for each defined by a continuous semimartingale where is an -dimensional Brownian motion (Wiener) process with , and the processes and are progressively measurable with respect to the Brownian filtration . In this representation is called the (compound) growth rate of and the covariance between and is It is frequently assumed that, for all the process is positive, locally square-integrable, and does not grow too rapidly as The logarithmic representation is equivalent to the classical arithmetic representation which uses the rate of return however the growth rate can be a meaningful indicator of long-term performance of a financial asset, whereas the rate of return has an upward bias. The relation between the rate of return and the growth rate is The usual convention in SPT is to assume that each stock has a single share outstanding, so represents the total capitalization of the -th stock at time and is the total capitalization of the market. Dividend
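The formulas referred to above did not survive extraction. As background (standard SPT notation in the spirit of Fernholz, not a reconstruction of the article's exact markup), the logarithmic representation of stock \( i \) is

\[ d\log X_i(t) = \gamma_i(t)\,dt + \sum_{\nu=1}^{n} \xi_{i\nu}(t)\,dW_\nu(t), \qquad \sigma_{ij}(t) = \sum_{\nu=1}^{n} \xi_{i\nu}(t)\,\xi_{j\nu}(t), \]

where \( W = (W_1,\dots,W_n) \) is an \( n \)-dimensional Brownian motion, \( \gamma_i \) is the growth rate, and \( \sigma_{ij} \) is the covariance process. By Itô's formula the arithmetic rate of return \( \alpha_i \) satisfies \( \alpha_i(t) = \gamma_i(t) + \tfrac{1}{2}\sigma_{ii}(t) \), which is the relation between the rate of return and the growth rate mentioned in the text.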
https://en.wikipedia.org/wiki/Reflection%20coefficient
In physics and electrical engineering the reflection coefficient is a parameter that describes how much of a wave is reflected by an impedance discontinuity in the transmission medium. It is equal to the ratio of the amplitude of the reflected wave to the incident wave, with each expressed as phasors. For example, it is used in optics to calculate the amount of light that is reflected from a surface with a different index of refraction, such as a glass surface, or in an electrical transmission line to calculate how much of the electromagnetic wave is reflected by an impedance discontinuity. The reflection coefficient is closely related to the transmission coefficient. The reflectance of a system is also sometimes called a "reflection coefficient". Different specialties have different applications for the term. Transmission lines In telecommunications and transmission line theory, the reflection coefficient is the ratio of the complex amplitude of the reflected wave to that of the incident wave. The voltage and current at any point along a transmission line can always be resolved into forward and reflected traveling waves given a specified reference impedance Z0. The reference impedance used is typically the characteristic impedance of a transmission line that's involved, but one can speak of reflection coefficient without any actual transmission line being present. In terms of the forward and reflected waves determined by the voltage and current, the reflection coefficient is defined as the complex ratio of the voltage of the reflected wave () to that of the incident wave (). This is typically represented with a (capital gamma) and can be written as: It can as well be defined using the currents associated with the reflected and forward waves, but introducing a minus sign to account for the opposite orientations of the two currents: The reflection coefficient may also be established using other field or circuit pairs of quantities whose product defines pow
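The defining ratio whose symbols were stripped above can be written in the standard transmission-line form (the load/characteristic-impedance notation here is the conventional one, not necessarily the article's):

\[ \Gamma = \frac{V_r}{V_i} = \frac{Z_L - Z_0}{Z_L + Z_0}, \]

where \( V_i \) and \( V_r \) are the complex amplitudes of the incident and reflected voltage waves, \( Z_L \) is the load impedance, and \( Z_0 \) is the reference (characteristic) impedance. A matched load (\( Z_L = Z_0 \)) gives \( \Gamma = 0 \), an open circuit gives \( \Gamma = +1 \), and a short circuit gives \( \Gamma = -1 \).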
https://en.wikipedia.org/wiki/Lorenz%20gauge%20condition
In electromagnetism, the Lorenz gauge condition or Lorenz gauge (after Ludvig Lorenz) is a partial gauge fixing of the electromagnetic vector potential by requiring The name is frequently confused with Hendrik Lorentz, who has given his name to many concepts in this field. The condition is Lorentz invariant. The Lorenz gauge condition does not completely determine the gauge: one can still make a gauge transformation where is the four-gradient and is any harmonic scalar function: that is, a scalar function obeying the equation of a massless scalar field. The Lorenz gauge condition is used to eliminate the redundant spin-0 component in Maxwell's equations when these are used to describe a massless spin-1 quantum field. It is also used for massive spin-1 fields where the concept of gauge transformations does not apply at all. Description In electromagnetism, the Lorenz condition is generally used in calculations of time-dependent electromagnetic fields through retarded potentials. The condition is where is the four-potential, the comma denotes a partial differentiation and the repeated index indicates that the Einstein summation convention is being used. The condition has the advantage of being Lorentz invariant. It still leaves substantial gauge degrees of freedom. In ordinary vector notation and SI units, the condition is where is the magnetic vector potential and is the electric potential; see also gauge fixing. In Gaussian units the condition is A quick justification of the Lorenz gauge can be found using Maxwell's equations and the relation between the magnetic vector potential and the magnetic field: Therefore, Since the curl is zero, that means there is a scalar function such that This gives a well known equation for the electric field: This result can be plugged into the Ampère–Maxwell equation, This leaves To have Lorentz invariance, the time derivatives and spatial derivatives must be treated equally (i.e. of the same order). Therefor
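For reference, the condition whose formulas were stripped above reads, in the standard four-vector form and in vector notation (SI and Gaussian units respectively):

\[ \partial_\mu A^\mu = 0, \qquad \nabla\cdot\mathbf{A} + \frac{1}{c^2}\frac{\partial \varphi}{\partial t} = 0 \ \text{(SI)}, \qquad \nabla\cdot\mathbf{A} + \frac{1}{c}\frac{\partial \varphi}{\partial t} = 0 \ \text{(Gaussian)}, \]

with the residual gauge freedom \( A_\mu \to A_\mu + \partial_\mu \chi \) for any scalar \( \chi \) satisfying \( \Box \chi = 0 \).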
https://en.wikipedia.org/wiki/BlueBorne%20%28security%20vulnerability%29
BlueBorne is a type of security vulnerability with Bluetooth implementations in Android, iOS, Linux and Windows. It affects many electronic devices such as laptops, smart cars, smartphones and wearable gadgets. One example is . The vulnerabilities were first reported by Armis, the asset intelligence cybersecurity company, on 12 September 2017. According to Armis, "The BlueBorne attack vector can potentially affect all devices with Bluetooth capabilities, estimated at over 8.2 billion devices today [2017]." History The BlueBorne security vulnerabilities were first reported by Armis, the asset intelligence cybersecurity company, on 12 September 2017. Technical Information The BlueBorne vulnerabilities are a set of 8 separate vulnerabilities. They can be broken down into groups based upon platform and type. There were vulnerabilities found in the Bluetooth code of the Android, iOS, Linux and Windows platforms: Linux kernel RCE vulnerability - CVE-2017-1000251 Linux Bluetooth stack (BlueZ) information Leak vulnerability - CVE-2017-1000250 Android information Leak vulnerability - CVE-2017-0785 Android RCE vulnerability #1 - CVE-2017-0781 Android RCE vulnerability #2 - CVE-2017-0782 The Bluetooth Pineapple in Android - Logical Flaw CVE-2017-0783 The Bluetooth Pineapple in Windows - Logical Flaw CVE-2017-8628 Apple Low Energy Audio Protocol RCE vulnerability - CVE-2017-14315 The vulnerabilities are a mixture of information leak vulnerabilities, remote code execution vulnerability or logical flaw vulnerabilities. The Apple iOS vulnerability was a remote code execution vulnerability due to the implementation of LEAP (Low Energy Audio Protocol). This vulnerability was only present in older versions of the Apple iOS. Impact In 2017, BlueBorne was estimated to potentially affect all of the 8.2 billion Bluetooth devices worldwide, although they clarify that 5.3 billion Bluetooth devices are at risk. Many devices are affected, including laptops, smart cars,
https://en.wikipedia.org/wiki/Hybrid-core%20computing
Hybrid-core computing is the technique of extending a commodity instruction set architecture (e.g. x86) with application-specific instructions to accelerate application performance. It is a form of heterogeneous computing wherein asymmetric computational units coexist with a "commodity" processor. Hybrid-core processing differs from general heterogeneous computing in that the computational units share a common logical address space, and an executable is composed of a single instruction stream—in essence a contemporary coprocessor. The instruction set of a hybrid-core computing system contains instructions that can be dispatched either to the host instruction set or to the application-specific hardware. Typically, hybrid-core computing is best deployed where the predominance of computational cycles are spent in a few identifiable kernels, as is often seen in high-performance computing applications. Acceleration is especially pronounced when the kernel's logic maps poorly to a sequence of commodity processor instructions, and/or maps well to the application-specific hardware. Hybrid-core computing is used to accelerate applications beyond what is currently physically possible with off-the-shelf processors, or to lower power & cooling costs in a data center by reducing computational footprint. (i.e., to circumvent obstacles such as the power/density challenges faced with today's commodity processors).
https://en.wikipedia.org/wiki/Indexed%20family
In mathematics, a family, or indexed family, is informally a collection of objects, each associated with an index from some index set. For example, a family of real numbers, indexed by the set of integers, is a collection of real numbers, where a given function selects one real number for each integer (possibly the same) as indexing. More formally, an indexed family is a mathematical function together with its domain and image (that is, indexed families and mathematical functions are technically identical, just point of views are different). Often the elements of the set are referred to as making up the family. In this view, indexed families are interpreted as collections of indexed elements instead of functions. The set is called the index set of the family, and is the indexed set. Sequences are one type of families indexed by natural numbers. In general, the index set is not restricted to be countable. For example, one could consider an uncountable family of subsets of the natural numbers indexed by the real numbers. Formal definition Let and be sets and a function such that where is an element of and the image of under the function is denoted by . For example, is denoted by The symbol is used to indicate that is the element of indexed by The function thus establishes a family of elements in indexed by which is denoted by or simply if the index set is assumed to be known. Sometimes angle brackets or braces are used instead of parentheses, although the use of braces risks confusing indexed families with sets. Functions and indexed families are formally equivalent, since any function with a domain induces a family and conversely. Being an element of a family is equivalent to being in the range of the corresponding function. In practice, however, a family is viewed as a collection, rather than a function. Any set gives rise to a family where is indexed by itself (meaning that is the identity function). However, families differ f
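A brief notational example to supplement the stripped formulas above (standard notation, not the article's exact symbols): a family of elements of \( X \) indexed by \( I \) is a function

\[ x : I \to X, \quad i \mapsto x_i, \quad \text{written } (x_i)_{i \in I}. \]

For instance, a sequence of real numbers is the family \( (a_n)_{n \in \mathbb{N}} \) given by a function \( a : \mathbb{N} \to \mathbb{R} \), and a family of subsets of \( \mathbb{N} \) indexed by the real numbers is a function \( A : \mathbb{R} \to \mathcal{P}(\mathbb{N}) \), written \( (A_r)_{r \in \mathbb{R}} \).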
https://en.wikipedia.org/wiki/Single-root%20input/output%20virtualization
In virtualization, single root input/output virtualization (SR-IOV) is a specification that allows the isolation of PCI Express resources for manageability and performance reasons. Details A single physical PCI Express bus can be shared in a virtual environment using the SR-IOV specification. The SR-IOV offers different virtual functions to different virtual components (e.g. network adapter) on a physical server machine. SR-IOV uses physical and virtual functions to control or configure PCIe devices. Physical functions have the ability to move data in and out of the device while virtual functions are lightweight PCIe functions that support data flowing but also have a restricted set of configuration resources. The virtual or physical functions available to the hypervisor or guest operating system depend on the PCIe device. The SR-IOV allows different virtual machines (VMs) in a virtual environment to share a single PCI Express hardware interface. In contrast, MR-IOV allows I/O PCI Express to share resources among different VMs on different physical machines. InfiniBand A major field of application for SR-IOV is within the high-performance computing (HPC) field. The use of high-performance InfiniBand networking cards is growing within the HPC sector, and there is early research into the use of SR-IOV to allow for the use of InfiniBand within virtual machines such as Xen. See also I/O virtualization
https://en.wikipedia.org/wiki/Constructive%20proof
In mathematics, a constructive proof is a method of proof that demonstrates the existence of a mathematical object by creating or providing a method for creating the object. This is in contrast to a non-constructive proof (also known as an existence proof or pure existence theorem), which proves the existence of a particular kind of object without providing an example. For avoiding confusion with the stronger concept that follows, such a constructive proof is sometimes called an effective proof. A constructive proof may also refer to the stronger concept of a proof that is valid in constructive mathematics. Constructivism is a mathematical philosophy that rejects all proof methods that involve the existence of objects that are not explicitly built. This excludes, in particular, the use of the law of the excluded middle, the axiom of infinity, and the axiom of choice, and induces a different meaning for some terminology (for example, the term "or" has a stronger meaning in constructive mathematics than in classical). Some non-constructive proofs show that if a certain proposition is false, a contradiction ensues; consequently the proposition must be true (proof by contradiction). However, the principle of explosion (ex falso quodlibet) has been accepted in some varieties of constructive mathematics, including intuitionism. Constructive proofs can be seen as defining certified mathematical algorithms: this idea is explored in the Brouwer–Heyting–Kolmogorov interpretation of constructive logic, the Curry–Howard correspondence between proofs and programs, and such logical systems as Per Martin-Löf's intuitionistic type theory, and Thierry Coquand and Gérard Huet's calculus of constructions. A historical example Until the end of 19th century, all mathematical proofs were essentially constructive. The first non-constructive constructions appeared with Georg Cantor’s theory of infinite sets, and the formal definition of real numbers. The first use of non-constructive
https://en.wikipedia.org/wiki/Sub-band%20coding
In signal processing, sub-band coding (SBC) is any form of transform coding that breaks a signal into a number of different frequency bands, typically by using a fast Fourier transform, and encodes each one independently. This decomposition is often the first step in data compression for audio and video signals. SBC is the core technique used in many popular lossy audio compression algorithms including MP3. Encoding audio signals The simplest way to digitally encode audio signals is pulse-code modulation (PCM), which is used on audio CDs, DAT recordings, and so on. Digitization transforms continuous signals into discrete ones by sampling a signal's amplitude at uniform intervals and rounding to the nearest value representable with the available number of bits. This process is fundamentally inexact, and involves two errors: discretization error, from sampling at intervals, and quantization error, from rounding. The more bits used to represent each sample, the finer the granularity in the digital representation, and thus the smaller the quantization error. Such quantization errors may be thought of as a type of noise, because they are effectively the difference between the original source and its binary representation. With PCM, the audible effects of these errors can be mitigated with dither and by using enough bits to ensure that the noise is low enough to be masked either by the signal itself or by other sources of noise. A high quality signal is possible, but at the cost of a high bitrate (e.g., over 700 kbit/s for one channel of CD audio). In effect, many bits are wasted in encoding masked portions of the signal because PCM makes no assumptions about how the human ear hears. Coding techniques reduce bitrate by exploiting known characteristics of the auditory system. A classic method is nonlinear PCM, such as the μ-law algorithm. Small signals are digitized with finer granularity than are large ones; the effect is to add noise that is proportional to the signa
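A toy illustration of the sub-band idea (this is not the filter-bank design used by real codecs such as MP3, which rely on polyphase/QMF or MDCT filter banks; function names and parameters are ours): split a signal into a few frequency bands with an FFT so that each band could then be quantized independently.

import numpy as np

def split_into_subbands(signal, num_bands=4):
    """Split a real signal into frequency bands via the FFT (toy sub-band analysis)."""
    spectrum = np.fft.rfft(signal)
    edges = np.linspace(0, len(spectrum), num_bands + 1, dtype=int)
    bands = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_spectrum = np.zeros_like(spectrum)
        band_spectrum[lo:hi] = spectrum[lo:hi]       # keep only this band's bins
        bands.append(np.fft.irfft(band_spectrum, n=len(signal)))
    return bands                                      # the bands sum back to the original signal

# Example: a mix of a low and a high tone separates into different bands.
t = np.arange(0, 1, 1 / 8000.0)
x = np.sin(2 * np.pi * 200 * t) + 0.5 * np.sin(2 * np.pi * 3000 * t)
subbands = split_into_subbands(x)
print(np.allclose(sum(subbands), x))                  # True: the split loses nothing by itself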
https://en.wikipedia.org/wiki/OpenDaylight%20Project
The OpenDaylight Project is a collaborative open-source project hosted by the Linux Foundation. The project serves as a platform for software-defined networking (SDN) for customizing, automating and monitoring computer networks of any size and scale. History On April 8, 2013, The Linux Foundation announced the founding of the OpenDaylight Project. The goal was to create a community-led and industry-supported, open-source platform to accelerate adoption & innovation in terms of software-defined networking (SDN) and network functions virtualization (NFV). The project's founding members were Big Switch Networks, Brocade, Cisco, Citrix, Ericsson, IBM, Juniper Networks, Microsoft, NEC, Red Hat and VMware. Reaction to the goals of open architecture and administration by The Linux Foundation have been mostly positive. While initial criticism centered on concerns that this group could be used by incumbent technology vendors to stifle innovation, most of the companies signed up as members do not sell incumbent networking technology. Technical steering committee For governance of the project, the technical steering committee (TSC) provides technical oversight over the project. The TSC is able to hold voting on major changes to the project. As of June 2022, the TSC includes: Anil Belur (The Linux Foundation) Cedric Ollivier (Orange) Guillaume Lambert (Orange) Ivan Hrasko (PANTHEON.tech) Luis Gomez (Kratos) Manoj Chokka (Verizon) Robert Varga (PANTHEON.tech) Venkatrangan Govindarajan (Rakuten Mobile) Code Contributions By 2015, user companies began participating in upstream development. The largest, actively contributing companies include PANTHEON.tech⁣, Orange, Red Hat, and Ericsson. At the time of the Carbon release in May 2017, the project estimated that over 1 billion subscribers accessing OpenDaylight-based networks, in addition to its usage within large enterprises. There is a dedicated OpenDaylight Wiki, and mailing lists. Technology Projects The pl
https://en.wikipedia.org/wiki/Rate%20Based%20Satellite%20Control%20Protocol
In computer networking, Rate Based Satellite Control Protocol (RBSCP) is a tunneling method proposed by Cisco to improve the performance of satellite network links with high latency and error rates. The problem RBSCP addresses is that the long RTT on the link keeps TCP virtual circuits in slow start for a long time. This, in addition to the high loss rate, gives a very low usable bandwidth on the channel. Since satellite links may be high-throughput, the overall link utilization may be below what is optimal from a technical and economic point of view. Means of operation RBSCP works by tunneling the usual IP packets within IP packets. The transport protocol identifier is 199. On each end of the tunnel, routers buffer packets to utilize the link better. In addition to this, RBSCP tunnel routers: modify TCP options at connection setup. implement a Performance Enhancing Proxy (PEP) that resends lost packets on behalf of the client, so loss is not interpreted as congestion. External links https://web.archive.org/web/20110706144353/http://cisco.biz/en/US/docs/ios/12_3t/12_3t7/feature/guide/gt_rbscp.html
https://en.wikipedia.org/wiki/Causal%20filter
In signal processing, a causal filter is a linear and time-invariant causal system. The word causal indicates that the filter output depends only on past and present inputs. A filter whose output also depends on future inputs is non-causal, whereas a filter whose output depends only on future inputs is anti-causal. Systems (including filters) that are realizable (i.e. that operate in real time) must be causal because such systems cannot act on a future input. In effect that means the output sample that best represents the input at time comes out slightly later. A common design practice for digital filters is to create a realizable filter by shortening and/or time-shifting a non-causal impulse response. If shortening is necessary, it is often accomplished as the product of the impulse-response with a window function. An example of an anti-causal filter is a maximum phase filter, which can be defined as a stable, anti-causal filter whose inverse is also stable and anti-causal. Example The following definition is a sliding or moving average of input data . A constant factor of is omitted for simplicity: where could represent a spatial coordinate, as in image processing. But if represents time , then a moving average defined that way is non-causal (also called non-realizable), because depends on future inputs, such as . A realizable output is which is a delayed version of the non-realizable output. Any linear filter (such as a moving average) can be characterized by a function h(t) called its impulse response. Its output is the convolution In those terms, causality requires and general equality of these two expressions requires h(t) = 0 for all t < 0. Characterization of causal filters in the frequency domain Let h(t) be a causal filter with corresponding Fourier transform H(ω). Define the function which is non-causal. On the other hand, g(t) is Hermitian and, consequently, its Fourier transform G(ω) is real-valued. We now have the following relat
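A small numerical sketch of the moving-average example above (array indexing stands in for continuous time; the window length is an illustrative choice): the centered average is non-causal because it uses future samples, while delaying it by half the window makes it realizable.

import numpy as np

def centered_average(x, half_width=2):
    """Non-causal: the output at n uses future samples x[n+1..n+half_width]."""
    kernel = np.ones(2 * half_width + 1)
    return np.convolve(x, kernel, mode="same") / kernel.size

def causal_average(x, half_width=2):
    """Causal (realizable): the same filter delayed so only past/present samples are used."""
    window = 2 * half_width + 1
    out = np.zeros_like(x, dtype=float)
    for n in range(len(x)):
        lo = max(0, n - window + 1)
        out[n] = x[lo:n + 1].sum() / window
    return out

x = np.random.default_rng(0).standard_normal(20)
print(centered_average(x)[:5])
print(causal_average(x)[:5])   # a delayed copy of the centered output (away from the edges)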
https://en.wikipedia.org/wiki/Selenium%20in%20biology
Selenium is an essential micronutrient for animals, though it is toxic in large doses. In plants, it sometimes occurs in toxic amounts as forage, e.g. locoweed. Selenium is a component of the amino acids selenocysteine and selenomethionine. In humans, selenium is a trace element nutrient that functions as cofactor for glutathione peroxidases and certain forms of thioredoxin reductase. Selenium-containing proteins are produced from inorganic selenium via the intermediacy of selenophosphate (PSeO33−). Se-containing biomolecules Selenium is an essential micronutrient in mammals, but is also recognized as toxic in excess. Selenium exerts its biological functions through selenoproteins, which contain the amino acid selenocysteine. Twenty-five selenoproteins are encoded in the human genome. Glutathione peroxidase The glutathione peroxidase family of enzymes (abbreviated GSH-Px) catalyze reduction of hydrogen peroxide and organic hydroperoxides: 2GSH + H2O2 → GSSG + 2 H2O The two H atoms are donated by thiols in a process that begins with oxidation of a selenol side chain in GSH-Px. The organoselenium compound ebselen is a drug used to supplement the action of GSH-Px. It functions as a catalyst for the destruction of hydrogen peroxide. A related selenium-containing enzyme in some plants and in animals (thioredoxin reductase) generates reduced thioredoxin, a dithiol that serves as an electron source for peroxidases and also the important reducing enzyme ribonucleotide reductase that makes DNA precursors from RNA precursors. Deiodinases Selenium also plays a role in the functioning of the thyroid gland. It participates as a cofactor for the three thyroid hormone deiodinases. These enzymes activate and then deactivate various thyroid hormones and their metabolites. It may inhibit Hashimoto's disease, an auto-immune disease in which the body's own thyroid cells are attacked by the immune system. A reduction of 21% on TPO antibodies was reported with the dietary
https://en.wikipedia.org/wiki/Industrial%20data%20processing
Industrial data processing is a branch of applied computer science that covers the area of design and programming of computerized systems which are not computers as such — often referred to as embedded systems (PLCs, automated systems, intelligent instruments, etc.). The products concerned contain at least one microprocessor or microcontroller, as well as couplers (for I/O). Another current definition of industrial data processing is that it concerns those computer programs whose variables in some way represent physical quantities; for example the temperature and pressure of a tank, the position of a robot arm, etc.
https://en.wikipedia.org/wiki/SWAP%20%28instrument%29
The Sun Watcher using Active Pixel System Detector and Image Processing (SWAP) telescope is a compact extreme-ultraviolet (EUV) imager on board the PROBA-2 mission. SWAP provides images of the solar corona at a temperature of roughly 1 million degrees. the instrument was built upon the heritage of the Extreme ultraviolet Imaging Telescope (EIT) which monitored the solar corona from the Solar and Heliospheric Observatory from 1996 until after the launch of the Solar Dynamics Observatory in 2010. SWAP's coronal mass ejection (CME) watch program has collected images at an improved image cadence (typically 1 image every few minutes) since the PROBA-2 launch in 2009. These events include EIT waves (global waves propagating across the solar disc from the CME eruption site), EUV dimming regions (transient coronal holes from where the CME has lifted off), filament instabilities (a specific type of flickering during the rise of a filament). SWAP's EUV images of the corona routinely extend beyond 2 solar radii from the surface of the Sun, much farther than was thought possible before the mission was launched. This led to the discovery, in 2021 by Seaton et al. using the SUVI instrument on board NOAA's GOES satellite, that the extended solar corona is visible in the extreme-ultraviolet, out to at least 3 solar radii from the center of the Sun. SWAP was built at the Liège Space Center and is operated from the PROBA-2 Science Center at the Royal Observatory of Belgium. SWAP has been used to study coronal brightspot dynamics. See also SWAP (New Horizons) (solar wind detector on Pluto flyby probe)
https://en.wikipedia.org/wiki/Fixed%E2%80%93mobile%20convergence
Fixed–mobile convergence (FMC) is a change in telecommunications that removes differences between fixed and mobile networks. In the 2004 press release announcing its formation, the Fixed-Mobile Convergence Alliance (FMCA) said: Fixed Mobile Convergence is a transition point in the telecommunications industry that will finally remove the distinctions between fixed and mobile networks, providing a superior experience to customers by creating seamless services using a combination of fixed broadband and local access wireless technologies to meet their needs in homes, offices, other buildings and on the go. In this definition "fixed broadband" means a connection to the Internet, such as DSL, cable or T1. "Local access wireless" means Wi-Fi or something like it. BT's initial FMC service, BT Fusion used Bluetooth rather than Wi-Fi for the local access wireless. The advent of picocells and femtocells means that local access wireless can be cellular radio technology. The term "seamless services" in the quotation above is ambiguous. When talking about FMC, the word "seamless" usually refers to "seamless handover", which means that a call in progress can move from the mobile (cellular) network to the fixed network on the same phone without interruption, as described in one of the FMCA specification documents: Seamless is defined as there being no perceptible break in voice or data transmission due to handover (from the calling party or the called party"s perspective). The term "seamless services" sometimes means service equivalence across any termination point, fixed or mobile, so for example, dialing plans are identical and no change in dialed digits is needed on a desk phone versus a mobile. A less ambiguous term for this might be "network agnostic services". The FMCA is a carrier organization, mainly oriented to consumer services. Enterprise phone systems are different. When Avaya announced its "Fixed Mobile Convergence" initiative in 2005, it was using a different d
https://en.wikipedia.org/wiki/Cormophyte
Cormophytes (Cormophyta) are the "plants differentiated into roots, shoots and leaves, and well adapted for life on land, comprising pteridophytes and the Spermatophyta." This group of plants includes mosses, ferns and seed plants. These plants differ from thallophytes, whose body is referred to as the thallus, i.e. a simple body not differentiated into leaf and stem, as in lichens, multicellular algae and some liverworts.
https://en.wikipedia.org/wiki/CMOS
Complementary metal–oxide–semiconductor (CMOS, pronounced "sea-moss", , ) is a type of metal–oxide–semiconductor field-effect transistor (MOSFET) fabrication process that uses complementary and symmetrical pairs of p-type and n-type MOSFETs for logic functions. CMOS technology is used for constructing integrated circuit (IC) chips, including microprocessors, microcontrollers, memory chips (including CMOS BIOS), and other digital logic circuits. CMOS technology is also used for analog circuits such as image sensors (CMOS sensors), data converters, RF circuits (RF CMOS), and highly integrated transceivers for many types of communication. The CMOS process was originally conceived by Frank Wanlass at Fairchild Semiconductor and presented by Wanlass and Chih-Tang Sah at the International Solid-State Circuits Conference in 1963. Wanlass later filed US patent 3,356,858 for CMOS circuitry and it was granted in 1967. commercialized the technology with the trademark "COS-MOS" in the late 1960s, forcing other manufacturers to find another name, leading to "CMOS" becoming the standard name for the technology by the early 1970s. CMOS overtook NMOS logic as the dominant MOSFET fabrication process for very large-scale integration (VLSI) chips in the 1980s, also replacing earlier transistor–transistor logic (TTL) technology. CMOS has since remained the standard fabrication process for MOSFET semiconductor devices in VLSI chips. , 99% of IC chips, including most digital, analog and mixed-signal ICs, were fabricated using CMOS technology. Two important characteristics of CMOS devices are high noise immunity and low static power consumption. Since one transistor of the MOSFET pair is always off, the series combination draws significant power only momentarily during switching between on and off states. Consequently, CMOS devices do not produce as much waste heat as other forms of logic, like NMOS logic or transistor–transistor logic (TTL), which normally have some standing current e
https://en.wikipedia.org/wiki/Lack%27s%20principle
Lack's principle, proposed by the British ornithologist David Lack in 1954, states that "the clutch size of each species of bird has been adapted by natural selection to correspond with the largest number of young for which the parents can, on average, provide enough food". As a biological rule, the principle can be formalised and generalised to apply to reproducing organisms in general, including animals and plants. Work based on Lack's principle by George C. Williams and others has led to an improved mathematical understanding of population biology. Principle Lack's principle implies that birds that happen to lay more eggs than the optimum will most likely have fewer fledglings (young that successfully fly from the nest) because the parent birds will be unable to collect enough food for them all. Evolutionary biologist George C. Williams notes that the argument applies also to organisms other than birds, both animals and plants, giving the example of the production of ovules by seed plants as an equivalent case. Williams formalised the argument to create a mathematical theory of evolutionary decision-making, based on the framework outlined in 1930 by R. A. Fisher, namely that the effort spent on reproduction must be worth the cost, compared to the long-term reproductive fitness of the individual. Williams noted that this would contribute to the discussion on whether (as Lack argued) an organism's reproductive processes are tuned to serve its own reproductive interest (natural selection), or as V.C. Wynne-Edwards proposed, to increase the chances of survival of the species to which the individual belonged (group selection). The zoologist J.L. Cloudsley-Thompson argued that a large bird would be able to produce more young than a small bird. Williams replied that this would be a bad reproductive strategy, as large birds have lower mortality and therefore a higher residual reproductive value over their whole lives (so taking a large short-term risk is unjustified).
https://en.wikipedia.org/wiki/Simatic
SIMATIC is a series of programmable logic controller and automation systems, developed by Siemens. Introduced in 1958, the series has gone through four major generations, the latest being the SIMATIC S7 generation. The series is intended for industrial automation and production. The name SIMATIC is a registered trademark of Siemens. It is a portmanteau of “Siemens” and “Automatic”. Function As with other programmable logic controllers, SIMATIC devices are intended to separate the control of a machine from the machine's direct operation, in a more lightweight and versatile manner than controls hard-wired for a specific machine. Early SIMATIC devices were transistor-based, intended to replace relays attached and customized to a specific machine. Microprocessors were introduced in 1973, allowing programs similar to those on general-purpose digital computers to be stored and used for machine control. SIMATIC devices have input and output modules to connect with controlled machines. The programs on the SIMATIC devices respond in real time to inputs from sensors on the controlled machines, and send output signals to actuators on the machines that direct their subsequent operation. Depending on the device and its connection modules, signals may be a simple binary value ("high" or "low") or more complex. More complex inputs, outputs, and calculations were also supported as the SIMATIC line developed. For example, the SIMATIC 505 could handle floating point quantities and trigonometric functions. Product lines Siemens has developed four product lines to date: 1958: SIMATIC Version G 1973: SIMATIC S3 1979: SIMATIC S5 1995: SIMATIC S7 SIMATIC S5 The S5 line was sold in 90U, 95U, 101U, 100U, 105, 110, 115,115U, 135U, and 155U chassis styles. Within each chassis style, several CPUs were available, with varying speed, memory, and capabilities. Some systems provided redundant CPU operation for ultra-high-reliability control, as used in pharmaceutical manufac
https://en.wikipedia.org/wiki/Nuclear%20astrophysics
Nuclear astrophysics is an interdisciplinary part of both nuclear physics and astrophysics, involving close collaboration among researchers in various subfields of each of these fields. This includes, notably, nuclear reactions and their rates as they occur in cosmic environments, and modeling of astrophysical objects where these nuclear reactions may occur, but also considerations of cosmic evolution of isotopic and elemental composition (often called chemical evolution). Constraints from observations involve multiple messengers, all across the electromagnetic spectrum (nuclear gamma-rays, X-rays, optical, and radio/sub-mm astronomy), as well as isotopic measurements of solar-system materials such as meteorites and their stardust inclusions, cosmic rays, material deposits on Earth and Moon). Nuclear physics experiments address stability (i.e., lifetimes and masses) for atomic nuclei well beyond the regime of stable nuclides into the realm of radioactive/unstable nuclei, almost to the limits of bound nuclei (the drip lines), and under high density (up to neutron star matter) and high temperature (plasma temperatures up to ). Theories and simulations are essential parts herein, as cosmic nuclear reaction environments cannot be realized, but at best partially approximated by experiments. In general terms, nuclear astrophysics aims to understand the origin of the chemical elements and isotopes, and the role of nuclear energy generation, in cosmic sources such as stars, supernovae, novae, and violent binary-star interactions. History In the 1940s, geologist Hans Suess speculated that the regularity that was observed in the abundances of elements may be related to structural properties of the atomic nucleus. These considerations were seeded by the discovery of radioactivity by Becquerel in 1896 as an aside of advances in chemistry which aimed at production of gold. This remarkable possibility for transformation of matter created much excitement among physicists for t
https://en.wikipedia.org/wiki/GC-content
In molecular biology and genetics, GC-content (or guanine-cytosine content) is the percentage of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C). This measure indicates the proportion of G and C bases out of an implied four total bases, also including adenine and thymine in DNA and adenine and uracil in RNA. GC-content may be given for a certain fragment of DNA or RNA or for an entire genome. When it refers to a fragment, it may denote the GC-content of an individual gene or section of a gene (domain), a group of genes or gene clusters, a non-coding region, or a synthetic oligonucleotide such as a primer. Structure Qualitatively, guanine (G) and cytosine (C) undergo a specific hydrogen bonding with each other, whereas adenine (A) bonds specifically with thymine (T) in DNA and with uracil (U) in RNA. Quantitatively, each GC base pair is held together by three hydrogen bonds, while AT and AU base pairs are held together by two hydrogen bonds. To emphasize this difference, the base pairings are often represented as "G≡C" versus "A=T" or "A=U". DNA with low GC-content is less stable than DNA with high GC-content; however, the hydrogen bonds themselves do not have a particularly significant impact on molecular stability, which is instead caused mainly by molecular interactions of base stacking. In spite of the higher thermostability conferred to a nucleic acid with high GC-content, it has been observed that at least some species of bacteria with DNA of high GC-content undergo autolysis more readily, thereby reducing the longevity of the cell per se. Because of the thermostability of GC pairs, it was once presumed that high GC-content was a necessary adaptation to high temperatures, but this hypothesis was refuted in 2001. Even so, it has been shown that there is a strong correlation between the optimal growth of prokaryotes at higher temperatures and the GC-content of structural RNAs such as ribosomal RNA, transfer RNA, and many
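A minimal computation of the definition above (the function and variable names are ours, not from any standard bioinformatics library): the GC-content is the number of G and C bases divided by the total number of bases, expressed as a percentage.

def gc_content(sequence):
    """Percentage of bases in a DNA/RNA sequence that are G or C."""
    seq = sequence.upper()
    gc = seq.count("G") + seq.count("C")
    total = sum(seq.count(base) for base in "ACGTU")   # ignore ambiguous symbols such as N
    return 100.0 * gc / total if total else 0.0

print(gc_content("ATGCGCGTTA"))   # 50.0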
https://en.wikipedia.org/wiki/Almost
In set theory, when dealing with sets of infinite size, the term almost or nearly is used to refer to all but a negligible number of elements in the set. The notion of "negligible" depends on the context, and may mean "of measure zero" (in a measure space), "finite" (when infinite sets are involved), or "countable" (when uncountably infinite sets are involved). For example: The set of natural numbers greater than or equal to some fixed k is almost the set of natural numbers, because only finitely many natural numbers are less than k. The set of prime numbers is not almost the set of natural numbers, because there are infinitely many natural numbers that are not prime numbers. The set of transcendental numbers is almost the set of real numbers, because the algebraic real numbers form a countable subset of the set of real numbers (which is uncountable). The Cantor set is uncountably infinite, but has Lebesgue measure zero. So almost all real numbers in (0, 1) are members of the complement of the Cantor set. See also Almost all Almost surely Approximation List of mathematical jargon
https://en.wikipedia.org/wiki/GPU%20switching
GPU switching is a mechanism used on computers with multiple graphics controllers. It allows the user either to maximize graphics performance or to prolong battery life by switching between the graphics devices. It is mostly used on gaming laptops, which usually have an integrated graphics device and a discrete video card. Basic components Most computers using this feature contain integrated graphics processors and dedicated graphics cards that fall into the following categories. Integrated graphics Also known as: shared graphics solutions, integrated graphics processors (IGP) or unified memory architecture (UMA). These graphics processors usually have far fewer processing units and share the same memory with the CPU. Sometimes the graphics processor is integrated onto the motherboard, in which case it is commonly known as on-board graphics. A motherboard with an on-board graphics processor doesn't require a discrete graphics card or a CPU with a graphics processor to operate. Dedicated graphics cards Also known as: discrete graphics cards. Unlike integrated graphics, dedicated graphics cards have many more processing units and their own RAM with much higher memory bandwidth. In some cases, a dedicated graphics chip can be integrated onto the motherboard, the B150-GP104 for example. Even though the graphics chip is integrated, it is still counted as a dedicated graphics system because the chip comes with its own memory. Theory Most personal computers have a motherboard that uses a Southbridge and Northbridge structure. Northbridge control The Northbridge is one of the core logic chips; it handles communications between the CPU, GPU, RAM and the Southbridge. The discrete graphics card is usually installed into a graphics card slot such as PCI Express, while the integrated graphics is integrated onto the CPU itself or occasionally onto the Northbridge. The Northbridge is the most responsible for s
https://en.wikipedia.org/wiki/JGroups
JGroups is a library for reliable one-to-one or one-to-many communication written in the Java language. It can be used to create groups of processes whose members send messages to each other. JGroups enables developers to create reliable multipoint (multicast) applications where reliability is a deployment issue. JGroups also relieves the application developer from implementing this logic themselves. This saves significant development time and allows for the application to be deployed in different environments without having to change code. Features Group creation and deletion. Group members can be spread across LANs or WANs Joining and leaving of groups Membership detection and notification about joined/left/crashed members Detection and removal of crashed members Sending and receiving of member-to-group messages (point-to-multipoint) Sending and receiving of member-to-member messages (point-to-point) Code sample This code below demonstrates the implementation of a simple command-line IRC client using JGroups: public class Chat extends ReceiverAdapter { private JChannel channel; public Chat(String props, String name) { channel = new JChannel(props) .setName(name) .setReceiver(this) .connect("ChatCluster"); } public void viewAccepted(View view) { System.out.printf("** view: %s\n", view); } public void receive(Message msg) { System.out.printf("from %s: %s\n", msg.getSource(), msg.getObject()); } private void send(String line) { try { channel.send(new Message(null, line)); } catch (Exception e) {} } public void run() throws Exception { BufferedReader in = new BufferedReader(new InputStreamReader(System.in)); while (true) { System.out.print("> "); System.out.flush(); send(in.readLine().toLowerCase()); } } public void end() throws Exception { chan
https://en.wikipedia.org/wiki/List%20of%20scattering%20experiments
This is a list of scattering experiments. Specific experiments of historical significance Davisson–Germer experiment Gold foil experiments, performed by Geiger and Marsden for Rutherford which discovered the atomic nucleus Elucidation of the structure of DNA by X-ray crystallography Discovery of the antiproton at the Bevatron Discovery of W and Z bosons at CERN Discovery of the Higgs boson at the Large Hadron Collider MINERνA Types of experiment Optical methods Compton scattering Raman scattering X-ray crystallography Biological small-angle scattering with X-rays, or Small-angle X-ray scattering Static light scattering Dynamic light scattering Polymer scattering with X-rays Neutron-based methods Neutron scattering Biological small-angle scattering with neutrons, or Small-angle neutron scattering Polymer scattering with neutrons Particle accelerators Electrostatic nuclear accelerator Linear induction accelerator Betatron Linear particle accelerator Cyclotron Synchrotron Physics-related lists Physics experiments Chemistry-related lists Biology-related lists
https://en.wikipedia.org/wiki/Signaling%20compression
In data compression, signaling compression, or SigComp, is a compression method designed especially for compressing text-based communication data such as SIP or RTSP. SigComp was originally defined in RFC 3320 and was later updated by RFC 4896. A Negative Acknowledgement Mechanism for Signaling Compression is defined in RFC 4077. The SigComp work is performed in the ROHC working group in the transport area of the IETF. Overview The SigComp specifications describe a compression scheme that sits between the application layer and the transport layer (e.g. between SIP and UDP). It is implemented upon a virtual machine, the Universal Decompressor Virtual Machine (UDVM), which executes a specific set of commands optimized for decompression. One strong point of SigComp is that the bytecode to decode messages can be sent over SigComp itself, so any compression scheme can be used provided it is expressed as bytecode for the UDVM. Thus any SigComp-compatible device may, without any firmware change, use compression mechanisms that did not exist when the device was released. Additionally, some decoders may already have been standardised, so SigComp can reference that code and it does not need to be sent over the connection. To ensure that a message is decodable, the only requirement is that the UDVM code is available; the compression of messages is executed outside the virtual machine, so native code can be used. To associate messages with an application conversation (e.g. a given SIP session), a compartment mechanism is used, so a given application may have any number of different, independent conversations while persisting all the session state (as needed/specified per compression scheme and UDVM code). General architecture
https://en.wikipedia.org/wiki/List%20of%20heliophysics%20missions
This is a list of missions supporting heliophysics, including solar observatory missions, solar orbiters, and spacecraft studying the solar wind. Past and current missions Proposed missions Graphic See also List of solar telescopes
https://en.wikipedia.org/wiki/Golomb%E2%80%93Dickman%20constant
In mathematics, the Golomb–Dickman constant arises in the theory of random permutations and in number theory. Its value is It is not known whether this constant is rational or irrational. Definitions Let an be the average — taken over all permutations of a set of size n — of the length of the longest cycle in each permutation. Then the Golomb–Dickman constant is In the language of probability theory, is asymptotically the expected length of the longest cycle in a uniformly distributed random permutation of a set of size n. In number theory, the Golomb–Dickman constant appears in connection with the average size of the largest prime factor of an integer. More precisely, where is the largest prime factor of k . So if k is a d digit integer, then is the asymptotic average number of digits of the largest prime factor of k. The Golomb–Dickman constant appears in number theory in a different way. What is the probability that second largest prime factor of n is smaller than the square root of the largest prime factor of n? Asymptotically, this probability is . More precisely, where is the second largest prime factor n. The Golomb-Dickman constant also arises when we consider the average length of the largest cycle of any function from a finite set to itself. If X is a finite set, if we repeatedly apply a function f: X → X to any element x of this set, it eventually enters a cycle, meaning that for some k we have for sufficiently large n; the smallest k with this property is the length of the cycle. Let bn be the average, taken over all functions from a set of size n to itself, of the length of the largest cycle. Then Purdom and Williams proved that Formulae There are several expressions for . These include: where is the logarithmic integral, where is the exponential integral, and and where is the Dickman function. See also Random permutation Random permutation statistics External links
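A quick Monte Carlo check of the permutation definition above (sample sizes and names are arbitrary choices for illustration): the average length of the longest cycle of a uniform random permutation of n elements, divided by n, approaches the Golomb–Dickman constant, roughly 0.624, as n grows.

import random

def longest_cycle_length(perm):
    """Length of the longest cycle of a permutation given as a list with perm[i] = image of i."""
    seen = [False] * len(perm)
    longest = 0
    for start in range(len(perm)):
        if seen[start]:
            continue
        length, i = 0, start
        while not seen[i]:            # walk the cycle containing `start`
            seen[i] = True
            i = perm[i]
            length += 1
        longest = max(longest, length)
    return longest

def estimate_lambda(n=1000, trials=2000, seed=1):
    rng = random.Random(seed)
    total = 0
    for _ in range(trials):
        perm = list(range(n))
        rng.shuffle(perm)             # uniform random permutation
        total += longest_cycle_length(perm)
    return total / (trials * n)

print(estimate_lambda())              # roughly 0.62 for large n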
https://en.wikipedia.org/wiki/Zero-overhead%20looping
Zero-overhead looping is a feature of some processor instruction sets whose hardware can repeat the body of a loop automatically, rather than requiring software instructions which take up cycles (and therefore time) to do so. Zero-overhead loops are common in digital signal processors and some CISC instruction sets. Background In many instruction sets, a loop must be implemented by using instructions to increment or decrement a counter, check whether the end of the loop has been reached, and if not jump to the beginning of the loop so it can be repeated. Although this typically only represents around 3–16 bytes of space for each loop, even that small amount could be significant depending on the size of the CPU caches. More significant is that those instructions each take time to execute, time which is not spent doing useful work. The overhead of such a loop is apparent compared to a completely unrolled loop, in which the body of the loop is duplicated exactly as many times as it will execute. In that case, no space or execution time is wasted on instructions to repeat the body of the loop. However, the duplication caused by loop unrolling can significantly increase code size, and the larger size can even impact execution time due to cache misses. (For this reason, it's common to only partially unroll loops, for example by transforming a loop into one which performs the work of four iterations in one step before repeating. This balances the advantages of unrolling with the overhead of repeating the loop.) Moreover, completely unrolling a loop is only possible for a limited number of loops: those whose number of iterations is known at compile time. For example, a simple counted loop written in C could be compiled and optimized into x86 assembly code containing exactly this kind of bookkeeping (an illustrative sketch is given below). Implementation Processors with zero-overhead looping have machine instructions and registers to automatically repeat one or more instructions. Depending on the instructions available, these may only be suitable for count-cont
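The C listing and corresponding x86 assembly from the original article are not included in this excerpt. As an illustrative stand-in (a generic counted loop, not the article's actual example), the fragment below marks in comments the per-iteration bookkeeping, namely the counter increment, the comparison, and the backward branch, that a conventional instruction set must spend instructions on and that zero-overhead loop hardware performs implicitly.

```c
/* A counted loop: on a conventional instruction set the compiler must emit
 * explicit instructions for the loop bookkeeping on every iteration. */
int sum_array(const int *a, int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += a[i];    /* useful work: load a[i] and add it to sum          */
    }                   /* hidden per-iteration overhead in the generated    */
                        /* code: increment i, compare i with n, and a        */
                        /* conditional branch back to the top of the loop    */
    return sum;
}
```

On a DSP with zero-overhead looping, the same body would typically be wrapped in a hardware repeat instruction, leaving only the load and add to execute each time around.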
https://en.wikipedia.org/wiki/Step%20response
The step response of a system in a given initial state consists of the time evolution of its outputs when its control inputs are Heaviside step functions. In electronic engineering and control theory, step response is the time behaviour of the outputs of a general system when its inputs change from zero to one in a very short time. The concept can be extended to the abstract mathematical notion of a dynamical system using an evolution parameter. From a practical standpoint, knowing how the system responds to a sudden input is important because large and possibly fast deviations from the long term steady state may have extreme effects on the component itself and on other portions of the overall system dependent on this component. In addition, the overall system cannot act until the component's output settles down to some vicinity of its final state, delaying the overall system response. Formally, knowing the step response of a dynamical system gives information on the stability of such a system, and on its ability to reach one stationary state when starting from another. Formal mathematical description This section provides a formal mathematical definition of step response in terms of the abstract mathematical concept of a dynamical system; all notations and assumptions required for the following description are listed here. t is the evolution parameter of the system, called "time" for the sake of simplicity, x(t) is the state of the system at time t, called "output" for the sake of simplicity, Φ is the dynamical system evolution function, x_0 is the dynamical system initial state, H(t) is the Heaviside step function Nonlinear dynamical system For a general dynamical system, the step response is defined as x(t) = Φ_{H(t)}(t, x_0). It is the evolution function when the control inputs (or source term, or forcing inputs) are Heaviside functions: the notation emphasizes this concept by showing H(t) as a subscript. Linear dynamical system For a linear time-invariant (LTI) black box, let for
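As a concrete hedged illustration (a standard first-order example, not taken from the article): for an LTI system with time constant τ described by τ·dy/dt + y = u(t), the unit-step response starting from rest is y(t) = 1 − exp(−t/τ). The sketch below integrates the differential equation with a simple forward-Euler scheme and compares it against the closed-form curve; the time constant and step size are arbitrary choices.

```c
/* Step response of a first-order LTI system tau*dy/dt + y = u(t),
 * with u(t) = H(t) (unit step) and y(0) = 0.
 * Forward Euler compared with the closed form 1 - exp(-t/tau). */
#include <stdio.h>
#include <math.h>

int main(void)
{
    const double tau = 1.0;    /* time constant (arbitrary) */
    const double dt  = 0.01;   /* integration step (arbitrary) */
    const double u   = 1.0;    /* Heaviside step input for t >= 0 */
    double y = 0.0;            /* initial state */

    for (int k = 0; k <= 500; k++) {
        double t = k * dt;
        if (k % 100 == 0) {
            double exact = 1.0 - exp(-t / tau);
            printf("t=%.2f  euler=%.4f  exact=%.4f\n", t, y, exact);
        }
        y += dt * (u - y) / tau;   /* dy/dt = (u - y) / tau */
    }
    return 0;
}
```

The printed values show the output rising from zero toward its final value of one, reaching about 63% of it after one time constant, which is the kind of settling behaviour the paragraph above describes.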
https://en.wikipedia.org/wiki/Hydrogen%20sulfide%20chemosynthesis
Hydrogen sulfide chemosynthesis is a form of chemosynthesis which uses hydrogen sulfide. It is common in hydrothermal vent microbial communities. Due to the lack of light in these environments, it predominates over photosynthesis. Giant tube worms use bacteria in their trophosome to fix carbon dioxide (using hydrogen sulfide as their energy source) and produce sugars and amino acids. Some reactions produce sulfur: hydrogen sulfide chemosynthesis: 18H2S + 6CO2 + 3O2 → C6H12O6 (carbohydrate) + 12H2O + 18S In the above process, hydrogen sulfide serves as a source of electrons for the reaction. Instead of releasing oxygen gas while fixing carbon dioxide as in photosynthesis, hydrogen sulfide chemosynthesis produces solid globules of sulfur in the process. Mechanism of Action In deep sea environments, different organisms have been observed to have the ability to oxidize reduced compounds such as hydrogen sulfide. Oxidation is the loss of electrons in a chemical reaction. Most chemosynthetic bacteria form symbiotic associations with other small eukaryotes. The electrons that are released from hydrogen sulfide will provide the energy to sustain a proton gradient across the bacterial cytoplasmic membrane. This movement of protons will eventually result in the production of adenosine triphosphate. The amount of energy derived from the process is also dependent on the type of final electron acceptor. Other Examples Of Chemosynthetic Organisms (using H2S as electron donor) Across the world, researchers have observed different organisms in various locations capable of carrying out the process. Yang and colleagues in 2011 surveyed five Yellowstone thermal springs of varying depths and observed that the distribution of chemosynthetic microbes coincided with temperature, as Sulfurihydrogenibium was found at higher temperatures while Thiovirga inhabited cooler waters. Miyazaki et al., in 2020, also found an endosymbiont capable of hydrogen sulfide chemosynthesis which conta
https://en.wikipedia.org/wiki/Transistor%20array
Transistor arrays consist of two or more transistors on a common substrate. Unlike more highly integrated circuits, the transistors can be used individually like discrete transistors. That is, the transistors in the array are not connected to each other to implement a specific function. Transistor arrays can consist of bipolar junction transistors or field-effect transistors. There are three main motivations for combining several transistors on one chip and in one package: to save circuit board space and to reduce the board production cost (only one component needs to be populated instead of several) to ensure closely matching parameters between the transistors (which is almost guaranteed when the transistors on one chip are manufactured simultaneously and subject to identical manufacturing process variations) to ensure a closely matching thermal drift of parameters between the transistors (which is achieved by having the transistors in extremely close proximity) The matching parameters and thermal drift are crucial for various analogue circuits such as differential amplifiers, current mirrors, and log amplifiers. The reduction in circuit board area is particularly significant for digital circuits where several switching transistors are combined in one package. Often the transistors here are Darlington pairs with a common emitter and flyback diodes, e.g. ULN2003A. While this stretches the above definition of a transistor array somewhat, the term is still commonly applied. A peculiarity of transistor arrays is that the substrate is often available as a separate pin (labelled substrate, bulk, or ground). Care is required when connecting the substrate in order to maintain isolation between the transistors in the array as p–n junction isolation is usually used. For instance, for an array of NPN transistors, the substrate must be connected to the most negative voltage in the circuit.
https://en.wikipedia.org/wiki/Software%20Guard%20Extensions
Intel Software Guard Extensions (SGX) is a set of instruction codes implementing a trusted execution environment that are built into some Intel central processing units (CPUs). They allow user-level and operating system code to define protected private regions of memory, called enclaves. SGX is designed to be useful for implementing secure remote computation, secure web browsing, and digital rights management (DRM). Other applications include concealment of proprietary algorithms and of encryption keys. SGX involves encryption by the CPU of a portion of memory (the enclave). Data and code originating in the enclave are decrypted on the fly within the CPU, protecting them from being examined or read by other code, including code running at higher privilege levels such as the operating system and any underlying hypervisors. While this can mitigate many kinds of attacks, it does not protect against side-channel attacks. A pivot by Intel in 2021 resulted in the deprecation of SGX from the 11th and 12th generation Intel Core processors, but development continues on Intel Xeon for cloud and enterprise use. Details SGX was first introduced in 2015 with the sixth generation Intel Core microprocessors based on the Skylake microarchitecture. Support for SGX in the CPU is indicated in the CPUID "Structured Extended Feature Leaf", EBX bit 02, but its availability to applications requires BIOS/UEFI support and opt-in enabling which is not reflected in CPUID bits. This complicates the feature detection logic for applications. Emulation of SGX was added to an experimental version of the QEMU system emulator in 2014. In 2015, researchers at the Georgia Institute of Technology released an open-source simulator named "OpenSGX". One example of SGX used in security was a demo application from wolfSSL using it for cryptography algorithms. The Intel Goldmont Plus (Gemini Lake) microarchitecture also contains support for Intel SGX. Both in the 11th and 12th generations of Intel Core processo
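Since the text notes that SGX support is reported in CPUID leaf 7 (the "Structured Extended Feature Leaf"), EBX bit 2, while actual usability additionally depends on BIOS/UEFI opt-in, a minimal detection sketch looks roughly like the following. It assumes an x86 target with a GCC or Clang toolchain (the __get_cpuid_count helper from <cpuid.h>); the bit only indicates that the CPU implements the instructions, not that the platform has enabled them.

```c
/* Check the CPUID bit that advertises SGX instruction support.
 * Leaf 7, subleaf 0, EBX bit 2 -- note that availability to applications
 * still depends on BIOS/UEFI opt-in, which CPUID alone does not reveal.
 * Assumes GCC/Clang on x86, which provide <cpuid.h>. */
#include <stdio.h>
#include <cpuid.h>

int main(void)
{
    unsigned int eax = 0, ebx = 0, ecx = 0, edx = 0;

    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx)) {
        printf("CPUID leaf 7 not supported\n");
        return 1;
    }
    if (ebx & (1u << 2))
        printf("CPU reports SGX instruction support (may still be disabled in firmware)\n");
    else
        printf("CPU does not report SGX support\n");
    return 0;
}
```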